These tools connect MyAi to your existing enterprise stack — CRMs, ERPs, databases, file stores, and any system with a REST API or SQL interface.

api_client — REST/HTTP

The primary tool for outbound integrations. Use it to fetch data, submit information, or trigger actions in any third-party system.

Key Parameters

Parameter | Required | Description
credentials_artifact | Yes (for auth) | The artifact_id of an integration_credentials artifact storing the base URL and auth config.
method | Yes | HTTP method: GET, POST, PUT, PATCH, or DELETE.
endpoint | One of endpoint/url | Relative path appended to the base URL in your credentials (e.g., /api/customers).
url | One of endpoint/url | Full absolute URL for ad-hoc calls not tied to a credentials artifact.
body | No | Request body. Pass a Python dict for JSON; it is auto-serialized with the correct headers.
query_params | No | Dictionary of URL query parameters (e.g., {"limit": 10, "status": "active"}).
headers | No | Additional HTTP headers (e.g., {"Accept": "application/xml"}).
files | No | List of file parts for multipart/form-data uploads. Each part needs field_name, bytes, filename, and content_type.

Authentication Patterns

The credentials_artifact supports:
  • Basic Auth — username/password
  • API Key — header or query parameter
  • Bearer Token — OAuth tokens
Never embed API keys directly in function code. Always reference an integration_credentials artifact.

Example

def main(customer_id: str):
    response = default_api.api_client(
        credentials_artifact="crm-api-creds",
        method="GET",
        endpoint=f"/customers/{customer_id}",
        headers={"Accept": "application/json"}
    )
    if response and response.get("status") == "success":
        return response.get("data")
    return None

sql_client — BigQuery, MySQL, PostgreSQL

Direct database interaction for “Query-in-Place” workflows. MyAi does not ingest your database — it executes queries and returns structured JSON.

Key Parameters

Parameter | Required | Description
credentials | Yes (MySQL/PG) | The artifact_id of an integration_credentials artifact with connection details. May be omitted for system BigQuery with service account access.
operation | Yes | execute_sql, list_tables (schema discovery), or test_connection.
sql | Yes (for execute_sql) | The SQL query string.
dataset | No | Database name (MySQL) or dataset name (BigQuery).
max_rows | No | Maximum rows to return (default: 50, max: 500).

Use the list_tables and test_connection operations to discover schema and verify connectivity before writing queries.

Example

def main(user_id: str):
    # Interpolating raw input into SQL risks injection; validate or
    # sanitize user_id before building the query string.
    result = default_api.sql_client(
        operation="execute_sql",
        credentials="warehouse-creds",
        dataset="user_management_db",
        sql=f"SELECT username, email, created_at FROM users WHERE id = '{user_id}'"
    )
    if result and result.get("status") == "success":
        return result.get("data")
    return None

file_processor — Ingestion & Conversion

Handles the “unstructured-to-structured” pipeline — pulling files from external sources and converting them into MyAi Artifacts.

Key Parameters

Parameter | Required | Description
operation | Yes | download or process; fetches a file from a URL and creates an artifact.
url | Yes | An https:// or gs:// URL. For SharePoint, use the Microsoft Graph API download URL.
credentials_artifact | No | For authenticated downloads (e.g., secure SharePoint documents).
source_id | No | Unique identifier for deduplication. If the same source_id was processed before, the cached artifact is returned.
filename | No | Override for the filename. Useful for MIME type detection.
artifact_id | No | Retrieves the raw bytes of an already-processed artifact (for re-upload via api_client).

Supported Formats

PDF, XLSX, DOCX, images, and more — automatically converted into MyAi Artifacts.

Example: Download, Then Re-Upload

def main(report_url: str, upload_creds_id: str):
    # 1. Download external file as a MyAi artifact
    download = default_api.file_processor(
        operation="download",
        url=report_url,
        filename="monthly_report.pdf"
    )
    artifact_id = download.get("artifact_id") if download else None
    if not artifact_id:
        return None

    # 2. Retrieve the bytes
    file_data = default_api.file_processor(
        operation="download",
        artifact_id=artifact_id
    )

    # 3. Upload to another system and return its response
    upload = default_api.api_client(
        credentials_artifact=upload_creds_id,
        method="POST",
        endpoint="/upload-reports",
        files=[{
            "field_name": "report_file",
            "bytes": file_data.get("bytes"),
            "filename": "monthly_report.pdf",
            "content_type": "application/pdf"
        }]
    )
    return upload

Learn More

Functions

Write custom Python logic that orchestrates these data tools.

Integrations

See the full connector catalog and supported systems.