This page covers how to deploy, run, and manage your custom connectors across different environments in the iPaaS platform.

Overview

Connectors run on Kubernetes using Apache Camel K, providing:
  • Automatic scaling based on load
  • High availability with multiple replicas
  • Environment isolation (dev, staging, test, UAT)
  • API Management through Azure APIM
  • Service mesh integration for secure communication
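
Under the hood, each deployed connector typically corresponds to a Camel K Integration resource. The sketch below is illustrative only (the name, namespace, and route are assumptions); in practice these resources are managed by the platform rather than created by hand.
# Illustrative sketch of a Camel K Integration resource for a connector.
# Names, namespace, and the route are assumptions; the platform manages
# these resources for you.
apiVersion: camel.apache.org/v1
kind: Integration
metadata:
  name: my-custom-connector
  namespace: dev                 # target runtime environment
spec:
  replicas: 2                    # multiple replicas for high availability
  sources:
    - name: route.yaml
      content: |
        - from:
            uri: "timer:heartbeat?period=60000"
            steps:
              - log: "connector heartbeat"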

Environments

The platform provides multiple runtime environments:
| Environment | Purpose | Access Level |
|-------------|---------|--------------|
| dev | Development and testing | Read-write for developers |
| stg | Staging and integration testing | Read-write for QA teams |
| test | System testing | Read-write for testers |
| uat | User acceptance testing | Read-write for business users |

Environment access

To access runtime environments, you need appropriate roles assigned in self-service.tfvars:
team_members = {
  "[email protected]" = {
    grandcentral = {
      roles = [
        "dev-rw",      # Full access to dev runtime
        "stg-ro",      # Read-only access to staging
        "test-rw"      # Full access to test environment
      ]
    }
  }
}
Available Roles:
  • {env}-rw: Full access (read-write) to the environment
  • {env}-ro: Read-only access to the environment
  • {env}-apim-rw: API Management contributor access
  • {env}-apim-ro: API Management reader access
  • {env}-apim-subs-rw: Subscription management access
  • {env}-apim-cred-manager: Credential manager access

Accessing Azure resources

Access Azure resources to manage and monitor your connector deployments.

Azure portal access

  1. Navigate to Azure Portal
  2. Sign in with your Azure AD credentials
  3. Select your subscription/resource group
  4. Access resources like:
    • Kubernetes clusters
    • API Management instances
    • Storage accounts
    • Key Vaults

Set up kubectl for Azure

# Install Azure CLI (if not installed)
# macOS: brew install azure-cli
# Windows: Download from https://aka.ms/installazurecliwindows
# Linux: https://docs.microsoft.com/en-us/cli/azure/install-azure-cli

# Login to Azure
az login

# Get credentials for your cluster
az aks get-credentials --resource-group <resource-group> --name <cluster-name>

# Verify connection
kubectl get nodes

Deployment process

Deploy your connectors using the GitOps workflow through the applications-live repository.

Applications-live repository

Connectors are deployed through the applications-live repository, which manages runtime configurations.

Access requirements

To deploy connectors, you need:
  • GitHub team membership: applications-live team
  • Runtime access: Appropriate Grand Central roles

Deployment workflow

Prepare your connector

Ensure your connector is:
  • ✅ Built successfully
  • ✅ Tested locally
  • ✅ Merged to main branch
  • ✅ Version tagged
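
For example, a release tag that matches the image tag used in your deployment configuration might be created like this (tag name and remote are illustrative):
# Tag and push a release (tag name illustrative)
git tag -a 1.0.0 -m "Release 1.0.0 of my-custom-connector"
git push origin 1.0.0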

Configure runtime settings

In the applications-live repository, configure your connector:
# Example connector configuration
connectors:
  my-custom-connector:
    image: <container-registry>/my-custom-connector:1.0.0
    replicas: 2
    environment: dev
    resources:
      requests:
        memory: "512Mi"
        cpu: "500m"
      limits:
        memory: "1Gi"
        cpu: "1000m"
    env:
      - name: EXTERNAL_API_URL
        value: "https://api.example.com"
      - name: LOG_LEVEL
        value: "INFO"

Create deployment pull request

  1. Create a branch in applications-live repository
  2. Add or update connector configuration
  3. Follow PR title format: feat(GC-123): Deploy my-custom-connector to dev
  4. Create Pull Request targeting main branch

Review and approve

  • Default reviewers: All applications-live team members are automatically assigned
  • Custom reviewers: Can be configured per runtime in self-service.tfvars
  • Auto-approval: Available for specific runtimes (e.g., dev)

Deployment execution

After PR merge:
  1. Pipeline validates configuration
  2. Connector is deployed to Kubernetes
  3. Health checks verify successful deployment
  4. Connector becomes available in the target environment
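
With read access to the target runtime, you can follow the rollout from the command line (names are placeholders):
# Watch the rollout and confirm the new pods become ready
kubectl rollout status deployment/<connector-name> -n <namespace>
kubectl get pods -n <namespace>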

Runtime configuration

Environment variables

Configure connector behavior using environment variables:
env:
  # API Configuration
  - name: API_BASE_URL
    value: "https://api.example.com"
  
  # Authentication
  - name: API_KEY
    valueFrom:
      secretKeyRef:
        name: connector-secrets
        key: api-key
  
  # Logging
  - name: LOG_LEVEL
    value: "INFO"
  
  # Performance
  - name: CONNECTION_POOL_SIZE
    value: "10"

Secrets management

Sensitive data is managed through:
  • GitHub Secrets: Managed in self-service repository
  • Kubernetes Secrets: Injected at runtime
  • Azure Key Vault: For production secrets
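
As a sketch, the connector-secrets/api-key reference used in the example above could be backed by a Kubernetes Secret like the one below. Never commit real values; on this platform they are injected by the pipeline from GitHub Secrets or Key Vault.
# Sketch of the Secret backing the secretKeyRef example above
# (values are injected at deploy time, never committed)
apiVersion: v1
kind: Secret
metadata:
  name: connector-secrets
type: Opaque
stringData:
  api-key: "<injected-at-deploy-time>"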

Resource limits

Configure resource allocation:
resources:
  requests:
    memory: "512Mi"    # Minimum memory
    cpu: "500m"        # Minimum CPU
  limits:
    memory: "2Gi"      # Maximum memory
    cpu: "2000m"       # Maximum CPU

Container registry

Connector images are stored in:
  • Customer ACR: GC_CUSTOMER_ACR_BASE_URL (available as GitHub secret)
  • Enterprise ACR: GC_ENTERPRISE_ACR_BASE_URL (available as GitHub secret)
Accessing Images:
  • Images are automatically built and pushed during CI/CD
  • Use the image reference in your deployment configuration
  • Format: <registry>/<image-name>:<tag>
Viewing Images:
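If you have access to the registry, the Azure CLI can list repositories and tags (registry name is a placeholder):
# List repositories and tags in a container registry
az acr repository list --name <registry-name> --output table
az acr repository show-tags --name <registry-name> --repository my-custom-connector --output table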

API management integration

APIM access

Connectors can be exposed through Azure API Management (APIM).
APIM Roles:
  • {env}-apim-ro: Read API definitions and subscriptions
  • {env}-apim-rw: Create and modify APIs
  • {env}-apim-subs-rw: Manage subscriptions and keys
  • {env}-apim-cred-manager: Manage OAuth credentials

Publish your API

  1. Access APIM Portal for your environment
  2. Create or import API definition
  3. Configure backend to point to your connector
  4. Set up policies (rate limiting, authentication, etc.)
  5. Publish API for consumers
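
The same steps can be scripted with the Azure CLI; the command below is a sketch that assumes an OpenAPI definition is reachable at a URL (all values are placeholders):
# Import an API definition into APIM (sketch; all values are placeholders)
az apim api import \
  --resource-group <resource-group> \
  --service-name <apim-instance> \
  --api-id my-custom-connector \
  --path my-custom-connector \
  --specification-format OpenApi \
  --specification-url https://example.com/openapi.json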

Subscription management

  • Create subscriptions for API consumers
  • Manage keys (primary and secondary)
  • Rotate keys for security
  • Monitor usage through APIM analytics

Scaling and performance

Scale your connectors to handle varying workloads.

Horizontal scaling

Connectors automatically scale based on:
  • CPU utilization
  • Memory usage
  • Request queue length
  • Custom metrics
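
In Kubernetes terms, CPU-based autoscaling can look like the sketch below (an autoscaling/v2 HorizontalPodAutoscaler; names and thresholds are illustrative, and the platform may manage this for you):
# Sketch of CPU-based autoscaling for a connector (illustrative values)
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-custom-connector
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-custom-connector
  minReplicas: 2
  maxReplicas: 6
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70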

Manual scaling

Adjust replica count in deployment configuration:
replicas: 3  # Number of connector instances

Performance tuning

Optimize connector performance:
  • Connection pooling: Configure pool sizes
  • Threading: Adjust thread pool configurations
  • Caching: Implement caching strategies
  • Batch processing: Process messages in batches

Health checks and readiness

Configure health checks to ensure your connector is running correctly.

Health endpoints

Connectors expose health endpoints:
  • Liveness: /health/live - Is the connector running?
  • Readiness: /health/ready - Is the connector ready to serve traffic?
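
In Kubernetes, these endpoints are typically wired to liveness and readiness probes; the snippet below is a sketch (port and timings are assumptions):
# Sketch of probes using the health endpoints above (port/timings assumed)
livenessProbe:
  httpGet:
    path: /health/live
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /health/ready
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 5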

Monitoring health

Kubernetes uses health checks to:
  • Restart unhealthy pods
  • Route traffic only to ready instances
  • Prevent deployment of unhealthy versions

Rolling updates and rollbacks

Deploy updates with zero downtime and roll back if issues occur.

Rolling updates

When deploying new versions:
  1. New pods are created with updated image
  2. Traffic gradually shifts to new pods
  3. Old pods terminate after traffic migration
  4. Zero-downtime deployment achieved
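
In Kubernetes this behavior is driven by a rolling-update strategy on the Deployment, along the lines of the sketch below (values are illustrative):
# Sketch of a zero-downtime rolling update strategy (illustrative values)
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 1          # create one new pod at a time
    maxUnavailable: 0    # keep existing pods serving until replacements are ready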

Rollback process

If issues occur:
  1. Identify problematic version
  2. Revert configuration in applications-live
  3. Create PR to rollback
  4. Merge PR to trigger rollback deployment
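
One way to prepare the rollback PR is to revert the offending configuration change in applications-live (branch name and commit SHA are placeholders):
# Revert the configuration change that introduced the problem
git checkout -b rollback/my-custom-connector
git revert <commit-sha-of-bad-change>
git push origin rollback/my-custom-connector
# then open a PR targeting main, as with any other deployment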

Environment-specific deployments

Each environment has specific access requirements and deployment processes.

Development environment

  • Purpose: Active development and testing
  • Access: Developers with dev-rw role
  • Deployment: Frequent, often auto-approved
  • Configuration: Development API endpoints

Staging environment

  • Purpose: Integration testing
  • Access: QA teams with stg-rw role
  • Deployment: Before production releases
  • Configuration: Staging API endpoints

Test environment

  • Purpose: System and performance testing
  • Access: Test teams with test-rw role
  • Deployment: For comprehensive testing
  • Configuration: Test API endpoints

UAT environment

  • Purpose: User acceptance testing
  • Access: Business users with uat-rw role
  • Deployment: Pre-production validation
  • Configuration: Production-like settings

Troubleshooting

Resolve common issues that occur during deployment and runtime.

Deployment failures

Issue: Connector fails to deploy
  • Check: Configuration syntax in applications-live
  • Check: Image availability in container registry
  • Check: Resource quotas and limits
  • Check: Health check endpoints
  • Resources: Kubernetes Troubleshooting
Issue: Connector starts but crashes
  • Check: Application logs in DataDog or Grafana
  • Check: Environment variables and secrets
  • Check: External API connectivity
  • Check: Resource limits (memory/CPU)

Runtime issues

Issue: High latency or timeouts
  • Check: External API response times
  • Check: Resource constraints
  • Check: Network connectivity
  • Check: Connection pool settings
Issue: Memory or CPU issues
  • Check: Resource limits and requests
  • Check: Memory leaks in code
  • Check: Thread pool configurations
  • Check: Message queue backlogs

Access issues

Issue: Cannot access Kubernetes cluster
  • Check: Azure login: az login
  • Check: Cluster credentials: az aks get-credentials
  • Check: Role assignments in Azure Portal
  • Check: kubectl configuration: kubectl config view

Best practices

  1. Start with dev: Deploy to dev environment first
  2. Test thoroughly: Validate in staging before production
  3. Monitor deployments: Watch logs during and after deployment
  4. Use health checks: Implement proper health endpoints
  5. Configure resources: Set appropriate resource limits
  6. Manage secrets: Never hardcode credentials
  7. Version control: Tag and version all deployments
  8. Documentation: Document deployment procedures

Accessing runtime resources

Access runtime resources to manage and troubleshoot your connectors.

Kubernetes access

With appropriate roles, access Kubernetes clusters:
# Get connector pods
kubectl get pods -n <namespace>

# View connector logs
kubectl logs -f <pod-name> -n <namespace>

# Describe connector deployment
kubectl describe deployment <connector-name> -n <namespace>

# Get services
kubectl get svc -n <namespace>

# Get ingress
kubectl get ingress -n <namespace>

Service Bus explorer

Access Service Bus for message queue management:
  • Requires appropriate Azure roles
  • Connect via Service Bus Explorer or Azure Portal
  • Monitor queues and topics
  • Manage subscriptions
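
With the right Azure roles, the Azure CLI also offers a quick way to inspect queues and topic subscriptions (names are placeholders):
# List queues and topic subscriptions in a Service Bus namespace
az servicebus queue list --resource-group <resource-group> --namespace-name <namespace> --output table
az servicebus topic subscription list --resource-group <resource-group> --namespace-name <namespace> --topic-name <topic> --output table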