Azure ML Studio & Designer
No‑code/low‑code environment and visual authoring for reproducible experiments.
Studio, AutoML, registry, endpoints, pipelines and MLOps to bring AI to production.
- Studio/Designer — no‑code/low‑code environment and visual authoring for reproducible experiments.
- AutoML — automatic model and hyperparameter search; fast baselines and algorithm selection.
- Registry — versioning of models, datasets and features for reuse and traceability.
- Pipelines — modular steps (preprocess, train, validate, deploy) orchestrated on‑prem or in the cloud.
- Endpoints — online (real‑time) or batch services on CPU/GPU, with autoscaling and revisions.
- MLOps — CI/CD, quality gates, approvals, lineage, security and compliance.
- Monitoring & Responsible AI — explainability, fairness, drift detection and metric/cost alerting.
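Drift detection typically compares the live input distribution against the training baseline. A minimal sketch using the Population Stability Index (PSI), a common drift statistic; the data, bin count and alert threshold here are illustrative, not Azure ML defaults:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline and a live sample.

    A common rule of thumb: PSI < 0.1 means little drift, 0.1-0.25
    moderate drift, > 0.25 significant drift (worth an alert).
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def frac(sample, i):
        left = lo + i * width
        right = left + width if i < bins - 1 else float("inf")
        count = sum(left <= x < right for x in sample)
        return max(count / len(sample), 1e-6)  # avoid log(0)

    return sum(
        (frac(actual, i) - frac(expected, i))
        * math.log(frac(actual, i) / frac(expected, i))
        for i in range(bins)
    )

baseline = [0.1 * i for i in range(100)]        # training distribution
drifted = [5.0 + 0.1 * i for i in range(100)]   # shifted live traffic
assert psi(baseline, baseline) < 0.01           # identical data: no drift
assert psi(baseline, drifted) > 0.25            # shifted data: alert
```

In production this check would run on a schedule against logged endpoint inputs, with the PSI value fed into the metric alerting mentioned above.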
1. Register dataset
2. Run experiment / AutoML
3. Register model
4. Create environment (conda/Docker)
5. Create endpoint (online/batch) and test
6. Set up CI/CD with approvals
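Steps 1-4 can be expressed declaratively with the Azure ML CLI v2. A hedged sketch of a command-job YAML; the asset names (`my-dataset`, `my-conda-env`, `cpu-cluster`) and the training script are hypothetical placeholders for your workspace:

```yaml
$schema: https://azuremlschemas.azureedge.net/latest/commandJob.schema.json
experiment_name: churn-baseline          # step 2: the experiment
command: python train.py --data ${{inputs.training_data}}
inputs:
  training_data:
    type: uri_file
    path: azureml:my-dataset:1           # step 1: a registered dataset version
environment: azureml:my-conda-env:1      # step 4: conda/Docker environment
compute: azureml:cpu-cluster
```

Submitted with `az ml job create --file job.yml`; the resulting model can then be registered (step 3) and attached to an endpoint deployment (step 5).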
Data Lake + Feature Store → Training on Compute/AKS → Registry → Endpoint (AKS/ACI/Batch) with monitoring.
| Component | When to use | Output |
|---|---|---|
| Studio/Designer | No‑code, PoCs, business teams | Repeatable pipelines/experiments |
| AutoML | Fast baseline & standard tasks | Best model + metrics |
| Registry | Reuse and traceability | Model/dataset versions |
| Endpoints | Serve predictions | REST/Batch + scalability |
| MLOps | Production at scale | CI/CD, policies, audit |
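An online endpoint is consumed as a plain REST call. A minimal sketch that builds (but does not send) a scoring request with the standard library; the scoring URI, key and the `{"input_data": ...}` payload shape are assumptions — the real values come from the endpoint's consume details, and the payload schema depends on the deployment's scoring script:

```python
import json
import urllib.request

# Hypothetical values — replace with your endpoint's scoring URI and key.
SCORING_URI = "https://my-endpoint.westeurope.inference.ml.azure.com/score"
API_KEY = "<endpoint-key>"

def build_request(rows):
    """Build a scoring request for a managed online endpoint."""
    body = json.dumps({"input_data": rows}).encode("utf-8")
    return urllib.request.Request(
        SCORING_URI,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
    )

req = build_request([[0.2, 1.5, 3.1]])
# response = urllib.request.urlopen(req)  # uncomment to actually score
```

Batch endpoints follow a different pattern (a job over a data asset rather than a synchronous call), which is why the two serving modes appear separately in the table.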
Version datasets/features, define schemas, validate quality and manage drift.
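A schema check before registering a new dataset version can be as simple as a typed column contract. A minimal sketch with illustrative column names; a real pipeline would run this as a validation step before `create_or_update` on the data asset:

```python
# Hypothetical schema for a churn dataset — adapt to your own columns.
EXPECTED_SCHEMA = {"customer_id": int, "tenure_months": int, "churned": bool}

def validate_rows(rows, schema=EXPECTED_SCHEMA):
    """Return a list of (row_index, column, problem) tuples; empty == valid."""
    problems = []
    for i, row in enumerate(rows):
        for col, typ in schema.items():
            if col not in row:
                problems.append((i, col, "missing"))
            elif not isinstance(row[col], typ):
                problems.append((i, col, f"expected {typ.__name__}"))
    return problems

good = [{"customer_id": 1, "tenure_months": 12, "churned": False}]
bad = [{"customer_id": "x", "tenure_months": 12}]
assert validate_rows(good) == []
assert validate_rows(bad) == [(0, "customer_id", "expected int"),
                              (0, "churned", "missing")]
```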
Use profiling, autoscaling and request batching; separate training/serving resources.
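Request batching groups incoming requests so the model executes once per batch instead of once per call, trading a little latency for much higher throughput. A single-threaded sketch of the grouping logic (a real serving stack would also flush on a timeout):

```python
def micro_batch(requests, max_batch_size=8):
    """Yield lists of at most max_batch_size requests, in arrival order."""
    batch = []
    for r in requests:
        batch.append(r)
        if len(batch) == max_batch_size:
            yield batch
            batch = []
    if batch:
        yield batch  # flush the final partial batch

incoming = list(range(20))
batches = list(micro_batch(incoming, max_batch_size=8))
assert [len(b) for b in batches] == [8, 8, 4]
assert [x for b in batches for x in b] == incoming
```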
Use managed identities, private networking, Azure Key Vault for secrets and least‑privilege access policies.
Online for low latency; batch for high volumes and overnight processing. They often coexist.
Use spot/low‑priority compute, early stopping, reduced hyperparameter grids and profiling; archive non‑essential runs.
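Early stopping cuts training cost by abandoning a run once the validation metric stops improving. A minimal sketch of the stopping rule; the loss values and `patience` setting are illustrative:

```python
def early_stop_index(val_losses, patience=3, min_delta=0.0):
    """Return the index at which training would stop, or None if it never does.

    Stops after `patience` consecutive evaluations without an improvement
    of more than `min_delta` over the best loss seen so far.
    """
    best = float("inf")
    since_best = 0
    for i, loss in enumerate(val_losses):
        if loss < best - min_delta:
            best, since_best = loss, 0
        else:
            since_best += 1
            if since_best >= patience:
                return i
    return None

losses = [1.0, 0.8, 0.7, 0.71, 0.72, 0.73, 0.74]
assert early_stop_index(losses, patience=3) == 5   # stops after 3 flat evals
assert early_stop_index([1.0, 0.9, 0.8], patience=3) is None
```

The same idea underlies AutoML early-termination policies: a run that is clearly not going to beat the current best is cancelled rather than run to completion.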
Data locality, encryption at rest/in transit, access logging, approvals and explainability for critical models.