The dl2q.24xlarge instance is part of the DL2q series of Machine Learning ASIC Instances, featuring 96 vCPUs, 768 GiB of memory, and up to 100 Gigabit networking. It is available on demand at $8.9194/hour.
Amazon EC2 DL2q instances, equipped with Qualcomm AI 100 accelerators, are designed for cost-effective deployment of deep learning workloads in the cloud and for validating the performance and accuracy of DL applications intended for Qualcomm devices.
- Equipped with 8 Qualcomm AI 100 accelerators
- Compatible with the Qualcomm Cloud AI Platform and Apps SDK
- Features 2nd Generation Intel Xeon Scalable processors (Cascade Lake P-8259CL)
- Provides up to 128 GB of shared accelerator memory
- Supports networking speeds of up to 100 Gbps
Source: AWS resources. Updated: 2024-08-23
Deploy widely used deep learning and generative AI applications such as content creation, image analysis, text summarization, and virtual assistants. Test AI workloads before deploying them on smartphones, vehicles, robotics, and extended-reality devices.
| General | |
|---|---|
| Type | dl2q.24xlarge |
| Region | Oregon |
| Family Group | DL2q |
| Family Category | Machine Learning ASIC Instances |
| Instance Generation | Current |
| Term Type | On-demand |
| Pricing (USD/hr) | $8.9194 |
| Compute | |
|---|---|
| vCPU | 96 |
| Memory | 768 GiB |
| Storage | EBS only |
| Processor Architecture | 64-bit |
| Operating System | Linux |
| EBS Optimized | N/A |
| Tenancy | Shared |
| Clock Speed | 2.5 GHz |
| GPU | N/A |
| ECU | 0.0000 |
| Virtualization | HVM |
| Networking | |
|---|---|
| Network Performance | 100 Gigabit |
| ENA Support | True |
| Region | Price / Unit | Monthly Pricing |
|---|---|---|
| Frankfurt | $11.5952 | $8464.50 |
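The monthly figure above appears to follow the common 730-hours-per-month convention (hourly rate × 730). A minimal Python sketch of that calculation; the 730-hour assumption and the `monthly_cost` helper name are mine, not part of AWS pricing APIs:

```python
# Estimate a monthly EC2 bill from an hourly on-demand rate,
# assuming the common 730 hours/month convention (365 * 24 / 12).
HOURS_PER_MONTH = 730

def monthly_cost(hourly_rate: float) -> float:
    """Return the estimated monthly cost in USD, rounded to cents."""
    return round(hourly_rate * HOURS_PER_MONTH, 2)

# Frankfurt on-demand rate for dl2q.24xlarge from the table above.
print(monthly_cost(11.5952))  # → 8464.5

# Oregon on-demand rate from the General table.
print(monthly_cost(8.9194))
```

Actual invoices are metered per second for most Linux instances, so a calendar month's bill will vary slightly around this estimate.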