The trn1.32xlarge instance is part of the Trn1 series of Machine Learning ASIC Instances, featuring 128 vCPUs, 512 GiB of memory, and 800 Gigabit network performance. It is available at a rate of $21.5000/hour.
Amazon EC2 Trn1 instances, powered by AWS Trainium chips, are purpose-built for high-performance deep learning training and offer up to 50% cost-to-train savings over comparable Amazon EC2 instances.
- Equipped with 16 AWS Trainium chips
- Supported by the AWS Neuron SDK (a minimal training sketch follows this list)
- Features 3rd Generation Intel Xeon Scalable processors (Ice Lake SP)
- Offers up to 800 Gbps of networking bandwidth (up to 1600 Gbps on trn1n) via second-generation Elastic Fabric Adapter (EFA)
- Provides up to 8 TB of local NVMe storage
- Includes high-bandwidth, intra-instance connectivity through NeuronLink
- Deployed in EC2 UltraClusters, which scale to up to 30,000 AWS Trainium accelerators connected via a petabit-scale non-blocking network, with scalable low-latency storage through Amazon FSx for Lustre
- Optimized for Amazon EBS
- Supports enhanced networking
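As a rough illustration of the Neuron SDK item above, here is a minimal sketch of a training step on a Trn1 instance using the torch-neuronx / PyTorch XLA workflow; the model, data, and hyperparameters are placeholders and are not part of this listing.

```python
# Hypothetical sketch: one training loop on Trn1 via PyTorch XLA
# (torch-neuronx provides the Neuron backend; all names below are placeholders).
import torch
import torch.nn as nn
import torch_xla.core.xla_model as xm

device = xm.xla_device()  # maps to a NeuronCore on a Trn1 instance

model = nn.Sequential(nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for step in range(10):
    inputs = torch.randn(64, 512).to(device)       # stand-in for a real batch
    labels = torch.randint(0, 10, (64,)).to(device)

    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()
    xm.optimizer_step(optimizer)                    # steps the optimizer and syncs the XLA graph
```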
Credits: AWS Resources, Updated At: 2024-08-23
Typical use cases include training deep learning models for natural language processing, computer vision, search, recommendation, and ranking workloads.
Instances | vCPUs | Memory (GiB) |
---|---|---|
trn1.2xlarge | 8 | 32
trn1.32xlarge | 128 | 512
trn1n.32xlarge | 128 | 512
General | |
---|---|
Type | trn1.32xlarge |
Region | N. Virginia |
Family Group | Trn1 |
Family Category | Machine Learning ASIC Instances
Instance Generation | Current |
Term Type | On-demand |
Pricing (USD/hr) | $ 21.5000 |
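For context on the General attributes above (instance type, region, tenancy, on-demand term), here is a hedged sketch of launching this instance type with boto3; the AMI ID, key pair, and subnet are placeholders and are not taken from this listing.

```python
# Hypothetical example: launching a trn1.32xlarge On-Demand instance with boto3.
# The AMI ID and key name are placeholders; substitute real values before use.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # N. Virginia, per the table above

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI (e.g. a Deep Learning AMI with Neuron)
    InstanceType="trn1.32xlarge",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",             # placeholder key pair
    Placement={"Tenancy": "default"},  # "Shared" tenancy in the table corresponds to default
)
print(response["Instances"][0]["InstanceId"])
```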
Compute | |
---|---|
vCPU | 128 |
Memory | 512 GiB |
Storage | 4 x 1900 GB NVMe SSD
Processor Architecture | 64-bit |
Operating System | Linux |
EBS Optimized | Yes
Tenancy | Shared |
Clock Speed | Up to 3.5 GHz |
GPU | NA |
ECU | 0.0000 |
Virtualization | HVM |
Networking | |
---|---|
Network Performance | 800 Gigabit |
ENA Support | True |
Region | Price / Unit | Monthly Pricing |
---|---|---|
Oregon | $21.5000 | $15695.00 |
Ohio | $21.5000 | $15695.00 |
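The monthly figures above are consistent with the common 730-hours-per-month estimate (an assumption about the listing's convention, not stated in it); a quick check:

```python
# Sanity check of the monthly estimate: hourly rate x 730 hours/month (assumed convention).
hourly_rate = 21.50
hours_per_month = 730
print(f"${hourly_rate * hours_per_month:,.2f}")  # -> $15,695.00
```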