The inf1.2xlarge instance is part of the Inf1 series of Machine Learning ASIC Instances, featuring 8 vCPUs, 16 GiB of memory, and up to 25 Gigabit of network bandwidth. It is available on demand at $0.3620/hour.
Amazon EC2 Inf1 instances are purpose-built for machine learning inference applications.

- Equipped with up to 16 AWS Inferentia chips
- Supported by the AWS Neuron SDK
- High-frequency 2nd Generation Intel Xeon Scalable processors (Cascade Lake P-8259L)
- Up to 100 Gbps of networking bandwidth
Source: AWS resources. Updated: 2024-08-23
Typical use cases: recommendation systems, forecasting, image and video analysis, advanced text analytics, document analysis, voice processing, conversational agents, translation, transcription, and fraud detection.
Instances | vCPUs | Memory (GiB) |
---|---|---|
inf1.xlarge | 4 | 8 |
inf1.2xlarge | 8 | 16 |
inf1.6xlarge | 24 | 48 |
inf1.24xlarge | 96 | 192 |
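The sizes in the table scale linearly, with 2 GiB of memory per vCPU at every size. A minimal sketch that restates the table (the dictionary and helper below are illustrative, not an AWS API):

```python
# Inf1 instance sizes from the table above: name -> vCPU count.
INF1_SIZES = {
    "inf1.xlarge": 4,
    "inf1.2xlarge": 8,
    "inf1.6xlarge": 24,
    "inf1.24xlarge": 96,
}

def memory_gib(vcpus: int) -> int:
    """Inf1 provisions 2 GiB of memory per vCPU."""
    return 2 * vcpus

for name, vcpus in INF1_SIZES.items():
    print(f"{name}: {vcpus} vCPUs, {memory_gib(vcpus)} GiB")
```

Running this reproduces the memory column above, e.g. 8 vCPUs maps to 16 GiB for inf1.2xlarge.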
General | |
---|---|
Type | inf1.2xlarge |
Region | N. Virginia |
Family Group | Inf1 |
Family Category | Machine Learning Asic Instances |
Instance Generation | Current |
Term Type | On-demand |
Pricing (USD/hr) | $0.3620 |
Compute | |
---|---|
vCPU | 8 |
Memory | 16 GiB |
Storage | EBS only |
Processor Architecture | 64-bit |
Operating System | Linux |
EBS Optimized | NA |
Tenancy | Shared |
Clock Speed | NA |
GPU | NA |
ECU | 0.0000 |
Virtualization | HVM |
Networking | |
---|---|
Network Performance | Up to 25 Gigabit |
ENA Support | True |
Region | Price / Unit | Monthly Pricing |
---|---|---|
Ohio | $0.3620 | $264.26 |
Oregon | $0.3620 | $264.26 |
Mumbai | $0.3810 | $278.13 |
Stockholm | $0.3850 | $281.05 |
Ireland | $0.4030 | $294.19 |
Central | $0.4030 | $294.19 |
Paris | $0.4230 | $308.79 |
London | $0.4240 | $309.52 |
Milan | $0.4240 | $309.52 |
N. California | $0.4350 | $317.55 |
Bahrain | $0.4440 | $324.12 |
Seoul | $0.4460 | $325.58 |
Frankfurt | $0.4530 | $330.69 |
Sydney | $0.4530 | $330.69 |
Singapore | $0.4890 | $356.97 |
Tokyo | $0.4890 | $356.97 |
Sao Paulo | $0.5980 | $436.54 |
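The monthly figures above are the hourly on-demand rate multiplied by 730 hours, AWS's standard month length for cost estimates. A minimal sketch (the helper name is ours, not an AWS API):

```python
HOURS_PER_MONTH = 730  # AWS estimate convention: ~730 hours per month

def monthly_cost(hourly_usd: float) -> float:
    """On-demand monthly estimate from an hourly rate, in USD."""
    return round(hourly_usd * HOURS_PER_MONTH, 2)

# Spot-check against the regional pricing table above.
print(monthly_cost(0.3620))  # Ohio / Oregon
print(monthly_cost(0.5980))  # Sao Paulo
```

For example, the Ohio rate of $0.3620/hr works out to 0.3620 × 730 = $264.26/month, matching the table.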