    Nvidia unveils its powerful H200 GPU to foster generative AI development

    By Tapiwa Matthew Mutisi on November 14, 2023, in Artificial Intelligence, Business, Chatbot, Information Technology, Manufacturing, News, Technology

    Nvidia recently unveiled the H200, a graphics processing unit designed for training and deploying the kinds of artificial intelligence models that are powering the generative AI boom.

    The new GPU is an upgrade from the H100, the chip OpenAI used to train its most advanced large language model, GPT-4. Big companies, startups and government agencies are all vying for a limited supply of the chips.

    H100 chips cost between $25,000 and $40,000, according to an estimate from Raymond James, and thousands of them working together are needed to create the biggest models in a process called “training.”
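
    To put that in perspective, here is a back-of-the-envelope cost calculation. The per-chip prices are the Raymond James estimate cited above; the 10,000-chip cluster size is purely an assumption for illustration, not a figure from Nvidia or from this article.

```python
# Rough cluster-cost arithmetic. Chip prices are the article's Raymond James
# estimate; the cluster size is an assumed, illustrative figure.
chip_price_low, chip_price_high = 25_000, 40_000  # USD per H100
assumed_cluster_size = 10_000                     # "thousands" of GPUs; exact count assumed

low = chip_price_low * assumed_cluster_size
high = chip_price_high * assumed_cluster_size
print(f"GPU hardware alone: ${low / 1e6:,.0f}M to ${high / 1e6:,.0f}M")
# -> roughly $250M to $400M for a 10,000-chip training cluster
```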

    Excitement over Nvidia’s AI GPUs has supercharged the company’s stock, which is up more than 230% so far in 2023. Nvidia expects around $16 billion of revenue for its fiscal third quarter, up 170% from a year ago.

    The key improvement with the H200 is that it includes 141GB of next-generation “HBM3e” memory, which will help the chip perform “inference,” or using a large model after it is trained to generate text, images or predictions.
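
    A rough sense of why that capacity matters for inference: at 16-bit precision, a model needs about two bytes of memory per parameter just to hold its weights. The sketch below works through that arithmetic for a few illustrative model sizes; the parameter counts and the FP16 assumption are examples only, and real deployments also need room for the KV cache and activations.

```python
# Back-of-the-envelope estimate of weight memory for serving an LLM.
# Assumes 2 bytes per parameter (FP16/BF16); model sizes are illustrative.
def weight_memory_gb(params_billions: float, bytes_per_param: int = 2) -> float:
    return params_billions * 1e9 * bytes_per_param / 1e9  # gigabytes

for name, size_b in [("7B model", 7), ("70B model", 70), ("175B-class model", 175)]:
    print(f"{name}: ~{weight_memory_gb(size_b):.0f} GB of weights at FP16")
# A 70B-parameter model needs ~140 GB for weights alone, so 141GB of memory
# puts it within reach of a single GPU (before cache and activation overhead).
```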

    Nvidia said the H200 will generate output nearly twice as fast as the H100. That’s based on a test using Meta’s Llama 2 LLM.
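
    Throughput comparisons like that are usually expressed in generated tokens per second. The sketch below shows one generic way such a measurement could be taken; `generate_fn` is a placeholder for whatever inference entry point a framework exposes, not an Nvidia benchmark harness.

```python
import time

def tokens_per_second(generate_fn, prompt: str, n_tokens: int = 512) -> float:
    """Time one generation call and return throughput in tokens per second.

    `generate_fn` is a placeholder callable assumed to emit `n_tokens` tokens.
    """
    start = time.perf_counter()
    generate_fn(prompt, max_new_tokens=n_tokens)
    elapsed = time.perf_counter() - start
    return n_tokens / elapsed

# Comparing two GPUs means running the same model, prompt and token budget
# on each and taking the ratio of the two throughput numbers.
```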

    The H200, which is expected to ship in the second quarter of 2024, will compete with AMD’s MI300X GPU. Like the H200, AMD’s chip has more memory than its predecessors, which helps large models fit on the hardware for inference.

    Nvidia said the H200 will be compatible with the H100, meaning that AI companies that are already training with the prior model won’t need to change their server systems or software to use the new version.
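
    In practical terms, that compatibility means ordinary GPU code written for the H100 should run unmodified. The snippet below is a generic PyTorch example of such code; the model is a stand-in, and nothing in it is specific to either chip.

```python
# Generic CUDA/PyTorch code of the kind that runs unchanged across H100-class
# and H200-class GPUs (the Linear layer is a stand-in for a real model).
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torch.nn.Linear(4096, 4096).to(device)
x = torch.randn(8, 4096, device=device)

with torch.no_grad():
    y = model(x)

print(y.shape, "computed on", device)
```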

    Nvidia says the H200 will be available in four-GPU and eight-GPU server configurations on the company’s HGX complete systems, as well as in a chip called the GH200, which pairs the H200 GPU with an Arm-based processor. However, the H200 may not hold the crown of the fastest Nvidia AI chip for long.

    While companies like Nvidia offer many different configurations of their chips, new semiconductors often take a big step forward about every two years, when manufacturers move to a different architecture that unlocks more significant performance gains than adding memory or other smaller optimizations. Both the H100 and H200 are based on Nvidia’s Hopper architecture.

    In October, Nvidia told investors that it would move from a two-year architecture cadence to a one-year release pattern due to high demand for its GPUs. The company displayed a slide suggesting it will announce and release its B100 chip, based on the forthcoming Blackwell architecture, in 2024.

    Tapiwa Matthew Mutisi

    Tapiwa Matthew Mutisi has been covering blockchain technology, intelligent technologies, cryptocurrency, cybersecurity, telecommunications technology, sustainability, autonomous vehicles, and other topics for Innovation Village since 2017. In the years since, he has published over 4,000 articles — a mix of breaking news, reviews, helpful how-tos, industry analysis, and more. | Open DM on Twitter @TapiwaMutisi

