    2023 UK’s AI Safety Summit key takeaways on the future of AI

By Tapiwa Matthew Mutisi on November 2, 2023

Artificial intelligence (AI) is changing the world, but who’s setting the rules? Good question, huh? I happened to come across the answer by following the UK’s AI Safety Summit. Yes, this week the UK hosted a major AI summit aimed at establishing some ground rules and fostering international collaboration.

    The event brought together political leaders and tech experts to discuss both the promise and potential perils of this rapidly advancing technology.

    As AI continues to grow, countries around the world are trying to get ahead of the curve and establish some ground rules. The summit resulted in some important declarations and initiatives that give us a glimpse into the future of AI governance.

Just in case you missed the summit, let’s break down the key highlights for you:

    1. The Bletchley Declaration

    A big focus of the gathering was on establishing global coordination and standards around AI safety. This led to the signing of the new Bletchley Declaration, which was agreed to by 28 countries, including heavyweights like the US, UK, and China.

The declaration lays out plans for greater transparency from AI developers regarding safety practices and more scientific collaboration on understanding AI’s risks. It’s being hailed as a landmark achievement in getting the world’s AI leaders aligned on managing the dangers AI poses to daily life from “misuse or unintended issues of control relating to alignment with human intent.” While a bit vague on details, it’s seen as an important first step towards creating international norms and mitigation strategies.

    2. Kamala Harris calls out threats to rights and democracy

    US Vice President Kamala Harris gave a speech highlighting current harms from AI, like discrimination, misinformation, and democratic challenges, saying that they are already affecting vulnerable populations. She announced the Biden administration will take steps to manage AI’s societal risks and regulatory challenges.

    Harris stressed that in addition to existential threats, we need to address AI dangers already affecting marginalized groups and democratic institutions. Her remarks signalled a focus on AI ethics and consumer protections from the US government.

    3. Elon Musk warns of AI’s existential dangers

    As the CEO of Tesla and SpaceX, Elon Musk has been vocal about his fears of AI getting out of human control. He reiterated those concerns at the summit, describing advanced AI as “one of the biggest threats to humanity” given its potential to become far more intelligent than people.

    “So, you know, we’re not stronger or faster than other creatures, but we are more intelligent. And here we are, for the first time really in human history, with something that’s going to be far more intelligent than us,” he said at the summit.

While he hopes to guide AI’s development responsibly, he admitted we may not be able to control such an entity, although “we can aspire to guide it in a direction that’s beneficial to humanity”.

    4. DeepMind co-founder says current models don’t present significant harms

At the AI summit, Mustafa Suleyman, a co-founder of DeepMind, the UK-based AI company that Google bought and made the core of its AI division, suggested that a temporary halt in AI development might be necessary in the near future. He told journalists that this question would have to be taken very seriously within the next five years or so.

    However, he also assured that the current state-of-the-art AI models, such as ChatGPT, were not a major risk. He said: “There is no proof today that cutting-edge models like GPT-4 … cause any significant or disastrous harms.”

    5. UK invests in new AI supercomputer

    The UK government announced a major £225 million investment into a powerful new supercomputer called Isambard-AI. It will be built at the University of Bristol and is intended to achieve breakthroughs across healthcare, energy, climate modelling and other fields. Along with another planned supercomputer called Dawn, these systems are part of the UK’s aim to lead in AI while partnering with allies like the US. These computers will be brought online next summer.

    6. Global AI dominance up for grabs

    With major players like the US, EU and China also vying for AI leadership, it’s clear there’s a high-stakes technological arms race at play. While the UK summit focused on cooperation and safety, each region wants to dictate the rules and standards for AI in alignment with their economic and political goals.

President Joe Biden said, “America will lead the way during this period of technological change” after signing an AI executive order on October 30, even as the EU aggressively drafts its own AI regulations. China, too, has unveiled policies to shape AI’s trajectory. But with frameworks like the Bletchley Declaration now taking shape, perhaps these rival powers can work together to prevent unchecked AI from spiralling out of control.
