UK government urged to adopt more positive outlook for LLMs to avoid missing ‘AI goldrush’

By 5gantennas.org · February 2, 2024


[Image: Big Ben, Westminster and the House of Lords at sunset, London, England. Image credits: Peterscode / Getty Images]

The U.K. government is taking too “narrow” a view of AI safety and risks falling behind in the AI gold rush, according to a report released today.

The report, published by the parliamentary House of Lords’ Communications and Digital Committee, follows a months-long evidence-gathering effort involving input from a wide gamut of stakeholders, including big tech companies, academia, venture capitalists, media and government.

Among the key findings of the report was that the government should refocus its efforts on the more near-term security and societal risks posed by large language models (LLMs), such as copyright infringement and misinformation, rather than becoming too concerned about apocalyptic scenarios and hypothetical existential threats, which it says are “exaggerated.”

“The rapid development of AI large language models is likely to have a profound effect on society, comparable to the introduction of the internet — that makes it vital for the Government to get its approach right and not miss out on opportunities, particularly not if this is out of caution for far-off and improbable risks,” the Communications and Digital Committee’s chairman Baroness Stowell said in a statement. “We need to address risks in order to be able to take advantage of the opportunities — but we need to be proportionate and practical. We must avoid the U.K. missing out on a potential AI goldrush.”

The findings come as much of the world grapples with a burgeoning AI onslaught that looks set to reshape industry and society, with OpenAI’s ChatGPT serving as the poster child of a movement that catapulted LLMs into the public consciousness over the past year. This hype has created excitement and fear in equal doses, and sparked all manner of debates around AI governance — President Biden recently issued an executive order with a view toward setting standards for AI safety and security, while the U.K. is striving to position itself at the forefront of AI governance through initiatives such as the AI Safety Summit, which gathered some of the world’s political and corporate leaders into the same room at Bletchley Park back in November.

At the same time, a divide is emerging over the extent to which we should regulate this new technology.

Regulatory capture

Meta’s chief AI scientist Yann LeCun recently joined dozens of signatories in an open letter calling for more openness in AI development, an effort designed to counter a growing push by tech firms such as OpenAI and Google to secure “regulatory capture of the AI industry” by lobbying against open AI R&D.

“History shows us that quickly rushing towards the wrong kind of regulation can lead to concentrations of power in ways that hurt competition and innovation,” the letter read. “Open models can inform an open debate and improve policy making. If our objectives are safety, security and accountability, then openness and transparency are essential ingredients to get us there.”

And it’s this tension that serves as a core driving force behind the House of Lords’ “Large language models and generative AI” report, which calls for the government to make market competition an “explicit AI policy objective” to guard against regulatory capture from some of the current incumbents such as OpenAI and Google.

Indeed, the issue of “closed” versus “open” rears its head across several pages in the report, with the conclusion that “competition dynamics” will not only be pivotal to who ends up leading the AI / LLM market, but also what kind of regulatory oversight ultimately works. The report notes:

At its heart, this involves a contest between those who operate ‘closed’ ecosystems, and those who make more of the underlying technology openly accessible. 

In its findings, the committee said that it examined whether the government should adopt an explicit position on this matter, vis-à-vis favouring an open or closed approach, concluding that “a nuanced and iterative approach will be essential.” But the evidence it gathered was somewhat colored by the stakeholders’ respective interests, it said.

For instance, while Microsoft and Google noted they were generally supportive of “open access” technologies, they believed that the security risks associated with openly available LLMs were too significant and thus required more guardrails. In Microsoft’s written evidence, for example, the company said that “not all actors are well-intentioned or well-equipped to address the challenges that highly capable [large language] models present.”

The company noted:

Some actors will use AI as a weapon, not a tool, and others will underestimate the safety challenges that lie ahead. Important work is needed now to use AI to protect democracy and fundamental rights, provide broad access to the AI skills that will promote inclusive growth, and use the power of AI to advance the planet’s sustainability needs.

Regulatory frameworks will need to guard against the intentional misuse of capable models to inflict harm, for example by attempting to identify and exploit cyber vulnerabilities at scale, or develop biohazardous materials, as well as the risks of harm by accident, for example if AI is used to manage large scale critical infrastructure without appropriate guardrails.

But on the flip side, open LLMs are more accessible and serve as a “virtuous circle” that allows more people to tinker with things and inspect what’s going on under the hood. Irene Solaiman, global policy director at AI platform Hugging Face, said in her evidence session that opening access to things like training data and publishing technical papers is a vital part of the risk-assessing process.

What is really important in openness is disclosure. We have been working hard at Hugging Face on levels of transparency […] to allow researchers, consumers and regulators in a very consumable fashion to understand the different components that are being released with this system. One of the difficult things about release is that processes are not often published, so deployers have almost full control over the release method along that gradient of options, and we do not have insight into the pre-deployment considerations.
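The disclosure artifacts Solaiman describes are concrete and inspectable: a model published openly on the Hugging Face Hub ships with a model card and a file listing. As a purely illustrative sketch (not drawn from the report), here is how a researcher or regulator might programmatically examine what a given release actually discloses, using the huggingface_hub client library, with the public “gpt2” repository as an arbitrary example:

```python
# Illustrative only: inspecting the disclosure artifacts that accompany
# an openly released model on the Hugging Face Hub.
# Requires `pip install huggingface_hub`; "gpt2" is an arbitrary public
# example, not a model discussed in the report.
from huggingface_hub import ModelCard, list_repo_files

repo_id = "gpt2"

# The model card is the release's main disclosure document: intended use,
# training data description, evaluation results, known limitations.
card = ModelCard.load(repo_id)
print(card.data.to_dict())  # structured metadata: license, tags, datasets

# The file listing shows which components were actually released
# (weights, tokenizer, configs), i.e. the "gradient" of openness in practice.
for path in list_repo_files(repo_id):
    print(path)
```

The gap Solaiman points to is precisely what such a listing cannot show: the pre-deployment decisions and release processes that were never published.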

Ian Hogarth, chair of the U.K. government’s recently launched AI Safety Institute, also noted that we’re in a position today where the frontier of LLMs and generative AI is being defined by private companies that are effectively “marking their own homework” as it pertains to assessing risk. Hogarth said:

That presents a couple of quite structural problems. The first is that, when it comes to assessing the safety of these systems, we do not want to be in a position where we are relying on companies marking their own homework. As an example, when [OpenAI’s LLM] GPT-4 was released, the team behind it made a really earnest effort to assess the safety of their system and released something called the GPT-4 system card. Essentially, this was a document that summarised the safety testing that they had done and why they felt it was appropriate to release it to the public. When DeepMind released AlphaFold, its protein-folding model, it did a similar piece of work, where it tried to assess the potential dual use applications of this technology and where the risk was.

You have had this slightly strange dynamic where the frontier has been driven by private sector organisations, and the leaders of these organisations are making an earnest attempt to mark their own homework, but that is not a tenable situation moving forward, given the power of this technology and how consequential it could be.

Regulatory capture, whether avoiding it or striving to attain it, lies at the heart of many of these issues. The very same companies that are building leading LLM tools and technologies are also calling for regulation, which many argue is really about locking out those seeking to play catch-up. Thus, the report acknowledges concerns around industry lobbying for regulations, and around government officials becoming too reliant on the technical know-how of a “narrow pool of private sector expertise” for informing policy and standards.

As such, the committee recommends “enhanced governance measures in DSIT [Department for Science, Innovation and Technology] and regulators to mitigate the risks of inadvertent regulatory capture and groupthink.”

This, according to the report, should:

…apply to internal policy work, industry engagements and decisions to commission external advice. Options include metrics to evaluate the impact of new policies and standards on competition; embedding red teaming, systematic challenge and external critique in policy processes; more training for officials to improve technical know-how; and ensuring proposals for technical standards or benchmarks are published for consultation.

Narrow focus

However, this all leads to one of the main recurring thrusts of the report’s recommendations: that the AI safety debate has become dominated by a narrowly focused narrative centered on catastrophic risk, particularly one advanced by “those who developed such models in the first place.”

Indeed, on the one hand the report calls for mandatory safety tests for “high-risk, high-impact models” — tests that go beyond voluntary commitments from a few companies. But at the same time, it says that concerns about existential risk are exaggerated and this hyperbole merely serves to distract from more pressing issues that LLMs are enabling today.

“It is almost certain existential risks will not manifest within three years, and highly likely not within the next decade,” the report concluded. “As our understanding of this technology grows and responsible development increases, we hope concerns about existential risk will decline. The Government retains a duty to monitor all eventualities — but this must not distract it from capitalising on opportunities and addressing more limited immediate risks.”

Capturing these “opportunities,” the report acknowledges, will require addressing some more immediate risks. These include the ease with which mis- and disinformation can now be created and spread, both through text-based media and via audio and visual “deepfakes” that “even experts find increasingly difficult to identify,” the report found. This is particularly pertinent as the U.K. approaches a general election.

“The National Cyber Security Centre assesses that large language models will ‘almost certainly be used to generate fabricated content; that hyper‐realistic bots will make the spread of disinformation easier; and that deepfake campaigns are likely to become more advanced in the run up to the next nationwide vote, scheduled to take place by January 2025’,” it said.

Moreover, the committee was unequivocal in its position on the use of copyrighted material to train LLMs, something that OpenAI and other big tech companies have been doing while arguing that training AI is a fair-use scenario. This is why artists and media companies such as The New York Times are pursuing legal cases against AI companies that use web content for training LLMs.

“One area of AI disruption that can and should be tackled promptly is the use of copyrighted material to train LLMs,” the report notes. “LLMs rely on ingesting massive datasets to work properly, but that does not mean they should be able to use any material they can find without permission or paying rightsholders for the privilege. This is an issue the Government can get a grip of quickly, and it should do so.”
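The report does not prescribe a permission mechanism, but one minimal baseline already exists on the web: publishers can signal crawl preferences in robots.txt, and OpenAI documents “GPTBot” as the user-agent of its training crawler. A hedged, illustrative sketch of that check, using only Python’s standard library (example.com stands in for any publisher’s site):

```python
# A minimal sketch of the permission check a training-data crawler can
# perform before ingesting a page: consulting the site's robots.txt.
# The report itself prescribes no mechanism; this is purely illustrative.
import urllib.robotparser

robots = urllib.robotparser.RobotFileParser()
robots.set_url("https://example.com/robots.txt")
robots.read()

# "GPTBot" is the user-agent OpenAI documents for its training crawler;
# publishers can disallow it to opt their content out of training crawls.
if robots.can_fetch("GPTBot", "https://example.com/some-article"):
    print("crawl permitted by robots.txt")
else:
    print("publisher has opted this path out")
```

Of course, robots.txt is advisory rather than a licensing regime, which is precisely the gap the committee says the government “can get a grip of quickly.”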

It is worth stressing that the Lords’ Communications and Digital Committee doesn’t completely rule out doomsday scenarios. In fact, the report recommends that the government’s AI Safety Institute should carry out and publish an “assessment of engineering pathways to catastrophic risk and warning indicators as an immediate priority.”

Moreover, the report notes that there is a “credible security risk” from the snowballing availability of powerful AI models, which can easily be abused or malfunction. But despite these acknowledgements, the committee reckons that an outright ban on such models is not the answer, both because, on the balance of probabilities, the worst-case scenarios won’t come to fruition, and because of the sheer difficulty of enforcing such a ban. And this is where it sees the government’s AI Safety Institute coming into play, with recommendations that it develop “new ways” to identify and track models once they are deployed in real-world scenarios.

“Banning them entirely would be disproportionate and likely ineffective,” the report noted. “But a concerted effort is needed to monitor and mitigate the cumulative impacts.”

So for the most part, the report doesn’t say that LLMs and the broader AI movement don’t come with real risks. But it says that the government needs to “rebalance” its strategy with less focus on “sci-fi end-of-world scenarios” and more focus on what benefits it might bring.

“The Government’s focus has skewed too far towards a narrow view of AI safety,” the report says. “It must rebalance, or else it will fail to take advantage of the opportunities from LLMs, fall behind international competitors and become strategically dependent on overseas tech firms for a critical technology.”






