If you were hoping the world would snap out of the AI craze by 2024, you will be sadly disappointed. Advances in hardware and software everywhere are opening the floodgates to dynamic applications of generative AI, suggesting that in 2023 we only began to scratch the surface.
This year, the year of the Dragon in the Chinese zodiac, will see the widespread and strategic integration of generative AI across all sectors. As risks are assessed and strategies take shape, businesses are ready to leverage gen AI not just as a new technology but as a core component of their operational and strategic frameworks. In short, CEOs and business leaders recognize both the potential and the necessity of generative AI and are now actively incorporating these technologies into their processes.
The result is a landscape where AI is not just an option but a key driver of innovation, efficiency and competitiveness. This transformative shift marks a move from tentative exploration to confident, informed application, making 2024 the year AI graduates from emerging trend to fundamental business practice.
Quantity and variety
A key aspect of this shift is a growing understanding of how gen AI expands both the quantity and the variety of applications, ideas and content.
We are only beginning to grasp the impact of the vast amount of content generated by AI. The sheer volume of it (since 2022, AI users have collectively created more than 15 billion images, a number that previously took humans 150 years to produce) has led some to view the post-2023 internet as something fundamentally different from what came before, much as the detonation of the first atomic bomb disrupted radiocarbon dating.
For businesses, whatever AI is doing to the internet at large, this expansion raises the bar for every player in every field. It signals that we have reached a critical juncture: not engaging with the technology is no longer just a missed opportunity but a competitive disadvantage.
Jagged frontier
In 2023, we learned that gen AI raises the bar not only for entire industries but for employee competency, too. In a YouGov survey last year, 90% of workers said AI made them more productive. One in four respondents use AI every day (73% of employees use AI at least once a week).
Another study found that properly trained employees working with gen AI completed 12% more tasks, finished them 25% faster and improved overall work quality by 40%, with lower-skilled employees improving the most. However, for tasks beyond the AI’s capabilities, employees using it were 19% less likely to arrive at the correct solution.
This duality has created what experts call a “jagged frontier” of AI capabilities. At one end of the spectrum, we witness the incredible power of AI: tasks that once seemed insurmountable for machines are now performed accurately and efficiently. At the other end are tasks where AI falters, struggling to match human intuition and adaptability. These are areas characterized by nuance, context and complex decision-making, where the machine’s binary logic still falls short (for now).
Cheaper AI
This year, gen AI projects will take hold and become the norm as companies grapple with, and overcome, the jagged frontier. Underlying this adoption is the falling cost of training foundational large language models (LLMs), thanks to advances in silicon optimization (training costs are estimated to halve every two years).
Even amid increased demand and global shortages, the AI chip market is expected to become more affordable in 2024 as alternatives to industry leaders like Nvidia come to market.
Similarly, new fine-tuning methods such as self-play fine-tuning (SPIN) leverage synthetic data to grow strong LLMs out of weaker ones without additional human-annotated data, doing more with less human input.
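To make the self-play intuition concrete, here is a toy numeric sketch. It is not the actual SPIN algorithm, which fine-tunes a full LLM against its own previous iteration; instead, a tiny three-option "policy" is nudged, round by round, to prefer a human-written response over its own generations. All names and numbers are invented for illustration.

```python
import numpy as np

# Toy sketch of the self-play idea behind SPIN (NOT the real algorithm).
# A tiny "policy" over three candidate responses is pushed, round by
# round, to prefer the human-written response over its own samples.

rng = np.random.default_rng(0)

HUMAN = 0  # index of the human-preferred response
logits = np.zeros(3)  # unnormalized preferences over three responses

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def spin_round(logits, lr=0.5, n_samples=64):
    """One self-play round: sample 'opponent' responses from the current
    policy, then push the human response up and the self-generated ones
    down (a crude stand-in for SPIN's discriminator-style objective)."""
    probs = softmax(logits)
    opponent = rng.choice(3, size=n_samples, p=probs)
    grad = np.zeros(3)
    for o in opponent:
        grad[HUMAN] += 1.0
        grad[o] -= 1.0
    return logits + lr * grad / n_samples

p_human = [softmax(logits)[HUMAN]]
for _ in range(10):
    logits = spin_round(logits)
    p_human.append(softmax(logits)[HUMAN])

print(f"P(human response): {p_human[0]:.2f} -> {p_human[-1]:.2f}")
```

The key point the sketch preserves is that no extra human labels are needed: the "negative" examples in each round are the model’s own previous outputs.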
Enter the “modelverse”
This cost reduction has enabled a wider range of companies to develop and implement their own LLMs. Although the implications are vast and diverse, it is clear that innovative LLM-based applications will proliferate in the coming years.
Meanwhile, 2024 will see a shift from primarily cloud-dependent models to locally executed AI. This evolution is driven in part by hardware advances such as Apple Silicon, but it also taps the underused potential of the raw processing power in everyday mobile devices.
On the business side, small language models (SLMs) will become more prevalent in large and medium-sized enterprises as they meet more specific, niche needs. As the name suggests, SLMs are lighter-weight than LLMs, making them ideal for real-time applications and for integration into a variety of platforms.
While LLMs are trained on vast amounts of diverse data, SLMs are trained on more domain-specific data, often sourced from within the enterprise, and tailored to specific industries and use cases, ensuring relevance and privacy at the same time.
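As a rough illustration of why domain-specific training pays off, the toy below scores an in-domain query under two hand-rolled unigram "models": one built from a made-up general corpus, one from a made-up enterprise corpus. Real SLMs are vastly more sophisticated, but the intuition carries over: in-domain text gets higher likelihood from an in-domain model.

```python
import math
from collections import Counter

# Toy illustration of why a small, domain-tuned model can beat a general
# one on in-domain text: it assigns higher likelihood to the vocabulary
# the business actually uses. Both corpora are invented.

general_corpus = "cats memes selfies weather sports music movies".split()
domain_corpus = ("invoice ledger invoice reconciliation audit "
                 "ledger payable invoice audit").split()

def unigram_logprob(corpus, sentence, alpha=1.0):
    """Log-likelihood of a sentence under an add-alpha unigram model."""
    counts = Counter(corpus)
    vocab = set(corpus) | set(sentence)
    total = sum(counts.values()) + alpha * len(vocab)
    return sum(math.log((counts[w] + alpha) / total) for w in sentence)

query = "invoice audit ledger".split()
general_score = unigram_logprob(general_corpus, query)
domain_score = unigram_logprob(domain_corpus, query)

# The domain model assigns the in-domain query a much higher score.
print(f"general: {general_score:.2f}  domain: {domain_score:.2f}")
```

The same logic, scaled up, is why an SLM trained on enterprise data can outperform a far larger general model on the enterprise’s own tasks.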
The move to large vision models (LVMs)
As we move through 2024, the focus will expand from LLMs to large vision models (LVMs), especially domain-specific ones that promise to revolutionize the processing of visual data.
While LLMs trained on internet text adapt well to proprietary documents, LVMs face a different challenge. Internet images consist largely of memes, cats and selfies, which look very different from the specialized images used in fields such as manufacturing and the life sciences. A general-purpose LVM trained on internet images may therefore struggle to identify the salient features of specialized domains.
LVMs tailored to specific imaging domains, such as semiconductor manufacturing or pathology, however, show significantly better results. Research has demonstrated that adapting an LVM to a specific domain using roughly 100,000 unlabeled images substantially reduces the need for labeled data while raising performance. Unlike general-purpose LVMs, these models are fitted to particular business domains and excel at computer vision tasks such as defect detection and object localization.
Elsewhere, we’ll start to see companies adopt large graphical models (LGMs). These models excel at handling the tabular data typically held in spreadsheets and databases, and they stand out for their ability to analyze time-series data, offering a new lens on the sequential data common in business contexts. This capability matters because the majority of a company’s data falls into exactly these categories, a challenge that existing AI models, including LLMs, have not yet adequately addressed.
Ethical dilemmas
Of course, such developments must be underpinned by rigorous ethical consideration. The emerging consensus is that we badly misjudged previous general-purpose technologies, those with wide-ranging applications that profoundly affect diverse areas of human activity and fundamentally reshape economies and societies. While tools such as smartphones and social media have brought tremendous benefits, they have also produced negative externalities that permeate every aspect of our lives, whether we are directly involved or not.
With gen AI, the imperative is to regulate so that past mistakes are not repeated. But regulation can fail, stifle innovation or simply take too long to bite, so some organizations will resist government-led rules.
Perhaps the best-known ethical quagmire AI surfaced last year is copyright. As the technology rapidly advances, pressing questions about intellectual property rights have emerged. The crux of the issue is whether, and how, AI-generated content, which often draws on existing human-created works for training, is subject to copyright law.
The tension exists because copyright law was created to prevent the unlawful use of other people’s IP. You may read articles and texts for inspiration, but you may not copy them. If a person read all of Shakespeare and produced their own version, that would count as inspiration; the challenge is that AI can consume an effectively unlimited amount of material, unconstrained by the limits humans face.
The copyright (and copy-wrong) debate is just one facet of a fluid situation. In 2024 we will see the outcome of landmark, precedent-setting cases such as NYT vs. OpenAI (though it is unclear whether this will actually reach court or is merely a bargaining tool for publishers) and witness how the media landscape adapts to the new reality of AI.
Deepfakes are rampant
From a geopolitical perspective, the AI conversation this year will inevitably be about how this technology intersects with the biggest election year in human history. More than half of the world’s population will head to the polls this year, with presidential, parliamentary and referendum elections scheduled in countries including the United States, Taiwan, India, Pakistan, South Africa and South Sudan.
Such interference was already occurring in Bangladesh ahead of its January vote, where some pro-government media outlets and influencers actively promoted disinformation created using low-cost AI tools.
In one example, a deepfake video (since deleted) showed an opposition figure appearing to withdraw support for the residents of Gaza, a potentially ruinous stance in a Muslim-majority country with strong sympathy for the Palestinian people.
The threat of AI imagery is not theoretical. Recent research has revealed that subtle changes designed to fool AI image recognition can also affect human perception. The study, published in Nature Communications, highlights the similarities between human and machine vision, but more importantly underscores the need for further research into how adversarial images affect both humans and AI systems. Its experiments showed that even minimal perturbations imperceptible to the human eye can bias human judgments as well as the decisions of AI models.
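To illustrate what an adversarial perturbation actually is, here is a minimal sketch against a toy linear classifier. The weights and example input are invented; the point is only to show how a small, targeted nudge in input space (an FGSM-style step) can flip a model’s decision.

```python
import numpy as np

# Minimal sketch of an adversarial perturbation against a toy linear
# classifier. Weights and the example point are invented; the point is
# to show how a tiny, targeted nudge flips the model's decision.

w = np.array([2.0, -1.0])  # hypothetical "trained" weights
b = 0.1

def predict(x):
    return 1 if x @ w + b > 0 else 0

x = np.array([0.2, 0.3])  # clean input, classified as class 1

# FGSM-style step: move against the sign of the score's gradient
# w.r.t. the input (for a linear model that gradient is just w).
eps = 0.15
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))  # the small nudge flips the label
```

In real attacks the same gradient trick is applied to deep image models, producing per-pixel changes far too small for a viewer to notice.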
While a global consensus is forming around watermarks (or content credentials) as a means of distinguishing real from synthetic content, the solution remains complicated. Will detection become universal? If so, how do we prevent abuse, such as labeling works as synthetic when they are not? And if such media can instead be made undetectable, a significant amount of power is ceded to whoever holds that capability. Once again we may ask ourselves: Who decides what is true?
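One way to picture the content-credential idea is as a verifiable signature over the media bytes. The sketch below uses an HMAC purely as a stand-in (real schemes such as C2PA use public-key signatures and embedded manifests); the key and the "image" bytes are invented, and only the verify step is shown.

```python
import hashlib
import hmac

# Rough sketch of the content-credential idea: a publisher signs the
# media bytes, and anyone holding the verification key can check that a
# file is unaltered and genuinely from that publisher. HMAC and the key
# below are illustrative stand-ins, not a real credential scheme.

SIGNING_KEY = b"hypothetical-publisher-key"

def sign(media: bytes) -> str:
    return hmac.new(SIGNING_KEY, media, hashlib.sha256).hexdigest()

def verify(media: bytes, credential: str) -> bool:
    return hmac.compare_digest(sign(media), credential)

original = b"\x89PNG...image bytes..."
credential = sign(original)

print(verify(original, credential))            # unaltered file: True
print(verify(original + b"edit", credential))  # any change: False
```

The hard policy questions in the paragraph above remain untouched by the mechanics: who holds the keys, who is obliged to sign, and what an absent credential is taken to mean.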
With public trust still at rock bottom around the world, 2024 is the year the world’s biggest elections intersect with the most defining technology of our time. For better or worse, it is the year AI gets applied in real, tangible ways. Hold on tight.
Elliot Leavy is the founder of ACQUAINTED, Europe’s first generative AI consultancy.
Welcome to the VentureBeat community!
DataDecisionMakers is a place where experts, including technologists who work with data, can share data-related insights and innovations.