Earlier this week, Channel Nine published a doctored image of Victorian MP Georgie Purcell, showing her in a tank top that exposed her midriff. In reality, she was wearing a dress.
Purcell criticized the network's image manipulation and accused it of sexism. Nine apologized for the edits and blamed Adobe Photoshop's artificial intelligence (AI) tools.
Generative AI has become increasingly prominent over the past six months, as widely used image editing and design tools like Photoshop and Canva have started integrating AI features into their programs.
But what exactly can these tools do? And can we blame them for manipulated images? As they become more widespread, understanding both their capabilities and their dangers becomes increasingly important.
Read more: AI to Z: All the terms you need to know to stay on top of the AI hype
What happened to Purcell’s photo?
Creating an AI-generated or AI-enhanced image typically involves a “prompt” that uses text commands to describe what you want to see or edit.
But late last year, Photoshop announced new generative features. Among them is Generative Expand, a tool that lets you add content to an image without writing a text prompt.
For example, to extend an image beyond its original boundaries, users simply extend the canvas and Photoshop will “imagine” the content that could extend beyond the frame. This functionality is powered by Firefly, Adobe’s proprietary generative AI tool.
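The mechanical half of that process, enlarging the canvas so new space appears around the original photo, can be sketched in a few lines of Python. This is a simplified illustration, not Adobe's implementation: here the new columns are padded with a flat fill value, which is exactly the blank space a generative model such as Firefly would instead fill with plausible "imagined" content.

```python
def expand_canvas(pixels, target_w, fill=0):
    """Widen an image (given as rows of pixel values) to target_w columns,
    centring the original and padding both sides with `fill`.

    The padded columns mark where a generative model would synthesise
    new content rather than leaving a flat fill.
    """
    src_w = len(pixels[0])
    pad_left = (target_w - src_w) // 2
    pad_right = target_w - src_w - pad_left
    return [[fill] * pad_left + row + [fill] * pad_right for row in pixels]

# A 4-pixel-wide "portrait" widened to 8 pixels for a wider aspect ratio,
# much like resizing a photo to suit a television graphic.
portrait = [[7, 7, 7, 7]] * 3
wide = expand_canvas(portrait, 8)
# wide[0] → [0, 0, 7, 7, 7, 7, 0, 0]
```

The point of the sketch: everything outside the original pixel values is invented. Whether that invention is a grey border or an AI-generated continuation of a dress, it was never part of the photograph.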
Nine resized the image to fit its television format, and in doing so the software generated new parts of the image that weren't there originally.
Whether the source material is tightly cropped matters a great deal here.
In the example above, the photo stops at Purcell's waist, and Photoshop extends the dress to fill the new space. But when Generative Expand is used on photos that are more tightly cropped or composited, Photoshop has to "imagine" more of the scene, and the results can vary.
Is it legal to alter a person's image in this way? Ultimately, that is for the courts to decide. It depends on the jurisdiction and, above all, on the risk of reputational damage. If a person can show that publication of the altered image has caused, or is likely to cause, "serious harm," they may be able to sue for defamation.
Read more: Australia plans to regulate 'high-risk' AI. Here's how to do it successfully
How else is generative AI being used?
Generative fill is just one way news organizations are using AI. Others use it to create or publish photorealistic images depicting current events, such as the ongoing conflict between Israel and Hamas.
Some people use it in place of stock photos or to create illustrations of topics that are difficult to visualize, such as AI itself.
Many adhere to organizational or industry-wide codes of conduct, such as the journalist code of ethics of Australia's Media, Entertainment and Arts Alliance. The code states that journalists must present pictures and sound that are "true and accurate" and disclose "any manipulation likely to mislead."
Read more: Should media communicate when they use AI to report news? What consumers need to know
Some news organizations may not use AI-generated or AI-enhanced images at all, or may use them only when reporting on the images themselves, such as when they go viral.
Newsrooms can also benefit from generative AI tools in other ways: for example, by uploading a spreadsheet to a service like ChatGPT-4 and receiving suggestions on how to visualize the data, or by using AI to create three-dimensional models that show how a process works or how events unfolded.
What safeguards should media take to ensure the responsible use of generative AI?
Over the past year, I’ve been interviewing photo editors and people in related roles about how they use generative AI and what policies they have in place to use it safely.
I learned that some news organizations prohibit their staff from using AI to generate content. Others allow only non-realistic illustrations, such as using AI to create Bitcoin symbols to illustrate stories about finance.
According to the editors I spoke with, news organizations want to be transparent with their audiences about the content they create and how they edit it.
Adobe launched the Content Authenticity Initiative in 2019, and it now includes major media organizations, image libraries and multimedia companies. The initiative promotes Content Credentials: a digital record of what equipment was used to create an image and what edits were made along the way.
This is touted as a way to increase transparency around content generated or augmented by AI. However, Content Credentials are not yet widely used, and in any case viewers should not outsource their critical thinking to third parties.
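At a technical level, Content Credentials are embedded in the image file itself as C2PA manifests, which in JPEGs live inside APP11 marker segments. The sketch below, a hypothetical helper rather than real verification code, only checks whether such a segment is present; genuine verification parses the manifest and validates its cryptographic signatures with dedicated C2PA tooling.

```python
def has_app11_segment(data: bytes) -> bool:
    """Scan a JPEG's marker segments for APP11 (0xFFEB), the segment type
    in which C2PA Content Credentials manifests are embedded.

    Simplified sketch: presence of APP11 does not prove a valid manifest,
    and real verification checks the signatures inside it.
    """
    if data[:2] != b"\xff\xd8":       # not a JPEG (missing SOI marker)
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:           # lost marker sync; stop scanning
            break
        marker = data[i + 1]
        if marker == 0xEB:            # APP11: may hold a C2PA manifest
            return True
        if marker == 0xDA:            # start of scan: no more metadata
            break
        # Segment length field counts itself plus the payload.
        length = int.from_bytes(data[i + 2:i + 4], "big")
        i += 2 + length
    return False
```

Because the credential travels with the file, stripping it is as easy as re-exporting the image, which is one reason presence-or-absence alone tells a viewer very little.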
Beyond transparency, the news editors I spoke to were also sensitive to AI's potential to displace human labor. Many outlets strive to use only AI generators trained on their own content. This comes as litigation continues in jurisdictions around the world over whether AI training data, and the outputs generated from it, violate copyright.
Read more: How the New York Times copyright lawsuit against OpenAI could change how AI and copyright work
Finally, news editors said they are aware that AI-generated output can be biased, given that the data AI models are trained on is not representative.
This year, the World Economic Forum named AI-powered misinformation and disinformation as the world's biggest short-term risk, ranking it ahead of dangers such as extreme weather, inflation and armed conflict.
![](https://images.theconversation.com/files/572608/original/file-20240131-25-bln54d.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip)
World Economic Forum Global Risk Perceptions Survey 2023-2024
Given this risk, and with elections taking place in the US and around the world this year, a healthy skepticism about what you see online is essential.
So is being thoughtful about where you get your news and information. That way, you will be better equipped to participate in our democracy and less likely to fall for scams.