AI automation ‘revolution’ built on plagiarism

If you’ve been following the news on artificial intelligence over the past few years, you may have noticed the alarmist views. From the service industry to education to transportation, no job, and seemingly no one, is safe.

On Nov. 23, 2022, Elon Musk congratulated the Tesla AI team on the release of the “Full Self-Driving Beta.”

Fully autonomous, AI-driven semi-trucks are said to be rolling out as early as 2023, according to several national publications.

The food service industry is also impacted.

The Matradee L is a robotic food server built for the restaurant industry to replace waiters. With a battery life of 15 hours, the ability to avoid obstacles using LIDAR and a carrying capacity of 80 pounds, the Matradee L is multi-functional. A fully robotic chef, the “ARM,” is also being tested in kitchens.

With the service industry under fire and automation becoming ever more likely, surely white-collar jobs are safe. Somewhat, but that is also changing. Jobs that involve crunching numbers and making statistics-based decisions can already be outsourced to programs that do the work far faster, at a fraction of the cost of a full staff, using something called cognitive computing.

“The goal of cognitive computing is to simulate human thought processes in a computerized model. Using self-learning algorithms that use data mining, pattern recognition and natural language processing, the computer can mimic the way the human brain works,” according to an article by Forbes in March 2016.  

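To make the idea of a “statistics-based decision” concrete, here is a minimal, hypothetical sketch in Python of the kind of pattern recognition the Forbes article alludes to. The loan-approval framing, the data, the threshold and the use of scikit-learn are all invented for illustration; the article does not describe any particular system.

```python
# Hypothetical illustration: a program "learns" a decision rule from past data,
# then applies it to new cases -- the kind of task once handled by an analyst.
from sklearn.linear_model import LogisticRegression

# Invented training data: [credit score, debt-to-income ratio] -> loan repaid (1) or not (0)
history = [
    [720, 0.20], [680, 0.35], [590, 0.60], [640, 0.45],
    [750, 0.15], [560, 0.70], [700, 0.30], [610, 0.55],
]
outcomes = [1, 1, 0, 1, 1, 0, 1, 0]

model = LogisticRegression(max_iter=1000).fit(history, outcomes)

# A new applicant is scored automatically; "approve" is a statistics-based decision.
applicant = [[665, 0.40]]
probability = model.predict_proba(applicant)[0][1]
print("approve" if probability > 0.5 else "deny", round(probability, 2))
```
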
The one thing our synthetic offspring can never duplicate is creativity. 

Right? 

Wrong. Writers, intellectuals, speakers, painters and other creatives could all very possibly become heavily assisted and eventually be made obsolete. Enter ChatGPT.

What is it? 

“ChatGPT is a language model developed by OpenAI, it’s a type of AI that is able to understand and generate human-like text… Essentially, it’s a computer program that can communicate with people in a way that feels natural and human-like,” according to its description in Google’s web store. In other words, the AI-driven chatbot is learning. This may sound like a harmless tool at first, but what it may become raises ethical questions.

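For readers curious how the chatbot is reached by software rather than through its web page, the sketch below shows one plausible way a developer might query a chat model using OpenAI’s Python library. The model name, the essay prompt and the 1.x-style client interface are assumptions for illustration, not details taken from the article.

```python
# Hypothetical sketch of querying a chat model programmatically.
# Assumes the `openai` Python package (1.x interface) and an API key
# stored in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model name, for illustration only
    messages=[
        {"role": "user", "content": "Write a five-paragraph essay on the ethics of AI art."},
    ],
)

# The model returns human-like text with no sources cited -- the root of the
# plagiarism concern discussed in this article.
print(response.choices[0].message.content)
```
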
The latest model has shaken Google’s confidence in its search-based information monopoly. To compete with ChatGPT, which is built on the third iteration of OpenAI’s GPT models, the company has announced its own model, first made available in an early beta as of Feb. 7.

The OpenAI company website describes it this way: “[It] interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.”

The AI chat model is able to learn and integrate information by accepting corrections. Students have already begun prompting ChatGPT to write entire essays, and because sources are not cited, this raises concerns about plagiarism on a mass scale. With enough prompting, ChatGPT can also mimic fiction, poetry and lyrics, and similar systems can do the same with images.

Artists have raised the alarm in recent months amid growing concerns about fraudulent work produced by AI. Image-generating software has been trained on the artwork of various creatives to produce images that look nearly identical to the original works it has sourced. Without permission, ‘new’ art has suddenly flooded the internet, bearing a striking resemblance to the artists it has ‘learned’ from.

Because AI uses information based on the work of others, it technically creates nothing. It instead copies and rearranges. In a world becoming algorithm-dependent, it appears we are all replaceable.