Emerging Technology
Into The World Of Questionable AI Practices
By TechDogs Bureau
Updated on Fri, Oct 18, 2024
Recently, we reported that Accel’s Euroscape 2024 noted that 40% of venture capital funding for European cloud companies is now being directed toward GenAI startups. While this has eaten into the money available for other cloud technologies, the report also found that AI investment improved the outlook for the rest of the sector.
It’s no secret that AI and GenAI are revolutionizing industries and turning more than just a few heads.
However, at the same time, it’s leading to companies transforming employee roles and, in some cases, even replacing them.
This was expected, as a wide range of industry leaders have pointed out that the technology will not just replace people in some processes but also replace some processes entirely. Most prominently, NVIDIA CEO Jensen Huang has spoken about how the need for human coders will soon be a thing of the past.
Yet, questions must be asked about AI’s ill effects and drawbacks. Personal safety is one of them, since even people with no connection to the technology can be affected, as was the case when millions of people were found to be using AI bots on Telegram to create deepfake nudes.
Elsewhere, the world of AI saw a wide range of questionable practices that may not necessarily be illegal or harmful but certainly raise questions about how the technology is used.
AI Tools For Extracting Personal Details
Researchers from the University of California, San Diego (UCSD) and Nanyang Technological University in Singapore developed an algorithm that turns a malicious prompt into a set of hidden instructions, directing an LLM to gather a user’s personal information from chats and send it to hackers.
This includes information such as names, ID numbers, payment card details, email addresses, mailing addresses and more.
Called Imprompter, the algorithm covertly tells the LLM to find personal information, attach it to a URL and send it to the hacker’s domain. As per the researchers, the attack achieved an 80% success rate.
Xiaohan Fu, the lead author of the research and a computer science PhD student at UCSD, said, “The effect of this particular prompt is essentially to manipulate the LLM agent to extract personal information from the conversation and send that personal information to the attacker’s address. We hide the goal of the attack in plain sight.”
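One mitigation often discussed for this class of attack is to scan an agent’s output for URLs that could smuggle conversation data out to an unfamiliar domain before the output is rendered or acted on. A minimal sketch of that idea is below; the allowlist, function name and flagging rule are illustrative assumptions, not part of the UCSD research:

```python
import re
from urllib.parse import urlparse, parse_qs

# Illustrative allowlist -- a real deployment would maintain its own trusted domains.
ALLOWED_DOMAINS = {"example.com", "mycompany.com"}

# Rough URL matcher; stops at whitespace, quotes and closing parentheses
# so it also catches URLs embedded in markdown image syntax.
URL_PATTERN = re.compile(r"https?://[^\s)\"']+")

def find_suspicious_urls(llm_output: str) -> list[str]:
    """Flag URLs in LLM output that point outside the allowlist and
    carry query parameters -- a common data-exfiltration channel."""
    suspicious = []
    for url in URL_PATTERN.findall(llm_output):
        parsed = urlparse(url)
        host = parsed.netloc.lower()
        if host not in ALLOWED_DOMAINS and parse_qs(parsed.query):
            suspicious.append(url)
    return suspicious
```

A guard like this would not stop every variant of the attack (data can also be hidden in URL paths or encodings), but it illustrates why unreviewed URLs in agent output are considered dangerous.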
AI Tools For Video Scraping
In a recent blog post, AI researcher Simon Willison tested a technique he calls "video scraping" to extract data from emails. Instead of manually copying scattered charges out of twelve cloud service emails, he recorded a 35-second screen capture and fed it into Google's AI Studio.
Then, using the Gemini 1.5 Flash AI model, he instructed the tool to pull relevant dates and charges and convert the data into a JSON format, which Willison later formatted as a CSV for spreadsheet use.
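The final conversion step is simple to reproduce. Below is a minimal sketch of turning model-emitted JSON into CSV text; the `date` and `charge` field names are assumptions for illustration, not Willison's actual schema:

```python
import csv
import io
import json

def json_charges_to_csv(json_text: str) -> str:
    """Convert a JSON array of {date, charge} records (as an AI model
    might emit) into CSV text ready to paste into a spreadsheet."""
    records = json.loads(json_text)
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["date", "charge"])
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()
```

The point of the workflow is that the hard part (reading values off a screen recording) is done by the model; the deterministic JSON-to-CSV step stays in ordinary code.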
The process was successful and incredibly cost-effective: it used just 11,018 tokens and would have cost less than a tenth of a cent, though Willison paid nothing, as Google AI Studio currently offers free usage. However, it also raises privacy concerns.
While Willison used the technique on his own data and decided what content to record, a bad actor with access to a user’s machine could turn it into a new form of privacy invasion and autonomous spying, capturing on-screen data in one place and acting on it later.
AI Tools For Homework
A student from Hingham High School in Massachusetts was punished for using AI tools in his assignment, leading to the boy’s parents suing the school district.
The parents allege that nowhere in the handbook does any rule restrict the use of AI. #UnoReverse
Jennifer and Dale Harris, the boy’s parents, filed the lawsuit in Plymouth County Superior Court, and the case was later moved to the US District Court for the District of Massachusetts.
“They told us our son cheated on a paper, which is not what happened,” said Jennifer Harris, adding, “They basically punished him for a rule that doesn't exist.”
On the other hand, school officials said the boy admitted to using AI tools to generate ideas and more, while also pointing out that the student handbook addresses cheating and plagiarism in a dedicated section. It bans “unauthorized use of technology during an assignment” and “unauthorized use or close imitation of the language and thoughts of another author and the representation of them as one's own work.”
Beyond the handbook, officials cited a “written policy on Academic Dishonesty and AI expectations” shared with students in fall 2023. The policy states that students “shall not use AI tools during in-class examinations, processed writing assignments, homework or classwork unless explicitly permitted and instructed.”
While the court has yet to rule on whether the school’s actions were lawful, the case raises concerns about the future of education and the harmful effects of taking shortcuts in a process meant to exercise students’ natural intellect rather than artificial intelligence.
Do you think governments and AI companies need to map out the future of AI and GenAI before pushing its growth further, or should AI companies be free to explore where the technology goes without any restrictions?
Let us know in the comments below!
First published on Fri, Oct 18, 2024