Tech & innovation
Gemini, the Google chatbot, is turning into a "reputational headache" for the company.
Last month, the chatbot made headlines after users complained it generated "ahistoric" images.
The complaints centered on Gemini's treatment of race and ethnicity. Users "flooded" social media with examples of what they called "historically inaccurate" images, including a much-discussed case in which "Gemini appeared to respond with a racially diverse group of images when prompted to generate a German soldier in 1943, when the Nazi Party was in power."
(Google has since suspended Gemini's ability to "generate images of people.")
Google apologized, saying that its efforts to "ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range."
The issues with Gemini point to larger concerns about AI. Google made an effort to ensure its chatbot had a "propensity to generate racially diverse images" to reflect its "diverse global user base."
And yet, a depiction of German soldiers in 1943 cannot be racially diverse and also historically accurate.
Experts say Google's Gemini issues highlight the "perilous balancing act of moderating artificial-intelligence programs that can generate novel text and images."
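Google has not detailed the mechanism behind the failure, but its apology suggests a context-blind rule: show "a range of people" everywhere, including "cases that should clearly not show a range." A minimal sketch of how unconditional prompt augmentation could produce exactly this failure mode; the function names, modifier text, and guard list are illustrative assumptions, not Gemini's implementation:

```python
# Hypothetical sketch: why context-blind diversity injection breaks
# historical prompts. Nothing here reflects Gemini's actual implementation;
# the modifier text and guard list are assumed for illustration.

DIVERSITY_MODIFIER = "racially diverse"  # assumed augmentation text

def augment_prompt(user_prompt: str) -> str:
    """Naively append the modifier to every image prompt."""
    return f"{user_prompt}, {DIVERSITY_MODIFIER}"

# The failure mode: augmentation is unconditional, so a historically
# specific prompt gets the same treatment as a generic one.
print(augment_prompt("a software engineer at a whiteboard"))
# -> "a software engineer at a whiteboard, racially diverse"  (intended case)
print(augment_prompt("a German soldier in 1943"))
# -> "a German soldier in 1943, racially diverse"  (ahistorical)

# A guarded version conditions on context -- the "cases that should
# clearly not show a range" Google's apology refers to.
HISTORICAL_MARKERS = ["1943", "nazi", "medieval"]  # crude illustrative check

def augment_prompt_guarded(user_prompt: str) -> str:
    if any(m in user_prompt.lower() for m in HISTORICAL_MARKERS):
        return user_prompt  # leave historically specific prompts untouched
    return augment_prompt(user_prompt)

print(augment_prompt_guarded("a German soldier in 1943"))
# -> "a German soldier in 1943"
```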
Read more via The Wall Street Journal, Associated Press
According to a new survey by Veritas, oversharing by workers using generative AI tools is “putting businesses at risk.”
Highlights from Veritas' survey of 11,500 office workers in Australia, Brazil, China, France, Germany, Japan, Singapore, South Korea, the United Arab Emirates, the United Kingdom and the United States:
Use of GenAI tools at work:
57% of respondents said they use "public generative AI tools at work" at least once a week, and 15% said they use such tools "a few times a month to a few times a year." (The remaining 28% said they don't use GenAI at work at all.)
Asked what they use GenAI tools for, the most common responses were "undertaking research/information gathering for analysis" (42%), "writing emails/messages/memos" (41%), and "improving my written work" (40%).
Sharing potentially sensitive information via GenAI:
31% of global office workers "admitted to inputting potentially sensitive information into generative AI tools, such as customer details or employee financials."
64% of respondents said they are "not sure" if they've shared potentially sensitive information when using GenAI tools.
Only 5% of respondents said "no" when asked if they've shared potentially sensitive information when using GenAI tools.
U.S. respondents were most likely (54%) to admit having "personally entered sensitive or confidential information into a generative AI tool, such as ChatGPT or Bard," or knowing a “colleague in the organization who has.”
On company AI compliance policies, or lack thereof:
12% of respondents said their employer has instituted a "mandatory policy" barring them from "using generative AI tools at work."
21% said their employer has issued "voluntary guidelines on the use of generative AI tools."
24% of respondents said their employer has issued "mandatory policies on the use of generative AI tools."
36% of respondents said they have "received nothing."
Read more via Veritas
AI adoption continues to increase, but office workers remain wary about the implications and risks around AI implementation, according to a newly published survey by Slack.
Highlights from Slack's survey of 10,200 desk workers in the U.S., Australia, France, Germany, Japan and the United Kingdom:
25% of desk workers are now turning to AI at work.
More than 25% of respondents said they are "concerned about the impact of AI implementation."
More than 40% of workers say they are "excited for AI to replace tedious tasks."
A third of workers have "neutral" feelings when it comes to AI.
Of workers who are using AI, almost 80% "believe using the technology is already improving their productivity."
43% of workers say they have "received no guidance from their bosses or organization on how to use AI tools at work."
Slack's advice to employers: “The data indicates that failing to provide guidance or instruction on AI may be inhibiting your employees from giving it a try. If you’re looking to ready your workforce for the AI revolution, you can start by providing guidelines for how AI can be used at work.”
Read more via Slack
If you thought your Slack and Microsoft Teams messages were private, think again. Many major employers now use AI to analyze messages on "Slack, Microsoft Teams, Zoom and other popular apps." Employee surveillance is a "rapidly expanding but niche piece of a larger AI market," according to experts.
AI firm Aware specializes in "analyzing employee messages" and counts major employers such as Starbucks, Delta Air Lines, T-Mobile and Walmart among its clients.
According to Aware, its "data repository contains messages that represent about 20 billion individual interactions across more than 3 million employees."
Aware's founder says the tech helps employers understand "the risk within their communications" by gauging "employee sentiment in real time, rather than depending on an annual or twice-per-year survey."
Aware anonymizes data so that clients can "see how employees of a certain age group or in a particular geography are responding to a new corporate policy or marketing campaign."
The analytics tool that "monitors employee sentiment and toxicity" does not "have the ability to flag individual employee names."
Aware also offers an eDiscovery tool that can flag individual employees "in the event of extreme threats or other risk behaviors that are predetermined by the client."
The company also offers "dozens of AI models" that are "built to read text and process images." Those models can "also identify bullying, harassment, discrimination, noncompliance, pornography, nudity and other behaviors."
The company's revenue "has jumped 150% per year on average over the past five years," suggesting employers are increasingly interested in utilizing AI tools to monitor employees.
More from Aware's founder: “It’s always tracking real-time employee sentiment, and it’s always tracking real-time toxicity. If you were a bank using Aware and the sentiment of the workforce spiked in the last 20 minutes, it’s because they’re talking about something positively, collectively. The technology would be able to tell them whatever it was.”
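As described above, the design keeps two paths separate: anonymized, cohort-level analytics on one side, and an eDiscovery route that can name individuals only for client-predetermined risks on the other. A minimal sketch of that split; the data model, sentiment scores, and flag names are illustrative assumptions, not Aware's actual system:

```python
# Hypothetical sketch of the split described above: anonymized cohort
# analytics vs. an eDiscovery path that names individuals only for
# client-predetermined risks. All names, fields and flags are assumed.
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Message:
    author_id: str    # pseudonymous ID; never exposed by analytics
    cohort: str       # e.g. "age 25-34 / EMEA"
    sentiment: float  # -1.0 (negative) .. 1.0 (positive), from some model
    risk_flags: list = field(default_factory=list)  # e.g. ["extreme_threat"]

def cohort_sentiment(messages):
    """Analytics path: average sentiment per cohort, no individual IDs."""
    totals, counts = defaultdict(float), defaultdict(int)
    for m in messages:
        totals[m.cohort] += m.sentiment
        counts[m.cohort] += 1
    return {c: totals[c] / counts[c] for c in totals}

def ediscovery_hits(messages, flagged=frozenset({"extreme_threat"})):
    """eDiscovery path: surface individual IDs only for predefined risks."""
    return [m.author_id for m in messages if flagged.intersection(m.risk_flags)]

msgs = [
    Message("u1", "age 25-34 / US", 0.6),
    Message("u2", "age 25-34 / US", -0.2),
    Message("u3", "age 45-54 / EMEA", -0.8, ["extreme_threat"]),
]
print(cohort_sentiment(msgs))  # cohort averages only: no names attached
print(ediscovery_hits(msgs))   # ['u3'] -- an individual surfaced by exception
```

The point of the split is that the aggregate function never touches author identities, while the exception path surfaces them only when a predefined flag fires.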
Read more via CNBC, Fox Business
ADP announced plans to integrate "generative AI capabilities" as part of an effort to “bolster HR processes.”
ADP's planned integration of generative AI will be targeted at improving efficiency and "upgrading HR to more advisory roles."
The integration of GenAI will "automate repetitive tasks and expedite processes."
HR professionals will see their roles elevated by "reducing the time they spend responding to basic problems and hunting for information," according to ADP.
ADP's new "chatbot assistant" is being offered only to a select group of customers for now.
The new chatbot is able to do things like check for "anomalies" (example: "employees forgetting to clock out").
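ADP hasn't said how the checks are implemented; a minimal sketch of the kind of anomaly test the "forgetting to clock out" example implies, with assumed record fields and an assumed 16-hour plausibility cutoff:

```python
# Hypothetical sketch of a "forgot to clock out" anomaly check like the
# one the article mentions; record fields and the cutoff are assumed.
from datetime import datetime, timedelta

def missing_clockouts(shifts, max_shift=timedelta(hours=16)):
    """Flag shifts with no clock-out, or implausibly long ones."""
    anomalies = []
    for s in shifts:
        if s.get("clock_out") is None:
            anomalies.append((s["employee_id"], "no clock-out recorded"))
        elif s["clock_out"] - s["clock_in"] > max_shift:
            anomalies.append((s["employee_id"], "shift exceeds plausible length"))
    return anomalies

shifts = [
    {"employee_id": "E100", "clock_in": datetime(2024, 3, 4, 9),
     "clock_out": datetime(2024, 3, 4, 17)},
    {"employee_id": "E101", "clock_in": datetime(2024, 3, 4, 9),
     "clock_out": None},
]
print(missing_clockouts(shifts))
# -> [('E101', 'no clock-out recorded')]
```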
Read more via TechTarget