Google Exposes AI Abuse and Other Fraudulent Activity

Google Fights Online Fraud and Requires Disclosure of AI Use

Google is pursuing formal legal action against scammers who used the buzz around Google's new AI chatbot, Bard, to spread malware. According to Google, the scammers circulated links promising a "free download" of Bard (which, incidentally, requires no download at all). Consumers who clicked the links received not Bard, but malware capable of compromising their social media accounts as well as other sensitive personal information.

Google's Fight Against Online Fraud

Since April, Google has filed hundreds of takedown requests with social media platforms. Google subsequently filed a formal lawsuit against the group of fraudsters, seeking to stop them from continuing to create domains used to spread malware and, ultimately, to disable the fraudulent domains.

In a separate, more recent suit, Google has targeted bad actors who submit false takedown notices claiming copyright infringement (where none exists) as a means of eliminating competition. Google claims that these fraudulent tactics have resulted in the removal of more than 100,000 businesses' websites, translating into millions of dollars and thousands of hours of lost employee time. The Digital Millennium Copyright Act (DMCA) was created to protect both the public and copyright owners from online infringement and forms the basis for Google's takedown policy, which is intended to address genuine infringement. Google's recent lawsuit is meant to stop weaponization of that policy and to ensure that online content is restricted only in a manner consistent with current copyright law.

Google to Require AI Use Disclosure

Also of interest, particularly with regard to AI and ethics, Google-owned YouTube has announced that in the coming months creators will be required to disclose when they use AI and other digital tools to create altered or synthetic videos. Failure to disclose under the new policy can result in removal or suspension of the account, along with the loss of any ability to profit from the content. Additionally, videos in which creators use AI to simulate an identifiable person may be removed under YouTube's privacy tools.

These developments are important not only in the wake of Google's release of the Bard chatbot, but also in light of Google's new Search Generative Experience (SGE), which uses generative AI, a highly advanced form of AI capable of creating content. As Google recently announced, the goal of SGE is to transform searching from a passive activity into a more reactive and interactive experience. To that end, SGE uses generative AI to create concise overviews of search topics that enhance search results, presenting not only static text but also images and opening up a wide range of possibilities for engaging with Google search results.

While the sky is the limit for potential uses of AI, the same may be true for potential misuses and abuses of the technology. The technical, intellectual property, and legal communities will certainly evolve considerably in this area in the coming years. As these advances continue, businesses would be wise to consider safeguards against abuse and liability within their own sectors.

For more information about technology law, see our Technology and Data Law Services and Industry Focused Legal Solutions pages.

This article has been provided for informational purposes only and is not intended and should not be construed to constitute legal advice. Please consult your attorneys in connection with any fact-specific situation under federal law and the applicable state or local laws that may impose additional obligations on you and your company. © 2024 Klemchuk PLLC