Fears of AI hitting black market stir concerns of criminals evading government regulations: Expert
Dr. Harvey Castro said he is less concerned about AI developed by big corporations, which have safeguards in place, than about AI that can be created without safeguards and sold.

Artificial intelligence – specifically large language models like ChatGPT – could theoretically give criminals the information they need to cover their tracks before and after a crime, then erase that evidence, an expert warns.

Large language models, or LLMs, make up a segment of AI technology that uses algorithms to recognize, summarize, translate, predict and generate text and other content based on knowledge gained from massive datasets. ChatGPT is the best-known LLM, and its rapid, successful development has created unease among some experts and prompted a Senate hearing at which Sam Altman, CEO of ChatGPT maker OpenAI, testified and pushed for oversight.

Corporations like Google and Microsoft are developing AI at a fast pace. But when it comes to crime, that is not what scares Dr. Harvey Castro, a board-certified emergency medicine physician and national speaker on artificial intelligence who created his own LLM called "Sherlock."
May-22-2023, 06:00:29 GMT