Ryan N. Phelan is a registered patent attorney who counsels and works with clients in all areas of intellectual property (IP), with a focus on patents. Clients enjoy Ryan's business-focused approach to IP. With an MBA from Northwestern's Kellogg School of Management, Ryan works with clients to achieve their business objectives, including developing and protecting their innovations and businesses with IP.
When Peter George saw news of the racially motivated mass shooting at the Tops supermarket in Buffalo last weekend, he had a thought he's often had after such tragedies. "Could our system have stopped it?" he said. "I think we could democratize security so that someone planning on hurting people can't easily go into an unsuspecting place." George is chief executive of Evolv Technology, which makes an AI-based screening system meant to flag weapons, "democratizing security" so that weapons can be kept out of public places without elaborate checkpoints. As U.S. gun violence like the kind seen in Buffalo increases -- firearms sales reached record heights in 2020 and 2021, while the Gun Violence Archive reports 198 mass shootings since January -- Evolv has become increasingly popular, used at schools, stadiums, stores and other gathering spots. To its supporters, the system is a more effective and less obtrusive alternative to the age-old metal detector, making events both safer and more pleasant to attend. To its critics, however, Evolv's effectiveness has hardly been proved, and it opens up a Pandora's box of ethical issues in which convenience is paid for with RoboCop surveillance. "The idea of a kinder, gentler metal detector is a nice solution in theory to these terrible shootings," said Jay Stanley, senior policy analyst for the American Civil Liberties Union's project on speech, privacy, and technology. "But do we really want to create more ways for security to invade our privacy?"
Depending on which Terminator movies you watch, the evil artificial intelligence Skynet has either already taken over humanity or is about to do so. But it's not just science fiction writers who are worried about the dangers of uncontrolled AI. In a 2019 survey by Emerj, an AI research and advisory company, 14% of AI researchers said that AI was an "existential threat" to humanity. Even if the AI apocalypse doesn't come to pass, shortchanging AI ethics poses big risks to society -- and to the enterprises that deploy those AI systems. Central to these risks are factors inherent to the technology -- for example, how a particular AI system arrives at a given conclusion, known as its "explainability" -- and those endemic to an enterprise's use of AI, including reliance on biased data sets or deploying AI without adequate governance in place.
In the second of a series of blogs from our global offices, we provide an overview of key trends in artificial intelligence in France. What is France's strategy for artificial intelligence? The French president, Emmanuel Macron, announced in March 2018 his ambition for France to become a global leader in the artificial intelligence (AI) ecosystem. The first phase of the National Programme included an initial investment of €1.5 billion in the creation of a network of interdisciplinary institutes dedicated to artificial intelligence (the "3IA" institutes) and the financing of multiple AI projects overseen by Bpifrance. The second phase will provide €2 billion of private and public funding to attract and train new talent.
US AI guidelines are everything the EU's AI Act is not: voluntary, non-prescriptive and focused on changing the culture of tech companies. As the EU's Artificial Intelligence (AI) Act fights its way through multiple rounds of revisions at the hands of MEPs, in the US a little-known organisation is quietly working up its own guidelines to help channel the development of such a promising and yet perilous technology. In March, the Maryland-based National Institute of Standards and Technology (NIST) released a first draft of its AI Risk Management Framework, which sets out a very different vision from the EU. The work is being led by Elham Tabassi, a computer vision researcher who joined the organisation just over 20 years ago. Then, "We built [AI] systems just because we could," she said.
They all had some effect, surely. Could I have done it without them? Hang on, what *is* the "it" that I wouldn't have done? Real life usually lacks counterfactuals. I sense this topic could add some spice to the discussions of those who have been asking about the role of psychoactive substances in art since time immemorial, though the AI component adds nothing fundamentally new.
Autonomous vehicle startup Gatik says it will start using its self-driving box trucks in Kansas as it expands to more territories. Governor Laura Kelly last week signed a bill that makes it legal for self-driving vehicles to run on public roads under certain circumstances. Following a similar effort in Arkansas, Gatik says it and its partner Walmart worked with legislators and stakeholders to "develop and propose legislation that prioritizes the safe and structured introduction of autonomous vehicles in the state." Before Gatik's trucks hit Kansas roads, the company says it will provide training to first responders and law enforcement. Gatik claims that, since it started commercial operations three years ago, it has maintained a clean safety record in Arkansas, Texas, Louisiana and Ontario, Canada.
More than a million Illinois residents will receive a $397 settlement payment from Facebook this week, thanks to a legal battle over the platform's since-retired photo-tagging system that used facial recognition. It's been nearly seven years since the class-action lawsuit was first filed in 2015, accusing Facebook of breaking a state privacy law that forbids companies from collecting biometric data without informing users. The platform has since faced broad, global criticism for its use of facial recognition tech, and last year Meta halted the practice completely on Facebook and Instagram. But as Vox notes, the company has made no promises to avoid facial recognition in future products. Even though it was first filed in Illinois, the class-action lawsuit eventually wound up on Facebook's home turf -- at the U.S. District Court for the Northern District of California.
A group of Democratic lawmakers led by Senator Ron Wyden of Oregon is calling on the Federal Trade Commission to investigate ID.me, the controversial identification company best known for its work with the Internal Revenue Service. In a letter addressed to FTC Chair Lina Khan, the group suggests the firm misled the American public about the capabilities of its facial recognition technology. Specifically, lawmakers point to a statement ID.me made at the start of the year. After CEO Blake Hall said the company did not use one-to-many facial recognition, an approach that involves matching images against those in a database, ID.me backtracked on those claims. It clarified it uses a "specific" one-to-many check during user enrollment to prevent identity theft.
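The distinction at issue above can be made concrete. In one-to-one verification, a face embedding is compared only against the single record for the identity a user claims; in one-to-many identification, it is searched against every enrolled record. Below is a minimal illustrative sketch of that difference using cosine similarity over toy embedding vectors -- the function names, vectors, and threshold are invented for illustration and do not describe ID.me's actual system.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def one_to_one(query, claimed, threshold=0.9):
    # Verification: does the query match the one claimed identity?
    return cosine(query, claimed) >= threshold

def one_to_many(query, database, threshold=0.9):
    # Identification: search every enrolled identity for the best
    # match above the threshold; return None if nothing qualifies.
    best_id, best_score = None, threshold
    for identity, embedding in database.items():
        score = cosine(query, embedding)
        if score >= best_score:
            best_id, best_score = identity, score
    return best_id

# Toy "enrollment database" of two identities.
db = {"alice": [0.9, 0.1, 0.4], "bob": [0.1, 0.8, 0.6]}
query = [0.88, 0.12, 0.42]

print(one_to_one(query, db["alice"]))  # compares against one record
print(one_to_many(query, db))          # searches across all records
```

The privacy stakes differ accordingly: verification touches only the record the user points to, while identification requires scanning the entire database, which is why the two claims in the lawmakers' letter are not interchangeable.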
As companies increasingly involve AI in their hiring processes, advocates, lawyers, and researchers have continued to sound the alarm. Algorithms have been found to automatically assign job candidates different scores based on arbitrary criteria like whether they wear glasses or a headscarf or have a bookshelf in the background. Hiring algorithms can penalize applicants for having a Black-sounding name, mentioning a women's college, and even submitting their résumé in certain file types. They can disadvantage people who stutter or have a physical disability that limits their ability to interact with a keyboard. All of this has gone largely unchecked.