Hold, Pick, Feel: How AI Changes Lives of Amputees
Melissa Loomis of Ohio, US, lost an arm in a terrible accident, and her life was turned upside down. She had never imagined that one day, with the help of AI, she would get an arm nearly as good as a real one. In 2016, Loomis became the first amputee in the world to feel a sense of touch through a mind-controlled bionic arm, an event often considered one of the biggest in the history of prosthetics. That history dates back to ancient Egypt, where the first functional prosthetic limb was used between 950 and 710 BC.
- North America > United States > Ohio (0.25)
- North America > United States > Michigan (0.05)
- Health & Medicine > Therapeutic Area (0.51)
- Health & Medicine > Health Care Technology (0.33)
AI Trends In 2022: What's Real And What's Hype? Hear From The Experts
The end of the year is a time not just for predicting top trends but also for watching out for the biggest hype and most misleading recommendations dished out to business leaders. I asked several industry leaders five actionable questions for 2022. Inspired by Tribe of Mentors, the bestselling book by Tim Ferriss, I gave the traditional questions a slight twist. This article is organized around these questions; feel free to skip to those that interest you most.
Remember When Multiplayer Gaming Needed Envelopes and Stamps?
The term multiplayer gaming likely brings to mind the image of hopping online or sitting down with friends. A third method--mail--is unlikely to come up. Once a sprawling industry, play-by-mail (PBM) games have been rendered an afterthought in gaming history by the internet. Chess and Go were played by mail for centuries, and in the 1960s devotees of Diplomacy, a lengthy multiplayer board game that requires a neutral moderator, began using mail. According to a 1985 Computer Gamer article, competition sometimes got so intense that players offered bribes and forged letters.
NST Leader: Artificial Intelligence in court
Not as a defendant -- it would have been a novel case had it been so -- but as an aid to help the magistrate with sentencing. Trends elsewhere suggest something more. But if a machine should one day sit at the bench presiding over a court battle between man and man, then it will be a surrender most ominous. There are at least two reasons why we should not defer to machines. It is true that AI can do many complicated things.
- North America > United States > Wisconsin (0.08)
- Asia > Malaysia (0.06)
- Law (1.00)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (0.33)
Machine Learning and Discrimination
Most of the time, machine learning does not touch on particularly sensitive social, moral, or ethical issues. Someone gives us a data set and asks us to predict house prices from given attributes, classify pictures into categories, or teach a computer the best way to play PAC-MAN. But what do we do when our predictions involve attributes protected under anti-discrimination laws? How do we ensure that we do not embed racist, sexist, or other biases into our algorithms, whether explicitly or implicitly? It may not surprise you that there have been several important lawsuits in the United States on this topic, perhaps most notably one involving Northpointe's controversial COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) software, which predicts the risk that a defendant will commit another crime. The proprietary algorithm considers some of the answers from a 137-item questionnaire to predict this risk.
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Law (1.00)
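The bias question raised above can be made concrete. Below is a minimal sketch, using plain Python and invented toy data (not COMPAS data or methodology), of one common audit: comparing false positive rates across demographic groups, i.e., how often people who did not in fact reoffend were nonetheless flagged as high risk.

```python
# Hypothetical fairness audit: false positive rate (FPR) per group.
# FPR = share of actual non-reoffenders who were flagged high-risk.

def false_positive_rates(predictions, outcomes, groups):
    """Return {group: FPR} from parallel lists of binary predictions
    (1 = flagged high-risk), binary outcomes (1 = reoffended), and
    group labels."""
    stats = {}  # group -> (false positives, actual negatives)
    for pred, actual, group in zip(predictions, outcomes, groups):
        if actual == 0:  # only actual non-reoffenders count toward FPR
            fp, n = stats.get(group, (0, 0))
            stats[group] = (fp + pred, n + 1)
    return {g: fp / n for g, (fp, n) in stats.items()}

# Toy data: eight defendants, two groups "a" and "b".
preds   = [1, 1, 0, 0, 1, 0, 1, 0]
actuals = [0, 0, 0, 1, 0, 0, 1, 1]
group   = ["a", "a", "a", "a", "b", "b", "b", "b"]

rates = false_positive_rates(preds, actuals, group)
print(rates)  # -> {'a': 0.6666666666666666, 'b': 0.5}
```

A gap between the groups' rates is one signal of disparate impact; a fuller audit would also check calibration and false negative rates, which are known to be impossible to equalize simultaneously when base rates differ.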
Courts Are Using AI to Sentence Criminals. That Must Stop Now
There is a stretch of highway through the Ozark Mountains where being data-driven is a hazard. Heading from Springfield, Missouri, to Clarksville, Arkansas, navigation apps recommend the Arkansas 43. While this can be the fastest route, the GPS's algorithm does not concern itself with factors important to truckers carrying a heavy load, such as the 43's 1,300-foot elevation drop over four miles with two sharp turns. The road once hosted few 18-wheelers, but the last two and a half years have seen a noticeable increase in truck traffic, and in wrecks. Jason Tashea (@justicecodes), a writer and technologist based in Baltimore, is the founder of Justice Codes, a criminal justice and technology consultancy.
- North America > United States > Arkansas (0.47)
- North America > United States > Missouri > Greene County > Springfield (0.25)
- North America > United States > Wisconsin (0.06)
- Law > Criminal Law (0.76)
- Transportation > Ground > Road (0.55)
- Transportation > Freight & Logistics Services (0.55)
- Law > Litigation (0.49)
The case for open source classifiers in AI algorithms
Dr. Carol Reiley's achievements are too numerous to list. She co-founded Drive.ai, a self-driving car startup that raised $50 million in its second round of funding last year. Forbes magazine named her one of "20 Incredible Women in AI," and she built intelligent robot systems as a PhD candidate at Johns Hopkins University. But when she built a voice-activated human-robot interface, her own creation couldn't recognize her voice. Dr. Reiley had used Microsoft's speech recognition API to build her interface.
- North America > United States > North Carolina > Wake County > Raleigh (0.06)
- North America > United States > Wisconsin (0.05)
- North America > United States > Pennsylvania (0.05)
- (2 more...)
- Information Technology (0.91)
- Law (0.76)
- Transportation (0.58)
Perspective: AI is more powerful than ever. How do we hold it accountable?
A self-driving car operated by Uber struck and killed a woman last Sunday in Tempe, Ariz. Few details have emerged, but it's reportedly the first fatality involving a self-driving vehicle. In January, a Pittsburgh car crash sent two people to the hospital; the accident involved a self-driving Fusion from Ford-backed Argo AI. The sedan was hit by a truck that ran a red light, and at the last second, the human back-up driver reportedly switched the car out of autonomous mode and took control of the Fusion's wheel. Could these crashes have been avoided?
- North America > United States > Arizona > Maricopa County > Tempe (0.25)
- North America > United States > Wisconsin (0.05)
- Law (1.00)
- Transportation > Ground > Road (0.91)
- Transportation > Passenger (0.56)
Why artificial intelligence does what it does
A self-driving car operated by Uber struck and killed a woman last week in Tempe, Ariz. Few details have emerged, but it's reportedly the first fatality involving a self-driving vehicle. In January, a Pittsburgh car crash sent two people to the hospital; the accident involved a self-driving Fusion from Ford-backed Argo AI. The Fusion was hit by a truck that ran a red light, and at the last second, the human backup driver reportedly switched the car out of autonomous mode and took control of the Fusion's wheel. Could these crashes have been avoided?
- North America > United States > Arizona > Maricopa County > Tempe (0.25)
- North America > United States > Wisconsin (0.06)
- Law (1.00)
- Information Technology (0.70)
- Transportation > Ground > Road (0.57)
Artificial intelligence is more powerful than ever. How do we hold it accountable?
Entrusting important decisions to a system that can't explain itself presents obvious dangers. Take the case of Eric Loomis, a Wisconsin man sentenced to six years in prison for eluding police while driving a car that had been used in a drive-by shooting. The judge's sentence was based in part on a risk score for Loomis generated by COMPAS, a commercial risk-assessment tool used, according to one study, "to assess more than 1 million offenders" in the last two decades. Loomis appealed his sentence, arguing that the court's use of the AI-generated risk score was improper because it relied on a proprietary algorithm whose exact methodology is unknown. COMPAS is designed to estimate an individual's likelihood of committing another crime in the future, but evidence suggests that it may be no better at predicting risk than untrained observers.