ChatGPT exploded into public life a year ago. Now we know what went on behind the scenes

John Naughton

The Guardian

If a week is a long time in politics, a year is an eternity in tech. Just over 12 months ago, the industry was humming along in its usual way. The big platforms were deep into what Cory Doctorow calls "enshittification" – the process in which platforms go from being initially good to their users, to abusing them to make things better for their business customers and finally to abusing those customers in order to claw back all the value for themselves. Elon Musk was ramping up his efforts to alienate advertisers on Twitter/X and accelerate the death spiral of his expensive toy. TikTok was monopolising every waking hour of teenagers.


Artificial Intelligence poses a challenge to our principles

TheArticle

#artificialintelligence

Algorithms that help councils detect potholes, AI tools that help doctors know when patients are stable enough to return home, AI that scans and finds melanomas... But there are other aspects to this tech, like the use of live facial recognition in our cities or automated decisions about benefit entitlement. Artificial intelligence is starting to prove game-changing for some applications. It's reasonable to think that the efficiency and effectiveness of our public services can be radically improved through its use. But it is clear that the public will need more reassurance about the way in which AI will be used by government, especially since, in the public sector, citizens will often have no choice but to be subject to an algorithm's decision-making power.


Artificial Intelligence and Public Standards: Committee publishes report

#artificialintelligence

The Committee on Standards in Public Life today published its report and recommendations to the Prime Minister to ensure that high standards of conduct are upheld as technologically assisted decision making is adopted more widely across the public sector. The Committee also published new polling on public attitudes to AI. "Honesty, integrity, objectivity, openness, leadership, selflessness and accountability were first outlined by Lord Nolan as the standards expected of all those who act on the public's behalf. Artificial intelligence – and in particular, machine learning – will transform the way public sector organisations make decisions and deliver public services. Demonstrating high standards will help realise the huge potential benefits of AI in public service delivery. However, it is clear that the public need greater reassurance about the use of AI in the public sector."


AI Experts Want to End 'Black Box' Algorithms in Government

WIRED

The right to due process was inscribed into the US Constitution with a pen. A new report from leading researchers in artificial intelligence cautions that it is now being undermined by computer code. Public agencies responsible for areas such as criminal justice, health, and welfare increasingly use scoring systems and software to steer or make decisions on life-changing events like granting bail, sentencing, enforcement, and prioritizing services. The report from AI Now, a research institute at NYU that studies the social implications of artificial intelligence, says too many of those systems are opaque to the citizens they hold power over. The AI Now report calls for agencies to refrain from using what it calls "black box" systems that are opaque to outside scrutiny. Kate Crawford, a researcher at Microsoft and cofounder of AI Now, says citizens should be able to know how systems that make decisions about them operate and have been tested or validated.