When America First Dropped Acid
One evening in September of 1957, viewers across America could turn on their television sets and tune in to a CBS broadcast during which a young woman dropped acid. She sat next to a man in a suit: Sidney Cohen, the researcher who had given her the LSD. The woman wore lipstick and nail polish, and her eyes were shining. "I wish I could talk in Technicolor," she said. And, at another point, "I can see the molecules." Were some families maybe--oh, I don't know--eating meat loaf on TV trays as they watched this nice lady undergo her mind-bending, molecule-revealing journey through inner space? Did they switch to "Father Knows Best" or "The Perry Como Show" afterward? One of the feats that the historian Benjamin Breen pulls off in his lively and engrossing new book, "Tripping on Utopia: Margaret Mead, the Cold War, and the Troubled Birth of Psychedelic Science" (Grand Central), is to make a cultural moment like the anonymous woman's televised trip seem less incongruous, if no less ...
- North America > United States > Oregon (0.04)
- North America > United States > New York (0.04)
- North America > United States > New Jersey > Passaic County > Paterson (0.04)
- (9 more...)
Microsoft accused of damaging Guardian's reputation with AI-generated poll
The Guardian has accused Microsoft of damaging its journalistic reputation by publishing an AI-generated poll speculating on the cause of a woman's death next to an article by the news publisher. Microsoft's news aggregation service published the automated poll next to a Guardian story about the death of Lilie James, a 21-year-old water polo coach who was found dead with serious head injuries at a school in Sydney last week. The poll, created by an AI program, asked: "What do you think is the reason behind the woman's death?" Readers were then asked to choose from three options: murder, accident or suicide. Readers reacted angrily to the poll, which has subsequently been taken down – although highly critical reader comments on the deleted survey were still online as of Tuesday morning.
- Questionnaire & Opinion Survey (0.57)
- Personal > Interview (0.57)
A Recursive Bateson-Inspired Model for the Generation of Semantic Formal Concepts from Spatial Sensory Data
de Miguel-Rodriguez, Jaime, Sancho-Caparrini, Fernando
Neural-symbolic approaches to machine learning incorporate the advantages from both connectionist and symbolic methods. Typically, these models employ a first module based on a neural architecture to extract features from complex data. Then, these features are processed as symbols by a symbolic engine that provides reasoning, concept structures, composability, better generalization and out-of-distribution learning among other possibilities. However, neural approaches to the grounding of symbols in sensory data, albeit powerful, still require heavy training and tedious labeling for the most part. This paper presents a new symbolic-only method for the generation of hierarchical concept structures from complex spatial sensory data. The approach is based on Bateson's notion of difference as the key to the genesis of an idea or a concept. Following his suggestion, the model extracts atomic features from raw data by computing elemental sequential comparisons in a stream of multivariate numerical values. Higher-level constructs are built from these features by subjecting them to further comparisons in a recursive process. At any stage in the recursion, a concept structure may be obtained from these constructs and features by means of Formal Concept Analysis. Results show that the model is able to produce fairly rich yet human-readable conceptual representations without training. Additionally, the concept structures obtained through the model (i) present high composability, which potentially enables the generation of 'unseen' concepts, (ii) allow formal reasoning, and (iii) have inherent abilities for generalization and out-of-distribution learning. Consequently, this method may offer an interesting angle to current neural-symbolic research. Future work is required to develop a training methodology so that the model can be tested against a larger dataset.
- Europe > Ireland (0.04)
- North America > United States > Illinois > Cook County > Chicago (0.04)
- Europe > Spain > Aragón (0.04)
- Europe > Spain > Andalusia > Seville Province > Seville (0.04)
- Information Technology > Artificial Intelligence > Cognitive Science (1.00)
- Information Technology > Artificial Intelligence > Natural Language (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.88)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Fuzzy Logic (0.46)
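The abstract's core mechanism can be sketched in a few lines: atomic features are elemental sequential comparisons ("differences") over a stream of multivariate values, and higher-level constructs come from recursively comparing those features. This is a minimal illustration of that idea only, not the authors' code; the function names and the toy stream are assumptions, and the Formal Concept Analysis step that would turn the feature layers into a concept lattice is omitted.

```python
def differences(stream):
    """Compare each tuple with its successor, per channel:
    -1 (decrease), 0 (no change), +1 (increase)."""
    return [
        tuple((b > a) - (b < a) for a, b in zip(prev, curr))
        for prev, curr in zip(stream, stream[1:])
    ]

def recursive_features(stream, depth):
    """Apply the elemental comparison recursively, collecting one
    feature layer per level of recursion."""
    layers = []
    current = stream
    for _ in range(depth):
        current = differences(current)
        if not current:
            break
        layers.append(current)
    return layers

# Toy multivariate stream: two hypothetical sensor channels.
stream = [(1, 5), (2, 5), (4, 3), (4, 2)]
layers = recursive_features(stream, depth=2)
# layers[0] holds first-order differences; layers[1] holds
# differences of differences (the recursive constructs).
```

On this toy input the first layer is `[(1, 0), (1, -1), (0, -1)]` and the second is `[(0, -1), (-1, 0)]`; in the paper's framing, each layer would then feed Formal Concept Analysis to yield a concept structure, with no training involved.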
AI, AI on the wall -- Who's the Fairest of them all?
"A world perfectly fair in some dimensions would be horribly unfair in others." "Fairness" in Artificial Intelligence (AI) applications -- both as a concept and a practice -- is the focus of many organisations as they deploy new technologies for greater effectiveness and efficiencies. That machines are faster at processing large amounts of information, and the notion that they are 'more objective' than humans, appear to make them an obvious choice for progressivity and seemingly impartial actors in 'fairer' decision-making. Yet algorithm-based decisions have not come without their share of controversies -- Australia's recent 'robo-debt' government intervention, which wrongly pursued thousands of welfare recipients; the UK's 'A-Levels fiasco' of downgrading students' grades based on historical data, and its controversial visa application streaming tool; and concerns about Clearview AI's facial recognition software for policing are raising new questions on the role of these technologies in society. Risk assessments are part of the fabric of modern society, but what we are dealing with here is not just 'scaling up' human capacity for decision-making without the unwanted human biases and errors -- we are also extolling the 'virtues of objectivity' under the guise of 'fairness' (which is inherently subjective!) and failing to recognise the many inter-relationships that are being unraveled through the use of these algorithms in our daily lives.