On Program Synthesis and Large Language Models
Much has been made of the abilities of the new developments in machine intelligence, and in particular of what chatbots such as ChatGPT, which are based on large language models (LLMs), are capable of. While these new pieces of software are impressive when it comes to generating text, some people in the computing community take this observation much further and, in my opinion, much too far. They claim programming will be a thing of the past. In a January 2023 Communications column, Matt Welsh put forward this opinion: "Programming will be obsolete. I believe the conventional idea of 'writing a program' is headed for extinction, and indeed, for all but very specialized applications, most software, as we know it, will be replaced by AI systems that are trained rather than programmed. In situations where one needs a 'simple' program (after all, not everything should require a model of hundreds of billions of parameters running on a cluster of GPUs), those programs will, themselves, be generated by an AI rather than coded by hand." [14]
AI chatbot launches on Gov.UK to help business users – with mixed results
It speaks a bit of Welsh, can recite the building regulations, refuses to say whether Rishi Sunak is better than Keir Starmer and won't explain the UK corporation tax regime. The government is launching an artificial intelligence chatbot to help businesses navigate the 700,000-page labyrinth that is the Gov.UK website, and it looks like users can expect varied results. The experimental system will be tested by up to 15,000 business users before wider availability, possibly next year. Before you get started it warns: "The biggest limitation of AI tools like me is a problem known as 'hallucination'. This means we sometimes make up false information or facts but present them to you confidently."
Why OpenAI's new model is such a big deal
I thought OpenAI's GPT-4o, its leading model at the time, would be perfectly suited to help. I asked it to create a short wedding-themed poem, with the constraint that each letter could only appear a certain number of times, so we could make sure teams would be able to reproduce it with the provided set of tiles. The model repeatedly insisted that its poem worked within the constraints, even though it didn't. It would correctly count the letters only after the fact, while continuing to deliver poems that didn't fit the prompt. Without the time to meticulously craft the verses by hand, we ditched the poem idea and instead challenged guests to memorize a series of shapes made from colored tiles. However, last week OpenAI released a new model called o1 (previously referred to under the code name "Strawberry" and, before that, Q*) that blows GPT-4o out of the water at this type of task.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.90)
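The tile constraint described in the excerpt above, each letter may be used only as many times as there are tiles for it, is trivial to verify mechanically, which is exactly the check GPT-4o kept getting wrong until after generation. A minimal sketch of such a verifier (function name and tile counts are illustrative, not from the article):

```python
from collections import Counter

def fits_tiles(text: str, tiles: dict) -> bool:
    """Check whether `text` can be spelled with the available tile set.
    Case-insensitive; spaces and punctuation are ignored."""
    counts = Counter(c for c in text.lower() if c.isalpha())
    # Every letter used must have enough tiles; unused tiles are fine.
    return all(n <= tiles.get(letter, 0) for letter, n in counts.items())
```

Running a candidate verse through a check like this before accepting it is the step the model skipped: it asserted the constraint held rather than testing it.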
Universal Syntactic Structures: Modeling Syntax for Various Natural Languages
Kim, Min K., Takero, Hafu, Fedovik, Sara
We aim to explain how the human brain might connect words to form sentences. A novel approach to modeling syntactic representation is introduced, potentially showing the existence of universal syntactic structures across all natural languages. Just as the discovery of DNA's double-helix structure shed light on the inner workings of genetics, we wish to introduce a basic understanding of how language might work in the human brain; it could be the brain's way of encoding and decoding knowledge. The model also brings some insight to theories in linguistics, psychology, and cognitive science. After examining the logic behind universal syntactic structures and the methodology of the modeling technique, we analyze corpora that showcase universality in the language processing of different natural languages such as English and Korean. Lastly, we discuss the critical period hypothesis, universal grammar, and a few other assertions about language, with the aim of advancing our understanding of the human brain.
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > Massachusetts > Middlesex County > Malden (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- (7 more...)
- Transportation (0.96)
- Health & Medicine > Therapeutic Area (0.46)
- Leisure & Entertainment > Games (0.46)
Data-to-text Generation for Severely Under-Resourced Languages with GPT-3.5: A Bit of Help Needed from Google Translate
LLMs like GPT excel at tasks involving English, which dominates their training data. In this paper, we look at how they cope with tasks involving languages that are severely under-represented in their training data, in the context of data-to-text generation for Irish, Maltese, Welsh and Breton. During the prompt-engineering phase we tested a range of prompt types and formats on GPT-3.5 and 4 with a small sample of example input/output pairs. We then fully evaluated the two most promising prompts in two scenarios: (i) direct generation into the under-resourced language, and (ii) generation into English followed by translation into the under-resourced language. We find that few-shot prompting works better for direct generation into under-resourced languages, but that the difference disappears when pivoting via English. The few-shot + translation system variants were submitted to the WebNLG 2023 shared task, where they outperformed competitor systems by substantial margins in all languages on all metrics. We conclude that good performance on under-resourced languages can be achieved out of the box with state-of-the-art LLMs. However, our best results (for Welsh) remain well below the lowest-ranked English system at WebNLG'20.
- Europe > Denmark > Capital Region > Copenhagen (0.07)
- Europe > Spain > Galicia > Madrid (0.05)
- North America > United States > Mississippi (0.05)
- (4 more...)
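The few-shot setup the WebNLG abstract above describes, k worked data/text pairs followed by the new input, can be sketched as a simple prompt builder. The instruction wording and the triple format here are invented for illustration; the paper's actual templates are not reproduced:

```python
def few_shot_prompt(examples, new_data, target_lang="Welsh"):
    """Assemble a few-shot data-to-text prompt: worked (data, text)
    pairs followed by the new input, left open for the model to complete."""
    lines = [f"Convert each set of data triples into fluent {target_lang} text."]
    for data, text in examples:
        lines.append(f"Data: {data}")
        lines.append(f"Text: {text}")
    # The new item ends with an open "Text:" slot for the model to fill.
    lines.append(f"Data: {new_data}")
    lines.append("Text:")
    return "\n".join(lines)
```

In the paper's pivot variant, `target_lang` would be English and the model's output would then be machine-translated into the under-resourced language, which is where the few-shot advantage reportedly disappears.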
The End of Programming Is Nigh - The New Stack
Is the end of programming nigh? If you ask Matt Welsh, he'd say yes. As Richard McManus wrote on The New Stack, Welsh is a former professor of computer science at Harvard who spoke at a virtual meetup of the Chicago Association for Computing Machinery (ACM), explaining his thesis that ChatGPT and GitHub Copilot represent the beginning of the end of programming. Welsh joined us on The New Stack Makers to discuss his perspectives on the end of programming and answer questions about the future of computer science, distributed computing, and more. Welsh is now the founder of Fixie.ai, a platform his team is building to let companies develop applications on top of large language models and extend them with different capabilities. For 40 to 50 years, programming language design has had one goal.
How ChatGPT mangled the language of heaven | Letter
Ian Watson (Letters, 17 February) asks for a translation of my letter in Welsh (13 February). I did include an English translation in my letter, but only the Welsh was published. I sent a second letter asking the Guardian to publish the translation, as I was getting a lot of stick from a certain friend who couldn't read it, but with no luck. Hopefully Ian's letter will change the letters editor's mind. The English version was as follows: "Thank you very much for the excellent editorial article which sang the praises of the Welsh language … Since you are now so enthusiastic about Welsh, may I, from now on, write to you in the language of heaven?" Meanwhile, there has been much glee about my letter on Welsh-language social media.
Why Artificial Intelligence Needs Quantum Computing
The attention that trade shows, media outlets, and (increasingly) vendors are devoting to quantum computing is far from transitory. This form of computing is almost certain to play an integral part in the most meaningful future developments of Artificial Intelligence, if not in those of today. The bifurcation of quantum computing's applicability to AI is clear. On the one hand, "Quantum computing is necessary to reach Artificial General Intelligence," noted Kyndi CEO Ryan Welsh. On the other, quantum computing can solve a critical problem related to AI that is a vital stepping stone to actually achieving Artificial General Intelligence. According to Welsh, quantum computing methods have a definite capacity for "fusing the gap between continuous mathematics and discrete mathematics," which is at the crux of the dichotomy between statistical AI and symbolic AI for Natural Language Processing applications.
To Be Ethical, AI Must Become Explainable. How Do We Get There? - Liwaiwai
AI can now write realistic-sounding text, give debating champs a run for their money, diagnose illnesses, and generate fake human faces, among much more. After training these systems on massive datasets, their creators essentially just let them do their thing to arrive at certain conclusions or outcomes. The problem is that, more often than not, even the creators don't know exactly why they've arrived at those conclusions or outcomes. There's no easy way to trace a machine learning system's rationale, so to speak. The further we let AI go down this opaque path, the more likely we are to end up somewhere we don't want to be, and may not be able to come back from.
- Government > Military (0.33)
- Government > Regional Government > North America Government > United States Government (0.31)
Is there a smarter path to artificial intelligence? Some experts hope so
For the past five years, the hottest thing in artificial intelligence has been a branch known as deep learning. The grandly named statistical technique, put simply, gives computers a way to learn by processing massive amounts of data. Thanks to deep learning, computers can easily identify faces and recognize spoken words, making other forms of humanlike intelligence suddenly seem within reach. Companies like Google, Facebook and Microsoft have poured money into deep learning. And the technology's perception and pattern-matching abilities are being applied to improve progress in fields such as drug discovery and self-driving cars. But now some scientists are asking whether deep learning is really so deep after all.
- North America > United States > New York (0.05)
- North America > United States > California > Alameda County > Berkeley (0.05)
- Asia > Middle East > Jordan (0.05)
- Health & Medicine (0.50)
- Information Technology (0.35)
- Education (0.35)
- (2 more...)