Make Lemonade Out of Lemonade - Insurance Thought Leadership


Lemonade's recent glitch sheds light on public fears about AI -- and on what must be done to keep AI innovation from slowing. Being a disruptor is hard. It requires taking disproportionate risks, pushing the status quo and -- more often than not -- hitting speed bumps. Recently, Lemonade hit a speed bump in its journey as a visible disruptor and innovator in the insurance industry. I am not privy to any details about the case or what Lemonade is or isn't doing, but the Twitter event and the public dialogue that built up to this moment prompt some reflections and opportunities every carrier should pause to consider.

The Future of Artificial Intelligence: Pavan Reddy Appakonda


As AI advances and becomes more powerful, its impact on the global economy will grow significantly more prominent. It will dramatically affect virtually every aspect of the world economy, from unemployment rates to economic growth, productivity and income inequality, says Pavan, an entrepreneur, software engineer and investor. But how exactly does AI affect the business world? How can artificial intelligence help your business grow? Here are some of the most remarkable benefits of artificial intelligence that are helping to reshape the world we know today.

Challenges to coordinate policies on AI regulation: international conference


The Council of Europe and the Hungarian presidency of its Committee of Ministers are holding an online international conference on 26 October to discuss the challenges governments face in regulating artificial intelligence (AI) in a coordinated manner. Under the theme "Current and Future Challenges of Coordinated Policies on AI Regulation", the event will showcase various AI governance models and examine the interplay between national policies and the work of the Council of Europe and other organisations. One of the main contributions of the Council of Europe in this field is the work of the intergovernmental AI expert body CAHAI, which is examining an international legal framework for the development, design and application of artificial intelligence based on the Council of Europe's standards on human rights, democracy and the rule of law. Representatives of international organisations, national policy experts, IT companies, civil society and academia will discuss ways to improve AI policymaking at the global, regional and national levels. They will also examine case studies on best practices in AI governance and discuss issues such as the possible long-term societal effects of AI and the sustainable development of AI applications.

What Should the Future of AI Look Like?


It hasn't been that long since artificial intelligence began its journey out of the realm of sci-fi novels and into our daily lives. Perhaps because of its recency, AI's transition into real-world systems and technologies has been both inspiring and unsettling, a tension that is just as strong in debates around its future. In this week's Variable, we share two eye-opening contributions to this conversation. If you prefer to keep things more actionable, however, have no fear: we also include some of our recent favorites on topics like MLOps and model stacking. Thank you for joining us on another week of exciting and thought-provoking articles!

How AI and Machine Learning are Transforming the Education Sector - AACE


Artificial intelligence is affecting several industries, including education. It's transforming the way teachers and institutions work while revolutionizing the learning process for students. According to research, the AI-in-education market will be worth at least $5.8 billion by 2025, with significantly higher figures expected in subsequent years. In this article, we'll explore how AI is transforming the education industry and its benefits. Beyond managing classrooms, teachers traditionally also handle organizational and administrative tasks.

Most Americans want AI regulation -- and they want it yesterday


Nearly two-thirds of Americans want the U.S. to regulate the development and use of artificial intelligence in the next year or sooner -- with half saying that regulation should have begun yesterday, according to a Morning Consult poll. Another 13% say that regulation should start in the next year. "You can thread this together," Austin Carson, founder of the new nonprofit group SeedAI and former government relations lead for Nvidia, said in an email. "Half or more Americans want to address all of these things, split pretty evenly along ideological lines." The poll, which SeedAI commissioned, backs up earlier findings that while U.S. adults support investment in the development of AI, they want clear rules around that development.

AI rules: what the European Parliament wants


Parliament is working on the Commission proposal, presented on 21 April 2021, for turning Europe into the global hub for trustworthy AI. Ahead of the Commission's proposal, Parliament set up a special committee to analyse the impact of artificial intelligence on the EU economy. "Europe needs to develop AI that is trustworthy, eliminates biases and discrimination, and serves the common good, while ensuring business and industry thrive and generate economic prosperity," said the new committee chair Dragoș Tudorache. On 20 October 2020, Parliament adopted three reports outlining how the EU can best regulate AI while boosting innovation, ethical standards and trust in technology. One of the reports focuses on how to ensure safety, transparency and accountability, prevent bias and discrimination, foster social and environmental responsibility, and ensure respect for fundamental rights.

EU Proposed Regulatory Regime for Artificial Intelligence (AI) Could Set Global Standard


The European Union (EU) has launched the world's first comprehensive legislative package to regulate AI. The Artificial Intelligence Act (AIA), which is currently progressing through the EU legislative process, will establish a risk-based framework for regulating the use of AI anywhere within the EU, including by companies based outside the EU. A limited number of unacceptable AI use cases, such as social scoring by governments, would be banned outright; high-risk use cases would be subject to prior conformity assessment and wide-ranging new compliance obligations; medium-risk functions would be subject to enhanced transparency rules; and low-risk use cases could largely be pursued without any new obligations under the AIA. By legislating now, the EU hopes to establish a de facto global standard for AI. The EU is certainly well ahead of the US in this area: debate in the US is more focused on the extent to which the country may be falling behind China in military applications of AI, although some think tanks are examining the ethics of AI, and new state privacy laws have tasked regulators with developing standards for transparency and choice.