Have you ever assumed that business would eventually return to normal and everything would be "business as usual"? If so, it's time to rethink what "as usual" actually means. The business world keeps evolving, and brands continuously adopt new ways to meet the demands of their target audience. Customer needs are never constant; they vary from one person to the next. Yet a brand has a responsibility to meet every consumer's needs, because that is how it adds value.
The transformer architecture has shown an uncanny ability to model not only language but also images and proteins. New research found that it can apply what it learns from the first domain to the others. What's new: Kevin Lu and colleagues at UC Berkeley, Facebook, and Google devised Frozen Pretrained Transformer (FPT). After pretraining a transformer network on language data, they showed that it could perform vision, mathematical, and logical tasks without fine-tuning its core layers. Key insight: Transformers pick up on patterns in an input sequence, be it words in a novel, pixels in an image, or amino acids in a protein.
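The FPT recipe keeps the pretrained self-attention and feed-forward blocks frozen and fine-tunes only a handful of small components (per the paper: the input embedding, output head, layer norms, and positional embeddings) on the new domain. A minimal pure-Python sketch of that freeze/train split; the parameter names below are hypothetical, chosen only to illustrate which kinds of weights stay frozen:

```python
def fpt_param_split(param_names):
    """Partition parameters the way FPT does: fine-tune only the input
    embedding, positional embeddings, layer norms, and output head, while
    the pretrained self-attention and feed-forward blocks stay frozen."""
    trainable_markers = ("input_embed", "pos_embed", "layer_norm", "output_head")
    trainable, frozen = [], []
    for name in param_names:
        (trainable if any(m in name for m in trainable_markers) else frozen).append(name)
    return trainable, frozen

# Hypothetical parameter names for a GPT-2-like model (not a real checkpoint):
params = [
    "input_embed.weight",        # new per-task input layer -> trained
    "pos_embed.weight",          # positional embeddings -> trained
    "block0.attention.qkv",      # pretrained self-attention -> frozen
    "block0.ffn.fc1",            # pretrained feed-forward -> frozen
    "block0.layer_norm.weight",  # layer norms -> trained (very few weights)
    "output_head.weight",        # new per-task output layer -> trained
]
trainable, frozen = fpt_param_split(params)
print(trainable)  # -> the four small input/output/norm components
print(frozen)     # -> the attention and feed-forward core
```

The striking part of the result is how little is trained: the frozen core, learned purely on language, does the heavy lifting on images, math, and protein sequences.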
Can AI help people enhance their online dating game? Would you trust a computer with your digital pickup lines? A team at Medzino, a digital health and wellness clinic, had some fun prompting OpenAI's GPT-3 language prediction model to generate dating advice for different situations. For good or ill, we're seeing a lot of these "research" applications of GPT-3, which are decidedly unscientific but do gesture at some of the novel uses of AI waiting for us just down the road. The team surveyed over 700 singles to see how well the AI did with pickup lines and general dating tactics and decorum.
It started out as a social experiment, but it quickly came to a bitter end. Microsoft's chatbot Tay had been trained to have "casual and playful conversations" on Twitter, but once it was deployed, it took only 16 hours before Tay launched into tirades that included racist and misogynistic tweets. As it turned out, Tay was mostly repeating the verbal abuse that humans were spouting at it -- but the outrage that followed centered on the bad influence Tay had on people who could see its hateful tweets, rather than on the people whose hateful tweets were a bad influence on Tay. As children, we are all taught to be good people. Perhaps even more important, we are taught that bad company can corrupt good character -- one bad apple can spoil the bunch. Today, we increasingly interact with machines powered by artificial intelligence, from AI-powered smart toys to AI-driven social media platforms that shape our preferences.
With innovation and invention becoming the pacesetters of development across the world, intellectual property rights (IPR) and IP analytics have moved to center stage in the innovation ecosystem. The IP database grows daily, and more countries, especially in Asia and the Americas, are emerging as new nerve centers of technological revolution. Data analysis pertaining to IPR has therefore become too complex, vast, and unwieldy for manual operation. Fortunately, AI/ML-powered data mining and analytics have taken over the area in recent years. The efficacy and accuracy of the results vary from service provider to service provider, depending on the dedicated research each has invested in the area.
The media ecosystem propagated by inhuman recommendation algorithms can easily be weaponized by bad-faith actors, and there are clear examples of such tactics being used for anti-democratic ends. In the 2016 United States presidential election, Russian operatives created fake social media accounts with the goal of exacerbating political polarization and increasing social discord. These accounts propagated provocative and misleading statements that intentionally targeted delicate subjects, particularly issues regarding race. Controversial, highly engaging content like this is exactly what recommendation algorithms favor, and there is no doubt the technology helped circulate Russian disinformation, which may ultimately have influenced the outcome of the election.
Twitter is offering a cash reward to users who can help it weed out bias in its photo-cropping algorithm. The social-media platform announced 'bounties' as high as $3,500 as part of this week's DEF CON hacker convention in Las Vegas. 'Finding bias in machine learning models is difficult, and sometimes, companies find out about unintended ethical harms once they've already reached the public,' Rumman Chowdhury and Jutta Williams of Twitter's Machine-Learning, Ethics, Transparency and Accountability (META) project said in a blog post. 'We want to change that.' The challenge was inspired by how researchers and hackers often point out security vulnerabilities to companies, Chowdhury and Williams explained.
Current smart technology has ushered in the Fourth Industrial Revolution, a new era integrating communications with automated industrial practices and traditional manufacturing. Through this improved communication, smart devices make human intervention unnecessary as machines communicate, self-diagnose, and solve problems. While these new products and services may increase efficiency, analysts say they should be as ethical as possible, given their impact on our lives. Advances in AI, the internet of things (IoT), 3-D printing, robotics, genetic engineering, and quantum computing will blur the boundaries between the digital, physical, and biological worlds, and with them usher in a whole new set of complex challenges for business leaders to negotiate.
Twitter Inc said on Friday it will launch a competition for computer researchers and hackers to identify biases in its image-cropping algorithm, after a group of researchers previously found that the algorithm tended to exclude Black people and men. The competition is part of a wider effort across the tech industry to ensure artificial intelligence technologies act ethically. The social networking company said in a blog post that the bounty competition was aimed at identifying "potential harms of this algorithm beyond what we identified ourselves." Following criticism last year about image previews in posts excluding Black people's faces, the company said in May that a study by three of its machine learning researchers had found an 8% difference from demographic parity in favour of women and a 4% difference in favour of white individuals. Twitter publicly released the computer code that decides how images are cropped in the Twitter feed, and said on Friday that participants are asked to find ways the algorithm could cause harm, such as stereotyping or denigrating any group of people.
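Demographic parity, the fairness metric cited in Twitter's study, compares how often a system selects each group: perfect parity means equal selection rates, and the reported gaps are differences between those rates. A minimal sketch of the computation, using made-up counts (not Twitter's data):

```python
def demographic_parity_difference(selected_a, total_a, selected_b, total_b):
    """Difference between the selection rates of two groups.
    0 means parity; 0.08 would correspond to the ~8% gap cited."""
    return selected_a / total_a - selected_b / total_b

# Hypothetical counts, purely illustrative: in 1000 mixed images each,
# the crop centred on group A 540 times and on group B 460 times.
gap = demographic_parity_difference(540, 1000, 460, 1000)
print(round(gap, 2))  # 0.08
```

The metric says nothing about *why* the gap exists, which is exactly why Twitter opened the code and offered bounties for explanations of the harm.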