Spotify Is About to Be More Expensive Than Apple Music. That's Not the Worst Part.

Slate

Spotify is going through something right now. On Monday morning, the industry-defining audio streaming service announced that it would be hiking its Premium subscription prices for users in the United States, effective next month. The individual plan is rising by $1, the Duo plan by $2, and the family subscription by $3. These shifts arrive almost a year after Spotify raised U.S. subscription rates for the first time ever, upping the individual plan to $10.99 a month to match competitors' price points. That increase was meant to mollify music-industry executives (who demanded better royalty payouts) and investors (who demanded that Spotify squeeze out regular profits).


A Long View On How Big Data And AI Have Transformed Business Culture

#artificialintelligence

As my colleagues and I attend three major data and technology industry events in New York City this coming week – ML Ops, Finovate, and the granddaddy of them all, the Strata Data Conference – it is interesting to note how far we have come in a few short decades. It may be hard to imagine today, but there was a time not so very long ago when data analysts, with a few notable exceptions, were relegated to the hidden recesses of most corporations. Better to toil away in the bowels than to be shown the light of day. For many decades, even as information technology (IT) emerged as a critical business function, data was viewed as something that firms filed away in vaults for the mandatory seven years to satisfy regulators, not as a business asset that could be mined to unlock critical insights. Data was perceived as the purview of those who were sometimes derisively referred to as data geeks or "propeller heads". This was long before Silicon Valley, or Wall Street, embraced the term "geek".



Data ethics: What it means and what it takes

#artificialintelligence

Now more than ever, every company is a data company. By 2025, individuals and companies around the world will produce an estimated 463 exabytes of data each day (Jeff Desjardins, "How much data is generated each day?", World Economic Forum, April 17, 2019). With that in mind, most businesses have begun to address the operational aspects of data management--for instance, determining how to build and maintain a data lake or how to integrate data scientists and other technology experts into existing teams. Fewer companies have systematically considered, let alone started to address, the ethical aspects of data management, which could have broad ramifications and responsibilities. If algorithms are trained on biased data sets, or if data sets are breached, sold without consent, or otherwise mishandled, companies can incur significant reputational and financial costs. Board members could even be held personally liable.


The C-Suite has Trust Issues with AI

#artificialintelligence

This post was originally published in Harvard Business Review. Despite rising investments in artificial intelligence (AI) by today's enterprises, trust in the insights delivered by AI can be hit or miss with the C-suite. Are executives just resisting a new, unknown, and still unproven technology, or is their hesitancy rooted in something deeper? Executives have long resisted data analytics for higher-level decision-making, preferring gut-level decisions based on field experience to AI-assisted ones. AI has been adopted widely for tactical, lower-level decision-making in many industries -- credit scoring, upselling recommendations, chatbots, and machine-performance management are examples where it is being successfully deployed.


AI ethics responsibility shifts from tech silos to broader executive champions in the C-suite

#artificialintelligence

When asked which function is primarily accountable for AI ethics in a new survey from IBM's Institute for Business Value (IBV), 80 percent of respondents pointed to a non-technical executive, such as a CEO, as the primary "champion" for AI ethics, a sharp uptick from 15 percent in 2018--revealing a radical shift in the roles responsible for leading and upholding AI ethics at an organization. The firm's global study also indicates that despite a strong imperative for advancing trustworthy AI, including better performance compared to peers in sustainability, social responsibility, and diversity and inclusion, there remains a gap between leaders' intention and meaningful actions. "As many companies today use AI algorithms across their business, they potentially face increasing internal and external demands to design these algorithms to be fair, secured and trustworthy; yet, there has been little progress across the industry in embedding AI ethics into their practices," said Jesus Mantas, global managing partner at IBM Consulting, in a news release. "Our IBV study findings demonstrate that building trustworthy AI is a business imperative and a societal expectation, not just a compliance issue. As such, companies can implement a governance model and embed ethical principles across the full AI life cycle."


69% of employees need to deal with more security measures in a hybrid work environment - Help Net Security

#artificialintelligence

Ivanti worked with global digital transformation experts and surveyed 10,000 office workers, IT professionals, and C-suite executives to evaluate the level of prioritization and adoption of DEX in organizations and how it shapes employees' daily working experiences. The report revealed that 49% of employees are frustrated by the tech and tools their organization provides, and 64% believe that the way they interact with technology directly impacts morale. Conflicting views remain among the C-suite, IT, and employees when it comes to the future of work and technology's role in enabling a culture of hybrid work. Just 13% of knowledge workers prefer to work exclusively from the office, yet 56% of CXOs still feel that employees need to be in the office to be productive, even though 74% of the C-suite report being more productive since the start of the pandemic – a disconnect between what executives have experienced themselves and what they believe employees must do to be productive. Globally, the C-suite's number-one priority was employee productivity, with workplace culture and employee satisfaction falling further down the list.


This AI attorney says companies need a chief AI officer -- pronto

#artificialintelligence

When Bradford Newman began advocating for more artificial intelligence expertise in the C-suite in 2015, "people were laughing at me," he said. Newman, who leads global law firm Baker McKenzie's machine learning and AI practice in its Palo Alto office, added that when he mentioned the need for companies to appoint a chief AI officer, people typically responded, "What's that?" But as the use of artificial intelligence proliferates across the enterprise, and as issues around AI ethics, bias, risk, regulation, and legislation swirl throughout the business landscape, the importance of appointing a chief AI officer is clearer than ever, he said. This recognition led to a new Baker McKenzie report, released in March, called "Risky Business: Identifying Blind Spots in Corporate Oversight of Artificial Intelligence." The report surveyed 500 US-based, C-level executives who self-identified as part of the decision-making team responsible for their organization's adoption, use, and management of AI-enabled tools. In a press release announcing the survey, Newman said: "Given the increase in state legislation and regulatory enforcement, companies need to step up their game when it comes to AI oversight and governance to ensure their AI is ethical and protect themselves from liability by managing their exposure to risk accordingly."


Hey C-Suite: AI Won't Save You!

#artificialintelligence

This article is a collaboration with David Gossett, Principal at Infornautics, who builds first-mover technologies that have no instruction set and must be invented from scratch. He believes data has a story to tell if we apply the right machine models. His specialty is unstructured data. This article is intended to be provocative, to summon curiosity about the issues that plague us today when it comes to machine learning. Three years ago, I wrote this article, Artificial Intelligence Needs to Reset. The AI hype that was supposed to blossom into all-things-automated is still far off. Since that time, we've experienced speed bumps pointing to issues including a lack of model accountability (black boxes), bias, and underrepresentation of data in training sets. An AI ethics movement emerged to demand more responsible tech, increased model transparency, and verifiable models that do what they're supposed to do without impairing or harming individuals or groups. Our future is artificial intelligence. It's been conjectured that this wonderful AI will be our savior.


Overcoming the C-Suite's Distrust of AI

#artificialintelligence

Despite rising investments in artificial intelligence (AI) by today's enterprises, trust in the insights delivered by AI can be hit or miss with the C-suite. Are executives just resisting a new, unknown, and still unproven technology, or is their hesitancy rooted in something deeper? Executives have long resisted data analytics for higher-level decision-making, preferring gut-level decisions based on field experience to AI-assisted ones. AI has been adopted widely for tactical, lower-level decision-making in many industries -- credit scoring, upselling recommendations, chatbots, and machine-performance management are examples where it is being successfully deployed. However, its mettle has yet to be proven for higher-level strategic decisions -- such as recasting product lines, changing corporate strategies, re-allocating human resources across functions, or establishing relationships with new partners.