Microsoft's talks to acquire TikTok don't make a whole lot of sense on the surface. In fact, nothing about this deal adds up: a tech giant known for the enterprise, President Trump tweeting about TikTok, legislators chiming in, and a 45-day deal deadline. Sure, I've read a few Wall Street analysts perform some mental gymnastics to argue for a Microsoft purchase of TikTok. Depending on the price ($10 billion would be too good to pass up; $50 billion would be crazy), Microsoft CEO Satya Nadella is going to have some explaining to do. With all that said, here is a bit of informed speculation about why this Microsoft-TikTok lunacy is happening: the Department of Defense's JEDI cloud contract is to be announced soon.
The conference will host keynote presentations from leading voices in data-driven innovation, lightning talks from Columbia University researchers, and interactive poster and technology demonstrations. Data Science Day provides a forum for innovators in academia, industry, and government to connect.

Keynote Speakers
- Pat Bajari, Chief Economist and Vice President of Artificial Intelligence, Amazon
- Eric Schmidt, Technical Advisor to the Board, Alphabet

Columbia University and Columbia University Data Science Institute Affiliated Faculty Talks

Lightning Talk: Cause, Learn, Optimize & Reason
- Melanie Wall, Professor, Department of Biostatistics, Mailman School of Public Health; and Director of Mental Health Data Science in the Department of Psychiatry, Columbia University Irving Medical Center and the New York State Psychiatric Institute
- Samory Kpotufe, Associate Professor, Department of Statistics, Faculty of Arts & Sciences
- Elias Bareinboim, Associate Professor, Department of Computer Science, Columbia Engineering; and Director of the Causal Artificial Intelligence (CausalAI) Laboratory, Columbia University
- Clifford Stein, Professor of Industrial Engineering & Operations Research, Department of Computer Science, Columbia Engineering; and Associate Director for Research, Data Science Institute, Columbia University

Lightning Talk: Human Machine: A New Hybrid World
- Oded Netzer, Professor of Business, Marketing Division, Columbia Business School
- Lydia Chilton, Assistant Professor, Department of Computer Science, Columbia Engineering
- Sarah Rossetti, Assistant Professor, Department of Biomedical Informatics; Assistant Professor, School of Nursing, Columbia University Irving Medical Center

Lightning Talk: Ethics & Privacy: Terms of Usage
- Roxana Geambasu, Associate Professor, Department of Computer Science, Columbia Engineering
- Rafael Yuste, Professor, Department of Biological Sciences, Faculty of Arts & Sciences
- Jeff Goldsmith, Associate Professor, Department of Biostatistics, Columbia University Mailman School of Public Health

What are my transportation/parking options for getting to and from the event? Please visit the following link for directions and parking information: http://transportation.columbia.edu/
Several sports teams are exploring the use and implementation of facial-recognition technology in their stadiums, an effort that would help reduce risks from the coronavirus when fans return, the Wall Street Journal reported. The initial outbreak of the coronavirus appeared to accelerate because high-occupancy sports venues in Europe acted as super-spreader events – most notably soccer stadiums in Italy, and Champions League matches involving Spanish teams. With the pandemic under control in some areas, sports teams are looking to bring back fans in a safe and controlled way. The use of facial-recognition technology may allow sports venues to bring back small numbers of fans – most likely season-ticket holders or VIP guests – suggested Shaun Moore, chief executive of Trueface, a facial-recognition supplier. Moore indicated that the primary concern is that even scanning ticket bar codes could help to spread the virus.
LONDON/NEW YORK – Nvidia Corp. is in advanced talks to acquire Arm Ltd., the chip designer that SoftBank Group Corp. bought for $32 billion four years ago, according to people familiar with the matter. The two parties aim to reach a deal in the next few weeks, the people said, asking not to be identified because the information is private. Nvidia is the only suitor in concrete discussions with SoftBank, according to the people. A deal for Arm could be the largest ever in the semiconductor industry, which has been consolidating in recent years as companies seek to diversify and add scale. But any deal with Nvidia, which is a customer of Arm, would likely trigger regulatory scrutiny as well as a wave of opposition from other firms.
Nine philosophers explore the various issues and questions raised by the newly released language model, GPT-3, in this edition of Philosophers On, guest edited by Annette Zimmermann. Introduction Annette Zimmermann, guest editor GPT-3, a powerful, 175 billion parameter language model developed recently by OpenAI, has been galvanizing public debate and controversy. As the MIT Technology Review puts it: “OpenAI’s new language generator GPT-3 is shockingly good—and completely mindless”. Parts of the technology community hope (and fear) that GPT-3 could bring us one step closer to the hypothetical future possibility of human-like, highly sophisticated artificial general intelligence (AGI). Meanwhile, others (including OpenAI’s own CEO) have critiqued claims about GPT-3’s ostensible proximity to AGI, arguing that they are vastly overstated. Why the hype? As it turns out, GPT-3 is unlike other natural language processing (NLP) systems, which often struggle with what comes comparatively easily to humans: performing entirely new language tasks based on a few simple instructions and examples. NLP systems usually have to be pre-trained on a large corpus of text and then fine-tuned in order to successfully perform a specific task. GPT-3, by contrast, does not require fine-tuning of this kind: it seems to be able to perform a whole range of tasks reasonably well, from producing fiction, poetry, and press releases to functioning code, and from music, jokes, and technical manuals, to “news articles which human evaluators have difficulty distinguishing from articles written by humans”. The Philosophers On series contains group posts on issues of current interest, with the aim being to show what the careful thinking characteristic of philosophers (and occasionally scholars in related fields) can bring to popular ongoing conversations. 
Contributors present not fully worked-out position papers but rather brief thoughts that can serve as prompts for further reflection and discussion. The contributors to this installment of “Philosophers On” are Amanda Askell (Research Scientist, OpenAI), David Chalmers (Professor of Philosophy, New York University), Justin Khoo (Associate Professor of Philosophy, Massachusetts Institute of Technology), Carlos Montemayor (Professor of Philosophy, San Francisco State University), C. Thi Nguyen (Associate Professor of Philosophy, University of Utah), Regina Rini (Canada Research Chair in Philosophy of Moral and Social Cognition, York University), Henry Shevlin (Research Associate, Leverhulme Centre for..
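The "few simple instructions and examples" setup described above is usually called few-shot prompting: the task is specified entirely in the model's input text, with no gradient updates. As a minimal sketch (the task, examples, and formatting here are illustrative, not taken from the article):

```python
# A hypothetical few-shot prompt: a short instruction plus a handful of
# worked examples, ending where the model is expected to continue.
examples = [
    ("cheese", "fromage"),
    ("house", "maison"),
    ("cat", "chat"),
]

prompt = "Translate English to French.\n\n"
for english, french in examples:
    prompt += f"English: {english}\nFrench: {french}\n\n"
prompt += "English: dog\nFrench:"  # the model completes from this point

print(prompt)
```

A fine-tuned NLP system would instead need a labeled translation dataset and a training run; here the entire "specification" of the task is the prompt string itself.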
At a typical annual meeting of the Association for Computational Linguistics (ACL), the program is a parade of titles like "A Structured Variational Autoencoder for Contextual Morphological Inflection." At this year's conference in July, though, something felt different, and it wasn't just the virtual format. Attendees' conversations were unusually introspective about the core methods and objectives of natural-language processing (NLP), the branch of AI focused on creating systems that analyze or generate human language. Papers in this year's new "Theme" track asked questions like: Are current methods really enough to achieve the field's ultimate goals? What even are those goals? My colleagues and I at Elemental Cognition, an AI research firm based in Connecticut and New York, see the angst as justified.
Reading through a data science book or taking a course, it can feel like you have the individual pieces but don't quite know how to put them together. Taking the next step and solving a complete machine learning problem can be daunting, but persevering through and completing a first project will give you the confidence to tackle any data science problem. This series of articles will walk through a complete machine learning solution with a real-world dataset to let you see how all the pieces come together. We'll follow the general machine learning workflow step-by-step. Along the way, we'll see how each step flows into the next and how to implement each part specifically in Python. The complete project is available on GitHub, with the first notebook here. (After completing the work, I was offered the job, but then the CTO of the company quit and they weren't able to bring on any new employees. I guess that's how things go on the start-up scene!) The first step before we start coding is to understand the problem we are trying to solve and the available data. In this project, we will work with publicly available building energy data from New York City. The objective is to use the energy data to build a model that can predict the Energy Star Score of a building and interpret the results to find the factors which influence the score. We want to develop a model that is both *accurate* -- it can predict the Energy Star Score close to the true value -- and *interpretable* -- we can understand the model's predictions. Once we know the goal, we can use it to guide our decisions as we dig into the data and build models. Contrary to what most data science courses would have you believe, not every dataset is a perfectly curated group of observations with no missing values or anomalies (looking at you, mtcars and iris datasets). Real-world data is messy, which means we need to clean and wrangle it into an acceptable format before we can even start the analysis. 
Data cleaning is an unglamorous but necessary part of most real data science problems.
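To make the kind of cleaning involved concrete, here is a minimal sketch in pandas. The column names and the "Not Available" sentinel are assumptions based on energy-benchmarking data like the NYC dataset described above, not the project's actual code, and a toy DataFrame stands in for the real file:

```python
import numpy as np
import pandas as pd

# Toy stand-in for the raw building energy data: numeric values arrive as
# strings, with the literal text "Not Available" marking missing entries.
data = pd.DataFrame({
    "Site EUI (kBtu/ft2)": ["229.8", "Not Available", "141.3", "62.1"],
    "ENERGY STAR Score": ["55", "Not Available", "Not Available", "Not Available"],
})

# Replace the sentinel string with a proper missing-value marker,
# then coerce every column to a numeric dtype.
data = data.replace({"Not Available": np.nan})
for col in data.columns:
    data[col] = pd.to_numeric(data[col])

# Drop columns where more than half the values are missing: they carry
# too little information to be useful features.
missing_frac = data.isna().mean()
data = data.drop(columns=missing_frac[missing_frac > 0.5].index)
```

The 50% threshold is a judgment call, not a rule; the point is that even "is this column a number?" takes deliberate work on real data.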