Sony to open first Southeast Asian video game studio in Malaysia in 2020

The Japan Times

KUALA LUMPUR – Sony Corp.'s gaming arm will establish its first Southeast Asian video game studio in Malaysia in 2020. Sony Interactive Entertainment LLC and the Malaysian government jointly announced that the studio, named Sony Interactive Entertainment Worldwide Studios Malaysia Sdn. Bhd., will provide art and animation for the development of global game titles for PlayStation consoles. Sony Interactive Entertainment President and CEO Jim Ryan said in a statement that the firm decided to set up the studio in Malaysia because of the country's talented human resources, vibrant game ecosystem and government support. Malaysian International Trade and Industry Minister Datuk Darell Leiking said in the statement that the Sony studio "is a key win for Malaysia and a testament to the nation's efforts to attract strategic high-quality investments from international companies."

History as a giant data set: how analysing the past could help save the future

The Guardian

In its first issue of 2010, the scientific journal Nature looked forward to a dazzling decade of progress. By 2020, experimental devices connected to the internet would deduce our search queries by directly monitoring our brain signals. Crops would exist that doubled their biomass in three hours. Humanity would be well on the way to ending its dependency on fossil fuels. But a letter in the same issue warned that all these advances could be derailed by mounting political instability, which was due to peak in the US and western Europe around 2020. Human societies go through predictable periods of growth, the letter explained, during which the population increases and prosperity rises. Then come equally predictable periods of decline. In recent decades, the letter went on, a number of worrying social indicators – such as wealth inequality and public debt – had started to climb in western nations, indicating that these societies were approaching a period of upheaval. The letter-writer would go on to predict that the turmoil in the US in 2020 would be less severe than the American civil war, but worse than the violence of the late 1960s and early 70s, when the murder rate spiked, civil rights and anti-Vietnam war protests intensified and domestic terrorists carried out thousands of bombings across the country. The author of this stark warning was not a historian, but a biologist.

Echo Dot with Clock: Amazon's cheap Alexa alarm clock replacement

The Guardian

Amazon has a new twist on its popular cut-price Echo Dot smart speaker, now setting its sights squarely on your beleaguered bedside alarm clock with a new LED display embedded in the side. The Echo Dot with Clock is one of those true Ronseal products – it does exactly what it says on the tin. It is essentially the same as the excellent third-generation Echo Dot, but is only available in white and has a white LED display showing the time peeking through the fabric side. It is officially priced at £60 – £10 more than the regular Echo Dot – but is frequently discounted to about half that. You get the same four buttons on the top: volume up and down, microphone mute and an action button.

r/MachineLearning - [D] Tuning of generated synthetic data for instance segmentation


The resulting images contain all the objects with perfect masks and bounding box labels, over some arbitrary backgrounds. However, the generated training data still looks fairly different from real images. I do have a large dataset of unlabeled real images containing the real objects. Would anyone be aware of a method for tuning a generated image to look more similar to the images in the real dataset? I would want to preserve spatial information so as not to invalidate the generated labels, but also add noise, shadows and pixel artifacts in a meaningful way that resembles those found in my real dataset. My first thought was to look for papers using something like autoencoders, but I was flooded with papers about VAEs and end-to-end generation. Is anyone aware of research on this specific problem?
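One cheap, label-preserving baseline, before reaching for learned sim-to-real methods (e.g. SimGAN-style refiner networks or unpaired image-to-image translation), is to match each synthetic image's per-channel intensity distribution to a real reference image and then layer on sensor-like noise. Only pixel values change, never positions, so the generated masks and boxes stay valid. A minimal NumPy sketch — the function names here are illustrative, not from any library:

```python
import numpy as np

def match_histogram(synthetic, real_reference):
    """Map each channel of a synthetic uint8 image onto the intensity
    distribution of a real reference image. Pixel positions are never
    moved, so instance masks and bounding boxes remain valid."""
    out = np.empty_like(synthetic)
    for c in range(synthetic.shape[-1]):
        src = synthetic[..., c].ravel()
        ref = np.sort(real_reference[..., c].ravel())
        # rank of every source pixel within its own channel
        order = np.argsort(src, kind="stable")
        ranks = np.empty_like(order)
        ranks[order] = np.arange(src.size)
        # read the reference value at the matching quantile
        idx = ranks * (ref.size - 1) // max(src.size - 1, 1)
        out[..., c] = ref[idx].reshape(synthetic[..., c].shape)
    return out

def add_sensor_noise(img, sigma=3.0, rng=None):
    """Additive Gaussian noise as a crude stand-in for camera noise."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = img.astype(np.float64) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)
```

Shadows and structured artifacts are harder to fake this way; for those, "sim-to-real domain adaptation" is the search phrase that should surface the relevant refinement-network literature.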

Take your Machine Learning Models to Production with these 5 simple steps


The world around us is rapidly changing, and what was applicable two months ago may not be relevant now. In a way, the models we build are reflections of the world, and if the world is changing, our models should be able to reflect that change. Model performance typically deteriorates with time. For this reason, we must plan ways to upgrade our models as part of the maintenance cycle from the outset. The frequency of this cycle depends entirely on the business problem you are trying to solve.
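The maintenance cycle described above can be made concrete with a simple monitoring rule: compare recent live performance against the baseline recorded at deployment, and flag the model for retraining when the drop exceeds a tolerance. A minimal sketch — the function name and the 0.05 threshold are illustrative choices, not a standard:

```python
import statistics

def should_retrain(baseline_scores, recent_scores, max_drop=0.05):
    """Flag retraining when the mean of recent evaluation scores
    (e.g. weekly accuracy on freshly labeled samples) falls more
    than `max_drop` below the mean recorded at deployment time."""
    baseline = statistics.mean(baseline_scores)
    recent = statistics.mean(recent_scores)
    return (baseline - recent) > max_drop
```

In practice this check would be scheduled at whatever cadence the business problem dictates, and paired with input-distribution (data drift) checks, since ground-truth labels often arrive late.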

The Linley Group


Designers no longer need to worry about the costs of deep-learning acceleration: Nvidia is making the technology available for free. The company has extracted the deep-learning accelerator (NVDLA) from its Xavier autonomous-driving processor and is offering it for use under a royalty-free open-source license. It's managing the NVDLA project as a directed community, which it supports with comprehensive documentation and instructions. Nvidia delivers the NVDLA core as synthesizable Verilog RTL code, along with a step-by-step SoC-integrator manual, a run-time engine, and a software manual. The company's strategy in creating the open-source project is to foster more-widespread adoption of neural-network inference engines. It expects to thereby benefit from greater demand for its expensive GPU-based training platforms. Most neural-network developers train their models on Nvidia GPUs, and many use the CUDA Deep Neural Network (cuDNN) library and software-development kit (SDK) to run models built in Caffe2, PyTorch, TensorFlow, and other popular frameworks.

AI & ML Course for Managers


In this chapter, we will learn the process of machine learning and several important concepts through real-life applications. We will start with the basics of machine learning and, by the end, will be ready to build machine learning projects. We will begin with a case study of an email spam filter, one of four case studies used in the chapter. These will be followed by five exercises to ensure that you build a comfort level with these concepts.
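To give a flavour of the spam-filter case study, here is a deliberately tiny word-count scorer — a toy stand-in for illustration only, not the course's actual method (real filters typically use learned models such as naive Bayes), and all names and example messages below are invented:

```python
from collections import Counter

SPAM_EXAMPLES = ["win money now", "free prize win", "claim your free money"]
HAM_EXAMPLES = ["meeting at noon", "project update attached", "lunch tomorrow"]

def word_counts(messages):
    """Count how often each word appears across a set of messages."""
    counts = Counter()
    for message in messages:
        counts.update(message.lower().split())
    return counts

def spam_score(message, spam_counts, ham_counts):
    """Positive score: the message's words occur more often in the
    spam training examples than in the ham ones."""
    words = message.lower().split()
    return (sum(spam_counts[w] for w in words)
            - sum(ham_counts[w] for w in words))

spam_counts = word_counts(SPAM_EXAMPLES)
ham_counts = word_counts(HAM_EXAMPLES)
```

Even this crude counting scheme shows the core loop of the chapter: learn statistics from labeled examples, then score new inputs against them.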

Richard Bartle interview: How game developers should think about sapient AI characters


Richard Bartle is one of the leading academics on video games, and is a senior lecturer and honorary professor of computer game design at the University of Essex in the United Kingdom. He might seem an unusual choice to talk about the ethics of artificial intelligence, but video game developers have long grappled with the ethics of creating virtual worlds with AI beings in them. Not only do they have to consider the ethics of what they create in their own worlds; game designers also have to consider how much control to grant players over the AI characters who inhabit those worlds. If game developers are the gods, then players can be the demigods. He recently gave a fascinating talk on this topic at the IEEE Conference on Games in London in August. I interviewed him about our shared interest in the intersection of AI, games, and ethics. He is in the midst of writing a book about the ethics of AI in games.

Helping the Disabled Live an Active Life with Robots & Exoskeletons

Work in Japan for engineers

In the House of Councillors election of July 2019, two new Diet members were elected who each have severe physical disabilities. One is an amyotrophic lateral sclerosis (ALS) patient and the other has cerebral palsy. Both are barely able to move their bodies and require large electric wheelchairs to get about. The assistance of a carer is also necessary. In particular, the ALS patient depends on an artificial respirator and is even unable to speak.