shutter


Canon EOS R5 II review: Canon's most powerful camera yet puts Sony on notice

Engadget

Move over Sony: Canon is trying to take the lead in bleeding-edge tech for mirrorless cameras. The company's new $4,300, 45-megapixel EOS R5 II offers advanced features like eye-tracking autofocus (AF) that can't be found on any recent Sony model. The new camera is also pushing Sony's A1 and other models in the key areas of speed, video and autofocus. And it's arguably more desirable than Canon's own upcoming flagship R1, as it has nearly double the resolution. I've had the R5 II for a few weeks, evaluating not only its practicality and speed for both professionals and serious amateurs, but also how it stacks up against Sony's A1, the gold standard for high-resolution mirrorless cameras.


Streamlined shape of cyborg cockroach promotes traversability in confined environments by gap negotiation

Kai, Kazuki, Long, Le Duc, Sato, Hirotaka

arXiv.org Artificial Intelligence

Centimeter-scale cyborg insects have a potential advantage for applications in narrow environments where humans cannot operate. To realize such tasks, researchers have developed small printed-circuit boards (PCBs) that an insect can carry and that can control it. The electronic components usually remain bare on the board, and the whole board is mounted on the platform animal, resulting in an uneven morphology of the whole cyborg, with sharp edges. It is well known that a streamlined body shape in artificial vehicles or robots contributes to effective locomotion by reducing drag forces in media. However, little is known about how the entire body shape impacts the locomotor performance of a cyborg insect. Here, we developed a 10 mm by 10 mm board that provides electrical stimulation via Sub-GHz communication and investigated the impact of the physical arrangement of the board using the Madagascar hissing cockroach. We compared the success rate of gap negotiation between cyborgs with a mounted board and with an implanted board, and found that the latter outperformed the former. We demonstrated that our cyborg cockroach with an implanted board could faithfully follow locomotion commands issued via antennal or cercal stimulation and traverse a narrow gap such as an air vent cover. In contrast to the conventional arrangement, our cyborg insects are suitable for application in concealed environments.


Dynamic Fairness Perceptions in Human-Robot Interaction

Claure, Houston, Candon, Kate, Shin, Inyoung, Vázquez, Marynel

arXiv.org Artificial Intelligence

People deeply care about how fairly they are treated by robots. The established paradigm for probing fairness in Human-Robot Interaction (HRI) involves measuring the perception of the fairness of a robot at the conclusion of an interaction. However, such an approach is limited, as interactions vary over time, potentially causing changes in fairness perceptions as well. To validate this idea, we conducted a 2x2 user study with a mixed design (N=40) in which we investigated two factors: the timing of unfair robot actions (early or late in an interaction) and the beneficiary of those actions (either another robot or the participant). Our results show that fairness judgments are not static. They can shift based on the timing of unfair robot actions. Further, we explored using perceptions of three key factors (reduced welfare, conduct, and moral transgression) proposed by a Fairness Theory from Organizational Justice to predict momentary perceptions of fairness in our study. Interestingly, we found that the reduced welfare and moral transgression factors were better predictors than all factors together. Our findings reinforce the idea that unfair robot behavior can shape perceptions of group dynamics and trust towards a robot, and they pave the way for future research on moment-to-moment fairness perceptions.


Canon EOS R5 II hands-on: Nifty eye-tracking autofocus and reduced overheating problems

Engadget

As it teased earlier, Canon has launched the R5 II, a successor to the powerful but imperfect EOS R5. With a new 45-megapixel backside-illuminated (BSI) stacked sensor, it not only has superior specs for video, shooting speeds and more, but also adds advanced features like eye-controlled AF. The R5 II was launched alongside Canon's new flagship, the EOS R1, which I've covered in a separate post. With the R5 II, Canon has mostly addressed the original's primary problem: overheating while shooting video. To see what's different and try out some of the new features, I spent some time with an R5 II pre-production camera in Phoenix, Arizona. The R5 II's body is largely the same as before, but there are a couple of key changes.


Self-supervised Learning of Event-guided Video Frame Interpolation for Rolling Shutter Frames

Lu, Yunfan, Liang, Guoqiang, Wang, Lin

arXiv.org Artificial Intelligence

This paper makes the first attempt to tackle the challenging task of recovering arbitrary frame rate latent global shutter (GS) frames from two consecutive rolling shutter (RS) frames, guided by novel event camera data. Although events possess high temporal resolution, beneficial for video frame interpolation (VFI), a hurdle in tackling this task is the lack of paired GS frames. Another challenge is that RS frames are susceptible to distortion when capturing moving objects. To this end, we propose a novel self-supervised framework that leverages events to guide RS frame correction and VFI in a unified framework. Our key idea is to estimate the displacement field (DF), i.e., the non-linear, dense 3D spatiotemporal motion of all pixels during the exposure time, allowing for the reciprocal reconstruction between RS and GS frames as well as arbitrary frame rate VFI. Specifically, the displacement field estimation (DFE) module is proposed to estimate the spatiotemporal motion from events to correct the RS distortion and interpolate the GS frames in one step. We then combine the input RS frames and DF to learn a mapping for RS-to-GS frame interpolation. However, as the mapping is highly under-constrained, we couple it with an inverse mapping (i.e., GS-to-RS) and RS frame warping (i.e., RS-to-RS) for self-supervision. As there is a lack of labeled datasets for evaluation, we generate two synthetic datasets and collect a real-world dataset to train and test our method. Experimental results show that our method yields comparable or better performance than prior supervised methods.
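The GS-to-RS direction of the self-supervision above can be illustrated with a minimal sketch: if each rolling-shutter row is read out at a distinct time and a latent global-shutter frame exists for each row time, an RS frame can be re-rendered row by row and compared against the observed input. This is only an illustration of the cycle-consistency idea, not the paper's implementation; the function names, the one-GS-frame-per-row assumption, and the L1 loss are all assumptions made for the sketch.

```python
import numpy as np

def gs_to_rs(gs_stack: np.ndarray) -> np.ndarray:
    """Compose a rolling-shutter frame from a stack of latent global-shutter
    frames, one per row readout time (gs_stack shape: [H, H, W]).
    Row r of the RS frame is copied from the GS frame captured at time t_r."""
    H = gs_stack.shape[1]
    return np.stack([gs_stack[r, r, :] for r in range(H)])

def self_supervised_loss(pred_gs_stack: np.ndarray,
                         observed_rs: np.ndarray) -> float:
    """GS -> RS reconstruction error: re-render the RS frame from predicted
    GS frames and compare against the observed RS input (L1, illustrative)."""
    return float(np.abs(gs_to_rs(pred_gs_stack) - observed_rs).mean())
```

In the actual framework this inverse mapping is driven by the event-based displacement field rather than a simple per-row copy, but the supervision signal has the same shape: the reconstructed RS frame must match the one the camera recorded.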


Shutter, the Robot Photographer: Leveraging Behavior Trees for Public, In-the-Wild Human-Robot Interactions

Lew, Alexander, Thompson, Sydney, Tsoi, Nathan, Vázquez, Marynel

arXiv.org Artificial Intelligence

Deploying interactive systems in-the-wild requires adaptability to situations not encountered in lab environments. Our work details our experience of how architecture choice impacts behavior reusability and reactivity when deploying a public interactive system. In particular, we introduce Shutter, a robot photographer and a platform for public interaction. In designing Shutter's architecture, we focused on adaptability for in-the-wild deployment, while developing a reusable platform to facilitate future research in public human-robot interaction. We find that behavior trees enable reactivity, especially in group settings, and encourage the design of reusable behaviors.
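The reactivity and reusability the abstract attributes to behavior trees come from their core composites, which can be sketched in a few lines. This is not Shutter's actual code; the node classes, the blackboard dictionary, and the greet/idle example are illustrative assumptions showing the standard Sequence/Fallback semantics.

```python
SUCCESS, FAILURE, RUNNING = "success", "failure", "running"

class Sequence:
    """Ticks children in order; stops at the first non-successful child.
    Re-ticking the tree from the root every cycle is what makes the
    system reactive: conditions are re-checked on each tick."""
    def __init__(self, *children):
        self.children = children
    def tick(self, blackboard):
        for child in self.children:
            status = child.tick(blackboard)
            if status != SUCCESS:
                return status
        return SUCCESS

class Fallback:
    """Ticks children in order; succeeds at the first non-failing child,
    so later children act as fallback behaviors."""
    def __init__(self, *children):
        self.children = children
    def tick(self, blackboard):
        for child in self.children:
            status = child.tick(blackboard)
            if status != FAILURE:
                return status
        return FAILURE

class Condition:
    def __init__(self, predicate):
        self.predicate = predicate
    def tick(self, blackboard):
        return SUCCESS if self.predicate(blackboard) else FAILURE

class Action:
    def __init__(self, effect):
        self.effect = effect
    def tick(self, blackboard):
        return self.effect(blackboard)

def say(phrase):
    """Hypothetical leaf: record what the robot would say."""
    def effect(blackboard):
        blackboard["said"] = phrase
        return SUCCESS
    return effect

# Illustrative photographer behavior: greet a detected person, else idle.
tree = Fallback(
    Sequence(Condition(lambda bb: bb.get("person_detected", False)),
             Action(say("greet"))),
    Action(say("idle")),
)
```

Because each subtree is a self-contained node with a uniform `tick` interface, a behavior like the greet sequence can be reused unchanged inside a different tree, which is the reusability argument the paper makes.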


Unwrap a new gadget over the holidays? Try out these 6 tech tips, tricks

USATODAY - Tech Top Stories

Can't figure out how to use your new tech toy? While you may have found a new phone, smart speaker, tablet or laptop under the tree this holiday season, you might be a little overwhelmed with all its features. In fact, whether you're tech-savvy or tech-shy, many of us stick to what we know and repeat those actions over and over, as opposed to venturing a little outside our comfort zone. That's OK, of course, but should you want to learn a few tech tips and tricks – to help save you time, money and stress – we've got a half-dozen ideas here for you, covering a wide range of popular products. Typing on your iPhone and want to undo what you just wrote?


To see proteins change in a quadrillionth of a second, use AI

#artificialintelligence

Have you ever had an otherwise perfect photo ruined by someone who moved too quickly and caused a blur? Scientists have the same issue while recording images of proteins that change their structure in response to light. This process is common in nature, so for years researchers have tried to capture its details. But they have long been thwarted by how incredibly fast it happens. Now a team of researchers from the University of Wisconsin Milwaukee and the Center for Free-Electron Laser Science at the Deutsches Elektronen-Synchrotron in Germany have combined machine learning and quantum mechanical calculations to get the most precise record yet of structural changes in a photoactive yellow protein (PYP) that has been excited by light.


Motion-from-Blur: 3D Shape and Motion Estimation of Motion-blurred Objects in Videos

Rozumnyi, Denys, Oswald, Martin R., Ferrari, Vittorio, Pollefeys, Marc

arXiv.org Artificial Intelligence

We propose a method for jointly estimating the 3D motion, 3D shape, and appearance of highly motion-blurred objects from a video. To this end, we model the blurred appearance of a fast moving object in a generative fashion by parametrizing its 3D position, rotation, velocity, acceleration, bounces, shape, and texture over the duration of a predefined time window spanning multiple frames. Using differentiable rendering, we are able to estimate all parameters by minimizing the pixel-wise reprojection error to the input video via backpropagating through a rendering pipeline that accounts for motion blur by averaging the graphics output over short time intervals. For that purpose, we also estimate the camera exposure gap time within the same optimization. To account for abrupt motion changes like bounces, we model the motion trajectory as a piece-wise polynomial, and we are able to estimate the specific time of the bounce at sub-frame accuracy. Experiments on established benchmark datasets demonstrate that our method outperforms previous methods for fast moving object deblurring and 3D reconstruction.
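The blur model at the heart of this pipeline is simple to state: a blurred frame is the average of sharp renders taken at sub-frame times across the exposure window. The sketch below illustrates just that averaging step with a toy 1D renderer; the function names, the sample count, and the moving-pixel scene are assumptions for illustration, not the paper's differentiable renderer.

```python
import numpy as np

def render_blurred(render, t_frame, exposure, n_samples=8):
    """Approximate a motion-blurred frame by averaging sharp renders at
    sub-frame times across the exposure interval, mirroring the idea of
    averaging graphics output over short time intervals."""
    ts = np.linspace(t_frame, t_frame + exposure, n_samples)
    return np.mean([render(t) for t in ts], axis=0)

# Toy scene: a bright pixel moving one position per time unit in a 1D image.
def toy_render(t):
    img = np.zeros(10)
    img[int(t) % 10] = 1.0
    return img

# Averaging four sub-frame renders spreads the unit brightness over the
# positions the pixel visits during the exposure.
blurred = render_blurred(toy_render, 0.0, 3.0, n_samples=4)
```

In the paper's setting the renderer is differentiable and the object's trajectory, shape, and the exposure gap are the optimized parameters, so the same averaging is what lets the pixel-wise reprojection error be backpropagated through the blur.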


Facebook to shutter its facial recognition features: Talking Tech podcast

USATODAY - Tech Top Stories

Hit play on the player above to hear the podcast and follow along with the transcript below. This transcript was automatically generated, and then edited for clarity in its current form. Welcome back to Talking Tech. Brett Molina is off today. Facebook has been in the news a lot lately.