VR/AR

Augmented reality takes makeover tech to the next level.

It’s called the “makeup counter panic,” and we’ve all experienced it. At least, those of us who wear makeup. I’m at the store, trying to find a new foundation or shade of lipstick, but I have no idea what to go with. The tester samples look gross, and not one counter person is in sight. Do I steal a sample from an actual container, trust the color on the label, and hope for the best? It can be a mess.

Luckily, makeup has a new weapon in the fight against makeup counter panic: augmented reality.

Tech developers are teaming up with makeup, hair, and skincare companies to create augmented reality beauty simulators. Creative agency Holition recently paired up with Rimmel London on an app, set to come out this September, that lets customers try on makeup and then purchase it directly in the app. Mary Kay has a makeover app, and you can find independent apps like YouCam Makeup and ChouChou: Virtual Hair Makeover.

Makeup tech developer ModiFace has made apps for Urban Decay, L’Oréal, Avon Beauty, and other companies. Jennifer Tidy, ModiFace’s vice president of partnerships, said it gives makeup wearers the ultimate tool in “try before you buy.”

“Having the option to explore and try on an orange lipstick or a purple hair color really stretches the boundaries,” Tidy said. “[You can] have some fun with it without being horrified with the results.”
How ModiFace’s live effects work.

Tidy said augmented reality makeovers are great for people who buy their makeup or hair products online, or who are shy about getting professional consultations in person. ModiFace mostly licenses its technology to outside makeup companies, but it also has several apps of its own for trying on things like makeup, hair colors, and cosmetic surgery procedures. They have in-app purchases like more contact lens colors or celebrity hairstyles, but Tidy said the apps are more like product testers, showing beauty companies the type of work ModiFace can do.

Makeup technology first came around in the late ’90s with the Cosmopolitan Virtual Makeover CD-ROM, which The New York Times hailed as “part of a growing wave of interactive computer applications directed at women.” The CD-ROM, which I owned as a teenager and used on a regular basis, let users upload a photo of themselves and create different “looks” from the safety of their computer. It was fun but rudimentary, definitely encouraging more outlandish, garish looks than realistic makeovers.

I asked Tidy what’s changed since Virtual Makeover.

“Everything,” she laughed. “That Cosmopolitan CD you can put into your computer and upload is a bit of a dinosaur, but it was the original. The technology now has a level of accuracy that makes showcasing the products in a realistic way easier. We’re light years ahead compared to where we started in 1999.”
YouCam can add hair and makeup to your face for an AR makeover.

Computer programs like Virtual Makeover required users to manually trace their eyes, lips, and eyebrows, which mostly yielded troublesome results. Augmented reality apps like ModiFace instead use assisted tracking to shape those difficult areas: after a user takes or uploads a picture, ModiFace has them move targeted points around to frame the eyes and lips, as well as the hair if they’re adding a new color to their existing style. Other apps, like YouCam, don’t require any adjustments at all; they automatically map your face and apply the makeup. The result isn’t as precise as ModiFace’s, but it still works well.
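
To give a rough sense of how that automatic face mapping works, here is a minimal sketch using the open-source MediaPipe Face Mesh and OpenCV libraries – this is not ModiFace’s or YouCam’s actual implementation, and the lipstick color, blend strength, and file names are placeholders.

```python
# Rough illustration of automatic face mapping for a virtual lipstick try-on.
# Uses the open-source MediaPipe Face Mesh; purely a sketch of the technique.
import cv2
import mediapipe as mp
import numpy as np

def apply_lip_tint(image_bgr, color_bgr=(90, 60, 170), alpha=0.4):
    """Detect lip landmarks and blend a tint over them."""
    mp_face_mesh = mp.solutions.face_mesh
    # Collect the landmark indices that outline the lips.
    lip_indices = {i for edge in mp_face_mesh.FACEMESH_LIPS for i in edge}

    with mp_face_mesh.FaceMesh(static_image_mode=True, max_num_faces=1) as mesh:
        results = mesh.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))

    if not results.multi_face_landmarks:
        return image_bgr  # no face found; return the photo unchanged

    h, w = image_bgr.shape[:2]
    landmarks = results.multi_face_landmarks[0].landmark
    points = np.array(
        [(int(landmarks[i].x * w), int(landmarks[i].y * h)) for i in lip_indices],
        dtype=np.int32,
    )

    # Fill a rough lip region, then blend the chosen lipstick color onto it.
    mask = np.zeros((h, w), dtype=np.uint8)
    cv2.fillPoly(mask, [cv2.convexHull(points)], 255)
    overlay = np.empty_like(image_bgr)
    overlay[:] = color_bgr
    blended = cv2.addWeighted(image_bgr, 1 - alpha, overlay, alpha, 0)
    result = image_bgr.copy()
    result[mask > 0] = blended[mask > 0]
    return result

if __name__ == "__main__":
    photo = cv2.imread("selfie.jpg")  # any well-lit, front-facing photo
    cv2.imwrite("selfie_lipstick.jpg", apply_lip_tint(photo))
```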

Having tried some of the latest makeup and hair apps, I was impressed. Augmented reality has taken makeover technology to the next level. I tried out a couple of looks and sometimes couldn’t tell what was real and what was digitally added. I was especially surprised to learn I could pull off mauve lipstick. Tidy herself said she discovered a new look thanks to the technology.

“I tried some hairstyles and I realized I looked good in bangs, which I wouldn’t have thought of before,” Tidy said.

Even with all the advancements, the technology isn’t perfect. It absolutely depends on good positioning and lighting. With a bad photo, the makeup can look like it’s simply piled onto the face. Tidy recommended taking the photo in a well-lit area without a lot of shade or wind, ideally indoors.

“The thing with this type of technology is that the photo you upload really does have an impact,” Tidy said. “If you’re in a shadow or your hair is covering up your face, there’s only so much compensation we can do to help that.”

The hairstyles also tend to land on the wrong side of the uncanny valley. While hair colors, like contact lenses, are surprisingly easy to integrate into your natural look, full hairstyles end up feeling a bit layered on. ChouChou works all right, with cute hairstyles from Tokyo, but it still falls short of looking like your actual hair – more a prototype than something to fool the neighbors. Even though the hairstyles don’t look the most realistic, the app did help me discover my inner Khaleesi.

Several of these apps don’t offer many, if any, options for men. A few have men’s hairstyles, but far fewer than the options aimed at women, and I couldn’t find any apps whose makeup or cosmetic surgery sections had comprehensive male options. That’s something companies should definitely take note of and improve in the future.

But the technology is constantly changing, and new features are being added all the time. Tidy said ModiFace’s newest venture was Live Video, where makeovers can be shown in real-time on the face with adjustable mapping. ModiFace’s Urban Decay app uses Live Video for trying on new lipstick shades, shown a few paragraphs above. Other apps like YouCam include tutorials for how to achieve the different looks, including costume makeup like “Queen Cleopatra.” We’re also seeing better brand integration and purchasing options, like with the upcoming Rimmel London Get the Look app. Tidy said it’s all about giving people a way to experiment with their look, and possibly discover something new about themselves in the process.

“You’re not committed to having to purchase something without trying it on first,” Tidy said. “You can explore every single color of lipstick or eyeshadow or eyeliner, with that zero commitment factor. It’s an entertaining way of exploring the brands.”

This post first appeared on UploadVR.

Read More

Nvidia claims its new chip is the ‘world’s fastest GPU’ for game and VR design

Nvidia today announced the Quadro P6000 graphics card for workstations, featuring what it calls the “world’s fastest GPU,” or graphics processing unit. The card is aimed at designers who create complex simulations for everything from engineering models to virtual reality games.

The Quadro P6000 is based on Nvidia’s new Pascal graphics architecture, and its GPU has 3,840 processing cores. It can reach 12 teraflops of computing performance, roughly twice that of the previous generation.
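
As a back-of-the-envelope check on that figure, 3,840 cores each performing one fused multiply-add (two floating-point operations) per cycle works out to about 12 teraflops at a clock around 1.56 GHz – the clock speed here is an assumption, since the announcement doesn’t state it.

```python
# Rough sanity check of the 12-teraflop figure. The boost clock below is an
# assumption, not an official spec from this announcement.
cores = 3840                      # CUDA cores in the Quadro P6000
flops_per_core_per_cycle = 2      # one fused multiply-add = 2 floating-point ops
boost_clock_hz = 1.56e9           # assumed ~1.56 GHz boost clock
tflops = cores * flops_per_core_per_cycle * boost_clock_hz / 1e12
print(f"{tflops:.1f} TFLOPS")     # ~12.0
```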

Nvidia unveiled the new platform for artists, designers, and animators at the Siggraph graphics technology conference in Anaheim, Calif. Nvidia says the new workstation GPU and new improvements in software will enable professionals to work faster and with greater creativity.

Nvidia is also announcing the VRWorks 360 Video software development kit, which lets VR developers build applications that stitch 4K video feeds into 360-degree videos. It is also adding graphics acceleration to the mental ray film-quality renderer, and it is releasing Nvidia Optix 4, the latest version of its GPU ray-tracing engine for creating ultrarealistic imagery. Artists can use it to work with scenes as large as 64 gigabytes.

“Often our artists are working with 50GB or higher datasets,” says Steve May, the chief technology officer at Pixar. “The ability to visualize scenes of this size interactively gives our artists the ability to make creative decisions more quickly. We’re looking forward to testing the limits of Pascal and expect the benefits to our workflows to be huge.”

The Quadro graphics cards will be available soon from major computer makers and system integrators.

Read More

Pokémon Go is nice, but here’s what *real* augmented reality will look like

Stop referring to Pokémon Go as augmented reality. Yes, the popularity of this game gives us a glimpse into consumers’ hunger for AR games, but the technology to interact with the real world is just not there yet.

True augmented reality “requires computer vision and dynamic mapping of the real world environment.” In contrast, Pokémon Go characters rely solely on Google Maps’ fixed latitude and longitude. If true augmented reality technologies were in use here, then real-time depth mapping and object recognition would empower game characters to interact with the real world, keeping them out of incongruous play areas.
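
To make the depth-mapping point concrete, here is a minimal sketch – not from Niantic or any shipping AR engine – of how a per-pixel depth map lets a renderer hide a virtual character behind real objects; the function and array names are purely illustrative.

```python
# Minimal sketch of depth-based occlusion for AR: draw a virtual character only
# where it is closer to the camera than the real scene, so it can appear to
# step behind real objects. All inputs here are illustrative placeholders.
import numpy as np

def composite_character(frame, character_rgba, scene_depth, character_depth):
    """Blend an RGBA character sprite over a camera frame, honoring occlusion.

    frame:           (H, W, 3) uint8 camera image
    character_rgba:  (H, W, 4) uint8 pre-positioned character sprite
    scene_depth:     (H, W) float32 depth map of the real scene, in meters
    character_depth: float, intended distance of the character, in meters
    """
    visible = scene_depth > character_depth             # real scene is farther away
    alpha = (character_rgba[..., 3:] / 255.0) * visible[..., None]
    blended = frame * (1.0 - alpha) + character_rgba[..., :3] * alpha
    return blended.astype(np.uint8)
```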

True AR is still a holy grail, however, and the key to making augmented reality work will be AI-driven image recognition made possible by “unsupervised learning.” This will make it possible for devices to assess any image or video and understand it as well as we can – all in real time.
The limitations of current AI

To understand what’s meant by unsupervised learning, let’s first take a look at what’s currently in play. Leading companies like Google, IBM, and Facebook have been hard at work on AI technologies and improved image recognition, but they’ve faced limitations. Most of the current work in this area has focused on Deep Learning techniques – the creation of artificial neural networks with hundreds of computational layers.

Deep Learning offers massive computational power but contains two critical flaws. First, as systems reach 1,000 layers, they plateau in computational ability and are hamstrung by an inability to scale further.

Second, the learning process required by this model depends on hundreds of hours of human guidance – we call this “supervised learning.” Computer scientists correct wrong answers and eventually the system learns from its mistakes. It’s effective within very specific data sets – like answering questions for a quiz show, or playing a board game – but cannot be applied to environments full of constantly changing variables, such as the natural world.
‘Humanizing’ AI

And that’s a key word to consider in this conversation: natural. Deep Learning fails when applied to AR because it is an artificial system asked to understand naturally occurring environments.

Living creatures, of course, have no trouble with these tasks. To enable computers to understand an environment just as accurately, it takes processes that more closely resemble those that have evolved naturally. Stemming directly from the latest brain research, unsupervised learning is the answer here.

Unsupervised learning differs from Deep Learning in that it doesn’t require human intervention. Instead of being set loose on a limited data set with the intention of getting more answers right than wrong – “this is a building, this is not a building” – a system that employs unsupervised learning amasses thousands of signatures based on the similar and dissimilar factors it “sees” in each image. These include signatures for colors, shapes, negative spaces, and compounded combinations of all of them.

The system gains a full understanding of what a building looks like as a structure (and even what specific landmark it might be) rather than just answering whether or not that structure is or is not a building. And because a self-learning system compiles signatures – a relatively simple and repetitive task – rather than trying to process “facts,” it requires fewer computational layers to operate and can scale infinitely – just like the human brain.
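
As a toy illustration of the unsupervised idea – not the system the author describes – the sketch below turns each image into a simple color-histogram “signature” and groups similar images with k-means clustering, with no human labels anywhere. The photo folder path and cluster count are placeholders.

```python
# Toy "unsupervised learning" on images: no labels are ever provided.
# Each image becomes a color-histogram signature; similar signatures are
# grouped together. Real systems use far richer learned features, but the
# principle is the same: group by similarity, not by correct/incorrect answers.
import glob
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

def color_signature(path, bins=8):
    """Return a flattened, normalized RGB color histogram for one image."""
    img = np.asarray(Image.open(path).convert("RGB").resize((128, 128)))
    hist, _ = np.histogramdd(img.reshape(-1, 3), bins=(bins, bins, bins),
                             range=((0, 256),) * 3)
    return (hist / hist.sum()).ravel()

paths = sorted(glob.glob("photos/*.jpg"))   # placeholder image folder
signatures = np.stack([color_signature(p) for p in paths])

# Cluster the signatures into groups of visually similar images -- no human
# ever tells the system what any cluster "means".
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(signatures)
for path, label in zip(paths, labels):
    print(label, path)
```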
What real AI-powered augmented reality could enable

Armed with a more complete understanding, AI-driven AR games will be able to fully integrate fictional characters’ actions and game-play into natural settings. But improved game-play is only the tip of the iceberg.

This is powerful technology, capable of transforming countless sectors. For example, consumers will soon be able to make better use of the thousands of photos taken on their phones and devices. Imagine a built-in AI assistant that can automatically organize images and videos, execute ultra-specific searches in milliseconds, and make sharing recommendations based on subject matter in images. Millions of forgotten photos will suddenly gain new life, and the potential of visual search can be unlocked.

Image recognition enabled by unsupervised learning will make driverless cars exponentially safer. Cars will spot pedestrians with complete clarity, perfectly identify road debris, and follow detours that may not yet be mapped.

In the medical field, doctors in the midst of surgical procedures will be able to get real-time information and comparisons from hundreds of related operations – guiding doctors based on challenges and solutions found all over the world.

The applications are nearly endless.

It is accurate to depict AR as a movement that will only take shape when the technology can flawlessly integrate with the real world. Pokémon Go is not there yet, but the technology will be available soon. The same innovations that will allow a virtual Pikachu to hide behind a real tree will be driven by unsupervised learning – a breakthrough that will fundamentally change how AI systems view the world.

Read More


At CES in Las Vegas, Sony showed off the development of an attachable single-lens display module, a concept model under the working title SmartEyeglass Attach!

Sony’s SmartEyeglass Attach! is something it debuted at the Consumer Electronics Show this year: a smart display module that adds connected intelligence to any kind of existing eyewear, be they optical frames, protective goggles, or sunglasses. The concept design is very similar in practice to Google Glass, so it might seem surprising that Sony has now released its first video of the device in action following the CES debut, just as Google has wound down its initial Glass project and gone back to the drawing board.

But in fact, Sony is a much more logical place for Glass-style hardware to take root and prosper. The company does, after all, make a range of devices that suit a number of verticals. It has action cameras, for instance, with niche appeal mostly limited to those who enjoy extreme sports, and it also serves enterprise needs with variations on its VAIO line. The SmartEyeglass Attach! refines Sony’s own Glass-like SmartEyeglass for more flexibility, creating a modular device that’s well suited for industrial, enterprise, and action sports users without requiring consumers to pick up expensive additional gear.

Sony is encouraging developers to build for the platform, and given that this is already v2 of its smart eyewear concept, it might be the best bet when it comes to building for the long term. Sony could swoop in and capitalize on any doubt produced by Google’s walking back of the Glass Explorer program to take an early lead in this market, especially if it gears its efforts toward serving the niche industrial, enterprise, and healthcare applications where Glass was showing a lot of promise.

Read More