Huawei News

Huawei AR Engine installed 1.1 billion times, with over 2,000 integrated applications

Huawei AR Engine Image 1

At the recent FBEC2021, Huawei’s VR/AR product line president Li Tengyue said that Huawei’s AR Engine has been installed 1.1 billion times, covers 106 device models, and has been integrated by over 2,000 applications. AR Engine is an engine for building augmented reality applications on Android. By integrating core AR algorithms, it provides basic AR capabilities such as motion tracking, environment tracking, and human body and face tracking.

With AR Engine, developers can give their applications AR capabilities such as motion tracking, human body and face tracking, and environment tracking, helping applications blend the virtual world with the real world and create new visual experiences and interaction methods. AR Engine uses these capabilities to better understand the real world and provide users with new interactive experiences that merge the virtual and the real:

The motion tracking capability of AR Engine mainly uses the device camera to identify feature points and track how those feature points move, continuously tracking the position and posture of the device.

The environment tracking capability of AR Engine can identify planes (such as the ground and walls) and objects (such as cubes, rectangles, and circles), and can also estimate the light intensity around a plane.

Motion tracking

Motion tracking continuously tracks the position and posture of the device relative to its surroundings, establishing a unified geometric space between the virtual digital world and the real physical world and providing your application with a basic platform for blending the two. At present, it includes the following capabilities: motion tracking and hit detection.

Continuously and stably track changes in the device’s position and posture relative to the surrounding environment, while outputting the three-dimensional coordinates of surrounding feature points.

AR Engine identifies feature points through the device camera and tracks how they move, then combines these observations with the device’s inertial sensors to continuously track the device’s position and posture.

By aligning the device camera pose provided by AR Engine with the pose of the virtual camera that renders 3D content, you can render virtual objects from the observer’s perspective and superimpose them on the camera image, fusing the virtual and the real.
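
The alignment step can be illustrated with a small sketch. This is not AR Engine's actual API; the function name, the row-major camera-to-world rotation matrix, and the pinhole intrinsics (fx, fy, cx, cy) are assumptions for illustration only. A world-space virtual point is moved into the camera frame using the tracked pose, then projected onto the image:

```python
def world_to_pixel(point_w, cam_pos, cam_rot, fx, fy, cx, cy):
    """Project a world-space point into the camera image.

    cam_rot: 3x3 row-major camera-to-world rotation (the pose orientation
    an AR framework typically reports); cam_pos: camera position in world
    coordinates. Names and layout are hypothetical, for illustration.
    """
    # d = vector from camera to point, expressed in world coordinates
    d = [point_w[i] - cam_pos[i] for i in range(3)]
    # world -> camera: p_c = R^T d, since cam_rot maps camera axes to world axes
    p_c = [sum(cam_rot[r][i] * d[r] for r in range(3)) for i in range(3)]
    if p_c[2] <= 0:
        return None  # point is behind the camera, nothing to draw
    # pinhole projection with intrinsics (fx, fy, cx, cy)
    return (fx * p_c[0] / p_c[2] + cx, fy * p_c[1] / p_c[2] + cy)
```

For an un-rotated camera at the origin, a point two meters straight ahead projects to the principal point (cx, cy), which is exactly where a virtual object placed there should be drawn over the camera image.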

Hit detection

Users can select points of interest in the real environment by tapping the screen of the device.

AR Engine uses hit detection to map a tapped point on the screen to a point in the real environment: it casts a ray from the camera through the selected screen point and returns the intersection of that ray with a plane (or feature point). Hit detection lets users interact with virtual objects.
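
The ray-casting idea behind hit detection can be sketched in a few lines. This is a conceptual illustration, not AR Engine's API; vectors are plain (x, y, z) tuples:

```python
def ray_plane_hit(origin, direction, plane_point, plane_normal, eps=1e-9):
    """Intersect the tap ray with a detected plane; return the 3D hit point,
    or None if there is no valid intersection."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    denom = dot(direction, plane_normal)
    if abs(denom) < eps:
        return None  # ray is parallel to the plane
    t = dot([p - o for p, o in zip(plane_point, origin)], plane_normal) / denom
    if t < 0:
        return None  # intersection lies behind the ray origin
    return tuple(o + t * d for o, d in zip(origin, direction))
```

For example, a ray cast straight down from a camera one meter above a detected floor plane returns the floor point directly below it, which is where the application would anchor a virtual object.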

Environment tracking

Environment tracking tracks the lighting, planes, images, objects, and surfaces around the device, helping your application blend virtual objects realistically into the real physical world. At present, it includes the following capabilities: illumination estimation, plane detection, image tracking, environment Mesh, plane semantics, 3D cloud recognition, and target semantics.

Illumination estimation

Track the lighting information around the device and estimate the intensity of the ambient light.

AR Engine tracks the lighting around the device and calculates the average light intensity of the camera image. Illumination estimation lets virtual objects blend into the real lighting environment and look more realistic.

Plane detection

Detect horizontal and vertical planes (such as floors or walls).

AR Engine can recognize clusters of feature points on horizontal and vertical planes (such as the ground or walls) and identify the boundaries of each plane. Your application can use these planes to place virtual objects.
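
Once a plane and its boundary are known, an application typically checks that a placement point falls inside that boundary before anchoring a virtual object there. A sketch of such a check, using an even-odd ray-casting point-in-polygon test in the plane's local 2D coordinates (an illustration of the idea; AR Engine's own boundary representation is not shown here):

```python
def inside_plane_boundary(px, pz, boundary):
    """Even-odd test: is the 2D point (px, pz), expressed in the plane's
    local coordinates, inside the detected plane's boundary polygon?
    boundary is a list of (x, z) vertices in order."""
    inside = False
    n = len(boundary)
    for i in range(n):
        (x1, z1), (x2, z2) = boundary[i], boundary[(i + 1) % n]
        if (z1 > pz) != (z2 > pz):  # edge crosses the horizontal line through pz
            x_cross = x1 + (pz - z1) * (x2 - x1) / (z2 - z1)
            if px < x_cross:  # crossing lies to the right of the point
                inside = not inside
    return inside
```

With a square tabletop boundary, a point in the middle of the table passes the test, while a point off the edge fails, so the object is only placed on the recognized surface.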

Image tracking

Recognize and track the position and posture of 2D images.

AR Engine provides image recognition and tracking: it detects whether a user-provided image is present in the scene and outputs the recognized image’s pose.

With image recognition and tracking, you can build augmented reality around images (such as posters or covers) in real-world scenes. You provide a set of reference images; when these images appear within the device camera’s field of view, AR Engine tracks them for your AR application in real time, enriching scene understanding and interaction.

Environment Mesh

Calculate and output environment mesh data for the current view in real time, which can be used for virtual-real occlusion and similar scenarios.

AR Engine can output an environment Mesh in real time. The output includes the device’s pose in space and a three-dimensional mesh for the current camera view. This capability currently requires a model with a rear depth camera, and only static scenes can be scanned.

Through the environment Mesh capability, you can place virtual objects on any reconstructable surface, no longer limited to horizontal and vertical planes. You can also use the reconstructed mesh for virtual-real occlusion and collision detection, so that a virtual character accurately knows the surrounding three-dimensional space, helping you deliver a more immersive AR experience.
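
The occlusion use case boils down to a per-pixel depth comparison between the rendered virtual object and the reconstructed real geometry. A minimal sketch, with depth maps as nested lists of distances in meters (an assumed representation, not the SDK's):

```python
def occlusion_mask(virtual_depth, mesh_depth, margin=0.01):
    """Per-pixel occlusion test: True where reconstructed real geometry
    (depth rendered from the environment mesh) is closer to the camera
    than the virtual object, so that virtual pixel should be hidden.
    margin absorbs small depth noise."""
    return [
        [m + margin < v for m, v in zip(mesh_row, virt_row)]
        for mesh_row, virt_row in zip(mesh_depth, virtual_depth)
    ]
```

A renderer would discard the virtual fragments where the mask is True, which is how a virtual character appears to walk behind a real couch.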

Plane semantics

Recognize the semantics of planes (such as desktops, floors, and doors). AR Engine can identify what a detected plane is; it currently recognizes desktops, floors, walls, seats, ceilings, doors, windows, and beds.

3D cloud recognition

Recognize and track the position and posture of 3D objects.

AR Engine detects whether 3D objects that the user has configured on the cloud side are present in the scene. When these objects appear in the device camera’s field of view, the cloud returns recognition results to the device in real time, enabling augmented reality based on 3D objects in real-world scenes.

Target semantics

Identify the label and shape of objects. AR Engine can recognize the labels and shapes of objects; it currently recognizes tables and chairs, and the supported shapes are cubes, circles, and rectangles.

2D cloud recognition

Identify and track 2D images configured on the cloud side.

AR Engine detects whether 2D images that the user has configured on the cloud side are present in the scene. When these images appear in the device camera’s field of view, the cloud returns the recognition result to the device in real time, enabling interaction based on 2D images in real-world scenes.

Human body and face tracking

Track faces, human bodies, and gestures in real time, helping your application let users interact with virtual objects.

Gesture Recognition

Recognize specific gestures and actions. AR Engine recognizes a variety of specific gestures, outputs the recognized gesture category, and provides the screen coordinates of the palm detection box; both left and right hands are supported. When multiple hands appear in the image, only the result and coordinates of one hand (the clearest, with the highest confidence) are returned. Front and rear camera switching is supported.

Through gesture recognition, virtual objects can be superimposed at the position of a person’s hand, and state changes can be triggered by different gestures, giving your AR application basic interactive functions and new gameplay.

Hand bone tracking

Recognize and track the positions and postures of 21 hand joint points to form a hand skeleton model, distinguishing left and right hands.

AR Engine provides single-hand joint and bone recognition, outputting advanced hand features such as fingertip positions and hand bones. When multiple hands appear in the image, only the result and coordinates of one hand (the clearest, with the highest confidence) are returned. Currently only the front depth cameras of the Mate 20 Pro and Mate 20 RS are supported.

With hand skeleton recognition, you can superimpose virtual objects at more precise hand positions, such as the fingertips or palm, and use the hand skeleton to drive a virtual hand through richer, more detailed movements. This gives your AR application enhanced interactive features and novel gameplay.

Human body gesture recognition

Recognize specific human postures; six specific postures are currently supported.

AR Engine provides single-person body posture recognition: it recognizes six static body postures, outputs the results, and supports switching between front and rear cameras.

You can use body posture recognition in scenarios that need to recognize actions and trigger events, such as interface control or game action recognition. It is a core function of motion-sensing applications, giving your AR applications long-distance remote control and collaboration capabilities and enriching their interactive experience.

Human skeleton tracking

Recognize and track the 2D positions of 23 body skeleton points (or the 3D positions of 15 skeleton points), supporting one or two people.

AR Engine provides body joint recognition for one or two people, supports outputting 2D skeletons (screen coordinate system) and 3D skeletons (spatial coordinate system, combined with SLAM), and supports switching between front and rear cameras.

Through body skeleton recognition, you can superimpose virtual objects on designated parts of the human body, such as the left shoulder or right ankle, and use the skeleton to drive a virtual doll through richer, more detailed movements. This gives your AR application a wide range of interactive functions and novel gameplay.

Human Mask

Identify and track the area occupied by human bodies in the current image, and provide depth information for that area.

AR Engine recognizes and tracks the contours of one or two human bodies, outputting human contour masks and the corresponding skeleton points in real time.

With human contour tracking, you can use the contour mask to occlude virtual objects and scenes, for example changing the virtual background in AR photos or letting a virtual doll hide behind a person. The mask enables a more natural occlusion effect, further enhancing the realism and viewing experience of AR applications.

Facial expression tracking

Calculate the pose of the face and the value of each expression parameter in real time, which can be used to directly drive an avatar’s expressions.

AR Engine tracks facial expressions: it obtains facial image information, understands the facial expression in real time, and converts it into expression parameters, which can be used to control an avatar’s expression.

AR Engine provides 64 expression parameters, covering the eyes, eyebrows, eyeballs, mouth, tongue, and other major facial features.
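
Driving an avatar from expression parameters is typically a blendshape evaluation: each parameter weights a per-vertex offset added to the neutral face mesh. A minimal sketch; the parameter name `jaw_open` and the data layout are hypothetical, not AR Engine's actual parameter set:

```python
def apply_expression_params(neutral_verts, expression_deltas, params):
    """Blendshape evaluation: avatar vertices = neutral mesh plus the
    weighted per-vertex offset of each active expression parameter.
    expression_deltas maps a (hypothetical) parameter name to one
    (x, y, z) offset per vertex; weights are typically in [0, 1]."""
    verts = [list(v) for v in neutral_verts]
    for name, weight in params.items():
        for v, d in zip(verts, expression_deltas[name]):
            for axis in range(3):
                v[axis] += weight * d[axis]
    return verts
```

With a half-strength `jaw_open` parameter, each affected vertex moves half of its full offset, which is how a tracked face smoothly drives the avatar between neutral and fully open.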

Face Mesh

Calculate the pose and Mesh model data of the face in real time; the Mesh model follows the shape and movement of the face.

AR Engine provides high-precision face Mesh modeling and tracking. After acquiring facial image information, it builds a realistic Mesh model in real time; the model changes position and shape as the face moves and deforms, accurately capturing motion in real time.

The Mesh has more than 4,000 vertices and more than 7,000 triangular faces, finely outlining the contours of the face.

Face health detection

Calculate face health information in real time, including key health indicators (such as heart rate).

AR Engine provides human health detection capabilities, including health information such as heart rate, respiration rate, facial health status, and heart rate waveform signals.

The advantages of AR Engine include:

Software and hardware optimization: modules, chips, algorithms, and the HarmonyOS system are integrated, and hardware acceleration delivers augmented reality capabilities with better results and lower power consumption.

Differentiated capabilities: building on the unique hardware of Huawei devices, it provides gesture and body recognition interaction in addition to basic SLAM positioning and environment understanding.

Multi-device support: HUAWEI AR Engine can be integrated on many Huawei devices, and its downloads have exceeded 1.1 billion.

 


Huawei Mate 70 Air shows up in a promotional photo

Huawei Mate 70 Air

Huawei is getting ready to release a new super-thin phone called the Mate 70 Air. Some details about this phone have already been leaked online, and now a new picture has appeared showing what it looks like.

What the phone looks like

A promotional poster for the Huawei Mate 70 Air was leaked today. The phone looks similar to other phones in Huawei’s Mate 70 family, but it’s much thinner.

The back of the phone has a round camera area with Huawei’s XMAGE camera logo on it. The edges of the phone appear to be shiny and polished. The back cover has a textured surface, similar to another Huawei model called the Mate 70 Pro+.

What’s inside the phone

There isn’t much information available yet, but the Mate 70 Air might be a mid-priced phone. Early reports said it would have 12GB of memory and either 256GB or 512GB of storage space. However, newer information suggests it will actually have 16GB of memory.

The phone is expected to have a large 6.9-inch screen. It will also have a high-quality main camera.

Other features

Unlike Apple’s iPhone Air, the Huawei Mate 70 Air will have a slot for a physical SIM card. The phone will run on Huawei’s HarmonyOS 5.1 software. It will come in three colors: black, white, and gold.

Huawei is expected to launch this phone in November, around the same time as its Mate 80 series.


Huawei Mate 80 series: the latest information


Huawei Mate 80 series

Tipster Digital Chat Station has confirmed that the Chinese tech giant Huawei will release new phones called the Mate 80 series in November. These new phones will have better designs, cameras, and speed.

The company is expected to compete with Apple’s new iPhone 17 phones. Huawei will make four different Mate 80 phones: the regular Mate 80, Mate 80 Pro, Mate 80 Pro+, and Mate 80 RS. Each phone will have different cameras. The regular Mate 80 will have a 50-megapixel camera.

The Pro version will have a 50-megapixel camera with a larger sensor, and the Pro+ and RS models will have even larger 50-megapixel sensors.

The Pro model might have a flat screen and face recognition, while the Pro+ and RS models might have more steeply curved screens. The top model, the Mate 80 RS, might have a 6.9-inch screen with special display technology.

This screen should be brighter, use less battery power, and last longer than normal phone screens. Huawei might also use a strong titanium frame and special glass on the back. Inside the phones, there will be a new chip called the Kirin 9030. The battery might be bigger than 6,000mAh and charge very fast: 100W with a wire and 80W without wires. The phones might support 5G and satellite communication.

The phones will use Huawei’s own software called HarmonyOS. This might work better with other Huawei devices and have more AI features. Some people think these improvements will help Huawei compete better with Apple, especially in battery life, communication, and smart features.


Huawei’s Xu Zhijun steps down as chairman

Huawei HiSilicon

The Chinese chip company HiSilicon Semiconductor just changed who’s in charge. Its former head, Xu Zhijun, stepped down as legal representative and chairman, and the company’s CEO, Gao Ji, is taking over these roles.

Xu has been working at Huawei since 1993 and still holds other important roles there: he is a vice chairman and periodically serves as the rotating chairman. His departure from this position is probably just Huawei moving people between jobs within the company.

HiSilicon is completely owned by Huawei and makes computer chips. The leadership change might mean Huawei is trying to make things work better.

HiSilicon started in October 2004 when Huawei took its computer chip design team and turned it into its own company. The main office is in Shenzhen, but they also have offices in Beijing, Shanghai, and other cities in China.
