šŸ¤–Figure’s Voice-Controlled Robot Showcase

PLUS: Nvidia Launches ASL AI

Reading time: 5 minutes

Today we will discuss:

Anonysis – AI-Powered Anonymous Feedback & Insights

Anonysis makes it easy to collect anonymous input and turn it into actionable insights with AI-driven analysis. By removing fear of reprisal, organizations get the unfiltered truth, leading to better decisions, stronger teams, and more loyal customers.

šŸ” AI-powered sentiment analysis – Spot trends and concerns instantly.
šŸ’¬ Encourage candor – Foster open, honest dialogue without fear.
šŸ“Š Turn feedback into action – Identify opportunities for real improvement.

An example: What do you think about Workflow Wednesday (AI Workflow Newsletter)? Tell us what you think; share your anonymous feedback here


Key Points 

  • Figure unveiled Helix, a new Vision-Language-Action model designed for humanoid robots.

  • Helix combines visual data and natural language prompts, enabling real-time control of robots for various tasks.

šŸ‘Ø‍šŸ’»News – Figure founder and CEO Brett Adcock has introduced Helix, a new machine learning model designed for humanoid robots. Helix is described as a “generalist” Vision-Language-Action (VLA) model, aimed at helping robots better interpret and respond to their environment through a combination of vision and language processing.

šŸ“–For context – VLAs are a relatively new development in robotics, using visual input and language commands to help machines understand and carry out tasks. One of the most recognized examples so far is Google DeepMind’s RT-2, which trains robots by combining video data with large language models (LLMs) to improve their ability to perform various actions.

šŸ¤–In action – Helix works in much the same way, using visual data alongside language prompts to guide a robot’s actions in real time. According to Figure, Helix can recognize and handle thousands of unfamiliar household items, regardless of shape, size, color, or material, all through simple natural language instructions. The goal is to make interactions intuitive: say a command, and the robot gets it done.
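To make the pattern concrete, here is a minimal sketch of a VLA-style control loop: a camera frame plus a natural-language instruction go in, and a robot action comes out. Figure has not published Helix’s API, so every class, method, and shape below is a hypothetical illustration, not their implementation.

```python
# Minimal, hypothetical sketch of the VLA pattern: a camera frame plus a
# natural-language instruction go in, a robot action comes out. Figure has
# not published Helix's API; every class and method here is illustrative.

import numpy as np

class ToyVLAPolicy:
    """Stand-in for a VLA model. A real one would run a vision encoder
    and a language model; this stub only fixes the interface shape."""

    def __init__(self, action_dim: int = 7):
        self.action_dim = action_dim  # e.g., 7 arm-joint targets

    def predict_action(self, frame: np.ndarray, instruction: str) -> np.ndarray:
        assert frame.ndim == 3  # expect an H x W x C camera image
        return np.zeros(self.action_dim)  # placeholder joint targets

def control_step(policy: ToyVLAPolicy, frame: np.ndarray, instruction: str) -> np.ndarray:
    """One tick of a closed loop: the policy is re-queried on every new
    frame, which is what lets the robot react to the scene in real time."""
    return policy.predict_action(frame, instruction)

if __name__ == "__main__":
    policy = ToyVLAPolicy()
    fake_frame = np.zeros((480, 640, 3), dtype=np.uint8)
    print(control_step(policy, fake_frame, "place the cookies in the open drawer"))
```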

šŸ¦¾A case in point – Figure demonstrates this with examples like, “Hand the bag of cookies to the robot on your right,” or, “Receive the bag of cookies from the robot on your left and place it in the open drawer.” These tasks involve two robots working together, with one assisting the other to complete an action.

However, the technology is still in its early stages, and much of what’s seen in polished demonstration videos requires significant behind-the-scenes effort to achieve.


Key Points 

  • Nvidia’s Signs platform is creating a validated ASL dataset with 400,000 video clips of 1,000 signs.

  • The platform offers real-time feedback through AI analysis of webcam footage, supporting learners at all skill levels.

ā˜•News – Nvidia has launched Signs, a new AI platform designed to support American Sign Language (ASL) learning and development.

The platform is focused on building a validated dataset for both learners and developers working on ASL-based AI applications. Nvidia aims to expand this dataset to 400,000 video clips, covering 1,000 signed words. Each sign is reviewed by fluent ASL users and interpreters, ensuring accuracy and creating a reliable visual dictionary and learning tool.
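For a sense of what a “validated” clip might look like to developers, here is one possible record shape. Nvidia has not published Signs’ actual schema, so every field name below is our illustration, not their format.

```python
# Hypothetical shape for one clip in a validated sign corpus. Nvidia has
# not published Signs' schema; every field name below is illustrative.
clip_record = {
    "gloss": "THANK-YOU",                      # the signed word demonstrated
    "video_path": "clips/thank_you_0001.mp4",  # one of the target 400,000 clips
    "signer_id": "contributor-042",            # volunteer who signed on camera
    "validated": True,                         # reviewed by a fluent ASL user
    "reviewer_id": "interpreter-007",          # who performed the review
}
```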

šŸ™ŒWhy this matters – ASL is the third most widely used language in the United States, yet AI tools built with ASL data remain limited compared to those for English and Spanish. The Signs platform aims to support ASL education while encouraging the development of more accessible AI technologies.

šŸ¤“How it works – Learners can access a library of validated ASL signs demonstrated by a 3D avatar. The platform also features an AI tool that analyzes webcam footage, offering real-time feedback on signing. Users of all levels can contribute by signing specific words, helping expand the open-source dataset.
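Nvidia has not detailed how Signs scores webcam footage, but a common open-source way to build the same kind of real-time feedback pairs hand-landmark extraction with a distance score against a validated reference pose. A rough sketch using MediaPipe and OpenCV (our assumptions, not Nvidia’s stack):

```python
# Rough sketch of webcam sign feedback: extract hand landmarks from a
# frame and score them against a validated reference pose. This is a
# common open-source approach, not Nvidia's published implementation.

import cv2
import mediapipe as mp
import numpy as np

hands = mp.solutions.hands.Hands(max_num_hands=2)

def landmarks_from_frame(frame_bgr: np.ndarray):
    """Return 21 (x, y) hand landmarks as a flat array, or None if no hand."""
    result = hands.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not result.multi_hand_landmarks:
        return None
    points = result.multi_hand_landmarks[0].landmark
    return np.array([(p.x, p.y) for p in points]).flatten()

def feedback_score(frame_bgr: np.ndarray, reference: np.ndarray):
    """Lower is better: L2 distance between the learner's observed hand
    pose and a reference pose for the target sign."""
    observed = landmarks_from_frame(frame_bgr)
    if observed is None:
        return None  # no hand detected in this frame
    return float(np.linalg.norm(observed - reference))
```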

šŸ‘Whatā€™s more? Nvidia plans to use this dataset to develop AI applications that help bridge communication between deaf and hearing communities. The data will also be made publicly available, offering a valuable resource for building accessible technologies such as AI agents, digital human applications, and video conferencing tools.

šŸ™†šŸ»‍ā™€ļøWhat else is happening?

In this week’s Workflow Wednesday, we provided AI workflows for the tasks our readers needed help with. Have a question you want answered? Join here

This week’s theme was AI Optimization, where we covered the following topics:

šŸ‘©šŸ¼‍šŸš’Discover mind-blowing AI tools

  1. Learn How to Use AI - Starting January 8, 2025, we’re launching Workflow Wednesday, a series where we teach you how to use AI effectively. Lock in early bird pricing now and secure your spot. Check it out here

  2. OpenTools AI Tools Expert - Find the perfect AI tool to supercharge your workflow. This GPT is connected to our database, so you can ask in-depth questions about any AI tool directly in ChatGPT (free)

  3. Speechnotes - A speech-to-text tool that allows you to transcribe audio and video recordings, as well as dictate notes using your voice

  4. Artisse - An innovative AI-powered photography app that allows users to create personalized and hyper-realistic self-photos

  5. Practina AI - A marketing automation platform that helps businesses with digital marketing needs

  6. Homestyler - Offers a range of tools and features for interior design, home renovation, and real estate

  7. Inkey - An AI-powered platform that offers a range of tools to assist students in their writing tasks

  8. Viroll - An AI-powered video editing tool that helps users create highlight clips from their videos

  9. Brewed - Allows users to design UI components with the help of AI

How likely is it that you would recommend the OpenTools newsletter to a friend or colleague?


Interested in featuring your services with us? Email us at [email protected]