Jingxi Xu AI Research

About
Jingxi Xu’s AI research portfolio collects machine learning frameworks and robotic control systems that translate human intent into robotic action. The work centers on assistive and rehabilitation technologies, such as robotic hand orthoses that help stroke patients regain motor control. These systems use multimodal biosignals, specifically surface electromyography (sEMG), to predict user-intended movement in real time. By integrating generative AI techniques, the research addresses data scarcity in medical robotics, enabling robust control models that adapt to individual users without requiring large amounts of manually collected data.

The portfolio features several notable projects, including ChatEMG and GEOTACT. ChatEMG uses synthetic data generation to improve the control performance of robotic hand orthoses, enabling manipulation and feedback for patients with neurological conditions. GEOTACT focuses on tactile-based object retrieval, allowing robots to identify and extract objects buried in granular media using touch sensors alone. These projects employ algorithms for active tactile exploration and zero-shot intent detection, exploring how robots can perceive and act in environments where visual information is limited. The research also encompasses visual navigation, multi-arm motion planning, and optimal control.

This work is intended for researchers in robotics, biomedical engineering, and human-computer interaction. Healthcare providers and occupational therapists can draw on the insights and wearable technologies developed for stroke recovery, while roboticists can apply the tactile sensing and manipulation policies to industrial or domestic tasks. The tools are most useful in scenarios where data is limited or where interaction with hidden objects is necessary.
The research specifically focuses on "sparsity and scarcity," providing resources for developers working with small datasets or complex sensing requirements. This research differs from standard robotics software through its integration of clinical needs with machine learning. The tools are specifically designed to handle the variability of human biosignals and the physical requirements of rehabilitation. By combining tactile intelligence with intent inferral, the research establishes a framework for human-robot collaboration. Additionally, the availability of codebases for projects like ChatEMG and ReactEMG on GitHub provides a transparent and accessible foundation for the broader robotics community.
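As a rough illustration of the window-based sEMG processing that intent-detection systems of this kind typically build on, the sketch below extracts two standard time-domain features (mean absolute value and waveform length) from overlapping windows of a multichannel signal. The window length, step size, feature choices, and simulated stream are illustrative assumptions, not the portfolio's actual pipeline.

```python
import numpy as np

def extract_features(window):
    """Compute two common time-domain sEMG features per channel:
    mean absolute value (MAV) and waveform length (WL)."""
    mav = np.mean(np.abs(window), axis=0)
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)
    return np.concatenate([mav, wl])

def sliding_windows(signal, window_size, step):
    """Yield overlapping windows over a (samples, channels) sEMG stream."""
    for start in range(0, len(signal) - window_size + 1, step):
        yield signal[start:start + window_size]

# Hypothetical stream: 2 s of 8-channel sEMG sampled at 1 kHz
rng = np.random.default_rng(0)
stream = rng.standard_normal((2000, 8))

# 200-sample windows with 50-sample hop -> one 16-dim feature vector each
features = np.array([extract_features(w) for w in sliding_windows(stream, 200, 50)])
print(features.shape)  # (37, 16)
```

A downstream classifier (the learned part of any real intent-detection system) would then map each feature vector to a predicted hand gesture or movement intent.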
Pros & Cons
Pros
Utilizes synthetic data generation to overcome data scarcity in rehabilitation robotics.
Supports zero-shot, low-latency intent detection for immediate user interaction.
Provides tactile sensing policies that allow for object retrieval in complex granular media.
Research is validated through high-tier peer-reviewed publications and awards.
Offers open-source access to several codebases and datasets via GitHub.
Cons
Most tools are research-oriented and require technical expertise to implement.
Full functionality often depends on specific hardware like robotic orthoses or tactile sensors.
Real-world medical application requires professional clinical supervision.
Documentation is primarily focused on academic audiences rather than end-users.
Use Cases
Researchers can utilize the GEOTACT framework to develop robots capable of identifying objects through touch in obstructed environments.
Stroke patients can benefit from the MyHand orthosis, which uses AI to translate muscle signals into hand movements.
Bioengineers can use the ChatEMG codebase to generate synthetic sEMG data for training more robust gesture recognition models.
Platform
Features
• panoramic visual navigation
• multi-arm motion planning
• robotic hand orthosis control
• zero-shot model adaptation
• active exploration policies
• tactile object retrieval
• synthetic biosignal generation
• surface EMG intent detection
FAQs
What is ChatEMG?
ChatEMG is an AI framework that creates synthetic electromyography data to improve the control systems of robotic hand orthoses. This addresses the challenge of insufficient training data available from stroke patients who cannot perform extensive movement trials.
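ChatEMG's actual generator is a learned model conditioned on short recorded "prompts" of sEMG; the toy sketch below mimics only that autoregressive prompt-then-generate interface, with a hypothetical stand-in predictor in place of the trained network.

```python
import numpy as np

def generate_synthetic_emg(prompt, model, n_samples):
    """Autoregressively extend a short recorded sEMG 'prompt' into a
    longer synthetic sequence (sketch of the ChatEMG-style interface;
    `model` here is any next-sample predictor, not the real network)."""
    seq = list(prompt)
    for _ in range(n_samples):
        context = np.array(seq[-len(prompt):])  # fixed-length context window
        seq.append(model(context))
    return np.array(seq)

# Toy stand-in predictor: damped mean of the context plus small noise
rng = np.random.default_rng(1)
toy_model = lambda ctx: 0.9 * ctx.mean() + 0.01 * rng.standard_normal()

prompt = rng.standard_normal(50)  # 50 recorded samples from one channel
synthetic = generate_synthetic_emg(prompt, toy_model, 500)
print(synthetic.shape)  # (550,): the prompt plus 500 generated samples
```

The value of this setup for rehabilitation is that a few seconds of data from a patient can seed arbitrarily long synthetic sequences for classifier training, rather than requiring extensive movement trials.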
Can the robotic hand orthosis detect user intent in real-time?
Yes, the ReactEMG project focuses on zero-shot, low-latency intent detection via biosignals. This allows the robotic system to predict and execute the user's desired movements immediately without lengthy calibration sessions.
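ReactEMG's detector is a learned model; as a minimal stand-in that conveys the low-latency idea, the sketch below flags intent onset when the rectified-mean amplitude of a short window exceeds a multiple of the resting baseline. The threshold, window length, and simulated signal are all assumptions for illustration.

```python
import numpy as np

def detect_intent(stream, fs=1000, win_ms=50, threshold=2.0):
    """Hypothetical low-latency onset detector (a simple stand-in for a
    learned intent model): report the time of the first short window
    whose rectified-mean amplitude exceeds `threshold` x baseline."""
    win = int(fs * win_ms / 1000)
    baseline = np.mean(np.abs(stream[:win]))  # resting-level estimate
    for start in range(win, len(stream) - win + 1, win):
        window = stream[start:start + win]
        if np.mean(np.abs(window)) > threshold * baseline:
            return start / fs  # detection time in seconds
    return None

rng = np.random.default_rng(2)
rest = 0.1 * rng.standard_normal(500)    # 0.5 s of rest-level sEMG
active = 1.0 * rng.standard_normal(500)  # muscle activation begins at 0.5 s
onset = detect_intent(np.concatenate([rest, active]))
print(onset)  # 0.5
```

Because decisions are made per 50 ms window, detection latency is bounded by the window length; a learned model like ReactEMG's replaces the fixed threshold to handle signal variability across users without per-session calibration.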
How does GEOTACT use tactile sensing?
GEOTACT implements tactile exploration policies that allow robots to find and retrieve objects buried in materials like sand or grain. It relies on touch sensors to recognize object geometry when visual sensors are occluded.
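GEOTACT learns its exploration policy end-to-end; purely to illustrate search under touch-only sensing, the sketch below substitutes a deterministic lawn-mower sweep driven by a binary touch probe. The grid workspace, probe function, and object placement are hypothetical.

```python
def tactile_sweep(grid, probe):
    """Toy touch-only search (illustrative, not GEOTACT's learned policy):
    sweep the workspace in a boustrophedon (lawn-mower) pattern, relying
    solely on a binary touch probe to detect the buried object."""
    for y in range(len(grid)):
        # Alternate sweep direction on each row to minimize travel
        xs = range(len(grid[0])) if y % 2 == 0 else reversed(range(len(grid[0])))
        for x in xs:
            if probe(grid, x, y):  # touch sensor fires: contact made
                return (x, y)
    return None  # workspace exhausted without contact

probe = lambda g, x, y: g[y][x] == 1  # 1 marks the buried object's cell

grid = [[0] * 5 for _ in range(5)]
grid[3][3] = 1  # object buried at x=3, y=3, invisible to cameras
result = tactile_sweep(grid, probe)
print(result)  # (3, 3)
```

A learned policy improves on such a fixed sweep by using the geometry of each contact to decide where to probe next, cutting the number of touches needed before retrieval.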
Are the datasets and codebases accessible for external use?
Many of the research projects provide links to public GitHub repositories and datasets. You can find these links alongside the respective research publications listed on the portfolio website for collaborative or academic use.
Pricing Plans
Research Access
Free Plan
• Open-source code
• Pre-trained models
• Synthetic datasets
• Hardware specifications
• Academic publications
Alternatives
QUT Centre for Robotics
QUT Centre for Robotics conducts world-leading research in intelligent robotics and autonomous systems, focusing on various applications and offering educational opportunities.
Marcus M. Scheunemann Robotics Research
Develop autonomous, curiosity-driven robot behaviors and improve human-robot interaction with open-source research frameworks and predictive information models.