Future Interfaces Group: The next phase of computer-human interaction

Engadget
17 Dec 2018 · 06:53

Summary

TL;DR: The Future Interfaces Group at Carnegie Mellon University develops innovative human-computer interaction technologies, exploring new ways to communicate beyond traditional inputs like keyboards and touchscreens. Their projects include transforming smartwatches into advanced input devices through high-speed accelerometers and leveraging environmental sound recognition for contextual awareness. They also experiment with camera-based systems for real-time monitoring in smart environments. These cutting-edge innovations aim to create more intuitive, assistive technology. However, the team acknowledges the challenges of security, privacy, and practicality in the adoption of these technologies at scale.

Takeaways

  • 📱 Over 100 million devices can now distinguish between knuckle and finger touches and detect when the phone is lifted to the ear.
  • 🤖 The Future Interfaces Group (FIG) at Carnegie Mellon University, established in 2014, explores new modes of human-computer interaction.
  • 💡 FIG is sponsored by major tech companies like Google, Intel, and Qualcomm, focusing on speculative and experimental technologies.
  • 🖥️ The lab's vision includes creating intelligent environments where smart devices have contextual awareness and can assist users more naturally.
  • 👂 One project, Ubicoustics, enables devices to listen to ambient sounds, like chopping vegetables or blending, to understand their context.
  • ⌚ Another innovation involves transforming smartwatches into versatile input devices using high-speed accelerometers to detect micro-vibrations.
  • 🖐️ Gesture-based interaction is being explored, such as snapping fingers to control lights or clapping to activate devices.
  • 📸 The lab also explores turning cameras into sensors, enabling smart environments to recognize objects or people without active human monitoring.
  • 🚗 FIG is testing real-time parking solutions using camera-based technology to reduce congestion and pollution in cities.
  • ⚖️ FIG balances technological innovation with privacy concerns, recognizing that no system is 100% secure and that trust hinges on the perceived value of new technologies.

Q & A

  • What is the Future Interfaces Group (FIG) and where is it located?

    -The Future Interfaces Group (FIG) is a research lab at Carnegie Mellon University in Pittsburgh, Pennsylvania, that focuses on human-computer interaction. It was founded in 2014 and works on speculative projects to improve communication between humans and machines beyond traditional methods like keyboards and touchscreens.

  • What are some examples of projects developed by FIG?

    -Examples of FIG projects include touchscreens that can detect if you're using a knuckle or finger, and devices that can recognize if you're lifting the phone to your ear. The lab works on ideas that expand how machines interact with humans, using sensors, sound recognition, and other contextual cues.

  • What is the grand vision of the FIG lab?

    -The grand vision of the FIG lab is to create intelligent environments where devices, like smart speakers or watches, are aware of their surroundings and can interact with humans using nonverbal cues, such as gestures, gaze, and sounds, similar to how humans communicate with each other.

  • How does the FIG lab increase implicit input bandwidth in devices?

    -FIG increases implicit input bandwidth by enhancing devices' ability to understand contextual information. For example, they use sound recognition to determine activities in a room, such as distinguishing between chopping vegetables or running a blender, so that devices can better assist users.

  • What is the Ubicoustics project and how does it work?

    -The Ubicoustics project uses sound recognition to understand what is happening in an environment. By training computers to recognize distinctive sounds, like chopping vegetables or using a blender, the project explores how devices can use microphones to gather contextual information about their surroundings (a minimal sketch of this approach appears after this Q&A).

  • How does the lab use smartwatches in its research?

    -The lab experiments with smartwatches by increasing their sensitivity. They overclock the smartwatch's accelerometer to detect micro-vibrations, which lets the watch sense subtle interactions, such as finger taps, transforming it into an input platform for controlling devices like lights or TVs with gestures (see the vibration-sensing sketch after this Q&A).

  • What is Zensors, and how does it utilize camera feeds?

    -Zensors is a startup that uses camera-based technology to turn existing cameras in public places into sensor feeds. This system can recognize actions like counting the number of people on a sofa or identifying objects like laptops or phones, helping automate monitoring in places like libraries, restaurants, or streets (a frame-counting sketch appears after this Q&A).

  • How is FIG’s camera-based technology applied in real-world scenarios?

    -FIG’s camera-based technology is applied in real-world scenarios like real-time parking management, where cameras count available parking spaces and guide drivers to open spots, helping reduce congestion and pollution. This technology uses existing infrastructure to provide practical solutions (a per-spot occupancy sketch follows this Q&A).

  • What challenges do technologies developed by FIG face when moving from research to commercialization?

    -Technologies developed by FIG face challenges such as practicality, feasibility, and ensuring security and privacy when transitioning from research to commercialization. These technologies must balance innovation with real-world impact and user concerns, especially in terms of data privacy and usability.

  • How does FIG address privacy and security concerns in its projects?

    -FIG acknowledges that no technology is 100% secure or privacy-preserving. The lab focuses on making technologies that strike the right balance between innovation and privacy, ensuring that users understand and accept the potential trade-offs when adopting new devices with microphones, sensors, or cameras.
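
The Q&A above stays at the level of ideas; the sketches below make a few of them concrete. First, Ubicoustics-style sound recognition generally works by converting a short microphone clip into a log-mel spectrogram and handing it to a trained classifier. This is a minimal sketch of that pattern, not FIG's actual pipeline: the label set, sample rate, and `model` object are illustrative assumptions.

```python
# Hypothetical sketch: classify ambient sound the way Ubicoustics-style
# systems do -- featurize a short audio clip, then run a trained classifier.
import numpy as np
import librosa  # any audio library with mel spectrograms would work

LABELS = ["chopping", "blender", "faucet", "silence"]  # illustrative classes

def featurize(clip: np.ndarray, sr: int = 16000) -> np.ndarray:
    """Convert ~1 s of raw audio into a log-mel spectrogram 'image'."""
    mel = librosa.feature.melspectrogram(y=clip, sr=sr, n_mels=64)
    return librosa.power_to_db(mel, ref=np.max)

def classify(clip, model, sr=16000):
    # Add batch and channel dimensions, then let any trained image-style
    # classifier (the hypothetical `model`) pick the most likely sound.
    feats = featurize(clip, sr)[np.newaxis, ..., np.newaxis]
    probs = model.predict(feats)[0]
    return LABELS[int(np.argmax(probs))]
```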
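
The smartwatch work depends on sampling the accelerometer far faster than usual, because a finger tap shows up as a brief burst of high-frequency vibration that ordinary arm motion lacks. Here is a minimal sketch of that detection step, with the sample rate, band edges, and threshold as illustrative assumptions:

```python
# Hypothetical sketch: detect a finger tap in a high-rate accelerometer
# stream by filtering out slow arm motion and thresholding the remaining
# vibration energy. Rates and thresholds are illustrative only.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 4000  # Hz; stock watches sample ~100 Hz, vibration sensing needs kHz rates

def tap_energy(accel: np.ndarray) -> np.ndarray:
    """Energy in the 20 Hz - 1 kHz band, where tap transients live."""
    sos = butter(4, [20, 1000], btype="bandpass", fs=FS, output="sos")
    return sosfiltfilt(sos, accel) ** 2

def detect_tap(accel: np.ndarray, threshold: float = 0.5) -> bool:
    # A tap appears as a short burst of broadband vibration energy.
    return tap_energy(accel).max() > threshold
```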
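
The camera-as-sensor idea reduces to running a detector over each frame and counting matches. This sketch uses OpenCV's stock HOG pedestrian detector purely for illustration; the stream URL is a placeholder, and the detector choice is ours, not necessarily what Zensors deploys.

```python
# Hypothetical sketch: turn a fixed camera into a "sensor" by counting
# detections per frame, e.g. people sitting on a sofa in a library.
import cv2  # OpenCV

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def count_people(frame) -> int:
    """Count pedestrian-shaped detections in one frame."""
    rects, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    return len(rects)

cap = cv2.VideoCapture("rtsp://example.local/camera1")  # placeholder stream URL
ok, frame = cap.read()
if ok:
    print("people in view:", count_people(frame))
cap.release()
```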
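
Finally, the parking demo can be approximated per spot: compare each marked parking rectangle against a reference image of the empty lot and count the spots that still match it. The spot coordinates and difference threshold below are illustrative assumptions, not FIG's method.

```python
# Hypothetical sketch: real-time parking availability from a fixed camera.
# A spot is "free" when its pixels still resemble a reference image of the
# empty lot. Coordinates and threshold are placeholders.
import cv2
import numpy as np

SPOTS = [(40, 120, 90, 60), (140, 120, 90, 60)]  # (x, y, w, h) per spot

def free_spots(frame, empty_ref, diff_thresh: float = 25.0) -> int:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    ref = cv2.cvtColor(empty_ref, cv2.COLOR_BGR2GRAY)
    count = 0
    for x, y, w, h in SPOTS:
        # Mean absolute pixel difference inside the spot's rectangle.
        diff = np.abs(gray[y:y+h, x:x+w].astype(float) -
                      ref[y:y+h, x:x+w].astype(float))
        if diff.mean() < diff_thresh:
            count += 1  # still looks like the empty lot -> spot is free
    return count
```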

Related Tags
Human-Computer, Gesture Control, Contextual Computing, Smart Environments, CMU Lab, Innovative Tech, Smart Devices, AI Interfaces, Future Computing, Machine Learning