Future Interfaces Group: The next phase of computer-human interaction
Summary
TL;DR: The Future Interfaces Group at Carnegie Mellon University develops innovative human-computer interaction technologies, exploring new ways to communicate beyond traditional inputs like keyboards and touchscreens. Their projects include transforming smartwatches into advanced input devices through high-speed accelerometers and leveraging environmental sound recognition for contextual awareness. They also experiment with camera-based systems for real-time monitoring in smart environments. These innovations aim to create more intuitive, assistive technology. However, the team acknowledges the challenges of security, privacy, and practicality in adopting these technologies at scale.
Takeaways
- 📱 Over 100 million devices can now distinguish between knuckle and finger touches and detect when they are lifted to the ear.
- 🤖 The Future Interfaces Group (FIG) at Carnegie Mellon University, established in 2014, explores new modes of human-computer interaction.
- 💡 FIG is sponsored by major tech companies like Google, Intel, and Qualcomm, focusing on speculative and experimental technologies.
- 🖥️ The lab's vision includes creating intelligent environments where smart devices have contextual awareness and can assist users more naturally.
- 👂 One project, Ubicoustics, enables devices to listen to ambient sounds, like chopping vegetables or blending, to understand their context.
- ⌚ Another innovation involves transforming smartwatches into versatile input devices using high-speed accelerometers to detect micro-vibrations.
- 🖐️ Gesture-based interaction is being explored, such as snapping fingers to control lights or clapping to activate devices.
- 📸 The lab also explores turning cameras into sensors, enabling smart environments to recognize objects or people without active human monitoring.
- 🚗 FIG is testing real-time parking solutions using camera-based technology to reduce congestion and pollution in cities.
- ⚖️ FIG balances technological innovation with privacy concerns, recognizing that no system is 100% secure and that trust hinges on the perceived value of new technologies.
Q & A
What is the Future Interfaces Group (FIG) and where is it located?
-The Future Interfaces Group (FIG) is a research lab at Carnegie Mellon University in Pittsburgh, Pennsylvania, that focuses on human-computer interaction. It was founded in 2014 and works on speculative projects to improve communication between humans and machines beyond traditional methods like keyboards and touchscreens.
What are some examples of projects developed by FIG?
-Examples of FIG projects include touchscreens that can detect if you're using a knuckle or finger, and devices that can recognize if you're lifting the phone to your ear. The lab works on ideas that expand how machines interact with humans, using sensors, sound recognition, and other contextual cues.
What is the grand vision of the FIG lab?
-The grand vision of the FIG lab is to create intelligent environments where devices, like smart speakers or watches, are aware of their surroundings and can interact with humans using nonverbal cues, such as gestures, gaze, and sounds, similar to how humans communicate with each other.
How does the FIG lab increase implicit input bandwidth in devices?
-FIG increases implicit input bandwidth by enhancing devices' ability to understand contextual information. For example, they use sound recognition to determine activities in a room, such as distinguishing between chopping vegetables or running a blender, so that devices can better assist users.
What is the Ubicoustics project and how does it work?
-The Ubicoustics project uses sound recognition to understand what is happening in an environment. By training computers to recognize distinctive sounds, like chopping vegetables or using a blender, the project explores how devices can use microphones to gather contextual information about their surroundings.
How does the lab use smartwatches in its research?
-The lab experiments with smartwatches by increasing their sensitivity. They overclock the accelerometer in a smartwatch to detect micro-vibrations, which allows the watch to sense subtle interactions, such as finger taps, transforming the watch into an input platform for controlling devices like lights or TVs using gestures.
What is Zensors, and how does it utilize camera feeds?
-Zensors is a startup that uses camera-based technology to turn existing cameras in public places into sensor feeds. This system can recognize actions like counting the number of people on a sofa or identifying objects like laptops or phones, helping automate monitoring in places like libraries, restaurants, or streets.
How is FIG’s camera-based technology applied in real-world scenarios?
-FIG’s camera-based technology is applied in real-world scenarios like real-time parking management, where cameras count available parking spaces and guide drivers to open spots, helping reduce congestion and pollution. This technology uses existing infrastructure to provide practical solutions.
What challenges do technologies developed by FIG face when moving from research to commercialization?
-Technologies developed by FIG face challenges such as practicality, feasibility, and ensuring security and privacy when transitioning from research to commercialization. These technologies must balance innovation with real-world impact and user concerns, especially in terms of data privacy and usability.
How does FIG address privacy and security concerns in its projects?
-FIG acknowledges that no technology is 100% secure or privacy-preserving. The lab focuses on making technologies that strike the right balance between innovation and privacy, ensuring that users understand and accept the potential trade-offs when adopting new devices with microphones, sensors, or cameras.
Outlines
📱 The Future of Human-Computer Interaction
This paragraph introduces how modern smartphones have advanced to detect different inputs, such as distinguishing between a knuckle or finger touch and recognizing when the phone is lifted to the ear. These innovations are part of ongoing projects at the Future Interfaces Group Lab at Carnegie Mellon University. Sponsored by tech giants like Google, Intel, and Qualcomm, the lab explores futuristic ways humans can interact with machines, beyond traditional interfaces like keyboards and touchscreens.
🔬 Creating the Future Interfaces Lab
The founder of the Future Interfaces Group, who joined Carnegie Mellon University (CMU) five years ago, reflects on setting up the lab. His research focused on using the human body as an interactive computing surface. With the support of students and researchers, the lab continues to push the boundaries of human-computer interaction. The grand vision involves creating intelligent environments where devices like smartphones and smart speakers understand contextual human interactions, similar to how humans use non-verbal cues in communication.
👂 Increasing Implicit Input for Devices
The lab's focus is on increasing devices' ability to understand their environment, often referred to as 'implicit input bandwidth.' One project, called Ubicoustics (short for ubiquitous acoustics), enables devices to recognize environmental sounds to infer what's happening, like detecting whether someone is chopping vegetables or blending food. Using sensors like the microphones already built into devices, the lab explores ways to collect and process contextual data affordably.
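The video does not show FIG's actual pipeline, but a minimal sketch of the underlying idea — turning short microphone clips into spectral features and training a small classifier to label kitchen activities — might look like the following. The file paths, labels, and model choice are illustrative assumptions, not the lab's implementation.

```python
# Hedged sketch of sound-based activity recognition in the spirit of
# Ubicoustics: label short audio clips ("chopping", "blender", ...) and
# train a lightweight classifier on a compact spectral summary of each clip.
# The clip paths and labels below are illustrative, not the lab's dataset.
import numpy as np
import librosa                      # audio loading and MFCC extraction
from sklearn.svm import SVC         # small, fast classifier

def mfcc_summary(path, sr=16000):
    """Load a short clip and summarize it as a fixed-length MFCC vector."""
    audio, _ = librosa.load(path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    # Mean and variance over time give a compact, duration-independent summary.
    return np.concatenate([mfcc.mean(axis=1), mfcc.var(axis=1)])

# Illustrative training data: (clip path, activity label) pairs.
training_clips = [
    ("clips/chopping_01.wav", "chopping"),
    ("clips/blender_01.wav", "blender"),
    ("clips/microwave_01.wav", "microwave"),
    # ... many more labeled clips per activity
]

X = np.stack([mfcc_summary(path) for path, _ in training_clips])
y = [label for _, label in training_clips]
model = SVC(kernel="rbf", probability=True).fit(X, y)

# At run time a device would classify each incoming chunk of microphone audio.
print(model.predict([mfcc_summary("clips/unknown.wav")]))
```

A real deployment would run such a classifier continuously over short windows of live audio rather than saved files, and would need far more labeled examples per activity than this toy list suggests.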
⌚ Transforming Devices with Enhanced Sensors
Smartwatches, which are highly capable computers, are the focus of another lab project. By overclocking the accelerometer in a smartwatch, the lab increases its ability to capture detailed micro-vibrations, enabling the watch to interpret subtle gestures and movements. This could allow users to control home devices through gestures, like snapping to turn on lights or clapping to control a TV. The lab develops hundreds of similar ideas each year, with a few transforming into startups.
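The detection pipeline isn't spelled out in the video, but a hedged sketch of the core idea — slicing the overclocked accelerometer stream into short windows and classifying each window by its vibration spectrum — could look like this. The sampling rate, window length, and gesture labels are assumptions, not the lab's actual parameters.

```python
# Hedged sketch: treat a high-speed (here assumed ~4 kHz) smartwatch
# accelerometer as a vibration sensor. Each short window of samples becomes
# a frequency-domain fingerprint that a classifier maps to a gesture label.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

SAMPLE_RATE_HZ = 4000   # assumed overclocked rate; stock watches run near 100 Hz
WINDOW = 1024           # ~0.25 s of samples per classification window

def vibration_fingerprint(window):
    """Normalized magnitude spectrum of one window of acceleration samples."""
    window = window - window.mean()            # drop gravity / DC offset
    spectrum = np.abs(np.fft.rfft(window))
    return spectrum / (spectrum.sum() + 1e-9)  # make it amplitude-independent

def train(windows, labels):
    """windows: list of 1-D sample arrays; labels: e.g. 'snap', 'clap', 'tap'."""
    X = np.stack([vibration_fingerprint(w) for w in windows])
    return RandomForestClassifier(n_estimators=100).fit(X, labels)

def classify_stream(model, samples):
    """Slide over a live sample buffer and yield one gesture label per window."""
    for start in range(0, len(samples) - WINDOW + 1, WINDOW):
        fingerprint = vibration_fingerprint(samples[start:start + WINDOW])
        yield model.predict([fingerprint])[0]
```

The key observation from the transcript is that taps in different places excite measurably different vibration patterns, which a stock 100 Hz accelerometer is too coarse to resolve; the high-speed stream makes those differences visible in the spectrum.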
📸 Turning Cameras into Sensors for Smart Environments
Some of the lab’s projects, such as Zensors, leverage existing technology like cameras to create smarter environments. They explore using video feeds to automatically analyze environments in real time, for example detecting how many people are present in a room or recognizing objects like laptops on a table. This approach could be used in public spaces or homes to offer smarter, real-time monitoring without human intervention.
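Zensors' own system isn't shown in code, but a minimal sketch of the "select a region, ask how many people are here" idea — running an off-the-shelf person detector on a frame and counting detections whose centers fall inside the region — could look like the following. The region coordinates, polling interval, and the choice of OpenCV's stock pedestrian detector are assumptions for illustration.

```python
# Hedged sketch of a camera-as-sensor question such as "how many people are
# on these sofas": detect people in a frame and count the ones whose box
# center lands inside a user-drawn region of interest.
import cv2

ROI = (200, 150, 400, 300)   # hypothetical (x, y, width, height) around the sofas

# OpenCV's bundled HOG pedestrian detector; a production system would likely
# use a stronger learned detector, but the counting logic stays the same.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def count_people(frame, roi=ROI):
    """Count detected people whose bounding-box center falls inside the ROI."""
    rects, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    rx, ry, rw, rh = roi
    count = 0
    for (x, y, w, h) in rects:
        cx, cy = x + w / 2, y + h / 2
        if rx <= cx <= rx + rw and ry <= cy <= ry + rh:
            count += 1
    return count

# Poll the camera on a coarse interval (the video mentions roughly every
# 30-60 seconds), so the feed behaves like a low-rate sensor reading.
cap = cv2.VideoCapture(0)        # stand-in for the overhead camera feed
ok, frame = cap.read()
if ok:
    print("people in the region:", count_people(frame))
cap.release()
```

The design choice worth noting is that only the answer to the question (a count, a yes/no) needs to leave the frame-processing step, which is what lets a camera act like any other sensor.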
🚗 Smart Cities and the Future of Parking
The lab collaborates with cities, such as using camera-based sensors to count cars and help solve real-world problems like parking. By utilizing existing infrastructure like traffic cameras, the system could potentially guide drivers to available parking spots, reducing congestion and air pollution. However, the success of such technologies depends on balancing practicality, feasibility, and cost with long-term value.
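The parking pilot's code isn't public in the video, but once a detector has produced car bounding boxes from the existing street cameras, the occupancy step reduces to geometry. The sketch below, with hypothetical spot and detection boxes, shows one way to mark which pre-drawn parking spots are free.

```python
# Hedged sketch of the parking-occupancy step: given car bounding boxes from
# any detector run on an existing street camera, report the pre-drawn parking
# spots that no detection overlaps enough. Boxes are (x1, y1, x2, y2) pixels.
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

def free_spots(spot_boxes, car_boxes, threshold=0.3):
    """Indices of parking spots with no sufficiently overlapping car detection."""
    return [i for i, spot in enumerate(spot_boxes)
            if all(iou(spot, car) < threshold for car in car_boxes)]

# Hypothetical example: two marked spots, one car detected over the first.
print(free_spots([(0, 0, 100, 50), (110, 0, 210, 50)], [(10, 5, 90, 45)]))
```

As the transcript points out, downstream systems only consume the resulting number; it makes no difference whether it came from a video camera or from physical sensors in the pavement.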
🔐 Ethical Considerations in Technology Adoption
The closing discussion addresses the ethical implications of advancing technologies, such as using surveillance cameras for sensing. While the research offers exciting possibilities, privacy and security concerns are inevitable. The lab believes that no technology can be 100% secure, but by making the right trade-offs and involving users in the design process, people may accept potential privacy risks if they see clear benefits.
Keywords
💡Future Interfaces Group
💡Human-computer interaction
💡Implicit input bandwidth
💡Contextual understanding
💡Ubiquitous computing
💡Sensor fusion
💡Smart environments
💡Speculative design
💡Gesture recognition
💡Computer vision
Highlights
Over a hundred million phones can now detect if you're using your knuckle or finger to touch the screen, as well as whether you're lifting the device to your ear.
The Future Interfaces Group Lab at Carnegie Mellon University in Pittsburgh, Pennsylvania, has been pioneering advancements in human-computer interaction since 2014.
The lab is backed by major sponsors like Google, Intel, and Qualcomm and develops hundreds of speculative ideas each year to enhance communication between humans and machines.
The group’s focus is on creating new modes of interaction beyond keyboards, touchscreens, mice, or even voice commands.
The lab's founder envisions intelligent environments where devices like Google Home or smartwatches have full contextual awareness, similar to human assistants.
A key area of research is enhancing 'implicit input bandwidth,' enabling devices to gather contextual information about their environment, like sound-based understanding.
The Ubicoustics project trains computers to use microphones to understand environmental sounds, such as identifying kitchen activities like chopping vegetables or running a blender.
Research in the lab shows how smartwatches can be transformed into high-precision devices by overclocking their accelerometers, detecting micro-vibrations and enabling gesture-based controls.
Innovative smart gestures, such as snapping fingers or twisting a wrist, allow users to control lights or navigate menus through gestures alone.
Several of the lab’s projects have led to real startups, such as Qeexo (touchscreen technology) and Zensors (a computer vision startup).
Zensors uses cameras in public environments, like restaurants or streets, to turn video feeds into sensor data, identifying objects, people, or even parking spaces.
Real-time parking systems are being piloted using existing city cameras, aiming to direct drivers to available spaces, reducing congestion and pollution.
The lab’s approach emphasizes practical and scalable solutions that can transition from research into real-world applications.
Challenges remain with security and privacy as cameras and microphones gain contextual awareness; however, the lab prioritizes designing technologies that balance benefits and privacy concerns.
The lab actively engages users in testing and feedback, ensuring that the value proposition of new technologies is clear, increasing the likelihood of adoption.
Transcripts
there are over a hundred million phones
that can tell if you're using your
knuckle or finger to touch the screen as
well as whether you're lifting the
device to your ear there are examples of
projects that started here at the future
interfaces group lab at Carnegie Mellon
University in Pittsburgh Pennsylvania
the lab has been around since 2014 and
counts Google Intel and Qualcomm among
its sponsors every year they develop
hundreds of speculative ideas all to do
with how we communicate with machines
beyond the mode of keyboard touchscreen
mouse or even voice we came here to see
some of their latest ideas and what they
might have to say about the future of
human-computer interaction
[Music]
I came to CMU as faculty about five
years ago and founded the future
interfaces group and we set up shop in
this building a little bit off campus so
we had lots of space to build crazy
prototypes and put things together I
wanted to build on my PhD thesis
research which was looking at how to use
the human body as a like an interactive
computing surface and so we extended a
lot of those themes and obviously I took
on master students and undergraduates
and PhD student researchers to extend
that vision and help them sort of
explore new frontiers in human-computer
interaction a grand vision that the
whole lab has has bought into is the
notion of having intelligent
environments you know right now if you
have a Google home or an Alexa or one of
these smart assistants sitting on your
kitchen countertop it's totally
oblivious to what's going on around and
that's true of your Smart Watch and
that's true of your smartphone they want
to make them truly assistive and they
can fill in all of that context like a
good human assistant would be able to do they
need to have that awareness like when
humans communicate there's these verbal
and nonverbal cues that we use like you
know gaze and gesture and all these
different things to enrich that
conversation in human-computer
interaction you don't really have that a
lot of my current work is all about
increasing implicit input bandwidth so
what I mean by that is increasing the
ability for these devices to have
contextual understanding about what's
happening around them so a good example
of this is sound we have this project
called Ubicoustics that listens to the
environment and tries to guess what's
going on
if I teleported you into my kitchen but
I blindfolded you and I started blending
something or chopping vegetables you'd
be able to know that Chris is chopping
vegetables or running the blender or
turning on a stove or running the
microwave and so we just ask ourselves
well if sound is so distinctive that
humans can do it
can we not train computers to use the
microphones that almost all of them have
you know whether it's a smart speaker or
even a smart watch you have all these
sensors that other people have created
that are at your disposal and the
question is how do you put them together
to do this in a low-cost and practical way
I think of smart watches as like
really capable computers they
should be able to almost like transform
the hand and the arm into an input as
opposed to just extensions of the phone
typically accelerometers in the watch
are around 100 Hertz so here what we did
is we overclocked the accelerometer on
the watch so it becomes high-speed so
you can see here when I interact with
this coffee grinder
you can actually see the micro
vibrations that are propagating from my
hand to the watch you can't see that
effect from the 100 Hertz accelerometer
because it's too coarse the vibrations
when I tap here and when I tap here are
actually quite different so I can
basically transform this area around the
watch into like an input platform you
can also combine this with the motion
data so when I like snap I can basically
either snap to turn on the lights then I
can do this gesture and then twist to
you know adjust the lighting in that
house and then I could do like a clap
gesture to turn on the TV and do like
these types of gestures to navigate up
and down these are only a few of the
hundreds of ideas that pop up at the lab
every year a couple of them turn into
real startups one of them is Qeexo which
is behind the touchscreen technology we
saw at the beginning another newer one
is a computer vision startup called
Zensors one of the technologies that we
did for smart environments was a camera
based approach we noticed that in a lot
of settings like in you know restaurants
or libraries or airports or even out on
the street there's a lot of cameras
these days and what we asked was could
we turn these into a sensor feed so you
don't have to have someone in a back
room looking at 50 screens but can we
do it automatically and that's what we did
in Zensors here's an example of how we
can go and ask a question so we have a
camera
actually right above us you can see us
here right now this updates you know once
every 30 seconds or once every minute so
the first thing you do is we select a
region of interest in this case these
two sofas it's going to be a let's say
how many and now Lily's gonna ask how
many people are here that's it and right
now it's saying there's three people
here and we're not just limited to
sofas I could ask is there a laptop or
phones on this table is there food on
this table anything you can ask you can
do it so like I think the motto
of the company kind of is if you can see
it we can sense it so we're doing a
real-time parking pilot right now with
the city and what we're using is
existing cameras along a stretch to
basically count cars so we can use that
as a real-time model potentially like
real-time parking but also to help
people find parking spots if you can
direct them to adjacent parking it'd be
much more efficient and reduce
congestion and air pollution and so on
deploying that sort of technology at city
scale requires a huge capital investment
at the end of the day a number is a number it doesn't matter if
it's produced by a video camera or by
physical sensors in the pavement so in
order for technologies to be adopted
downstream past the research phase
into the engineering and
commercialization phase they have to
be practical feasibility is obviously
critical we like to tackle problems that
we know we can make progress on and then
we balance that with its impact and
value the research is undoubtedly
exciting but what else happens when a
security camera doesn't just see but
understands any technology can be
misused what happens to an idea after it
leaves the lab it is a gray area sort of
like cars you're never gonna make the
100% safe car but that doesn't mean we
should eliminate all cars and we should
think about that for technology that no
technology is ever gonna be 100% secure
or 100 percent privacy preserving and
so we always try to think about how to
make these technologies that make the
right trade-off because we have a vision
of how they're gonna exist we can think
about in our mind oh this would be so
cool if I had this in my kitchen but
we're too close to that domain we think
everything is cool all of the
technologies that we build are put in
front of users and if you can get people
to buy into the vision then maybe
they'll accept that oh but there's a
microphone on this thing that could be
listening to me in my kitchen and if you
make that value proposition right
they'll accept it if you get that value
proposition
wrong then it'll fall flat and it
won't be adopted