Numerous students and their families visit CMU’s campus each year to decide whether it is the right school for them. However, in the limited time available during a campus tour, it is difficult for them to develop a good understanding of the engineering and technology (ET) programs offered at CMU. This research is primarily focused on the design of an Autonomous Mobile Tour-Guide Robot (AMTGR). Utilizing RFID tags for location identification and ultrasonic sensors for obstacle detection, the AMTGR provides CMU visitors with up-to-date information on ET programs and research conducted at CMU.
Performance Optimization of Nano-scale CMOS Circuits
The advancement of semiconductor technology with shrinking device dimensions, currently at 45 nanometers, has allowed for the placement of billions of transistors on a single microprocessor chip. At the same time, the extremely low gate delays and shrinking device sizes have presented design engineers with two major challenges: 1) timing optimization at multi-gigahertz frequencies, and 2) reducing the daunting effects of semiconductor process variations. Failure to account for these process variations often results in the loss of one generation of design productivity, and might even result in catastrophic design failure.
This research presents a just-in-time timing optimization approach that partitions a design, chooses an efficient circuit style (static or dynamic) for each partition, and performs timing optimization while accounting for process variations. The presented algorithm can be used as a tape-out rescue mechanism for timing failures as well as for general timing optimization, and can be easily incorporated into many of the optimization flows currently used by industry.
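The per-partition style-selection step can be illustrated with a small sketch. This is not the paper's actual algorithm; the greedy rule, the 3-sigma variation margin, and all delay figures are illustrative assumptions for how a variation-aware choice between static and dynamic styles might look.

```python
def choose_styles(partitions, clock_period_ns, sigma_margin=3.0):
    """For each partition, keep static CMOS when its variation-aware
    delay (mean + sigma_margin * std) meets the clock period, else
    fall back to the faster dynamic style. A greedy sketch only;
    real flows would co-optimize partitions and re-time the design."""
    choices = {}
    for name, p in partitions.items():
        static_delay = p["static_mean_ns"] + sigma_margin * p["static_std_ns"]
        choices[name] = "static" if static_delay <= clock_period_ns else "dynamic"
    return choices
```

For example, a partition whose nominal static delay fits the clock but whose 3-sigma delay does not would be switched to the dynamic style.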
For people with vision disabilities, it has always been difficult to accomplish many day-to-day activities. Identifying different buildings and finding the best route to a location are among the difficult tasks for a person with limited or no sight. Technological advancements such as Radio Frequency Identification (RFID) can answer this and many other challenges for people with vision impairments. This research is primarily focused on the design of smart devices for the blind. Utilizing RFID tags for location identification and ultrasonic sensors for obstacle detection, the smart device helps the blind safely navigate a predefined location.
The automotive industry, and transportation in general, represents a significant part of the consumer’s daily life. Providing vehicles with various forms of wireless connectivity enables communication between vehicles in the vicinity and with their internal and external environments. Such connected vehicles are expected to become the key to the next generation of smart vehicles and infrastructure. Building upon this foundation, this project involves the design and implementation of a wireless access infrastructure for tomorrow’s connected vehicles. Namely, it involves the design, development, and fundamental implementation of an infrastructure to enable vehicle-to-on-board-sensor (V2S), vehicle-to-vehicle (V2V), vehicle-to-road-infrastructure (V2R), and vehicle-to-Internet (V2I) interactions.
A Modular IoT/Fog Computing Architecture for Smart Applications
Building upon advancements in recent years, new technology paradigms have emerged in the Internet of Things (IoT) and Fog computing. They allow communication with the surrounding environment through a multitude of sensors and actuators, while operating on limited energy. Several researchers have presented IoT architectures for their respective applications, but these often require major updates when adopted for a different application. Further, this comes with several uncertainties, such as the type of computational device required at the edge, the mode of wireless connectivity required, the methods used to obtain power efficiency, and how to ensure rapid deployment. This research provides a horizontal overview of each layer in the IoT architecture and the options available for different applications. It then presents a broad application-driven modular architecture that can be easily customized for rapid deployment. This paper presents implementation results for diverse applications, including healthcare, structural health monitoring, agriculture, and indoor tour-guide systems.
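The modular idea can be sketched as a per-layer menu of interchangeable options from which an application assembles its stack. The layer names and module options below are assumptions for illustration, not the paper's actual taxonomy.

```python
# Hypothetical per-layer module catalog; each application picks one
# option per layer to customize the architecture for rapid deployment.
LAYER_OPTIONS = {
    "sensing": {"temperature", "heart_rate", "strain_gauge", "soil_moisture"},
    "edge_compute": {"raspberry_pi", "esp32", "jetson_nano"},
    "connectivity": {"wifi", "ble", "lora"},
    "backend": {"cloud", "local_server"},
}

def build_stack(choices):
    """Validate an application's per-layer choices against the
    available module options and return the assembled stack."""
    for layer, option in choices.items():
        if option not in LAYER_OPTIONS.get(layer, set()):
            raise ValueError(f"{option!r} is not a known {layer} module")
    return dict(choices)
```

A healthcare deployment might pick heart-rate sensing over BLE, while structural health monitoring might pick strain gauges over LoRa, reusing the same layer structure.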
There has been significant research on the physiology of sweat in the past decade, with one of the main interests being to develop a real-time hydration monitor that utilizes just sweat. The contents of sweat have been known for decades and provide significant information on the physiological condition of the human body. However, it is important to know the sweat rate as well, as the sweat rate alters the concentration of the sweat constituents. Towards this goal, a calorimetry-based flow-rate detection system was built and tested to determine hydration rate in real time. The proposed hydration-rate monitoring system has been validated both in a controlled environment, using a machine-controlled syringe pump, and through human trials.
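One way such a calorimetric sensor's readings could be turned into a flow rate is interpolation over a calibration table built with a controlled source such as the syringe pump mentioned above. The function below is a sketch under that assumption; the signal name, monotonic relationship, and all numbers are illustrative, not the system's actual calibration.

```python
def flow_rate_from_dT(delta_t, calibration):
    """Convert a calorimetric sensor reading (here, an illustrative
    temperature difference in degrees C) to a flow rate by linear
    interpolation over a calibration table of (delta_t, flow) pairs,
    e.g. collected with a machine-controlled syringe pump."""
    pts = sorted(calibration)
    if delta_t <= pts[0][0]:          # below calibrated range: clamp
        return pts[0][1]
    if delta_t >= pts[-1][0]:         # above calibrated range: clamp
        return pts[-1][1]
    for (t0, f0), (t1, f1) in zip(pts, pts[1:]):
        if t0 <= delta_t <= t1:       # interpolate within the bracket
            frac = (delta_t - t0) / (t1 - t0)
            return f0 + frac * (f1 - f0)
```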
Smart Depth based Navigation System for the Visually Impaired
Lightweight, low-cost three-dimensional depth sensors have gained much attention in the computer vision and gaming industries. While their performance has been proven in the gaming industry, these sensors have not been utilized successfully in assistive devices. Addressing this gap, this research presents the design, implementation, and evaluation of a robust depth-vision architecture for navigation by the visually impaired. The proposed system scans the scene ahead, converts it into a depth matrix, processes the information to identify obstacles, including physical objects and humans, and provides relevant haptic feedback for navigation by the blind. Through design and evaluation, the proposed system has been shown to successfully identify objects and humans, perform real-time distance measurements, and provide a working solution.
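The depth-matrix processing step can be sketched as follows. The zone layout, depth thresholds, and intensity mapping are illustrative assumptions, not the system's actual parameters.

```python
import numpy as np

def detect_obstacles(depth_m, near=0.5, far=2.0):
    """Split a depth frame (meters) into left/center/right zones and
    return a haptic intensity per zone: 1.0 = obstacle very close,
    0.0 = clear. Zero-valued pixels (sensor dropouts) are ignored."""
    h, w = depth_m.shape
    zones = {
        "left": depth_m[:, : w // 3],
        "center": depth_m[:, w // 3 : 2 * w // 3],
        "right": depth_m[:, 2 * w // 3 :],
    }
    feedback = {}
    for name, zone in zones.items():
        valid = zone[zone > 0]                    # drop dropout pixels
        nearest = valid.min() if valid.size else far
        nearest = min(max(nearest, near), far)    # clamp to [near, far]
        feedback[name] = round((far - nearest) / (far - near), 2)
    return feedback
```

Each zone's intensity could then drive a corresponding vibration motor, so the user feels which direction is blocked.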
One of the fundamental challenges faced by inexperienced users of portable unmanned aerial vehicles (UAVs) such as quadcopters is flight control, which often leads to crashes. Addressing this challenge, and leveraging technological advancements in perceptual computing and computer vision, this research presents a modular system that allows for hand-gesture-based flight control of a UAV, alongside a transport mechanism for portable objects. To ensure smooth flight control by avoiding obstacles in the navigation path, real-time video feedback is relayed from the UAV to the user, allowing him/her to take appropriate action. This research presents the design and implementation by discussing the various sub-systems involved, inter-system communication, and field tests verifying operation. The proposed system provides efficient communication between the subsystems for smooth flight control, while allowing for the safe transport of portable objects.
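The gesture-to-command step of such a system might look like the sketch below. The gesture labels, command vocabulary, and obstacle-override policy are hypothetical, not the project's actual interface.

```python
# Hypothetical mapping from recognized hand-gesture labels to UAV
# flight commands; labels and commands are illustrative assumptions.
GESTURE_COMMANDS = {
    "open_palm": "hover",
    "fist": "land",
    "thumbs_up": "forward",
    "swipe_left": "yaw_left",
    "swipe_right": "yaw_right",
    "point_up": "ascend",
}

def gesture_to_command(label, obstacle_ahead=False):
    """Translate a recognized gesture into a flight command. An
    unrecognized gesture holds position, and forward motion is
    overridden to hover when an obstacle is reported ahead."""
    cmd = GESTURE_COMMANDS.get(label, "hover")
    if obstacle_ahead and cmd == "forward":
        return "hover"
    return cmd
```

Defaulting unknown gestures to hover keeps misclassifications from producing unintended motion, which matters most for inexperienced pilots.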
A tour guide system, as the name suggests, assists visitors during a tour of a place. Many researchers have studied how to increase tour efficiency while minimizing the resources required. Traditional multilingual human tour guides are limited in that they cannot provide personalized information to every visitor because of time constraints; employing people to guide tourists is also expensive. Some tour-guide robots and devices were presented in previous works to overcome certain limitations, but challenges remain. In this work, an indoor location-aware portable device is presented that aims to improve the user’s experience during a tour of an indoor facility. The proposed system can provide personalized audio-visual information based on the visitor’s location, freeing visitors from having to follow a guide.
Indoor Positioning in GPS Constrained Environments
Indoor positioning systems (IPS) locate objects in closed structures such as warehouses, hospitals, libraries, and office buildings, where the Global Positioning System (GPS) typically does not work due to poor satellite reception. Indoor positioning has long been a vital challenge for industry. Most available IPS operate based on optical tracking, motor encoding, or active RFID tags, and are often limited by their accuracy or high hardware cost. Answering this challenge, this research presents a new passive radio frequency identification (RFID) based localization system for indoor positioning. The concept is based on placing passive RFID tags in a uniform triangular pattern at low tag density, and using an RFID reader for localization and navigation. The inputs to the proposed system are the location coordinates stored on the RFID tags and their respective times of arrival as read by the RFID reader. The proposed system first estimates its location using a centroid method, then utilizes time difference of arrival (TDOA) to refine that estimate, and uses trigonometric identities to estimate its current orientation. Experimental results demonstrate that the proposed system can effectively perform indoor positioning and navigate to its destination with an average accuracy of 0.07 m, while avoiding obstacles in its path.
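The first-pass centroid estimate and the trigonometric orientation step can be sketched as below. The TDOA refinement is omitted, and treating orientation as the heading between two successive fixes is an illustrative simplification, not necessarily the paper's exact formulation.

```python
import math

def centroid_estimate(tag_reads):
    """First-pass position estimate: the centroid of the (x, y)
    coordinates stored on the passive RFID tags currently in range."""
    n = len(tag_reads)
    return (sum(x for x, _ in tag_reads) / n,
            sum(y for _, y in tag_reads) / n)

def heading_from_fixes(prev_fix, curr_fix):
    """Estimate orientation (radians, counterclockwise from the +x
    axis) from two successive position fixes using atan2."""
    dx = curr_fix[0] - prev_fix[0]
    dy = curr_fix[1] - prev_fix[1]
    return math.atan2(dy, dx)
```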
The Internet of Things (IoT) and edge computing are currently shaping different aspects of human life. Precision agriculture is one of the paradigms that can leverage the advantages of IoT and edge computing to optimize production efficiency and uniformity across agricultural fields, optimize crop quality, and minimize negative environmental impact. In this research project, a customized three-layer hybrid architecture is proposed for precision agriculture. The proposed architecture collects the needed data, performs instantaneous analysis on the edge, and relays extended information to a cloud-based back-end where it is processed and analyzed longitudinally. Feedback actions based on the analyzed data can be sent back to the front-end nodes to activate appropriate modules. A prototype of the proposed architecture has been built to demonstrate the performance advantages.
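The split between instantaneous edge analysis and longitudinal cloud analysis can be sketched for one sensing cycle. The sensor fields, moisture threshold, and irrigation rule are illustrative assumptions, not the prototype's actual logic.

```python
def edge_process(readings, moisture_threshold=30.0):
    """Instantaneous edge analysis for one sensing cycle. Returns
    (local_action, cloud_summary): the action is taken immediately
    at the node; the summary is relayed to the cloud back-end for
    longitudinal analysis. Threshold and fields are illustrative."""
    samples = readings["soil_moisture"]
    avg = sum(samples) / len(samples)
    action = "irrigate" if avg < moisture_threshold else "idle"
    summary = {
        "field_id": readings["field_id"],
        "avg_soil_moisture": round(avg, 1),
        "action_taken": action,
    }
    return action, summary
```

Keeping the irrigation decision at the edge avoids a cloud round trip, while the compact summary still gives the back-end the data it needs for long-term trends.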
Smart Indoor-Outdoor Micro-localization for Assisted Living
Smart devices are becoming more common in our daily lives; they are being incorporated into buildings, houses, cars, and public places. Moreover, this technological revolution, known as the Internet of Things (IoT), combined with advancements in smartphones, brings new opportunities. While a variety of assistive devices have been developed for the blind, much work is yet to be done in the areas of indoor/outdoor localization and navigation. Building upon this technological advancement and the need for assistive devices, this project focuses on the design and implementation of a portable smartphone- and haptics-based localization and navigation system. The system consists of an array of ultrasonic sensors mounted on a waist belt to survey the scene, iBeacons and a smartphone with embedded sensors to localize the user, and an array of vibration motors to provide haptic feedback. The iBeacons are deployed at different locations, each with a unique ID. In the cloud, a database stores the information attached to each iBeacon, e.g., its address and a description of the place. The smartphone detects an iBeacon’s ID and sends it to the cloud; the cloud sends back the information attached to that ID to a Raspberry Pi, which converts the text to audio and plays it to the user via a Bluetooth headset. At the same time, the ultrasonic sensors detect obstacles in the user’s path and provide haptic feedback so that the user can navigate around them.
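The beacon-to-audio lookup at the heart of this flow can be sketched as follows. A local dictionary stands in for the cloud database, and the beacon IDs and descriptions are invented for illustration; the real system would query the cloud back-end and hand the returned text to a text-to-speech engine on the Raspberry Pi.

```python
# Stand-in for the cloud database keyed by iBeacon ID; entries here
# are illustrative, not real deployment data.
BEACON_DB = {
    "b-101": "Main entrance. The elevator is ten steps ahead.",
    "b-102": "Engineering lab, room 204.",
}

def lookup_beacon(beacon_id):
    """Resolve a detected iBeacon ID to the audio-ready description
    stored for that location; unknown IDs yield a fallback prompt
    rather than silence, so the user knows the lookup happened."""
    return BEACON_DB.get(beacon_id, "Location information unavailable.")
```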