Our Research
We are doing incredible research in the areas of the Internet of Things (IoT), artificial intelligence (AI), machine learning (ML), code migration, changepoint detection, smart manufacturing, smart transportation, and automated intelligence.
Correlation Changepoint Detection - Weng-Keen Wong, OSU
Correlation/covariance changepoint detection has many real-world uses, including detecting failures in manufacturing, discovering unusual changes to stock prices, and detecting storms. The goal of this research project is to develop computationally efficient correlation/covariance changepoint detection algorithms for high-dimensional data that can identify which dimension(s) cause the change. We introduce a changepoint detection algorithm that uses a linear decomposition of the precision (i.e., inverse covariance) matrix to identify a change in the partial correlation structure of a time series. Our approach uses likelihood ratio tests to identify clusters of dimensions that are responsible for the change, thus providing more of an explanation as to why the changepoint occurs.
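As a simplified illustration of this idea (our own sketch, not the project's algorithm), the Python snippet below scans a multivariate series for the split point that maximizes a Gaussian likelihood-ratio statistic for a change in covariance; identifying which dimensions caused the change, as the project proposes, requires the additional precision-matrix decomposition described above.

```python
import numpy as np

def gaussian_loglik(X):
    """Log-likelihood of segment X under a Gaussian with its own sample covariance (MLE)."""
    n, d = X.shape
    S = np.cov(X, rowvar=False, bias=True) + 1e-6 * np.eye(d)  # small ridge for stability
    _, logdet = np.linalg.slogdet(S)
    # With the MLE covariance, the quadratic term reduces to n*d, giving this closed form.
    return -0.5 * n * (d * np.log(2 * np.pi) + logdet + d)

def covariance_changepoint(X, min_seg=30):
    """Return the split maximizing the likelihood-ratio statistic for a covariance change."""
    n, _ = X.shape
    Xc = X - X.mean(axis=0)        # center once; the test targets covariance, not the mean
    base = gaussian_loglik(Xc)     # single-segment (no change) fit
    best_t, best_stat = None, -np.inf
    for t in range(min_seg, n - min_seg):
        stat = 2 * (gaussian_loglik(Xc[:t]) + gaussian_loglik(Xc[t:]) - base)
        if stat > best_stat:
            best_t, best_stat = t, stat
    return best_t, best_stat

# Toy example: the correlation structure changes halfway through the series.
rng = np.random.default_rng(0)
A = rng.multivariate_normal([0, 0, 0], np.eye(3), size=200)
cov2 = np.array([[1.0, 0.9, 0.0], [0.9, 1.0, 0.0], [0.0, 0.0, 1.0]])
B = rng.multivariate_normal([0, 0, 0], cov2, size=200)
print(covariance_changepoint(np.vstack([A, B])))   # split near index 200
```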
Global Explanations for Image Classification - Prasad Tadepalli, OSU
Much of the current research in Explainable AI is aimed at explaining classification decisions of image instances through a variety of activation maps. In this research, we focus on explaining the decisions of a neural network over an image dataset in terms of symbolic part labels. Building on earlier work, we compute the minimal sufficient explanations (MSXs) for image instances by perturbing the inputs of the opaque neural network model and examining its outputs. By finding correspondences between similar parts of different images and mapping them to symbolic part labels, we construct a human-interpretable model that is nearly consistent with the decisions of the network. We propose to extend this work to Visual Question Answering (VQA) in the context of activity recognition by constructing interpretable models of activities from videos and text.
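A minimal sketch of the perturbation idea, under the assumption that the classifier is available as a black-box predict function and that each symbolic part comes with a pixel mask (both hypothetical here, and a greedy simplification of MSX computation): drop parts whose removal does not change the decision, leaving a sufficient part set.

```python
from typing import Callable, Dict
import numpy as np

def sufficient_parts(predict: Callable[[np.ndarray], int],
                     image: np.ndarray,            # H x W x C array
                     part_masks: Dict[str, np.ndarray],
                     baseline: float = 0.0):
    """Greedy sketch of a sufficient explanation: keep only the labelled parts
    whose removal would flip the black-box classifier's original decision."""
    original = predict(image)
    kept = dict(part_masks)
    for name in list(part_masks):
        trial = dict(kept)
        trial.pop(name)
        # Occlude everything outside the remaining parts.
        visible = np.zeros(image.shape[:2], dtype=bool)
        for m in trial.values():
            visible |= m
        perturbed = np.where(visible[..., None], image, baseline)
        if predict(perturbed) == original:   # this part is not needed to sustain the decision
            kept = trial
    return set(kept)   # a sufficient (not necessarily minimum) part set
```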
Generative AI Programming Assistant - Danny Dig, U of C
Generative AI and Large Language Models (LLMs) are rapidly transforming the field of software development. Among other uses, developers rely on Generative AI to (i) search for code fragments using natural language, (ii) generate code, documentation, comments, and commit messages, and (iii) explain code and bug fixes and summarize recent changes. A top concern remains the trustworthiness of the solutions provided by Generative AI. While many solutions resemble the ones produced by expert developers, LLMs are known to produce hallucinations, i.e., solutions that seem plausible at first but are deeply flawed. To help developers trust Generative AI solutions, we are developing novel approaches that synergistically combine the creative potential of LLMs with the safety of static and dynamic analysis from program transformation systems. Our current results show that our approach is effective: it safely automates code changes and is up to 39x more effective than previous state-of-the-art tools. Moreover, our approach produces results that expert developers trust: we submitted patches generated by our LLM-powered tools to prominent open-source projects, whose developers accepted most of our contributions. Our surveys with dozens of professional developers reveal that they agree with the recommendations provided by our tools. This shows the usefulness of our novel approach and ushers in a new era in which LLMs become effective AI assistants for developers. Your organization can also benefit from these advances. We invite you to partner with us so that we can turn your software developers into super-human developers. Together we go further.
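The general pattern can be sketched as follows; the function names and the candidate_patch.py target file are placeholders, and this is not the team's actual tooling. The point is that the LLM proposes a change, and static and dynamic checks gate whether it is ever surfaced to the developer.

```python
import ast
import subprocess
from typing import Optional

def ask_llm_for_patch(prompt: str) -> str:
    """Placeholder for the LLM call (provider-specific); returns candidate source code."""
    raise NotImplementedError("wire this to your LLM provider")

def validate_candidate(source: str, test_cmd: list) -> bool:
    """Gate the LLM output with static and dynamic checks before trusting it."""
    try:
        ast.parse(source)                       # static check: must at least be syntactically valid
    except SyntaxError:
        return False
    with open("candidate_patch.py", "w") as f:  # hypothetical target file for the patch
        f.write(source)
    result = subprocess.run(test_cmd, capture_output=True)  # dynamic check: run the test suite
    return result.returncode == 0

def trustworthy_patch(prompt: str, test_cmd: list, attempts: int = 3) -> Optional[str]:
    """Sample candidates until one survives every check, or give up."""
    for _ in range(attempts):
        candidate = ask_llm_for_patch(prompt)
        if validate_candidate(candidate, test_cmd):
            return candidate
    return None
```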
System Support Research to Enable AI at the Edge - Shiv Mishra, U of C
Edge computing introduces middle-tier compute servers, closer to the sensors and end users, for building IoT applications. Our research goal is to develop core system-level services to enable a distributed, microservice-based system architecture that facilitates building complex AI applications at the edge. The key features of this system include incorporating humans in the loop, optimized placement of compute and data elements in a dynamically changing environment, and computing over a diverse set of processing elements including CPUs, GPUs, and FPGAs. The proposed system aims to integrate and augment elements of current edge solutions such as EdgeX and Azure IoT Edge.
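One ingredient of such a system is the placement of microservices over heterogeneous processing elements. The sketch below is a deliberately simple greedy heuristic of our own (not the proposed system's algorithm), matching services to nodes by accelerator preference and remaining capacity.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    accel: str          # "cpu", "gpu", or "fpga"
    capacity: float     # abstract compute units available
    load: float = 0.0

@dataclass
class Service:
    name: str
    demand: float
    preferred_accel: str = "cpu"

def place(services, nodes):
    """Greedy sketch: prefer nodes with the matching accelerator, then the least-loaded node that fits."""
    assignment = {}
    for svc in sorted(services, key=lambda s: -s.demand):   # place the biggest consumers first
        candidates = [n for n in nodes if n.capacity - n.load >= svc.demand]
        if not candidates:
            raise RuntimeError(f"no node can host {svc.name}")
        candidates.sort(key=lambda n: (n.accel != svc.preferred_accel, n.load))
        chosen = candidates[0]
        chosen.load += svc.demand
        assignment[svc.name] = chosen.name
    return assignment

# Toy edge cluster: one GPU node, one CPU node.
nodes = [Node("edge-gpu", "gpu", 8.0), Node("edge-cpu", "cpu", 4.0)]
services = [Service("detector", 5.0, "gpu"), Service("aggregator", 2.0), Service("ui", 1.0)]
print(place(services, nodes))
```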
Personalized Explainable Multimodal Neuro-symbolic Edge AI Models - Khalid Malik, OU
Deep learning models have limitations in reasoning over multimodal data; they require large volumes of high-quality training data, lack the ability to exploit human-in-the-loop feedback, exhibit slow convergence, and are not explainable. Industrial data is non-IID and cannot be transferred across organizations or sites due to privacy concerns and data regulations. To address these issues, our research aims to develop lightweight, multimodal, neuro-symbolic edge/federated models that allow personalized explainability, human-in-the-loop AI, joint processing of and reasoning over multimodal data, and context-aware distributed processing of data. These models will be used for authentication and integrity verification in autonomous and connected vehicles and for detecting fake multimedia such as deepfakes, and will help physicians make better decisions for complex neurological disorders.
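To make the federated aspect concrete, here is a minimal FedAvg-style sketch (illustrative only; the project targets far richer neuro-symbolic models): each site trains on its own private data and shares only model weights, which a server averages.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One site's training on its private data (linear model, squared loss); raw data never leaves the site."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(global_w, sites):
    """FedAvg-style round: each site trains locally; the server averages weights by sample count."""
    updates, counts = [], []
    for X, y in sites:
        updates.append(local_update(global_w, X, y))
        counts.append(len(y))
    return np.average(np.stack(updates), axis=0, weights=np.array(counts, dtype=float))

# Two sites with non-IID private data; only weight vectors are ever exchanged.
rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])
sites = []
for shift in (0.0, 3.0):
    X = rng.normal(shift, 1.0, size=(100, 2))
    sites.append((X, X @ true_w + rng.normal(0, 0.1, 100)))
w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, sites)
print(w)   # approaches true_w without pooling raw data
```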
Dynamic Management of the Complexity of AI/ML Applications in Vehicles - Marouane Kessentini, OU
This project’s main objective is to provide a solution and strategy that enables optimal usage of in-vehicle resources (e.g., CPU, memory, energy) to execute AI/ML models and applications, implemented as Docker containers, across a cluster made up of multiple nodes (embedded ECUs). The approach will be fully functional at the edge, independently of cloud connectivity. This will enable the vehicle to move critical AI/ML applications to other ECUs in case of a failure, or when the vehicle switches to power-conservation mode, based on the priorities of the learning needed to make decisions for the features currently in use in the vehicle. The proposed intelligent scheduler will dynamically optimize the load of AI/ML containers based on multi-objective search to find the best trade-offs, including a feature to predict usage based on execution history. The new scheduler will be validated on scenarios and simulations of containers running on multiple devices (e.g., Raspberry Pi).
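A toy sketch of the scheduling idea, with hypothetical resource fields and weights (not the project's scheduler): score each healthy ECU on the headroom a container would leave across CPU, memory, and energy, and place the most critical containers first.

```python
from dataclasses import dataclass

@dataclass
class ECU:
    name: str
    cpu_free: float       # normalized 0..1
    mem_free: float       # normalized 0..1
    energy_budget: float  # normalized 0..1
    healthy: bool = True

@dataclass
class Container:
    name: str
    cpu: float
    mem: float
    energy: float
    priority: int         # higher = more critical

def score(ecu, c, w=(0.4, 0.3, 0.3)):
    """Weighted multi-objective score of the headroom left after placing c on ecu (higher is better)."""
    if not ecu.healthy or ecu.cpu_free < c.cpu or ecu.mem_free < c.mem or ecu.energy_budget < c.energy:
        return float("-inf")   # infeasible placement
    return (w[0] * (ecu.cpu_free - c.cpu)
            + w[1] * (ecu.mem_free - c.mem)
            + w[2] * (ecu.energy_budget - c.energy))

def schedule(containers, ecus):
    """Place the most critical containers first, each on the best-scoring ECU; shed only low-priority ones."""
    plan = {}
    for c in sorted(containers, key=lambda c: -c.priority):
        best = max(ecus, key=lambda e: score(e, c))
        if score(best, c) == float("-inf"):
            continue                       # no feasible ECU: shed this (lower-priority) workload
        best.cpu_free -= c.cpu
        best.mem_free -= c.mem
        best.energy_budget -= c.energy
        plan[c.name] = best.name
    return plan
```

A production scheduler would replace the fixed weighted sum with a genuine multi-objective search over trade-offs and add the usage-prediction component described above; re-running the same placement after marking an ECU unhealthy illustrates the failover behavior.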
Augmented Reality as an Interface for the Internet of Things and People - Ellen Do, U of C
We have multiple years of experience working with industry partners to provide Augmented Reality solutions for remote assistance and collaboration, as well as training and safety support in hazardous environments. With students fluent in developing with the latest immersive technologies as well as artificial intelligence paradigms, we aim to leverage our strengths to design a universal spatial interface for our interconnected future. Imagine a mobile AR application that displays all sensor information in-situ, or an AR headset which enables you to see your smart assistants as personified avatars. We are integrating Augmented Reality and Artificial Intelligence to provide Augmented Intelligence for the Internet of Things and People for applications in smart factories, supply chains, data analytics and human-machine interactions.
Context-Preserving Spatiotemporal Representation Learning & Anomaly Detection for IoT Data - Morteza Karimzadeh, U of C
IoT devices record data in the context of connected sensors, as well as the spatiotemporal settings in which they operate. This results in continuous data streams, characterized by high volume, velocity and heterogeneity. In addition to challenges in harnessing the high-dimensionality and volume of data, there are several challenges to effectively utilizing this data related to its quality, such as sensor error, contextual variability, diverse spatiotemporal resolution, and heterogeneous data modalities. This research aims to develop representation learning algorithms with the specific goals of compressing data streams from IoT sensors, preserving the contextual factors, and detecting anomalous behavior. To achieve these, we will explore techniques in contrastive sampling, graph neural networks, and adversarial/generative training with respect to spatiotemporal dimensions.
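As a much-simplified stand-in for the compress-and-detect pipeline described above (linear compression plus reconstruction-error scoring, rather than the contrastive or graph-based learners the project targets), the following sketch flags a corrupted sensor window in a toy stream.

```python
import numpy as np

def fit_compressor(windows, k=4):
    """Fit a linear compressor (PCA) on flattened sensor windows; a stand-in for a learned encoder."""
    mean = windows.mean(axis=0)
    _, _, Vt = np.linalg.svd(windows - mean, full_matrices=False)
    return mean, Vt[:k]                      # k principal directions

def anomaly_scores(windows, mean, components):
    """Reconstruction error of each window after compression; large error = candidate anomaly."""
    Z = (windows - mean) @ components.T      # compressed representation (what would be transmitted)
    recon = Z @ components + mean
    return np.linalg.norm(windows - recon, axis=1)

# Toy stream: 8-dimensional sensor windows; inject one corrupted window.
rng = np.random.default_rng(2)
latent = rng.normal(size=(500, 2))
W = latent @ rng.normal(size=(2, 8)) + 0.05 * rng.normal(size=(500, 8))
W[250] += 5.0                                # simulated sensor fault
mean, comps = fit_compressor(W, k=2)
print(int(np.argmax(anomaly_scores(W, mean, comps))))   # expected: 250
```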
ML Over Large Inconsistent Datasets - Arash Termehchy, OSU
ML has great promise to transform our understanding of, and help solve, vital societal problems in various domains. Unfortunately, large real-world datasets are typically inconsistent, so the ML algorithms that consume them produce inaccurate models. For instance, the data might contain examples that are not consistent with domain rules on the range of data, such as a negative salary. These data bugs are particularly abundant in datasets created by collecting data from multiple sources, such as Internet of Things (IoT) settings. Currently, detection and repair of data bugs, a.k.a. data cleaning, is done manually by programmers. Due to the variety, complexity, and volume of data bugs, it often takes a long time and substantial effort to clean data. We have recently developed an alternative methodology for performing learning directly over the faulty dataset. Our key idea is a unified representation of all models/features that are approximately consistent with the provided data, together with the corresponding repairs that would need to be applied to the dataset.
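For concreteness, the kind of domain-rule violation the project treats as a data bug can be expressed as declarative checks like the following (illustrative rules and records only; the project's contribution is learning directly over such data rather than this kind of manual checking and repair).

```python
from typing import Callable, Dict

# Declarative domain rules; each returns True when a record is consistent with the rule.
RULES: Dict[str, Callable[[dict], bool]] = {
    "salary_nonnegative": lambda r: r.get("salary", 0) >= 0,
    "age_in_range":       lambda r: 0 <= r.get("age", 0) <= 130,
}

def violations(record):
    """Names of the rules this record violates."""
    return [name for name, rule in RULES.items() if not rule(record)]

records = [
    {"name": "a", "salary": 52000, "age": 34},
    {"name": "b", "salary": -1200, "age": 29},    # data bug: negative salary
    {"name": "c", "salary": 87000, "age": 245},   # data bug: implausible age
]
for r in records:
    print(r["name"], violations(r))
```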
AI: Towards a Platform for Involving Customers in Design of Artificially Intelligent Tech - Douglas Zytko, U of M
Consumers are rapidly growing distrustful of modern implementations of AI. In many cases consumers exhibit little understanding of an AI’s functionality, and in some cases they even express overt disdain. These concerns have made clear that simply having AI integrated into products will not be a sustainable business advantage. Based on patterns with prior technological advances, consumers will migrate to the artificially intelligent products that they consider the most understandable, trustworthy, and beneficial to their lives. Historically, the way companies have made their emerging technologies the most usable, understandable, and trustworthy is with UX research methods that directly involve customers in the design process. This is a persistent challenge in industry because consumers often have little understanding of AI and are thus stifled in their capacity to contribute to the design of artificially intelligent products. We propose a web application called MyAI that enables everyday consumers to understand early-stage AI product concepts from industry and contribute to their design and development. The crux of the application is a series of brainstorming “widgets,” or design patterns, that enable AI novices to independently create and revise key elements of artificially intelligent technology, such as scenarios for new AI use cases, data for model training, and interfaces for explaining the AI’s decision-making.
Decentralized, Privacy-Preserving Data Collection & Aggregation at The Edge with Assistive Devices - Bradley Hayes, U of C
The science of privacy-preserving techniques at the edge, enabling collective intelligence and data aggregation across pervasive networked devices, has many opportunities for advancement. We propose novel interaction paradigms, algorithmic developments, and validation through human-subjects studies toward methods that empower users to inform sensor-rich, increasingly ubiquitous IoT systems about places or things that they “shouldn’t see,” as well as algorithms to preserve user anonymity while sharing data across cohorts (e.g., maps of indoor locations or processed visual information identifying product locations in stores). We motivate the potential of this work with a prototype networked, instrumented Smart Cane to enable people with visual impairment to live more independent lives.
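One possible ingredient, shown purely as an illustration and not necessarily the mechanism this project will adopt, is local differential privacy: each device adds calibrated Laplace noise before sharing anything, so cohort-level statistics remain useful while individual readings stay private.

```python
import numpy as np

rng = np.random.default_rng()

def privatize(value, epsilon, sensitivity=1.0):
    """Laplace mechanism applied on-device: calibrated noise is added before the value is shared."""
    return value + rng.laplace(0.0, sensitivity / epsilon)

# Each device reports a noised count (e.g., obstacle detections this hour); the aggregator only
# ever sees privatized values, yet their mean still approximates the true cohort average.
true_counts = np.array([3, 5, 4, 6, 2, 5, 4, 3, 5, 4], dtype=float)
reports = [privatize(c, epsilon=1.0) for c in true_counts]
print(round(float(np.mean(reports)), 2), "vs true mean", float(true_counts.mean()))
```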
Is there a specific research project that you are interested in?
Consider becoming a member so that you can have input on future research projects that we select.
Where can I learn more about the PPI Center's research?
Start with the agenda from our most recent public event.
What is the value of being an NSF IUCRC and where can I learn more about the IUCRC program?
The IUCRC model is designed to help startups, large corporate partners and government agencies connect directly with university researchers to solve common research obstacles in a low-risk environment. The aim is to develop new technology faster and build out the workforce in critical areas.
“The IUCRC program generates breakthrough research by enabling close and sustained engagement between industry innovators, world-class academic teams and government agencies.” — The National Science Foundation
Learn more at iucrc.nsf.gov.
How can I get involved or join as a member?
Contact us to set up a discovery meeting with one of the center directors to see how your organization can leverage the PPI Center.
Join one of our events.
Attend our monthly industry-targeted PPI Advances webinars.
How can I stay informed about the PPI Center’s news and events?
Subscribe to our mailing list for announcements (in the form below).