advanced visualization

interactive visualization of large particle simulations
Collaborators: Joseph F. Pekny, Director, E-Enterprise Center, Chemical Engineering; Jennifer Curtis, Chemical Engineering; Michael Lasinkski, Chemical Engineering
Large particle visualization
The goal of this project is to develop compute-efficient algorithms and rendering methodologies for visualizing very large particle simulations (hundreds of thousands to millions of particles) in a virtual environment, interactively and dynamically. The framework renders the particles in real-time and allows researchers to visualize, mark, query, and change the properties of particles interactively.
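As one illustration of the kind of compute-efficient structure such a system needs, the following is a minimal C++ sketch of a uniform-grid spatial hash that answers radius queries (e.g. for picking or marking) without scanning all particles; the cell size, particle fields, and data layout are illustrative assumptions, not the project's actual implementation.

```cpp
#include <cmath>
#include <cstdio>
#include <unordered_map>
#include <vector>

struct Particle { float x, y, z; int id; };

class ParticleGrid {
public:
    explicit ParticleGrid(float cellSize) : cell_(cellSize) {}

    void insert(const Particle& p) { grid_[key(p.x, p.y, p.z)].push_back(p); }

    // Return ids of particles within `radius` of (x, y, z): only the
    // nearby cells are visited, not the whole particle set.
    std::vector<int> query(float x, float y, float z, float radius) const {
        std::vector<int> hits;
        int r = static_cast<int>(std::ceil(radius / cell_));
        int cx = coord(x), cy = coord(y), cz = coord(z);
        for (int i = cx - r; i <= cx + r; ++i)
            for (int j = cy - r; j <= cy + r; ++j)
                for (int k = cz - r; k <= cz + r; ++k) {
                    auto it = grid_.find(pack(i, j, k));
                    if (it == grid_.end()) continue;
                    for (const Particle& p : it->second) {
                        float dx = p.x - x, dy = p.y - y, dz = p.z - z;
                        if (dx*dx + dy*dy + dz*dz <= radius*radius)
                            hits.push_back(p.id);
                    }
                }
        return hits;
    }

private:
    int coord(float v) const { return static_cast<int>(std::floor(v / cell_)); }
    static long long pack(int i, int j, int k) {
        // Pack three 21-bit cell coordinates into one 64-bit key.
        return ((long long)(i & 0x1FFFFF) << 42) |
               ((long long)(j & 0x1FFFFF) << 21) |
               (long long)(k & 0x1FFFFF);
    }
    long long key(float x, float y, float z) const {
        return pack(coord(x), coord(y), coord(z));
    }
    float cell_;
    std::unordered_map<long long, std::vector<Particle>> grid_;
};

int main() {
    ParticleGrid grid(1.0f);
    grid.insert({0.2f, 0.1f, 0.0f, 1});
    grid.insert({5.0f, 5.0f, 5.0f, 2});
    std::vector<int> near = grid.query(0.0f, 0.0f, 0.0f, 0.5f);
    std::printf("%zu particle(s) near origin\n", near.size());
    return 0;
}
```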


augmented reality

augmented reality using head mounted display
This project focuses on using head-mounted displays (HMDs) for visualizing augmented environments. Two types of HMD will be used in this research: see-through and video feed-through.

immersive augmented reality table
The goal of this project is to custom-build an immersive augmented reality workbench. Recent developments in projector and PC graphics technologies make it possible to run such an immersive AR system from a single PC at an affordable cost. The workbench will consist of a high-end graphics workstation and a dual-projector setup for passive stereoscopic display. A haptics system and a 3D spatial tracker will be integrated for force feedback and direct manipulation in 3D.
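Passive stereoscopic display requires a separate off-axis (asymmetric) view frustum for each eye. The following is a minimal C++ sketch of how those frustum parameters might be derived for a fixed screen plane; the screen dimensions and interpupillary distance are illustrative assumptions, not the workbench's actual geometry.

```cpp
#include <cstdio>

struct Frustum { double left, right, bottom, top, nearP, farP; };

// eyeX is +half the IPD for the right eye, -half for the left eye.
// Both frustums are shifted so the eyes share the same screen rectangle,
// which is what a glFrustum-style projection expects for passive stereo.
Frustum stereoFrustum(double eyeX, double screenW, double screenH,
                      double screenDist, double nearP, double farP) {
    double scale = nearP / screenDist;  // project screen edges onto near plane
    Frustum f;
    f.left   = (-screenW / 2 - eyeX) * scale;
    f.right  = ( screenW / 2 - eyeX) * scale;
    f.bottom = (-screenH / 2) * scale;
    f.top    = ( screenH / 2) * scale;
    f.nearP = nearP; f.farP = farP;
    return f;
}

int main() {
    const double ipd = 0.065;  // typical interpupillary distance (m)
    // Assumed 2.0 m x 1.5 m screen viewed from 2.5 m away.
    Frustum left  = stereoFrustum(-ipd / 2, 2.0, 1.5, 2.5, 0.1, 100.0);
    Frustum right = stereoFrustum( ipd / 2, 2.0, 1.5, 2.5, 0.1, 100.0);
    // Each frustum would feed glFrustum(l, r, b, t, n, f) on its projector.
    std::printf("left eye:  l=%.4f r=%.4f\n", left.left, left.right);
    std::printf("right eye: l=%.4f r=%.4f\n", right.left, right.right);
    return 0;
}
```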

gps based mobile augmented display system
The goal of this project is to explore and extend the capabilities of mobile graphics. In this research, micro head-mounted displays, mobile micro-displays, and miniature cameras will be used in conjunction with GPS-equipped wearable systems such as PDAs and TabletPCs to augment real-world images with computer graphics. The project will develop a common framework that can be applied in several areas such as GIS, architectural visualization, and homeland security.
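To anchor graphics to GPS fixes, positions must first be converted from latitude/longitude to local metric coordinates. Below is a minimal C++ sketch using a flat-earth approximation, which is adequate over the short distances a walking user covers; the coordinates in the example are hypothetical.

```cpp
#include <cmath>
#include <cstdio>

struct Enu { double east, north, up; };  // local East-North-Up (metres)

// Flat-earth (equirectangular) conversion around a reference point.
Enu gpsToEnu(double lat, double lon, double alt,
             double refLat, double refLon, double refAlt) {
    const double kR = 6378137.0;                     // WGS-84 radius (m)
    const double kD2r = 3.14159265358979323846 / 180.0;
    Enu p;
    p.east  = (lon - refLon) * kD2r * kR * std::cos(refLat * kD2r);
    p.north = (lat - refLat) * kD2r * kR;
    p.up    = alt - refAlt;
    return p;
}

int main() {
    // Hypothetical fix near a hypothetical reference point.
    Enu p = gpsToEnu(40.4260, -86.9120, 190.0, 40.4237, -86.9212, 188.0);
    std::printf("east=%.1f m north=%.1f m up=%.1f m\n", p.east, p.north, p.up);
    return 0;
}
```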


collaborative environments

collaborative virtual environment - v2v
V2V
Developed a framework for collaborative virtual environments called "Virtual Environment to Virtual Environment" (V2V) that allows multiple remotely located designers to collaborate in a shared virtual environment. The primary objective is to enable high-quality multi-modal interaction between designers in real-time. Several hybrid communication techniques have been proposed to enable high-throughput and intuitive communication between designers. This framework is in the conceptual design stage.
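As a sketch of the kind of message such a framework might exchange, the following C++ fragment packs a per-object transform update for broadcast to remote designers; the field layout and names are illustrative assumptions, not the actual V2V protocol.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <cstring>

#pragma pack(push, 1)
struct TransformUpdate {
    uint32_t userId;      // which designer sent the update
    uint32_t objectId;    // which shared object moved
    float    pos[3];      // position in the shared scene
    float    quat[4];     // orientation quaternion (x, y, z, w)
    uint64_t timestampUs; // sender clock, for ordering/interpolation
};
#pragma pack(pop)

// Serialize into a byte buffer ready for a UDP send.
size_t packUpdate(const TransformUpdate& u, uint8_t* buf) {
    std::memcpy(buf, &u, sizeof(u));
    return sizeof(u);
}

int main() {
    TransformUpdate u{7, 42, {1.0f, 0.5f, -2.0f},
                      {0.0f, 0.0f, 0.0f, 1.0f}, 123456789ULL};
    uint8_t buf[sizeof(TransformUpdate)];
    std::printf("packed %zu bytes\n", packUpdate(u, buf));
    return 0;
}
```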


display technologies

portable virtual reality system
Portable VR display
It is difficult to visualize and teach from massive data sets using conventional computer displays. Anyone who has tried to communicate 3D data by 2D means has likely encountered the inherent difficulties of teaching this way. However, advances in VR technology and decreases in cost now allow such technology to be used in the classroom. With affordable, high-end personal computers coupled with portable VR hardware, viewing complex data in a classroom (or in any room) is now possible. This contribution demonstrates a portable VR system available at Purdue University that allows students and researchers to visualize and learn from complex data sets in 3D. Benefits include:
  • Low cost (the complete system can be assembled for approximately $9K–$12K)
  • Easily portable (to classrooms, conference halls, exhibitions, etc.)
  • Can be set up in less than 5 minutes
cluster based lcd 2x3 tiled display wall
LCD tiled display wall
Traditionally, desktop graphics rendering has been done with a single personal computer (PC) driving a single monitor. With the latest developments in graphics hardware and computing power, a single PC can now drive multiple displays at once. But the complexity of scientific visualization applications has grown so much that a single PC cannot render complex scenes in real-time even on a single display, let alone multiple displays. These single-PC, single- or multi-display configurations have several limitations, such as insufficient computing power and a trade-off between rendering quality and display area, especially when rendering complex scenes in real-time. This project focuses on using a commodity cluster to render complex scenes in real-time to a single monitor and/or to different tiled display configurations. Our approach provides the flexibility to scale computing power, rendering quality, and display area as required. Several software applications and specialized graphics libraries that exploit the latest hardware and software components will be used.
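The core of driving a tiled wall from a cluster is that each render node draws only its own tile, using an asymmetric sub-frustum cut from the full wall frustum. A minimal C++ sketch, assuming a 2x3 layout and illustrative wall extents:

```cpp
#include <cstdio>

struct Frustum { double left, right, bottom, top; };

// Derive the (col, row) tile's near-plane extents from the full wall's.
Frustum tileFrustum(const Frustum& wall, int col, int row,
                    int cols, int rows) {
    double w = (wall.right - wall.left) / cols;
    double h = (wall.top - wall.bottom) / rows;
    Frustum t;
    t.left = wall.left + col * w;       t.right = t.left + w;
    t.bottom = wall.bottom + row * h;   t.top = t.bottom + h;
    return t;
}

int main() {
    Frustum wall{-1.5, 1.5, -0.5, 0.5};  // assumed full-wall near-plane extents
    for (int row = 0; row < 2; ++row)
        for (int col = 0; col < 3; ++col) {
            Frustum t = tileFrustum(wall, col, row, 3, 2);
            // Each node would pass these to glFrustum with a shared eye point
            // and near/far planes, so the tiles join into one seamless image.
            std::printf("tile (%d,%d): l=%.2f r=%.2f b=%.2f t=%.2f\n",
                        col, row, t.left, t.right, t.bottom, t.top);
        }
    return 0;
}
```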

cluster based stereoscopic projection based 3x4 tiled display wall
Tiled display wall
This project applies the same commodity-cluster rendering approach described above to a stereoscopic, projection-based 3x4 tiled display wall.


high performance visualization

7-node pc cluster for 2x3 lcd tiled display (dual boot: windows and linux)
PC Cluster for LCD tiled display
This 7-node PC cluster (each node dual-booting Windows and Linux) drives the 2x3 LCD tiled display described in the display technologies section, distributing the rendering load across the nodes with the same commodity-cluster approach so that complex scenes can be drawn in real-time across all six panels.

13-node pc cluster for 4x3 stereoscopic projection based tiled display wall (dual boot: windows and linux)
PC cluster for tiled display wall
This 13-node PC cluster (each node dual-booting Windows and Linux) drives the stereoscopic, projection-based 4x3 tiled display wall described in the display technologies section, using the same commodity-cluster rendering approach.

orad/dvg cluster system
Collaborator: ORAD Inc.
DVG Cluster
This is a collaborative research effort with Orad Inc. to enable and extend the PC cluster architecture and software technology that allows high-performance, scalable 3D graphics rendering in real-time for advanced scientific visualization. Commercial off-the-shelf (COTS) systems such as regular PC workstations and graphics cards are used to construct ORAD's Digital Visualization Graphics (DVG) technology. Since the computing (processor) and graphics industries currently release new products every quarter, it is crucial to keep up-to-date with the latest changes to meet real-world research demands. ORAD's DVG technology allows the rendering systems to be customized and updated in little or no time to accommodate changes in the graphics card, processor, and computing industries. In addition, by adding more rendering nodes, performance increases almost linearly for certain types of applications. The project integrates hardware and software solutions that use 64-bit Opteron cluster-based computing and video-compositor-based rendering to achieve high-performance visualization from COTS components.


multimodal user interface

multimodal user interaction in virtual and augmented environments
Multimodal user interface
The goal of this research is to develop modules for multimodal user interaction that can be applied easily in virtual and augmented environments. The modules include a 3D spatial tracker for position/orientation sensing, gloves for finger recognition, head tracking for user-centered navigation, voice recognition for speech input, and a speech synthesizer for real-time natural speech output. This framework will allow users to intuitively manipulate geometry, navigate in 3D immersive virtual and augmented environments, interact directly through multi-modal input, and collaborate with other users in a shared environment.
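A minimal C++ sketch of how such modules might share a common interface, so new modalities can be plugged in uniformly; the class and event names are illustrative assumptions, not the framework's actual API.

```cpp
#include <cstdio>
#include <memory>
#include <queue>
#include <string>
#include <vector>

struct InputEvent {
    std::string source;   // "tracker", "glove", "speech", ...
    std::string payload;  // e.g. pose string, gesture name, recognized phrase
};

// Every modality implements the same module interface.
class InputModule {
public:
    virtual ~InputModule() = default;
    virtual bool init() = 0;                             // open the device
    virtual void poll(std::queue<InputEvent>& out) = 0;  // non-blocking read
};

// One concrete module per modality; a speech module is sketched here.
class SpeechModule : public InputModule {
public:
    bool init() override { return true; }  // would start the recognizer
    void poll(std::queue<InputEvent>& out) override {
        // A real module would drain the recognizer's result buffer here.
        out.push({"speech", "select object"});
    }
};

int main() {
    std::vector<std::unique_ptr<InputModule>> modules;
    modules.push_back(std::make_unique<SpeechModule>());
    std::queue<InputEvent> events;
    for (auto& m : modules) { m->init(); m->poll(events); }
    std::printf("%zu event(s) queued\n", events.size());
    return 0;
}
```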

guided trace and stitch (guts) modeling using multimodal interaction
GuTS System
The goal of this research is to develop a methodology for designing complex curves and surfaces precisely and rapidly. It introduces a novel approach called "Guided Trace and Stitch" (GuTS) modeling using multi-modal interaction. In GuTS, pieces of new curves and surfaces are traced from guide shapes and stitched together to form complex shapes. Automated snapping and other interaction mechanisms for placing the curves and surfaces permit precise and rapid manipulation of the shapes. Other geometric operations, such as cutting, smoothing, joining, level-of-detail, and sweeping curves to create surfaces, work naturally in the GuTS interface.
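A minimal C++ sketch of the snapping step, assuming the guide shape is a 2D polyline: a traced pen sample is projected onto the nearest guide segment and snapped when within a tolerance. The representation and names are illustrative assumptions, not the GuTS implementation.

```cpp
#include <cstdio>
#include <vector>

struct P2 { double x, y; };

// Closest point to q on segment ab.
P2 closestOnSegment(P2 a, P2 b, P2 q) {
    double vx = b.x - a.x, vy = b.y - a.y;
    double len2 = vx * vx + vy * vy;
    double t = len2 > 0 ? ((q.x - a.x) * vx + (q.y - a.y) * vy) / len2 : 0;
    t = t < 0 ? 0 : (t > 1 ? 1 : t);
    return {a.x + t * vx, a.y + t * vy};
}

// Snap q to the guide polyline if the nearest point is within `tol`.
P2 snapToGuide(const std::vector<P2>& guide, P2 q, double tol) {
    P2 best = q;
    double bestD2 = tol * tol;
    for (size_t i = 0; i + 1 < guide.size(); ++i) {
        P2 c = closestOnSegment(guide[i], guide[i + 1], q);
        double dx = c.x - q.x, dy = c.y - q.y, d2 = dx * dx + dy * dy;
        if (d2 <= bestD2) { bestD2 = d2; best = c; }
    }
    return best;  // unchanged if nothing was within tolerance
}

int main() {
    std::vector<P2> guide{{0, 0}, {1, 0}, {2, 1}};
    P2 snapped = snapToGuide(guide, {0.5, 0.05}, 0.1);
    std::printf("snapped to (%.2f, %.2f)\n", snapped.x, snapped.y);
    return 0;
}
```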


virtual reality

virtual reality projects
VR
For detailed information, refer to the portfolio entries for the following projects, which span a variety of application and research areas:
  • Architectural modeling and visualization
  • Display Technologies
  • GIS / Glaciology
  • Homeland Security
  • PLM and CAD
  • User Interaction
  • Virtual Store and Mall
  • Volume Rendering

volume rendering

intensity modulated radiation therapy (imrt) simulation
Collaborators: RCHE; Joe Pekny, Seza Orcun, Discovery Park
IMRT Simulation
The goal of this project is to develop a simulation application that integrates volume data sets of real human organs with a mathematical model of the IMRT process to predict, measure, and define better treatment approaches. In this research, we plan to build a proof-of-concept prototype system that uses PLM techniques to capture and relate the time-dependent changes of the irradiated volume throughout the course of treatment. This prototype will allow the dynamic information gathered from different sources to be managed and visualized in an effective and efficient way.
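As a minimal C++ sketch of one ingredient of such a prototype, the fragment below accumulates per-fraction dose into a voxel grid so that the time-dependent change of the irradiated volume can be tracked over a course of treatment; the grid size and uniform dose field are illustrative assumptions, not the project's model.

```cpp
#include <cstdio>
#include <vector>

struct DoseGrid {
    int nx, ny, nz;
    std::vector<float> dose;  // cumulative dose per voxel (Gy)
    DoseGrid(int x, int y, int z)
        : nx(x), ny(y), nz(z), dose(static_cast<size_t>(x) * y * z, 0.0f) {}

    // Add one treatment fraction's dose field (same voxel layout) to the total.
    void accumulate(const std::vector<float>& fraction) {
        for (size_t i = 0; i < dose.size(); ++i) dose[i] += fraction[i];
    }
};

int main() {
    DoseGrid grid(64, 64, 32);
    std::vector<float> fraction(64 * 64 * 32, 2.0f);  // uniform 2 Gy fraction
    for (int session = 0; session < 30; ++session)    // 30-fraction course
        grid.accumulate(fraction);
    std::printf("cumulative dose at voxel 0: %.1f Gy\n", grid.dose[0]);
    return 0;
}
```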

visualization of mcnp simulation and ct data
Collaborators: Tatjana Jevremovic, Katie Hileman, Elise Lenzo, Nuclear Engineering
MCNP Simulation
This project visualizes CT data together with MCNP (Monte Carlo N-Particle) transport simulations, which generate dose distributions and cell interactions. Two 3D visualizations are produced, one directly from the CT data and one from the MCNP output. These images should be similar, demonstrating that the bridge between the CT geometry data and MCNP, including the data conversions, works correctly. In addition, dose distributions from MCNP are displayed graphically. This simulation demonstrates that the approach can be applied to other nuclear engineering and industrial applications.
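One simple way to quantify how similar the two visualizations' underlying volumes are is a voxel-wise normalized RMS difference, sketched below in C++; it assumes both volumes have been resampled to the same grid, and all names and values are illustrative.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// RMS difference between two same-size volumes, normalized by the
// reference volume's magnitude; 0 means identical.
double normalizedRmsDiff(const std::vector<float>& ref,
                         const std::vector<float>& test) {
    double sumSq = 0.0, sumRef = 0.0;
    for (size_t i = 0; i < ref.size(); ++i) {
        double d = ref[i] - test[i];
        sumSq += d * d;
        sumRef += ref[i] * ref[i];
    }
    return sumRef > 0 ? std::sqrt(sumSq / sumRef) : 0.0;
}

int main() {
    std::vector<float> ctVolume(1000, 1.0f);     // placeholder CT volume
    std::vector<float> mcnpVolume(1000, 0.98f);  // placeholder MCNP volume
    // Small values indicate the CT-to-MCNP geometry bridge is consistent.
    std::printf("NRMS difference = %.4f\n",
                normalizedRmsDiff(ctVolume, mcnpVolume));
    return 0;
}
```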

volume visualization of human medical data set
Collaborator: Unity Health
Human medical data set
This is a collaborative project with UnityHealth focused on visualizing human medical data sets in three dimensions using various volume rendering algorithms. The initial data sets are scanned and stored as MRI, CT scan, or similar 2D slice formats. These data sets are then converted to a 3D volume format and rendered with different algorithms, such as isosurface extraction, ray tracing, texture mapping, and maximum intensity projection (MIP). Different rendering mechanisms are used to visualize different parts and features of the data sets. Part of the goal is to render large volumes interactively.
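Of the listed algorithms, maximum intensity projection (MIP) is the simplest to sketch. The C++ fragment below projects along one axis of a scalar volume; real renderers cast arbitrary rays, but the per-ray maximum is the same idea. Dimensions and values are illustrative assumptions.

```cpp
#include <cstdio>
#include <vector>

// volume is nx*ny*nz voxels stored x-fastest; returns an nx*ny image
// where each pixel holds the maximum voxel value along its z column.
std::vector<float> mipAlongZ(const std::vector<float>& volume,
                             int nx, int ny, int nz) {
    std::vector<float> image(static_cast<size_t>(nx) * ny, 0.0f);
    for (int z = 0; z < nz; ++z)
        for (int y = 0; y < ny; ++y)
            for (int x = 0; x < nx; ++x) {
                float v = volume[(static_cast<size_t>(z) * ny + y) * nx + x];
                if (v > image[static_cast<size_t>(y) * nx + x])
                    image[static_cast<size_t>(y) * nx + x] = v;
            }
    return image;
}

int main() {
    int nx = 4, ny = 4, nz = 8;
    std::vector<float> vol(static_cast<size_t>(nx) * ny * nz, 0.1f);
    vol[(3 * ny + 2) * nx + 1] = 0.9f;  // a bright voxel at (x=1, y=2, z=3)
    std::vector<float> img = mipAlongZ(vol, nx, ny, nz);
    std::printf("pixel (1,2) = %.1f\n", img[2 * nx + 1]);
    return 0;
}
```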

volume visualization of vet medical data set
Collaborator: VET School
VET medical data
This is a collaborative project with the VET School that applies the same volume rendering pipeline described above to veterinary medical data sets: 2D MRI or CT slices are converted to 3D volumes and rendered with isosurface, ray tracing, texture mapping, MIP, and related algorithms, with the goal of visualizing large volume data sets interactively.