This project investigates whether deep-learning-based super-resolution can be safely applied to scientific visualization tasks such as volume and flow rendering. Commonly used super-resolution methods are typically trained on movies or video games and may introduce hallucinated features or suppress critical fine-scale structures in scientific data. The project focuses on classifying the errors that arise when using off-the-shelf methods and addressing these issues through fine-tuning on scientific visualization images.
The workflow involves building an open-source neural super-resolution pipeline and generating paired low- and high-resolution scientific renderings, optionally including depth or motion information. Models are first applied without additional training, then fine-tuned on domain-specific data using losses that emphasize edge preservation, intensity fidelity, and structural accuracy. A comparison is also made with a proprietary hardware-accelerated super-resolution method by upscaling the same scientific scenes using its available API.
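The fine-tuning losses mentioned above can be illustrated with a minimal NumPy sketch. This is a hypothetical formulation, not the project's actual loss: it combines an L1 intensity term with an L1 difference of Sobel gradients to emphasize edge preservation (a real pipeline would express this in a deep-learning framework with autograd; the function names are illustrative).

```python
import numpy as np

def sobel_grad(img):
    # Sobel filters approximate horizontal and vertical intensity gradients
    kx = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]], dtype=float)
    ky = kx.T
    def conv2(a, k):
        out = np.zeros_like(a, dtype=float)
        p = np.pad(a, 1, mode="edge")
        for i in range(a.shape[0]):
            for j in range(a.shape[1]):
                out[i, j] = np.sum(p[i:i + 3, j:j + 3] * k)
        return out
    return conv2(img, kx), conv2(img, ky)

def edge_preserving_loss(sr, hr, w_edge=0.5):
    # L1 intensity fidelity plus a weighted L1 gradient (edge) term
    l1 = np.mean(np.abs(sr - hr))
    gx_s, gy_s = sobel_grad(sr)
    gx_h, gy_h = sobel_grad(hr)
    edge = 0.5 * (np.mean(np.abs(gx_s - gx_h)) + np.mean(np.abs(gy_s - gy_h)))
    return l1 + w_edge * edge
```

The weight `w_edge` trades off intensity fidelity against edge sharpness; a zero loss is reached only when both intensities and gradients match.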
Quality is assessed using both general image metrics and domain-specific measures, including preservation of small features, consistency across frames, and sensitivity of derived analyses. The project evaluates how the different approaches respond to characteristic scientific patterns such as thin isosurfaces, vortices, or sharp scalar gradients, identifying when neural super-resolution provides reliable acceleration for scientific visualization and where standard high-resolution rendering remains necessary.
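As a concrete example of combining a general metric with a domain-specific one, the sketch below pairs PSNR with a simple small-feature recall: the fraction of above-threshold reference pixels that survive in the upscaled image. Both functions are illustrative stand-ins, not the project's actual evaluation code.

```python
import numpy as np

def psnr(a, b, data_range=1.0):
    # peak signal-to-noise ratio in dB; infinite for identical images
    mse = np.mean((a - b) ** 2)
    if mse == 0:
        return float("inf")
    return 10 * np.log10(data_range ** 2 / mse)

def small_feature_recall(ref, test, thresh=0.5):
    # fraction of above-threshold reference pixels that remain
    # above threshold in the test image (1.0 = all features preserved)
    ref_mask = ref > thresh
    if ref_mask.sum() == 0:
        return 1.0
    return float(np.logical_and(ref_mask, test > thresh).sum() / ref_mask.sum())
```

A high PSNR with a low feature recall would indicate exactly the failure mode of interest: an image that looks globally faithful while fine structures have been suppressed.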
Volume rendering is a powerful tool for visualizing complex 3D datasets, but traditional transfer functions typically map color and opacity based only on scalar values at individual points, sometimes including derivatives. This approach can miss subtle structures or repeating patterns that span multiple voxels. This project explores transfer functions that operate on local spatial regions ("blocks") rather than single points, enabling visualizations to highlight patterns, textures, and recurring structures within volumetric data. Key challenges include efficiently representing these spatial patterns, extracting them from the data, and designing intuitive ways for users to explore and manipulate them.
The project focuses on scientific visualization and deep learning. It investigates methods for block-level feature extraction—such as deep learning-based descriptors, 3D convolutional features, or classical approaches like local histograms or PCA—and applies clustering or pattern classification to identify recurring structures. Interactive tools can then assign color and opacity to clustered patterns, creating more expressive and informative visualizations. The project involves working with scientific datasets, experimenting with both data-driven and user-driven approaches, and contributing to the development of more intuitive techniques for understanding complex volumetric data.
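One of the classical approaches mentioned above, local histograms plus clustering, can be sketched in a few lines. This is a simplified illustration under assumed conventions (scalar values in [0, 1], non-overlapping blocks, plain k-means with a deterministic initialization), not the project's implementation:

```python
import numpy as np

def block_histograms(volume, block=4, bins=8):
    # describe each non-overlapping block by a normalized value histogram
    zs, ys, xs = (s // block for s in volume.shape)
    feats = []
    for z in range(zs):
        for y in range(ys):
            for x in range(xs):
                b = volume[z * block:(z + 1) * block,
                           y * block:(y + 1) * block,
                           x * block:(x + 1) * block]
                h, _ = np.histogram(b, bins=bins, range=(0.0, 1.0))
                feats.append(h / h.sum())
    return np.array(feats)

def cluster_blocks(feats, k=2, iters=20):
    # plain k-means with a simple deterministic init (k-means++ would be better)
    centers = feats[np.linspace(0, len(feats) - 1, k).astype(int)].copy()
    labels = np.zeros(len(feats), dtype=int)
    for _ in range(iters):
        labels = np.argmin(((feats[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = feats[labels == c].mean(0)
    return labels
```

Cluster labels produced this way could then be mapped to color and opacity, giving a block-level transfer function in the spirit described above.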
Grids are a common way to visualize large data collections, but they involve a trade-off: higher resolution allows more thumbnails to be displayed, yet each glyph has only a small space to convey meaningful information. Efficient glyph design is therefore crucial to highlight similarities and differences across many data instances.
This project focuses on creating expressive glyphs for visualizing structures and processes in porous media through a two-step approach. First, a glyph design space is defined using a small set of simple primitives that can be varied in size, position, and color. These variations support rapid, pre-attentive comparison of patterns. Second, glyphs are parametrized in a data-driven way using a Siamese network architecture to capture feature similarities between data samples. This representation allows subtle commonalities and differences to be clearly reflected, making large ensemble comparisons more intuitive and informative.
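The mapping from learned embeddings to glyph primitives can be sketched as follows. This is a hypothetical example assuming the Siamese network has already produced a 2D embedding per sample; the specific mapping (dimension 0 to size, dimension 1 to hue) is an illustrative choice:

```python
import numpy as np

def embeddings_to_glyphs(emb):
    # min-max normalize each embedding dimension, then map
    # dimension 0 to glyph size in [0.2, 1.0] and dimension 1 to hue in [0, 1]
    lo, hi = emb.min(axis=0), emb.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)
    norm = (emb - lo) / span
    size = 0.2 + 0.8 * norm[:, 0]
    hue = norm[:, 1]
    return size, hue
```

Because nearby embeddings yield nearby sizes and hues, samples the network considers similar produce visually similar glyphs, which is what enables pre-attentive comparison in the grid.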
Sca2Gri is a scalable post-processing method for large-scale scatterplots that reduces visual clutter by gridifying glyph representations. It is designed for data analysis scenarios involving millions of data points, far beyond what traditional scatterplot rendering techniques can handle effectively.
This project explores ways to improve both the performance and the expressiveness of Sca2Gri. One focus is optimizing the selection of data points for rendering grid glyphs using specialized data structures. Range trees combined with fractional cascading present a promising approach. Range trees efficiently handle range queries in n-dimensional data, and fractional cascading can reduce the computational complexity to that of a one-dimensional range tree, potentially speeding up queries for visualization.
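The one-dimensional base case that these structures build on can be shown with a short sketch: a sorted array answers a 1D range-count query with two binary searches in O(log n), which is the per-level cost that fractional cascading preserves across the levels of a range tree.

```python
import bisect

def build_index(values):
    # a sorted array is the 1D structure underlying each range-tree level
    return sorted(values)

def range_count(index, lo, hi):
    # count values v with lo <= v <= hi using two binary searches, O(log n)
    return bisect.bisect_right(index, hi) - bisect.bisect_left(index, lo)
```

Extending this to n dimensions with a range tree and threading the sorted lists together via fractional cascading is the part the project would investigate.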
Another key aspect is the aggregation of data points within each glyph to provide a more holistic view. Rather than showing a single representative point, each glyph could summarize the full range of underlying data—through averaging or other aggregation techniques—while maintaining interactive exploration capabilities, such as a draggable lens.
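The per-glyph aggregation idea can be sketched as a simple gridification step. This is an illustrative example (points assumed normalized to [0, 1]^2, mean as the aggregation function), not Sca2Gri's actual algorithm:

```python
import numpy as np

def gridify_aggregate(points, values, grid=(4, 4)):
    # assign each 2D point in [0, 1]^2 to a grid cell and average the
    # values of all points that fall into that cell (NaN for empty cells)
    gx, gy = grid
    ix = np.minimum((points[:, 0] * gx).astype(int), gx - 1)
    iy = np.minimum((points[:, 1] * gy).astype(int), gy - 1)
    agg = np.full(grid, np.nan)
    for cx in range(gx):
        for cy in range(gy):
            m = (ix == cx) & (iy == cy)
            if m.any():
                agg[cx, cy] = values[m].mean()
    return agg
```

Swapping the mean for other summaries (min/max bands, histograms, medoids) would give each glyph a richer view of its underlying points.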
This project seeks to address questions such as:
By combining efficient data structures with aggregation and interaction techniques, this project aims to make scatterplot visualization both faster and more informative, supporting deeper insights into complex datasets.
PARViT is envisioned as an ML-based perioperative support system that uses augmented reality to guide surgeons through the critical preparation phase of rectal cancer surgery. This phase requires careful dissection of four structural “pillars,” each composed of five partial steps, to enable a radical resection and favorable oncological outcomes.
The project focuses on learning from expert ratings of dissection progress in a large video database. These ratings form the basis for automated feedback and for studying how varying levels of preparation correlate with three-year oncological outcomes.
Technically, the project aims to build PARViT on top of a Vision Transformer (ViT) architecture. Surgery frames are split into patches and processed through transformer blocks to produce frame embeddings that capture visual features relevant for assessing progress. The model will be pre-trained on large image datasets (e.g., ImageNet-21k, Medical ImageNet) and fine-tuned on labeled surgery videos. Self-attention and multi-headed attention mechanisms will enable the system to identify important structures and contextual relationships in each frame.
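The patch-splitting step of a ViT can be sketched in NumPy. This shows only the tokenization; the subsequent linear projection, positional encoding, and transformer blocks are omitted:

```python
import numpy as np

def frame_to_patches(frame, patch=16):
    # split an H x W x C frame into flattened, non-overlapping patches:
    # the token sequence a ViT encoder consumes (before linear projection
    # and positional encoding)
    h, w, c = frame.shape
    assert h % patch == 0 and w % patch == 0
    rows, cols = h // patch, w // patch
    return (frame.reshape(rows, patch, cols, patch, c)
                 .transpose(0, 2, 1, 3, 4)
                 .reshape(rows * cols, patch * patch * c))
```

For a 224x224 RGB frame with 16x16 patches, this yields 196 tokens of dimension 768, matching the standard ViT-Base input layout.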
Overall, the project aims to explore how such a system can support consistent, safe tissue preparation and reduce perioperative complications. It will be conducted in collaboration with the UMCG.
Objectives:
Detect the start of the preparation phase, track progress within each pillar on a 0–5 scale, and identify the transition to tumor removal. Maintain an internal representation of pillar-specific progress based on frame-level classifications learned from expert-annotated sequences.
Use three-year outcome data to examine whether incomplete preparation (e.g., reaching only 3/5 in a pillar) is associated with increased recurrence risk, thereby validating the clinical relevance of the preparation states.
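The internal pillar-progress representation from the first objective can be sketched as a simple monotone tracker over frame-level classifier outputs. The function and its interface are hypothetical; a real system would also need smoothing and confidence handling:

```python
def track_progress(frame_predictions, n_pillars=4, max_step=5):
    # frame_predictions: per-frame (pillar_index, predicted_step) pairs from
    # a frame-level classifier; progress within a pillar is kept monotone by
    # taking the maximum step observed so far, clamped to the 0-5 scale
    state = [0] * n_pillars
    for pillar, step in frame_predictions:
        state[pillar] = max(state[pillar], min(max(step, 0), max_step))
    return state
```

A pillar that never reaches `max_step` (e.g., stalls at 3/5) is exactly the incomplete-preparation case whose association with recurrence risk the outcome analysis would examine.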
Stretch Goals:
Retrieve similar, well-executed reference segments from the database by comparing internal embeddings with those from the ongoing procedure.
Highlight influential image regions using attention maps to support interpretation and identify structures relevant for the assessed state.
Prototype AR overlays that present state information, best-practice examples, and attention highlights in a clear, unobtrusive manner, with long-term potential for integration into robotic platforms such as the Da Vinci system.
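The embedding-based retrieval in the first stretch goal amounts to a nearest-neighbor search; a minimal cosine-similarity sketch (illustrative, assuming precomputed embeddings) looks like this:

```python
import numpy as np

def retrieve_similar(query, database, top_k=3):
    # cosine similarity between one query embedding and a database of
    # embeddings (rows); returns indices and scores of the best matches
    q = query / np.linalg.norm(query)
    db = database / np.linalg.norm(database, axis=1, keepdims=True)
    sims = db @ q
    order = np.argsort(-sims)[:top_k]
    return order, sims[order]
```

At the scale of a large video database, an approximate nearest-neighbor index would replace the brute-force matrix product, but the interface would stay the same.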
The Hierarchical Dataset Explorer (Hieradex) is a visualization method for large-scale image and volumetric datasets. It leverages the hierarchical grid structure of the Level-of-Detail Grid (LDG) [1] to visualize ensembles across multiple granularities, enabling exploration of each individual dataset sample as well as the distribution of, and relations within, a dataset. The implementation of Hieradex is publicly available here, and a demo of its functionality can be seen here. Below you can find a set of project topics pertaining to the improvement of Hieradex as a dataset exploration tool. Any questions related to these projects may be directed to d.h.boerema@rug.nl.
[1] Frey, S. (2022), Optimizing Grid Layouts for Level-of-Detail Exploration of Large Data Collections. Computer Graphics Forum, 41: 247-258. [https://doi.org/10.1111/cgf.14537](https://doi.org/10.1111/cgf.14537)
One of the key features of Hieradex is the ability to design and control the transfer function through the UI. However, transfer function design is a complex process in which many visualization principles, such as clarity, focus, and accessibility, need to be balanced. To give the user an initial estimate, a rudimentary 'transfer function exploration space' is implemented, in which a simple set of transfer functions is provided as a starting point for further design. Although this helps in providing the user with an initial guess, further tuning is always needed due to the simplicity of the provided functions.
This project aims to explore the possibilities of transfer function design in this context with the end goal of providing the user with informative transfer functions which require significantly less tuning. This could be achieved through methods such as:
By providing a better set of transfer functions to the user, the dataset exploration process becomes both more efficient by cutting down time needed for tuning and more insightful by enabling more informative visualizations.
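One data-driven way to generate such an initial transfer function is to derive opacity from the volume's value histogram. The sketch below is a hypothetical example, not part of Hieradex: it makes rare scalar values (often the interesting structures) opaque and frequent values (often background) transparent.

```python
import numpy as np

def inverse_frequency_opacity(volume, bins=64, max_opacity=1.0):
    # candidate transfer function: assign high opacity to rare scalar
    # values and low opacity to common ones, assuming values in [0, 1]
    hist, edges = np.histogram(volume, bins=bins, range=(0.0, 1.0))
    freq = hist / hist.sum()
    opacity = max_opacity * (1.0 - freq / freq.max())
    return edges, opacity
```

A small family of such candidates (varying `bins`, smoothing, or the frequency-to-opacity mapping) could populate the exploration space with functions that already highlight data-specific structure.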
To further expand the capabilities of Hieradex as a visualization tool, a mesh renderer could be integrated to support the visualization and exploration of mesh databases. This would allow Hieradex to visualize a broad range of data-collection types (images, volumes, meshes). Many of the tools used during volume rendering, such as level-of-detail rendering, can be repurposed to serve meshes as well. Aside from developing and integrating a mesh renderer, the main challenge in this project is to find ways to enhance the dataset exploration process specifically for meshes. For volume rendering, tools such as a user-configurable transfer function are implemented to aid in both the visualization and exploration of the data samples themselves. By developing similar tools specifically for meshes, it becomes easier to sift through and distinguish individual samples in otherwise largely homogeneous mesh datasets.