Fast depth estimation with structured light
Synopsis: 3D sensing is gaining momentum with the increase in computational power and with the possibility of displaying and fabricating the results with high fidelity. Structured light (SL) systems are among the most commonly used for 3D object scanning because they can be built from off-the-shelf hardware components.
In this project, we will implement and test two structured light encoding techniques [1][2]: time-multiplexing and phase-shifting. The emphasis is on fast acquisition and decoding of the captured patterns. We have implemented and tested the basic methods in Matlab, and now we would like to streamline, optimize and port the code to C/C++ (in whole or in part). The optimization will consist of adding multi-threading support and experimenting with different pattern-decoding approaches. The project will include hands-on work with a projector-camera pair.
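As a flavour of the decoding step, here is a minimal sketch of three-step phase-shift decoding, assuming sinusoidal patterns shifted by 2π/3; the function name and the synthetic data are illustrative, not our Matlab code:

```python
import numpy as np

# Three-step phase-shifting: given three captures of a sinusoidal pattern
# shifted by -2*pi/3, 0, +2*pi/3, recover the wrapped phase per pixel.
def wrapped_phase(i1, i2, i3):
    """Per-pixel wrapped phase in (-pi, pi] from three shifted captures."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# Synthetic sanity check: build the three patterns from a known phase ramp.
phi = np.linspace(-3.0, 3.0, 256)      # ground-truth phase, within (-pi, pi)
shifts = (-2 * np.pi / 3, 0.0, 2 * np.pi / 3)
i1, i2, i3 = (0.5 + 0.5 * np.cos(phi + s) for s in shifts)
est = wrapped_phase(i1, i2, i3)        # should recover phi
```

The same per-pixel arithmetic maps directly onto multi-threaded C/C++ (e.g. one image row per thread), which is where the planned optimization would come in.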
References:
[1] Salvi, Joaquim, et al. “A state of the art in structured light patterns for surface profilometry.” Pattern recognition 43.8 (2010): 2666-2680.
[2] Geng, Jason. “Structured-light 3D surface imaging: a tutorial.” Advances in Optics and Photonics 3.2 (2011): 128-160.
Deliverables: Report and running code.
Prerequisites:
– knowledge of image processing / computer vision
– coding skills in Matlab and C/C++
Type of work: 20% research and 80% implementation
Level: BS semester project
Prediction of the age of a C. elegans worm
Synopsis: C. elegans is one of the most interesting worms in the life sciences for studying aging. In this project we provide you with microscopy images of this worm at different stages of its life. The goal of the project is to develop a machine-learning-based approach to predict the age of a worm from its image(s).
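To make the task concrete, here is a minimal sketch of the regression setting; the three features and all data are synthetic stand-ins, since a real pipeline would extract (or learn) features from the microscopy images:

```python
import numpy as np

# Toy setting: predict age from a 3-number feature vector per image
# (e.g. body size, a texture statistic, curvature), via least squares.
rng = np.random.default_rng(0)
n = 200
features = rng.normal(size=(n, 3))
true_w = np.array([2.0, -1.0, 0.5])           # hidden "ageing" weights
ages = features @ true_w + 5.0 + 0.01 * rng.normal(size=n)

X = np.hstack([features, np.ones((n, 1))])    # append a bias column
w, *_ = np.linalg.lstsq(X, ages, rcond=None)  # fit age ~ w . features + b
pred = X @ w                                  # predicted ages
```

In practice a convolutional network or a regressor on learned features would replace the hand-made linear model, but the train/predict structure stays the same.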
Deliverables: Working code for prediction of age of the worm, project report.
Prerequisites: Computer vision, machine learning, Matlab / Python / C++
Level: Master
Rectification of a security image on a smartphone
Description: Document authentication on a smartphone requires rectifying the perspective distortion of the captured image by applying an inverse perspective transformation, which can be implemented by finding the parameters of a 2D homography. The goal is to deduce these parameters from known features of the security mark and to rectify the captured image without using special markers. The work will be carried out in Matlab first and, if successful, implemented on an Android smartphone. This is a collaboration with industry.
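The core estimation step can be sketched as follows: the standard four-point direct linear transform (DLT) for a 2D homography. The point values are illustrative, not the industry partner's data:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src via the DLT.

    src, dst: (N, 2) arrays of corresponding points, N >= 4.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Two linear equations per correspondence, from u = h1.p / h3.p etc.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)       # null-space vector = homography entries
    return H / H[2, 2]             # normalise so H[2, 2] == 1

# Sanity check against a known transform.
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
H_true = np.array([[2, 0, 1], [0, 2, -1], [0, 0, 1]], dtype=float)
pts = np.hstack([src, np.ones((4, 1))]) @ H_true.T
dst = pts[:, :2] / pts[:, 2:]      # projective division
H_est = homography_dlt(src, dst)
```

In the real task, the four (or more) source points would come from the known features of the security mark detected in the captured photo; applying the inverse homography then rectifies the image.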
References:
[1] Yue Liu, Ju Yang and Mingjun Liu, “Recognition of QR Code with mobile phones,” 2008 Chinese Control and Decision Conference, Yantai, Shandong, 2008, pp. 203-206. doi: 10.1109/CCDC.2008.4597299
[2] Yunhua Gu and Weixiang Zhang, “QR code recognition based on image processing,” International Conference on Information Science and Technology, Nanjing, 2011, pp. 733-736. doi: 10.1109/ICIST.2011.5765349
Deliverables: Report and running prototype (Matlab and/or Android).
Prerequisites:
– knowledge of image processing / computer vision
– coding skills in Matlab (and possibly Java Android)
Level: MS semester project
Google Glasses at the Art Gallery
Synopsis: Our lab collaborates with artists and museums, and we currently have a research project on Aby Warburg’s Bilderatlas (see e.g. warburg.library.cornell.edu). We also have research expertise and available hardware in augmented reality. Your project would be to use AR glasses to create a new way to visit a museum or art gallery (and possibly a new way of curating the gallery in the first place): recognising each painting and overlaying additional information from our research work and from outside sources. It will likely be an interactive system, allowing the user to change the information overlaid.
In this project you will:
1. Research current interactive, AR and immersive technologies in museums and galleries
2. Build a simple painting recogniser, to recognise a painting from a small dataset (e.g. with SIFT)
3. Investigate ways to add different layers of information interactively and unintrusively
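The matching step behind point 2 can be sketched as nearest-neighbour descriptor matching with Lowe's ratio test. The descriptors below are random stand-ins; in the real system they would come from a SIFT extractor (e.g. OpenCV's `cv2.SIFT_create()`):

```python
import numpy as np

def ratio_test_matches(query, gallery, ratio=0.75):
    """Return (query_idx, gallery_idx) pairs passing Lowe's ratio test."""
    matches = []
    for qi, d in enumerate(query):
        dists = np.linalg.norm(gallery - d, axis=1)   # distance to each descriptor
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:       # unambiguous match only
            matches.append((qi, best))
    return matches

# Stand-in data: 50 descriptors of one painting, 3 noisy query descriptors.
rng = np.random.default_rng(1)
gallery = rng.normal(size=(50, 128))                  # 128-D, like SIFT
query = gallery[[3, 10, 42]] + 0.01 * rng.normal(size=(3, 128))
matches = ratio_test_matches(query, gallery)
```

Counting matches against each painting in the dataset and taking the maximum already gives a simple recogniser; a geometric verification step (e.g. a homography fit) would make it robust.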
Deliverables: An AR (e.g. Google Glass or phone-based) app; and maybe even a small exhibition!
Prerequisites:
Computer graphics, visual computing or especially computer vision would be useful, as well as some interest in art, archaeology or museology.
Type of work: 70% research and 30% implementation
Level: Master's project
Glitch Art Generator: Build the Ultimate Pixel-Twisting Software
Synopsis: 11010111. Images, like most things that are now part of our environment, are based on sequences of 1s and 0s. 10010111 … Something has gone wrong, but what exactly is the result? How can we interfere with an image file in order to create disorder? Well, not disorder; let's say artistic chaos. This is more or less the purpose of glitch art, and there is an active creative community experimenting with ways to generate artwork based on numeric “errors”, with the help of more or less sophisticated dedicated tools. This project should lead to a prototype of such an application, which should offer many ways to generate glitch-art images, in an attempt to go a little further than the currently available tools and to question the limits of this practice. Mix an image with data from other representations of reality, shift and disrupt sequences, steer the destruction and the creation of data.
Tasks:
1. Collaborate with a student artist photographer of CEPV (Centre Enseignement Professionnel de Vevey)
2. Investigate the RAW image format
3. Investigate existing glitch art tools
4. Implement an online or offline program with a rich user interface, including several mechanisms for changing the original file data
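One of the simplest glitch mechanisms can be sketched as follows: flip random bits in a file's payload while leaving the header intact (corrupting a real format's header usually makes the file unreadable rather than "glitched"). The header length and flip rate here are arbitrary illustrative choices:

```python
import random

def glitch_bytes(data: bytes, header_len: int = 64, rate: float = 0.05,
                 seed: int = 0) -> bytes:
    """Flip one random bit in roughly `rate` of the bytes after the header."""
    rng = random.Random(seed)              # seeded, so the glitch is repeatable
    out = bytearray(data)
    for i in range(header_len, len(out)):
        if rng.random() < rate:
            out[i] ^= 1 << rng.randrange(8)   # flip one random bit
    return bytes(out)

original = bytes(range(256)) * 8           # stand-in for an image file's bytes
glitched = glitch_bytes(original)
```

Other mechanisms from the task list (shifting sequences, splicing in foreign data) follow the same pattern: treat the file as a byte array and transform slices of it.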
Deliverables: At the end of the semester, the student should deliver a working tool that allows a user to generate and visualise an image as the result of modifying another image with the help of several algorithms.
Prerequisites: Curiosity, knowledge of image structure (especially the RAW format), and any programming language suitable for building the online/offline application
Type of work: 30% research, 60% development & testing, 10% communication
Level: Master
Interactive Website to Explore a Collection of Paintings
Synopsis: In the lab, we’re digitising and annotating all the panels of Aby Warburg’s Bilderatlas, an important collection of art works from the 1920s – and perhaps the first ‘big data’ in art history. The art works are divided into 63 ‘panels’ – Cornell has made an interactive website for ten of them [1], and the art-history journal Engramma has a (less sophisticated) collection of all of them [2]. We’ve got a digital-humanities project analysing the various panels – connections between panels in time and space, between artists, but also between recurring figures – similar characters that appear in very different works of art (see e.g. [3]).
In this project you will:
1. Decide on a database structure for the images and the significant meta-data that goes with them
2. Design an interactive web-based tool (or an app, if you strongly prefer) to allow art historians to navigate the Bilderatlas collection using our annotations
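One possible shape for the database in step 1 is sketched below: panels contain artworks, artworks carry metadata, and recurring figures are linked to artworks through a join table. All table and column names are suggestions, not a fixed specification:

```python
import sqlite3

# In-memory SQLite is enough to prototype the schema before choosing a backend.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE panels (
    id INTEGER PRIMARY KEY,
    number INTEGER NOT NULL,            -- Bilderatlas panel number (1..63)
    title TEXT
);
CREATE TABLE artworks (
    id INTEGER PRIMARY KEY,
    panel_id INTEGER REFERENCES panels(id),
    artist TEXT,
    year INTEGER,
    image_path TEXT NOT NULL
);
CREATE TABLE figures (                   -- recurring figures across artworks
    id INTEGER PRIMARY KEY,
    label TEXT NOT NULL
);
CREATE TABLE artwork_figures (           -- many-to-many link
    artwork_id INTEGER REFERENCES artworks(id),
    figure_id INTEGER REFERENCES figures(id),
    PRIMARY KEY (artwork_id, figure_id)
);
""")
conn.execute("INSERT INTO panels (number, title) VALUES (46, 'Nymph')")
panel_count = conn.execute("SELECT COUNT(*) FROM panels").fetchone()[0]
```

The many-to-many `artwork_figures` table is what lets the website answer the "same figure across very different works" queries mentioned above.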
Deliverables: A written report and (most importantly) a website
Prerequisites:
Some knowledge of web development, perhaps including tools for interactive websites such as HTML5 or D3.js.
Type of work: 30% research and 70% implementation
Level: BS Semester project
Interactive non-photorealistic rendering of 2D and 3D bodies
Synopsis: Human models for the annotation or semiotic study of human pose and gesture tend to be based on anatomical and photorealistic ideals. For the study of depicted poses, including in art, these are not appropriate: the pictographic or semiotic implications of the pose matter more.
In this project you will:
1. Create a high-meaning diagram or ‘visualisation’ (as opposed to a photorealistic rendering) of human pose
2. Using this visualisation, derive a low-dimensional representation of human pose, in 2D and possibly 3D
3. Use these models for the annotation and exploration of human pose datasets in art
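Step 2 can be sketched with PCA over 2D keypoint coordinates; the poses below are synthetic (generated from three hidden modes), whereas a real dataset would supply annotated joints per depicted figure:

```python
import numpy as np

# Synthetic poses: each row is a flattened (x, y) vector of 17 joints,
# generated from a mean pose plus three latent "pose modes".
rng = np.random.default_rng(2)
n_poses, n_joints = 300, 17
base = rng.normal(size=(1, n_joints * 2))
modes = rng.normal(size=(3, n_joints * 2))
coeffs = rng.normal(size=(n_poses, 3))
poses = base + coeffs @ modes

# PCA: centre the data, keep the top-k right singular vectors as the basis.
mean = poses.mean(axis=0)
centred = poses - mean
_, s, vt = np.linalg.svd(centred, full_matrices=False)
k = 3
codes = centred @ vt[:k].T          # low-dimensional pose codes (n_poses, k)
recon = mean + codes @ vt[:k]       # back-projection to joint coordinates
```

The `codes` are exactly the low-dimensional coordinates one would expose in an annotation or exploration interface; since the toy data truly has three modes, the reconstruction is exact here.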
Prerequisites: Computer graphics and python, possibly machine learning and HCI
Type of work: 30% research and 70% implementation
Level: BS Semester project