Håvard Espeland

PhD Student
Mobile: +47 992 73 472

About

Håvard is part of the Media Performance Group at Simula, working on the PreP project. He started at Simula as a master's student in 2007 and has been a member of the group since. During his PhD, he also lectured at the University of Oslo in INF5063, a course on heterogeneous processing, and supervised several master's students in areas related to his research.


Research interests

Håvard has contributed to several research projects in the area of systems support for multimedia processing. Most of his research is driven by building real, working prototypes with an experimental approach to science. Aiming to solve real problems with impact beyond academia, he places a strong emphasis on interdisciplinary collaboration and on the industrial relevance of his work. With this approach, he and his colleagues have reached out to industry and, through collaboration, found novel research opportunities in multimedia processing that can have real impact on industrial practice. As part of his PhD contributions, he co-developed the P2G framework, an open-source project comprising a language and runtime for elastic execution of multimedia workloads on heterogeneous architectures. Other contributions include work on video codecs, resource management, architectural considerations, and low-level scheduling.


PreP project

Few major films have been produced recently without months of post-production to correct mistakes, improve quality, or create large parts of their content. This hampers the influence of creative filmmakers, because the end product is in the hands of artistic technicians, who must interpret and implement the director's vision long after filming has ended. This way of working has been described as follows:

A director tries to imagine something that he’s not quite sure how it will look and then tries to explain it to someone who implements it three months from now.

The PreP consortium is committed to emancipating filmmakers, allowing them to create the mixed-reality films they want without losing control over them. We propose a seemingly simple, yet fundamental change in the production process: moving most virtual parts of film scenes out of post-production and onto the set, where they can evolve and take part in the creative vision of a director and his crew. Key to this vision is the seamless integration of a film production's content and metadata in a single unified system, and the ability to collaboratively access and manipulate this data in real time, e.g. in a live preview of the final scene using real-time tracking or existing post-production tools. This allows the crew to quickly produce a new draft of a scene's look right there on set, and creates an immediacy in working together by instantly seeing and manipulating the same image.

Our approach enables crews to experiment on set and explore the full depth of their creative vision. It reduces costs by avoiding the need to re-shoot scenes months after the original filming. It smooths cooperation between the director and post-production, because scenes can be grasped visually and a common understanding can be established on set. It provides a context for the director's and actors' imagination and gives a better spatial understanding of the final scene without the need to build large parts of a set.


A full publication list is available.
