
Journal of Virtual Reality and Broadcasting (jvrb)

JVRB - Journal of Virtual Reality and Broadcasting, 7(2010), no. 4.: For broadcasting purposes, Mixed Reality,
the combination of real and virtual scene content, has
become ubiquitous nowadays. Mixed Reality recording
still requires expensive studio setups and is often limited
to simple color keying. We present a system for Mixed
Reality applications which uses depth keying and
provides three-dimensional mixing of real and artificial
content. It features enhanced realism through automatic
shadow computation, which we consider a core issue for
achieving realism and a convincing visual perception,
besides the correct alignment of the two modalities and
correct occlusion handling. Furthermore, we present a
possibility to support the placement of virtual content in the scene.
The core feature of our system is the incorporation of a Time-of-Flight (ToF) camera device. This device
delivers real-time depth images of the environment at a
reasonable resolution and quality. This camera is used to
build a static environment model, and it also allows correct
handling of mutual occlusions between real and virtual
content, shadow computation, and enhanced content planning.
The presented system is inexpensive, compact, mobile,
flexible and provides convenient calibration procedures.
Chroma-keying is replaced by depth-keying, which is
performed efficiently on the graphics processing
unit (GPU) using an environment model and
the current ToF-camera image. Automatic extraction and
tracking of dynamic scene content is also performed,
and this information is used for planning and alignment of
virtual content.
An additional noteworthy feature is that depth maps of the
mixed content are available in real time, which makes the
approach suitable for future 3DTV productions. The
paper gives an overview of the whole system,
including camera calibration, environment
model generation, real-time keying and mixing of virtual
and real content, shadowing for virtual content and
dynamic object tracking for content planning. 2012/04/21 - 18:57
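The depth-keying step can be illustrated as a per-pixel comparison of the live ToF depth image against the static environment model: pixels measurably closer to the camera than the modeled background are classified as foreground. A minimal NumPy sketch (array names and the noise threshold are assumptions for illustration, not the paper's implementation):

```python
import numpy as np

def depth_key(current_depth, background_depth, threshold=0.05):
    """Classify pixels as foreground where the current ToF depth is
    noticeably closer to the camera than the static environment model.

    current_depth, background_depth: 2D arrays of depth in meters.
    threshold: minimum depth difference (m) that counts as foreground;
               absorbs ToF sensor noise.
    Returns a boolean foreground mask.
    """
    # Foreground = pixels that lie in front of the modeled background.
    return (background_depth - current_depth) > threshold

# Toy example: a flat wall at 3 m with a dynamic object at 1.5 m.
background = np.full((4, 4), 3.0)
current = background.copy()
current[1:3, 1:3] = 1.5          # object in front of the wall
mask = depth_key(current, background)
```

In the paper's setting this comparison runs on the GPU per fragment, but the keying logic itself is the same simple difference test.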

JVRB - Journal of Virtual Reality and Broadcasting, 7(2010), no. 3.: This paper presents an empirical study of affine
invariant feature detectors to perform matching on
video sequences of people with non-rigid surface
deformation. Recent advances in feature detection and
wide baseline matching have focused on static scenes.
Video frames of human movement capture highly
non-rigid deformation such as loose hair, cloth creases,
skin stretching and free flowing clothing. This study
evaluates the performance of six widely used feature
detectors for sparse temporal correspondence on single
view and multiple view video sequences. Quantitative
evaluation is performed of both the number of features
detected and their temporal matching, with and without
ground-truth correspondence. Recall-accuracy analysis of
feature matching is reported for temporal correspondence
on single view and multiple view sequences of people with variation in clothing and movement. This analysis
identifies that existing feature detection and matching
algorithms are unreliable for fast movement with common
clothing.
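The recall-accuracy analysis described above is typically computed by comparing proposed feature matches against ground-truth correspondences. A minimal sketch of such an evaluation (the pair-set representation and function name are illustrative assumptions, not the paper's code):

```python
def recall_precision(matches, ground_truth):
    """Score feature matches against ground-truth correspondences.

    matches: set of (feature_id_frame_t, feature_id_frame_t1) pairs
             produced by the detector/matcher under evaluation.
    ground_truth: set of correct (id, id) pairs.
    Returns (recall, precision); accuracy plots often use 1 - precision.
    """
    true_positives = len(matches & ground_truth)
    recall = true_positives / len(ground_truth) if ground_truth else 0.0
    precision = true_positives / len(matches) if matches else 0.0
    return recall, precision

# Toy example: 3 of 4 proposed matches are correct; 5 correct pairs exist.
gt = {(1, 1), (2, 2), (3, 3), (4, 4), (5, 5)}
proposed = {(1, 1), (2, 2), (3, 3), (6, 9)}
r, p = recall_precision(proposed, gt)   # r = 0.6, p = 0.75
```

Sweeping a matching threshold and recording (recall, 1 - precision) at each setting yields the recall-accuracy curves reported in such studies.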

JVRB - Journal of Virtual Reality and Broadcasting, 7(2010), no. 2.: This paper presents different application scenarios for which the registration of sub-sequence reconstructions or multi-camera reconstructions is essential for successful camera motion estimation and 3D reconstruction from video. The registration is achieved by merging unconnected feature point tracks between the reconstructions. One application is drift removal for sequential camera motion estimation of long sequences. The state of the art in drift removal is to apply a RANSAC approach to find unconnected feature point tracks. In this paper an alternative spectral algorithm for pairwise matching of unconnected feature point tracks is used. It is then shown that the algorithms can be combined and applied to novel scenarios where independent camera motion estimations must be registered into a common global coordinate system. In the first scenario, multiple moving cameras which capture the same scene simultaneously are registered. A second scenario occurs when the tracking of feature points during sequential camera motion estimation fails completely, e.g., due to large occluding objects in the foreground, and the unconnected tracks of the independent reconstructions must be merged. In the third scenario, image sequences of the same scene captured under different illuminations are registered. Several experiments with challenging real video sequences demonstrate that the presented techniques work in practice.
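Once merged feature point tracks provide 3D-3D correspondences between two reconstructions, registering them into a common global coordinate system amounts to estimating a 7-DoF similarity transform (scale, rotation, translation). A hedged sketch using the closed-form Umeyama method (the paper's own registration pipeline may differ; the names here are illustrative):

```python
import numpy as np

def similarity_transform(src, dst):
    """Estimate scale s, rotation R, translation t with dst ~ s*R@src + t,
    from corresponding 3D points of two reconstructions (Umeyama-style
    closed form via SVD of the cross-covariance)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)          # cross-covariance matrix
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))        # guard against reflections
    D = np.array([1.0, 1.0, d])
    R = U @ np.diag(D) @ Vt
    var_src = (src_c ** 2).sum() / len(src)   # variance of source points
    s = (S * D).sum() / var_src
    t = mu_d - s * R @ mu_s
    return s, R, t

# Toy check: recover a known scale, rotation, and translation.
rng = np.random.default_rng(0)
src = rng.normal(size=(20, 3))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
dst = 2.0 * src @ R_true.T + np.array([1.0, -2.0, 0.5])
s, R, t = similarity_transform(src, dst)
```

In a drift-removal or multi-camera setting, the correspondences fed to such an estimator would come from the merged (previously unconnected) feature point tracks, typically inside a robust loop such as RANSAC.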