Results 1 - 10 of 23
A Shading Language on Graphics Hardware: The PixelFlow Shading System
- PROCEEDINGS OF SIGGRAPH 98 (ORLANDO, FLORIDA, JULY 19--24, 1998), 1998
Abstract - Cited by 92 (12 self)
Over the years, there have been two main branches of computer graphics image-synthesis research; one focused on interactivity, the other on image quality. Procedural shading is a powerful tool, commonly used for creating high-quality images and production animation. A key aspect of most procedural shading is the use of a shading language, which allows a high-level description of the color and shading of each surface. However, shading languages have been beyond the capabilities of the interactive graphics hardware community. We have created a parallel graphics multicomputer, PixelFlow, that can render images at 30 frames per second using a shading language. This is the first system to be able to support a shading language in real-time. In this paper, we describe some of the techniques that make this possible.
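The abstract describes a shader as a short user-written procedure that computes a final color for each surface point from appearance parameters such as the surface normal and light direction. A minimal illustrative sketch of that idea in Python follows; this is not the PixelFlow shading language, and all names here are hypothetical:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def diffuse_shader(normal, light_dir, light_color, base_color):
    """Toy per-point shader: a Lambertian diffuse term.

    The inputs mirror the 'appearance parameters' the abstract mentions
    (surface normal, light direction and color); they are illustrative only.
    """
    n = normalize(normal)
    l = normalize(light_dir)
    intensity = max(0.0, dot(n, l))  # clamp back-facing light to zero
    return tuple(bc * lc * intensity
                 for bc, lc in zip(base_color, light_color))

# A point lit head-on receives the full base color.
color = diffuse_shader((0, 0, 1), (0, 0, 1), (1.0, 1.0, 1.0), (0.8, 0.2, 0.2))
```

In a real shading-language system, a procedure like this would be compiled to run per pixel on the graphics hardware rather than interpreted on the CPU.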
A new perceptually uniform color space with associated color similarity measure for content-based image and video retrieval
- In Proc., 2005
Abstract - Cited by 12 (0 self)
Color analysis is frequently used in image/video retrieval. However, many existing color spaces and color distances fail to correctly capture color differences usually perceived by the human eye. The objective of this paper is to first highlight the limitations of existing color spaces and similarity measures in representing human perception of colors, and then to propose (i) a new perceptual color space model called HCL, and (ii) an associated color similarity measure denoted DHCL. Experimental results show that using DHCL on the new color space HCL leads to a solution very close to human perception of colors and hence to a potentially more effective content-based image/video retrieval. Moreover, the application of the similarity measure DHCL to other spaces like HSV leads to a better retrieval effectiveness. A comparison of HCL against L*C*H and CIECAM02 spaces using color histograms and a similarity distance based on Dirichlet distribution illustrates the good performance of HCL for a collection of 3500 images of different kinds.
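The abstract does not reproduce the HCL conversion or the DHCL formula, but the general shape of a distance in a hue/chroma/luminance cylinder can be sketched. The following is an illustrative stand-in only, showing the standard treatment of hue as an angle (chroma and hue act as polar coordinates; luminance is the cylinder axis), not the paper's actual measure:

```python
import math

def cylindrical_color_distance(c1, c2, w_l=1.0):
    """Illustrative distance between two (hue_deg, chroma, luminance) colors.

    Not the paper's DHCL formula; it only demonstrates why a naive
    Euclidean distance on raw hue values misrepresents perceived
    differences: hue wraps around at 360 degrees.
    """
    h1, ch1, l1 = c1
    h2, ch2, l2 = c2
    # Chord length between the two points in the chroma/hue plane
    # (law of cosines), so hue differences wrap correctly.
    dh = math.radians(h1 - h2)
    planar = ch1 ** 2 + ch2 ** 2 - 2.0 * ch1 * ch2 * math.cos(dh)
    return math.sqrt(max(0.0, planar) + (w_l * (l1 - l2)) ** 2)

# Hues of 10 and 350 degrees are perceptually close, not 340 degrees apart.
d_near = cylindrical_color_distance((10, 1.0, 0.5), (350, 1.0, 0.5))
d_far = cylindrical_color_distance((10, 1.0, 0.5), (180, 1.0, 0.5))
```

A measure of this general form can be applied to any cylindrical space (HSV, L*C*H, or the proposed HCL), which is how the paper is able to compare the same distance across several spaces.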
Real-Time Lighting Design for Interactive Narrative
- In Proceedings of the 2nd International Conference on Virtual Storytelling, 2003
Abstract - Cited by 8 (1 self)
Abstract. Lighting design is an important element of scene composition. Designers use lighting to influence viewers' perception by evoking moods, directing their gaze to important areas, and conveying dramatic tension. Lighting is a very time-consuming task; designers typically spend hours manipulating lights' colors, positions, and angles to create a lighting design that accommodates dramatic action and tension. Such manual design is inappropriate for interactive narrative, because the spatial and dramatic characteristics of an interactive scene, including dramatic tension, camera location, and character actions, change unpredictably, necessitating continual redesign as the scene progresses. In this paper, we present a lighting design system, called ELE (Expressive Lighting Engine), that automatically, in real time, adjusts the angles, positions, and colors of lights to accommodate the dramatic and spatial characteristics of a scene, while conforming to the established style and ensuring visual continuity. ELE uses constraint-based non-linear optimization algorithms to configure lights, using cost functions formulated from traditional film and theatrical lighting design theory.
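The abstract says ELE configures lights by minimizing cost functions derived from lighting design theory. ELE's actual cost terms are not given here, so the following is only a toy sketch of the optimization pattern: one light's azimuth is chosen to balance a dramatically motivated target angle against continuity with the previous frame, with a coarse search standing in for the non-linear optimizer:

```python
def lighting_cost(angle_deg, target_angle_deg, prev_angle_deg,
                  w_target=1.0, w_continuity=0.5):
    """Toy cost for a single light's azimuth. The two terms and their
    weights are illustrative, not ELE's actual cost functions:
    - deviation from a target angle (dramatic motivation),
    - deviation from the previous frame's angle (visual continuity).
    """
    def ang_diff(a, b):
        # Smallest angular difference, so 350 and 10 degrees are 20 apart.
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)
    return (w_target * ang_diff(angle_deg, target_angle_deg)
            + w_continuity * ang_diff(angle_deg, prev_angle_deg))

# Coarse grid search stands in for constraint-based non-linear optimization.
candidates = range(0, 360, 5)
best = min(candidates,
           key=lambda a: lighting_cost(a, target_angle_deg=90,
                                       prev_angle_deg=60))
```

With the target weighted more heavily than continuity, the minimizer sits at the target angle; lowering `w_target` relative to `w_continuity` would pull the solution back toward the previous frame, which is the trade-off a continuity term exists to express.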
Automatic Expressive Lighting for Interactive Scenes
, 2003
Abstract - Cited by 4 (0 self)
Automatic Expressive Lighting for Interactive Scenes (Magy Seif El-Nasr). Advances in computer graphics have led to the development of interactive entertainment applications with complex 3D graphical environments. Lighting is becoming increasingly important in such applications, not only because of its role in illumination, but also because of its utility in directing the viewer's gaze to important areas and evoking moods. However, lighting design is a complex process, and is especially problematic for interactive applications. Rendering time for such applications is linear in the number of lights used, restricting real-time rendering engines to 8 or fewer lights; yet current practice in the animation industry is to use 32 lights or more per scene. Moreover, the scene's spatial configuration, mood, dramatic intensity, and the relative importance of different characters all change unpredictably in real time, necessitating continual redesign as the characters and camera move. Current systems, however, use fixed, manually designed lighting. This manual design is labor-intensive and leads to partially invisible characters, unmotivated or distracting color changes, and frustration for the player/audience. In this dissertation, I present a new approach to lighting design for interactive scenes. I describe this approach using the Expressive Lighting Engine (ELE) that I have developed. ELE uses non-linear constraint optimization to automatically and unobtrusively adjust lighting in real time, drawing on traditional cinematic and theatrical lighting design theory. This approach accommodates variations in spatial and dramatic configurations that occur during interactive scenes. ELE also allows artists to author lighting changes and override it...
High Dynamic Range Reduction Via Maximization of Image Information
, 2003
Abstract - Cited by 3 (0 self)
An algorithm for blending multiple-exposure images of a scene into an image with maximum information content is introduced. The algorithm partitions the image domain into subimages and for each subimage selects the image that contains the most information about the subimage. The selected images are then blended together using monotonically decreasing blending functions that are centered at the subimages and have a sum of 1 everywhere in the image domain. The optimal subimage size and width of blending functions are determined in a gradient-ascent algorithm that maximizes image information. The proposed algorithm reduces dynamic range while preserving local color and contrast.
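The abstract outlines the algorithm concretely: partition the domain into subimages, pick the exposure with the most information per subimage, and blend with weights that sum to 1 everywhere. A heavily simplified one-dimensional sketch follows, using Shannon entropy as the information measure and hard (winner-take-all) tiles in place of the paper's smooth monotone blending functions and gradient-ascent parameter search:

```python
import math
from collections import Counter

def entropy(values, bins=16):
    """Shannon entropy of a value histogram -- a stand-in for the
    'information content' used to pick the best exposure per region."""
    counts = Counter(min(bins - 1, int(v * bins)) for v in values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def blend_exposures(exposures, tile=4):
    """1-D sketch of the scheme: split the signal into sub-regions and,
    per region, keep the exposure with the highest entropy. The implied
    weights (1 for the winner, 0 otherwise) do sum to 1 everywhere, but
    the paper uses 2-D images, smooth decreasing blending functions, and
    gradient ascent over the tile size and blend width.
    """
    length = len(exposures[0])
    out = [0.0] * length
    for start in range(0, length, tile):
        region = slice(start, start + tile)
        best = max(exposures, key=lambda e: entropy(e[region]))
        out[region] = best[region]
    return out

# Two synthetic 'exposures': one clips the shadows, one clips highlights.
under = [0.05, 0.1, 0.15, 0.2, 0.0, 0.0, 0.0, 0.0]
over = [1.0, 1.0, 1.0, 1.0, 0.4, 0.5, 0.6, 0.7]
fused = blend_exposures([under, over], tile=4)
```

Each half of the fused signal comes from whichever exposure retained detail there, which is the core selection step; the smooth blending functions the paper adds exist to hide the seams this hard version would create.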
Projecting Tension in Virtual Environments Through Lighting
Abstract - Cited by 2 (0 self)
Interactive synthetic environments are currently used in a wide variety of applications, including video games, exposure therapy, education, and training. Their success in such domains relies on their immersive and engaging qualities. Filmmakers and theatre directors use many techniques to project tension in the hope of affecting audiences' affective states. These techniques include narrative, sound effects, camera movements, and lighting. This paper focuses on temporal variation of lighting color and its use in evoking tension within interactive virtual worlds. Many game titles adopt some cinematic lighting effects to evoke certain moods, particularly saturated red-colored lighting, flickering lights, and very dark lighting. Such effects may result in user frustration due to the lack of balance between the desire to project tension and the desire to use lighting for other goals, such as visibility and depth projection. In addition, many of the lighting effects used in game titles are very obvious and obtrusive. In this paper, the author will identify several lighting color patterns, both obtrusive and subtle, based on a qualitative study of several movies and lighting design theories. In addition to identifying these patterns, the author also presents a system that dynamically modulates the lighting within an interactive environment to project the desired tension while balancing other lighting goals, such as establishing visibility, projecting depth, and providing motivation for lighting direction. This work extends the author's previous work on the Expressive Lighting Engine [1-3]. Results of incorporating this system within a game will be discussed.
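The system described above modulates lighting color over time as tension rises while preserving visibility. The specific color patterns are not given in the abstract, so the sketch below shows only the general mechanism with an assumed linear mapping: a light's RGB is nudged toward a warmer tint as a tension parameter increases, clamped so the color stays displayable. All parameters are hypothetical:

```python
def lerp(a, b, t):
    """Componentwise linear interpolation between two RGB tuples."""
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def tension_tint(base_color, tension, max_shift=(0.3, -0.2, -0.2)):
    """Toy 'temporal variation of lighting color': shift a light's RGB
    toward redder values as dramatic tension rises, staying near the
    base color so visibility is preserved. The shift vector and the
    linear mapping are illustrative assumptions, not the paper's patterns.
    """
    tension = max(0.0, min(1.0, tension))       # clamp tension to [0, 1]
    shifted = tuple(c + s for c, s in zip(base_color, max_shift))
    clamped = tuple(max(0.0, min(1.0, c)) for c in shifted)
    return lerp(base_color, clamped, tension)

calm = tension_tint((0.8, 0.8, 0.8), 0.0)   # neutral light, unchanged
tense = tension_tint((0.8, 0.8, 0.8), 1.0)  # warm/red-shifted light
```

Driving `tension` from the narrative layer each frame gives a gradual, subtle shift, as opposed to the abrupt saturated-red effects the abstract criticizes in game titles.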
An Efficient Technique using Text & Content Base Image Mining Technique for Image Retrieval
Abstract - Cited by 1 (0 self)
Image mining presents special characteristics due to the richness of the data that an image can show. Effective evaluation of the results of image mining by content requires that the user's point of view be reflected in the performance parameters. Comparison among different mining-by-similarity systems is particularly challenging owing to the great variety of methods implemented to represent similarity and the dependence of the results on the image set used. Another obstacle is the lack of standard parameters for comparing experimental performance. In this paper we propose an evaluation framework for comparing the influence of the distance function on image mining by color, and also a way to mine an image from its name. Experiments with color similarity mining, using quantization on a color space and measures of likeness between a sample and the image results, have been carried out to illustrate the proposed scheme. Important aspects of this type of mining are also described.
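The experiments mentioned above combine color-space quantization with a distance function between a query and candidate images. A minimal sketch of that pipeline follows, with an assumed 4-bins-per-channel quantization and an L1 histogram distance as one interchangeable choice of distance function; both are illustrative, not the paper's exact settings:

```python
from collections import Counter

def color_histogram(pixels, bins_per_channel=4):
    """Quantize RGB pixels (0..255 per channel) into a coarse, normalized
    color histogram -- the 'quantization on a color space' step. The bin
    count is an illustrative assumption.
    """
    step = 256 // bins_per_channel
    counts = Counter((r // step, g // step, b // step) for r, g, b in pixels)
    total = len(pixels)
    return {bin_: c / total for bin_, c in counts.items()}

def l1_distance(h1, h2):
    """One candidate distance function; the framework's point is that
    swapping this (L1, L2, histogram intersection, ...) changes rankings."""
    keys = set(h1) | set(h2)
    return sum(abs(h1.get(k, 0.0) - h2.get(k, 0.0)) for k in keys)

# A mostly-red query should rank a red image above a blue one.
query = color_histogram([(250, 10, 10)] * 3 + [(10, 10, 250)])
red_img = color_histogram([(240, 20, 20)] * 4)
blue_img = color_histogram([(20, 20, 240)] * 4)
d_red = l1_distance(query, red_img)
d_blue = l1_distance(query, blue_img)
```

An evaluation framework of the kind proposed would hold the histogram step fixed and vary only `l1_distance`, so that any change in retrieval ranking can be attributed to the distance function alone.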
The science of digitizing paintings . . .
- JOURNAL OF IMAGING SCIENCE AND TECHNOLOGY, 2001
"... ..."
Microsoft Word - pxflshading
Abstract
Abstract. Over the years, there have been two main branches of computer graphics image-synthesis research; one focused on interactivity, the other on image quality. Procedural shading is a powerful tool, commonly used for creating high-quality images and production animation. A key aspect of most procedural shading is the use of a shading language, which allows a high-level description of the color and shading of each surface. However, shading languages have been beyond the capabilities of the interactive graphics hardware community. We have created a parallel graphics multicomputer, PixelFlow, that can render images at 30 frames per second using a shading language. This is the first system to be able to support a shading language in real-time. In this paper, we describe some of the techniques that make this possible. INTRODUCTION. We have created a SIMD graphics multicomputer, PixelFlow, which supports procedural shading using a shading language. Even a small (single-chassis) PixelFlow system is capable of rendering scenes with procedural shading at 30 frames per second or more. In procedural shading, a user (someone other than a system designer) creates a short procedure, called a shader, to determine the final color for each point on a surface. The shader is responsible for color variations across the surface and the interaction of light with the surface. Shaders can use an assortment of input appearance parameters, usually including the surface normal, texture coordinates, texture maps, light direction and colors. Procedural shading is quite popular in the production industry, where it is commonly used for rendering in feature films and commercials. The best-known examples of this have been rendered using Pixar's PhotoRealistic RenderMan software. († Now at Silicon Graphics, Inc., 2011 N. Shoreline Blvd., M/S #590, Mountain View, CA 94043; email: olano@engr.sgi.com. ‡ UNC Department of Computer Science, Sitterson Hall, CB #3175, Chapel Hill, NC 27599; email: lastra@cs.unc.edu.)
A Real-time Automatic Instrument Tracking System on Cataract Surgery Videos for Dexterity Assessment
Abstract
Abstract. In this paper we describe SUITS (the Surrey University Instrument Tracking System), an automated video processing system that analyzes videos of cataract surgeries to extract parameters for surgical skill assessment. Through image processing and object tracking techniques, the eye is identified, and its movement and direction changes throughout the operation are monitored. Any instrument that moves into or out of the eye is located and its path measured. So far we have developed a prototype real-time system that has demonstrated great potential. The developed system is automatic, with minimal human supervision required throughout processing. In addition, the solution is generic and can be applied to other tracking problems, possibly other types of surgery videos, with minor modifications.