Volume Rendering for 3D Display

MAGNUS WAHRENBERG

Master of Science Thesis
Stockholm, Sweden 2006

Master's Thesis in Computer Science (20 credits)
at the School of Computer Science and Engineering,
Royal Institute of Technology, year 2006
Supervisor at CSC was Lars Kjelldahl
Examiner was Lars Kjelldahl

TRITA-CSC-E 2006:099
ISRN-KTH/CSC/E--06/099--SE
ISSN
Royal Institute of Technology
School of Computer Science and Communication
KTH CSC
SE Stockholm, Sweden
URL:

Abstract

In medical imaging, more extensive use of hardware capable of producing 3D datasets also results in a more demanding display procedure. Conventional 2D viewing of 3D datasets imposes a very difficult and potentially unnecessary interpretation step. For viewing 3D scans, Volume Rendering, or the display of data sampled in three dimensions, is becoming increasingly relevant. The immense rendering power required for this application is far beyond what is found in a standard workstation. However, development in consumer graphics hardware, used to accelerate rendering, progresses at an incredible rate, likely sustained by the computer gaming industry. Real-time Volume Rendering with acceptable interactive frame rates has recently been enabled in the wake of this development. In a few years we might see every workstation within a hospital capable of this kind of rendering. Is Volume Rendering good enough for diagnostic purposes, and is there anything that might enhance the experience? One answer may be found in the combination of Volume Rendering and a 3D display device. In this Master's Thesis, I look at Volume Rendering in relation to one of the most advanced autostereoscopic displays available, currently being developed by Setred. This is done through designing and implementing a Volume Rendering platform capable of output on the display. One of the questions that arose is how this special case of rendering performs on consumer graphics hardware. It seems, however, that it may take a while longer before hardware truly able to cope with the task of volume rendering becomes available at a consumer level and price.

Sammanfattning

Volume Rendering for 3D Displays

Volume rendering is a special branch of rendering that focuses on the visualization of volume data, as opposed to surfaces, which are the most common representation in 3D rendering. Volume rendering is used primarily in scientific visualization, and perhaps most of all in medical contexts. With medical equipment capable of producing 3D scans comes the difficulty of visualizing this data. A simple method is to show the volume as a sequence of cross-sections in the form of 2D images. This form of visualization, however, introduces a rather difficult interpretation step. Instead, volume rendering can be used, and thanks to the new generation of graphics hardware, the rendering methods are approaching interactive update rates. Is the rendering good enough to be used for diagnostics in a field as important as medicine? Is there anything that could improve the experience? An answer to that question may be to combine volume rendering with displays capable of conveying a sense of depth. In this work I take a closer look at volume rendering methods together with one of the most advanced autostereoscopic displays, currently being developed by Setred. In the thesis project I develop a platform for testing volume rendering on the display.
What can be seen is that the current generation of graphics hardware barely achieves interactive rates, even at lower resolutions. The real requirements are considerably higher, but in the near future graphics hardware may be good enough for this purpose. Rendering several views for use in stereo places even greater demands on the hardware, which means that this type of rendering will require special solutions for quite some time to come.

Contents

I Introduction
  1 Setred
  2 Project
    2.1 Goals
    2.2 Limitations

II Theory
  3 Computer Graphics Hardware
    3.1 Geometry processing
    3.2 Rasterization
    3.3 Fragment Operation
      Blending
    3.4 Shaders
      3.4.1 Vertex Shader
      3.4.2 Fragment/Pixel Shader
    3.5 Texture Compression
      3.5.1 FXT1
      3.5.2 S3TC
      3.5.3 3Dc
  4 Volume Rendering
    DVR and IVR
    Volume rendering integral
    Optical Models
      Absorption only
      Emission only
      Absorption and Emission
      Scattering
    Voxels
    Reconstruction
    Classification
      Transfer function and feature extraction
    Algorithms of Direct Volume Rendering
      Image and Object order
      Shear-Warp
      2D-Texture
      3D-Texture
      Raycasting
      4.7.6 Volume Splatting
      Maximum Intensity Projection
  5 Medical Imaging and DICOM
    File format
    Medical Imaging
  6 3D Displaying
    Depth Cues
      Physiological
      Psychological
    3D Display Device Technology
      Field sequential separation
      Time parallel separation
      Autostereoscopic displays
    Setred Holoform Display
      Repeating viewing regions

III Implementation
  7 Samurai 3D Volume Renderer for 3D Display
    Input
    Rendering
      2D Texture
      3D Texture
      Splatting
      Raycasting
      Classification
    GUI
      Main Form
      Rendering Form
      Color Table Form
    Display integration
    Testing on display
  8 Display Simulator
    Model
    Views
      Orthographic
      Image Plane View
      Perspective View
  9 Example Images from Volume Renderer

IV Discussion
  Future work
    Samurai
    Display Simulator
    Volume Rendering
      Data management
      Algorithms
      Classification
  Summary
  Displaying
  References

Part I
Introduction

1 Setred

Setred is developing an autostereoscopic display system based on technology resulting from joint research between MIT and Cambridge University. The company is based in England, Norway and Sweden. The supervisor at Setred for the project is Thomas Ericson.

2 Project

Volume rendering is an important tool for scientific visualization. Major fields include medicine and geology. Combining volume rendering with a 3D display may lead to great advantages for users, perhaps even better diagnostic results within radiology. The first step toward that research is to create a limited software testing platform to get a preview of medical imaging and rendering on the 3D display.

2.1 Goals

The goal of this project is to create a specialized volume rendering platform for testing volume imaging on the Setred 3D Display system. The system is primarily designed for medical data. In that field, the dominating image format is the DICOM format, which is why one goal is to enable DICOM input. The major task of this project is to see how volume rendering combines with a 3D display. Looking at multiple volume rendering techniques is vital to gain good insight for the discussion. There will only be opportunity for a very short test of the rendering software on the actual display. Generally, having only one prototype is a limitation, which creates the need for the second part of the rendering platform: the display simulator. Making a physically correct simulation of the display is a task far beyond the scope of this project. The display simulator will therefore be based on a simple model capable of showing the general output from the display, as well as one artifact, called tearing, that may affect rendering.
Here follows a summary of features included in the scope of this project.

1. DICOM Imaging
   (a) Support datasets generated by actual medical scanning equipment
2. Rendering
   (a) Multiple rendering techniques
       i. 2D texture based
       ii. 3D texture based
       iii. Volume splatting
       iv. Raycasting
   (b) At least one technique implemented with some of the possible quality improvement measures
   (c) Configurable transfer function
3. Simulator
   (a) Orthographic view
   (b) Perspective view
   (c) Variable number of slits
   (d) Support for multiple viewing regions - Slice'n Dice

The project can be summarized in that I will design and implement the entire rendering platform as in the list above. This involves implementing a DICOM image reader, as well as designing and implementing a volume rendering platform with renderers demonstrating four different rendering algorithms, all featuring classification. Finally, I will create a limited Display Simulator which gives some notion of the characteristics of the actual display. This involves working out a mathematical model based on the description of the workings of the display, as well as its implementation. The parts of this platform not done by myself are the display integration library and some of the user controls.

2.2 Limitations

The DICOM format is one of the most extensive image formats available. There are different levels of conformance, but this project is limited to basic functionality for reading the provided datasets. Since the goal is to try implementing as many different volume rendering methods as possible, some optimizations and quality improvement methods will be left for discussion only. The display simulator will be developed with the simplest mathematical model possible, resulting in good performance and ease of use. The limitations within each area are listed below; a reading sketch for the input limitations follows after the list.

1. DICOM Imaging
   (a) Basic uncompressed images only
   (b) Only support for 8 and 16 bit gray-scale
   (c) Only support for images created with little-endian byte order (Intel)
2. Rendering
   (a) 2D Texture
       i. No opacity correction
       ii. Only support for 8 bits per sample
       iii. Only 2/3 axis-stacks - enables rotation around the y-axis
       iv. No user-configurable clip planes
   (b) 3D Texture
       i. Only proxy geometry based on spherical shells
       ii. Cube-shaped volume only
       iii. No user-configurable clip planes
   (c) Splatting
       i. Based on the emission-only optical model, due to lack of depth sorting
       ii. No clipping
   (d) Raycasting
       i. No clipping
       ii. Few implemented optimizations
       iii. Only 8 bits per axis ray setup
3. Display Simulation
   (a) Only targeting tearing
   (b) One eye view only - no stereo rendering
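To make the input limitations above concrete, here is a minimal sketch of loading one uncompressed 16-bit gray-scale slice stored in little-endian byte order. The function name read_slice, its parameters, and the assumption that the raw pixel block has already been located at a known offset are all illustrative; a real DICOM reader must first parse the tag/element structure of the file before reaching the pixel data.

    #include <cstdint>
    #include <fstream>
    #include <stdexcept>
    #include <string>
    #include <vector>

    // Hypothetical helper: load one uncompressed 16-bit gray-scale slice.
    // Assumes the pixel block has already been located at `offset`.
    std::vector<uint16_t> read_slice(const std::string& path,
                                     std::size_t offset,
                                     std::size_t width, std::size_t height)
    {
        std::ifstream file(path, std::ios::binary);
        if (!file) throw std::runtime_error("cannot open " + path);
        file.seekg(offset);

        std::vector<uint16_t> pixels(width * height);
        for (auto& p : pixels) {
            unsigned char bytes[2];
            file.read(reinterpret_cast<char*>(bytes), 2);
            // Little-endian (Intel) byte order: low byte first.
            p = static_cast<uint16_t>(bytes[0] | (bytes[1] << 8));
        }
        if (!file) throw std::runtime_error("truncated pixel data in " + path);
        return pixels;
    }

Reading byte by byte like this keeps the loader independent of the host machine's endianness, which matters for the portability the limitation hints at.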
Part II
Theory

3 Computer Graphics Hardware

Today almost every computer is shipped with a graphics card capable of accelerating 3D computer graphics. A modern GPU (Graphics Processing Unit; the term VPU, Visual Processing Unit, is also commonly used) provides acceleration throughout the rendering pipeline, allowing more extensive rendering. Worth noting is the power and complexity of modern GPUs. The latest generation of GPUs generally has several times more transistors than CPUs, even if the latest Intel Pentium D CPU series, with its 376 million transistors[23], almost matches ATI's X1900 GPU with 384 million[13]. Compare with the AMD Athlon (Newark) CPU, which has 105 million transistors[23]. This number only gives some notion of complexity and is not well correlated with performance. The rendering process begins with an application storing some kind of scene description based on planar polygons.

The process of rendering an image from that scene is called display traversal[14]. For a long time this process was made up of a fixed number of steps and features, restricting quality, since real-time graphics is more or less dependent on the acceleration provided by the graphics card. Lately a new generation of graphics hardware has been introduced which offers a programmable pipeline through programs called shaders. Although this offers much flexibility, the overall pipeline still looks the same, summarized below in three steps.

1. Geometry processing
2. Rasterization
3. Fragment operation

The rendered values are finally written to the framebuffer, the memory location which holds the current image displayed on the screen.

3.1 Geometry processing

The planar polygons enter geometry processing as vertices. Most of the operations in this stage are done per vertex, so the actual geometry isn't assembled until late in the processing stage[14]. First the vertices are transformed according to the modeling and viewing matrices. These matrices are often combined through multiplication into a single model-view matrix[14]. Secondly, the transformed vertices are lit based on normal vectors and some kind of local illumination model. The lighting is dependent on the transformations, so it comes after the transformation stage in the pipeline[14]. Finally, the geometry primitives are assembled for clipping and the final projection onto the image plane. Polygons are almost always reduced into triangle primitives for the hardware to handle[14].

3.2 Rasterization

Rasterization is where the geometry is actually converted into pixel values, or fragments. A fragment doesn't necessarily result in a pixel on the final output image, since modern graphics hardware allows multi-pass rasterization with different ways of combining values[5]. When a fragment is generated from textured geometry, one texture lookup occurs (or, with multitexturing, several), and the final fragment color is calculated based on texture, shading/lighting and color.

3.3 Fragment Operation

We now have a complete fragment which is ready for output. However, we still have the option of throwing the fragment away or combining it with already written values. This is done through several available fragment operations, such as the ones listed below, which are the standard OpenGL fragment operations.

Scissor test - Throws away every fragment that is not within a user-specified rectangle on the screen.
Alpha test - Decides whether the fragment should be thrown away based on its alpha/opacity value.
Stencil test - Looks up in a special stencil buffer whether the fragment should be kept. The buffer must have been written to before the fragment is generated.
Depth test - Tests the fragment based on its depth. Useful for hidden surface removal.
Blending - Combines the fragment with the existing value according to a blending function.
Dithering - On systems where color depth is limited, dithering can be performed to enhance the perceived color depth. Not commonly used any more, since color depth is generally high enough.
Logical operations - Different logical operations that can be performed as the fragment is written.

If a fragment fails one test, it doesn't proceed to the next.[6]
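As an illustration of how these operations are switched on in practice, the sketch below configures the standard fixed-function tests through the ordinary OpenGL 1.x API. The scissor rectangle, alpha threshold and blend factors are arbitrary example values, not settings taken from the thesis renderer.

    #include <GL/gl.h>

    // Example configuration of the standard OpenGL fragment operations.
    // Each test is off by default and must be enabled explicitly.
    void configure_fragment_operations()
    {
        // Scissor test: keep only fragments inside a 512x512 screen rectangle.
        glEnable(GL_SCISSOR_TEST);
        glScissor(0, 0, 512, 512);

        // Alpha test: discard nearly transparent fragments
        // (the threshold is an arbitrary example value).
        glEnable(GL_ALPHA_TEST);
        glAlphaFunc(GL_GREATER, 0.05f);

        // Depth test: classic hidden surface removal.
        glEnable(GL_DEPTH_TEST);
        glDepthFunc(GL_LESS);

        // Blending: the common factors discussed in the next section,
        // F_src = A_src and F_dst = 1 - A_src.
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    }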
Blending

Blending is a vital operation for volume rendering on graphics hardware. A generated source fragment is combined (blended) with the destination fragment already written in the framebuffer.[6] We denote the source fragment color and alpha component-wise by $C_{src} = \{R_{src}, G_{src}, B_{src}, A_{src}\}$ and the destination fragment color and alpha by $C_{dst} = \{R_{dst}, G_{dst}, B_{dst}, A_{dst}\}$. From these values we also form a source factor $F_{src} = \{fR_{src}, fG_{src}, fB_{src}, fA_{src}\}$ and a destination factor $F_{dst} = \{fR_{dst}, fG_{dst}, fB_{dst}, fA_{dst}\}$ that will be used to weight the source and destination values. There are many possible combinations, but this one, for example, is very commonly used:

$$F_{src} = A_{src}, \qquad F_{dst} = 1 - A_{src} \tag{1}$$

The resulting pixel value is calculated with a blending equation. Most relevant in this case are add and max.[4]

$$\begin{aligned}
R_{add} &= R_{src}\,fR_{src} + R_{dst}\,fR_{dst} \\
G_{add} &= G_{src}\,fG_{src} + G_{dst}\,fG_{dst} \\
B_{add} &= B_{src}\,fB_{src} + B_{dst}\,fB_{dst} \\
A_{add} &= A_{src}\,fA_{src} + A_{dst}\,fA_{dst}
\end{aligned} \tag{2}$$

$$\begin{aligned}
R_{max} &= \max(R_{src}, R_{dst}) \\
G_{max} &= \max(G_{src}, G_{dst}) \\
B_{max} &= \max(B_{src}, B_{dst}) \\
A_{max} &= \max(A_{src}, A_{dst})
\end{aligned} \tag{3}$$

If we look at the example blending factors in equation 1, together with the add blending equation, we see that the final result will be as if the geometry of the source fragment were transparent, with opacity $A_{src}$. This is very useful, but it requires the scene to be rendered back-to-front, which in complex scenes can be very difficult.[6]
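To spell out what the two blending equations compute, here is a small software sketch of them in C++; the Color type and the function names are illustrative only, not code from the thesis. With the factors of equation 1, repeatedly add-blending fragments back-to-front gives the "over" compositing used in direct volume rendering, while the max equation yields a maximum intensity projection.

    #include <algorithm>
    #include <vector>

    struct Color { float r, g, b, a; };  // components in [0, 1]

    // Equation 2 with the factors of equation 1:
    // result = src * A_src + dst * (1 - A_src).
    Color blend_add(const Color& src, const Color& dst)
    {
        float f_src = src.a, f_dst = 1.0f - src.a;
        return { src.r * f_src + dst.r * f_dst,
                 src.g * f_src + dst.g * f_dst,
                 src.b * f_src + dst.b * f_dst,
                 src.a * f_src + dst.a * f_dst };
    }

    // Equation 3: component-wise maximum, as used for
    // maximum intensity projection (MIP).
    Color blend_max(const Color& src, const Color& dst)
    {
        return { std::max(src.r, dst.r), std::max(src.g, dst.g),
                 std::max(src.b, dst.b), std::max(src.a, dst.a) };
    }

    // Fragments sorted back-to-front for one pixel:
    // add-blend each one over the accumulated value.
    Color composite_back_to_front(const std::vector<Color>& fragments)
    {
        Color acc = { 0.0f, 0.0f, 0.0f, 0.0f };
        for (const Color& frag : fragments)
            acc = blend_add(frag, acc);
        return acc;
    }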
3.4 Shaders

New graphics cards offer a programmable pipeline through programs called shaders. Shaders are not a new invention thought up by the graphics card manufacturers, but rather a classical part of the rendering pipeline. High-quality software renderers, used for example in animation, have had a highly programmable pipeline for a long time. One of the first to introduce shaders into the rendering pipeline was Pixar, in their RenderMan interface, but most other rendering engines soon followed. It is common to think of a shader as a sort of advanced material for the geometry, but that is just one of several applications. The actual program consists, just like any other program, of low-level instructions carried out in the vertex and fragment processors. High-level languages were introduced early for more efficient development. Among the major languages are Cg (C for Graphics, introduced by NVidia), GLSL (OpenGL Shading Language) and HLSL (High Level Shading Language, introduced by Microsoft), all of which resemble the basic syntax of the C programming language.

3.4.1 Vertex Shader

The vertex shader is the program which substitutes part of the fixed-function geometry processing stage. The program runs once for every vertex and can carry out various vector operations, as well as change different global attributes. Vertices can, however, not be created or removed, which is one of the major limitations of this version of the pipeline. This limitation will be lifted in the next generation of graphics cards based on the Direct3D 10 specification, where geometry shaders, a third shader type, are introduced[10].

3.4.2 Fragment/Pixel Shader

The fragment shader is a program which runs for every fragment and replaces part of the rasterization stage. In Microsoft's DirectX, the fragment shader goes under the name pixel shader. The fragment shader takes interpolated surface attributes and calculates a color value for the fragment. Fragment shaders enable per-pixel effects like real-time Phong surface shading, in contrast to the interpolated Gouraud shading used, for example, in the OpenGL fixed-function pipeline[6].

3.5 Texture Compression

The incredible performance of the modern graphics card depends on fast access to scene data. Therefore graphics cards are equipped with a great amount of high-speed memory, on an internal bus matching the speed of the memory. Despite the large amount of memory on the graphics card, whole scenes seldom fit into the memory. Data must be streamed across the slow AGP bus, or the slightly (~2x) faster PCI-Express bus, during rendering. Much of this data consists of textures. We might not consider images as occupying a lot of space, but that is because of the extensive use of image compression. Compression is now also available for textures stored on the graphics card. There are three main compression techniques: FXT1, S3TC and 3Dc. All three are lossy, meaning that the image quality is reduced compared to the original.

3.5.1 FXT1

FXT1 was the first texture compression technique available on graphics hardware. It was developed by 3dfx, but is now open source. It is a block-based technique which can reach compression ratios up to 8:1, with severely reduced quality.[22]

3.5.2 S3TC

S3 later developed their own compression technique, very similar to FXT1. It also uses 4x4 blocks for encoding and can only reach a compression ratio of 6:1[3]. Generally, however, the quality is better than FXT1. Microsoft licensed S3TC as the compression technique used in DirectX, where it goes under the name DXTn.

3.5.3 3Dc

In later years, ATI developed their own technique, 3Dc, which partly targets a new type of texture, namely normal maps. The other available techniques handle those textures poorly. As normal mapping techniques become more and more common, the same will probably happen to the 3Dc compression technique. It offers very high quality at a compression ratio of 4:1[11].

4 Volume Rendering

The true pioneer of volume rendering, Lee Westover, gave this definition: volume rendering is the display of data sampled in three dimensions[25]. This definition doesn't say more because of the wide range of different applications and techniques involved. The samples can represent density or velocity, and can even hold multiple properties. A cornerstone of computer graphics is its representation of objects as surfaces[25][14]. This representation fits most items in the real world, since they are mostly opaque and reflect light rays at the surface. Even transparent objects like glass fit into the surface representation, as long as the glass has a homogeneous refraction index and color. In that case the light refracts through the glass, but still only changes direction when entering and exiting through the surface. There are, however, cases where light interacts with the volume rather than just the surface.
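To make "data sampled in three dimensions" concrete, the sketch below stores scalar samples (for example CT densities) on a regular lattice and reconstructs the continuous field between them with trilinear interpolation, the most common reconstruction filter in volume rendering. The VoxelGrid type and its memory layout are assumptions for illustration, not the thesis data structure.

    #include <algorithm>
    #include <cmath>
    #include <vector>

    // Illustrative voxel grid: scalar samples on a regular 3D lattice,
    // stored in x-fastest order.
    struct VoxelGrid {
        int nx, ny, nz;
        std::vector<float> data;  // size nx * ny * nz

        float at(int x, int y, int z) const {
            return data[(z * ny + y) * nx + x];
        }

        // Trilinear reconstruction of the continuous field at (x, y, z),
        // given in voxel coordinates inside [0, nx-1] x [0, ny-1] x [0, nz-1].
        float sample(float x, float y, float z) const {
            // Clamp the base corner so the 8 neighbors stay in bounds.
            int x0 = std::max(0, std::min((int)std::floor(x), nx - 2));
            int y0 = std::max(0, std::min((int)std::floor(y), ny - 2));
            int z0 = std::max(0, std::min((int)std::floor(z), nz - 2));
            float fx = x - x0, fy = y - y0, fz = z - z0;
            auto lerp = [](float a, float b, float t) { return a + (b - a) * t; };
            // Interpolate along x, then y, then z between the 8 corner samples.
            float c00 = lerp(at(x0, y0, z0),         at(x0 + 1, y0, z0),         fx);
            float c10 = lerp(at(x0, y0 + 1, z0),     at(x0 + 1, y0 + 1, z0),     fx);
            float c01 = lerp(at(x0, y0, z0 + 1),     at(x0 + 1, y0, z0 + 1),     fx);
            float c11 = lerp(at(x0, y0 + 1, z0 + 1), at(x0 + 1, y0 + 1, z0 + 1), fx);
            return lerp(lerp(c00, c10, fy), lerp(c01, c11, fy), fz);
        }
    };

A raycaster, for example, evaluates sample() at evenly spaced positions along each viewing ray and composites the classified results with the blending equations from section 3.3.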