Graduation Project (Thesis): Research and Application of Texture Mapping Techniques (Foreign Literature Translation)
Translated text: Mapping

Bump mapping is a technique in computer graphics that makes a rendered surface texture look more realistic by modeling the interaction of a bumpy surface with the lights in a scene. Bump mapping does this by changing the brightness of the pixels on the surface in response to a two-dimensional height map specified for each surface. When a 3D scene is rendered, the brightness and color of each pixel are determined by the interaction of the 3D model with the lights in the scene. After an object is determined to be visible, trigonometry is used to calculate the geometric surface normal of the object, defined as a vector at each pixel position on the object. The geometric surface normal then defines how strongly the object interacts with light coming from a given direction, using a Phong shader or a similar lighting algorithm. Light travelling perpendicular to a surface interacts more strongly with it than light travelling parallel to the surface. After the initial geometry calculations, a colored texture is often applied to the model to make the object look more realistic.

After texture mapping, a calculation is performed for each pixel on the object's surface:
1. Look up the position on the two-dimensional height map that corresponds to the position on the surface.
2. Calculate the surface normal of the height map.
3. Add the surface normal from step two to the geometric surface normal, so that the normal points in a new direction.
4. Calculate the interaction of the new bumpy surface with the lights in the scene using, for example, a Phong shader.
The result is a surface that appears to have real depth. The algorithm also ensures that the surface appearance keeps changing as the lights in the scene move. Normal mapping is the most commonly used bump mapping technique, but there are other alternatives, such as parallax mapping. One limitation of bump mapping is that it only affects the surface normals without changing the underlying surface itself; silhouettes and shadows are therefore unaffected. This limitation can be overcome by techniques such as displacement mapping, in which the bumps are actually applied to the surface, or by using an isosurface.
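A minimal NumPy sketch of the four steps above, assuming the height map has already been sampled per pixel and using only a Lambertian (diffuse) term in place of a full Phong shader; the function name and parameters are illustrative, not taken from the original text:

import numpy as np

def bump_mapped_shading(heightmap, geom_normals, light_dir, bump_scale=1.0):
    """Per-pixel bump-mapping sketch (diffuse term only).

    heightmap:    (H, W) array of surface heights sampled per pixel
    geom_normals: (H, W, 3) geometric surface normals per pixel
    light_dir:    (3,) direction towards the light
    """
    # Steps 1-2: derive a normal contribution from the height map via finite differences.
    dh_dv, dh_du = np.gradient(heightmap)
    bump = np.stack([-bump_scale * dh_du,
                     -bump_scale * dh_dv,
                     np.zeros_like(heightmap)], axis=-1)

    # Step 3: add the height-map normal to the geometric normal and renormalize.
    n = geom_normals + bump
    n /= np.linalg.norm(n, axis=-1, keepdims=True)

    # Step 4: shade the perturbed surface; a full Phong shader would add a
    # specular term, here only the diffuse (Lambert) term is shown.
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    return np.clip(n @ l, 0.0, 1.0)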

For real-time rendering, bump mapping is often referred to as a "pass", as in multi-pass rendering, and it can be implemented in multiple passes (three or four) to reduce the number of trigonometric calculations required. Texture mapping is a method for adding detail, surface texture (a bitmap or raster image), or color to a computer-generated graphic or three-dimensional model. Its application to 3D graphics was pioneered by Dr Edwin Catmull in his Ph.D. thesis of 1974. A texture map is applied (mapped) to the surface of a shape or polygon. The process is similar to applying patterned paper to a plain white box. Every polygon vertex is assigned a texture coordinate (which in the two-dimensional case is also known as a UV coordinate), either through explicit assignment or by procedural definition. Image sampling locations are then interpolated across the face of the polygon to produce a visual result that appears richer across the whole surface than could otherwise be achieved with a limited number of polygons.

Multitexturing is the use of more than one texture at a time on a polygon. For example, a light map can be used instead of recalculating the lighting of an object's surface every time it is rendered. Another multitexturing technique is bump mapping, which allows a texture to directly control the facing direction of a surface for the purposes of its lighting calculations; it can give a very good appearance to a complex surface, such as tree bark or rough concrete, which takes on lighting detail in addition to the usual detailed coloring. Bump mapping has become popular in video games because graphics hardware has become powerful enough to support it in real time. The way the pixels produced on the screen are computed from the texels (texture pixels) is governed by texture filtering. The fastest method is nearest-neighbour interpolation, but bilinear interpolation or trilinear interpolation between mipmaps are two commonly used alternatives that reduce aliasing or jaggies. When a texture coordinate falls outside the texture, it is either clamped or wrapped.

Texture coordinates are specified at each vertex of a given triangle, and these coordinates are interpolated using an extended Bresenham line-drawing algorithm. If the texture coordinates are linearly interpolated across the screen, the result is affine texture mapping. This is fast to compute, but there can be a noticeable discontinuity when adjacent triangles are at an angle to the plane of the screen. Perspective-correct texturing accounts for the vertex positions in three-dimensional space, rather than simply interpolating across a two-dimensional polygon. This achieves the correct visual effect, but it is slower to calculate. Instead of interpolating the texture coordinates directly, the coordinates are divided by their depth (relative to the viewer), and the reciprocal of the depth value is also interpolated and used to recover the perspective-correct coordinates. This correction makes the pixel-to-pixel differences between texture coordinates smaller in the parts of the polygon that are closer to the viewer (stretching the texture wider), and larger in the parts that are farther away (compressing the texture).

Another technique is to subdivide the polygons into smaller polygons, such as triangles in 3D space or squares in screen space, and to use affine mapping on them. The distortion of affine mapping becomes much less noticeable on smaller polygons. Yet another technique is to approximate the perspective with a faster calculation, such as a polynomial. Another method is to use the 1/z values of the last two drawn pixels to linearly extrapolate the next value. The division is then started from those values so that only a small remainder has to be divided, but the amount of bookkeeping makes this method too slow on most systems. Finally, some programmers extended the constant-distance trick used in Doom by finding the line of constant distance for an arbitrary polygon and rendering along it. In computer graphics, environment mapping, or reflection mapping, is an efficient image-based lighting technique that approximates the appearance of a reflective surface by means of a precomputed texture image. The texture is used to store the image of the distant environment surrounding the rendered object.

Several methods are used to store the surrounding environment. The first technique was sphere mapping, in which a single texture contains the image of the surroundings as reflected on a mirror ball. It has been almost entirely superseded by cube mapping, in which the environment is projected onto the six faces of a cube and stored as six square textures or unfolded into six square regions of a single texture. Other projections that have some superior mathematical or computational properties include paraboloid mapping, pyramid mapping, octahedron mapping, and HEALPix mapping. The reflection mapping approach is more efficient than classical ray tracing, which computes the exact reflection by tracing a ray and following its optical path. The reflection color used when shading a pixel is found by computing the reflection vector at the point on the object and mapping it to a texel in the environment map. This technique often produces results that look similar to those produced by ray tracing, but ray tracing costs far more, because here the radiance value of the reflection comes from computing the angles of incidence and reflection followed by a texture lookup, rather than from tracing a ray against the scene geometry and computing its radiance, which simplifies the GPU workload.

However, in most cases reflection mapping is only an approximation of the real reflection. Environment mapping relies on two assumptions that are rarely satisfied perfectly: 1) all radiance incident on the shaded object comes from an infinite distance; when this is not the case, the reflection of nearby geometry appears in the wrong place on the reflecting object, and no parallax appears in the reflection; 2) the shaded object is convex, that is, it has no self-reflections; in that case the object itself never appears in the reflection, only the environment does. Reflection mapping is also a traditional image-based lighting technique for creating reflections of real-world backgrounds on synthetic objects. Environment mapping is usually the fastest way to render a reflective surface. To further increase rendering speed, the renderer may calculate the position of the reflected ray at each vertex.

That position is then interpolated across the polygons to which the vertex is attached. This avoids the need to recalculate the reflection direction for every pixel.

Types of reflection
Sphere mapping represents the sphere of incident illumination as if it were seen in the reflection of a reflective sphere through a virtual camera. The texture image can be created by approximating this ideal setup, by using a fisheye lens, or by pre-rendering the scene with a spherical mapping. Spherical mapping suffers from limitations that detract from the realism of the resulting renderings. Because sphere maps are stored as azimuthal projections of the environment they represent, an abrupt point of singularity (a "black hole" effect) is visible in the reflection on the object at or near the edge of the map, where texel colors are distorted because there is not enough resolution to represent those points accurately. Sphere maps also waste the pixels that lie in the square but not in the sphere, and they are strongly view dependent, so only objects near the virtual camera are seen at high quality.

A diagram can depict an apparent reflection being provided by cube-mapped reflection. The map is actually projected onto the surface from the point of view of the observer. Highlights, which in ray tracing would be determined by tracing the ray and the angle it makes with the normal, can be "faked" if they are manually painted into the texture field (or if they already appear there, depending on how the texture map was obtained), from where they will be projected onto the mapped object along with the rest of the texture detail. Cube mapping and the other polyhedral mappings address the severe distortion of sphere maps. If a cube map is created and filtered correctly, it shows no visible seams and can be used independently of the viewpoint of the (often virtual) camera. In most computer graphics applications, cube maps and other polyhedral maps have replaced sphere maps, with image-based lighting as the exception. In general, cube mapping uses the same skybox that is used in outdoor rendering. Cube-mapped reflection is done by determining the vector along which the object is being viewed. This camera ray is reflected about the surface normal at the point where the camera vector intersects the object.

The resulting reflected ray is then passed to the cube map to look up the texel, which provides the radiance value used in the lighting calculation. This creates the reflective effect on the object.
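The cube-map lookup just described can be sketched in a few lines of NumPy. The face-selection and texture-coordinate convention below is deliberately simplified (real cube-map formats flip axes per face), and the function names are illustrative assumptions, not code from the original article:

import numpy as np

def reflect(view_dir, normal):
    """Reflect the normalized view direction about the surface normal."""
    return view_dir - 2.0 * np.dot(view_dir, normal) * normal

def cube_map_lookup(cube_faces, direction):
    """Pick the cube face with the largest |component| and sample it.

    cube_faces: dict mapping '+x','-x','+y','-y','+z','-z' to (H, W, 3) images
    direction:  reflected ray direction, array of shape (3,)
    """
    x, y, z = direction
    axis = int(np.argmax(np.abs(direction)))         # dominant axis selects the face
    face = ('+x' if x > 0 else '-x',
            '+y' if y > 0 else '-y',
            '+z' if z > 0 else '-z')[axis]
    ma = abs(direction[axis])
    # Project the two remaining components onto the face to get texture coords.
    u, v = [c for i, c in enumerate(direction) if i != axis]
    s, t = 0.5 * (u / ma + 1.0), 0.5 * (v / ma + 1.0)
    img = cube_faces[face]
    h, w, _ = img.shape
    return img[int(t * (h - 1)), int(s * (w - 1))]   # radiance used for shading

# Usage: radiance = cube_map_lookup(faces, reflect(view, surface_normal))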

Original text: Mapping (from Wikipedia)

Bump mapping is a technique in computer graphics to make a rendered surface look more realistic by modeling the interaction of a bumpy surface texture with lights in the environment. Bump mapping does this by changing the brightness of the pixels on the surface in response to a heightmap that is specified for each surface. When rendering a 3D scene, the brightness and color of the pixels are determined by the interaction of a 3D model with lights in the scene. After it is determined that an object is visible, trigonometry is used to calculate the geometric surface normal of the object, defined as a vector at each pixel position on the object. The geometric surface normal then defines how strongly the object interacts with light coming from a given direction, using Phong shading or a similar lighting algorithm. Light traveling perpendicular to a surface interacts more strongly than light that is more parallel to the surface. After the initial geometry calculations, a colored texture is often applied to the model to make the object appear more realistic.

After texturing, a calculation is performed for each pixel on the object's surface:
1. Look up the position on the heightmap that corresponds to the position on the surface.
2. Calculate the surface normal of the heightmap.
3. Add the surface normal from step two to the geometric surface normal so that the normal points in a new direction.
4. Calculate the interaction of the new bumpy surface with lights in the scene using, for example, Phong shading.
The result is a surface that appears to have real depth. The algorithm also ensures that the surface appearance changes as lights in the scene are moved around. Normal mapping is the most commonly used bump mapping technique, but there are other alternatives, such as parallax mapping.
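Written out as formulas, and only as a sketch (the article gives the steps in prose; the tangent-space construction below is a common convention, not something stated in the text): let h(u, v) be the heightmap, and let T, B, N be the surface tangent, bitangent, and geometric normal at the pixel. Then

    h_u = \partial h / \partial u,   h_v = \partial h / \partial v      (step 2: slopes of the heightmap)
    N' = \operatorname{normalize}(N - h_u T - h_v B)                    (step 3: perturbed normal)
    I_{diffuse} = \max(0, N' \cdot L)                                   (step 4: e.g. the diffuse term of Phong shading)

where L is the unit vector toward the light.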

A limitation with bump mapping is that it perturbs only the surface normals without changing the underlying surface itself. Silhouettes and shadows therefore remain unaffected. This limitation can be overcome by techniques including displacement mapping, where bumps are actually applied to the surface, or by using an isosurface. For the purposes of rendering in real time, bump mapping is often referred to as a pass, as in multi-pass rendering, and can be implemented as multiple passes (often three or four) to reduce the number of trigonometric calculations that are required.

Texture mapping is a method for adding detail, surface texture (a bitmap or raster image), or color to a computer-generated graphic or 3D model. Its application to 3D graphics was pioneered by Dr Edwin Catmull in his Ph.D. thesis of 1974. A texture map is applied (mapped) to the surface of a shape or polygon. This process is akin to applying patterned paper to a plain white box. Every vertex in a polygon is assigned a texture coordinate (which in the 2D case is also known as a UV coordinate), either via explicit assignment or by procedural definition. Image sampling locations are then interpolated across the face of a polygon to produce a visual result that seems to have more richness than could otherwise be achieved with a limited number of polygons.

Multitexturing is the use of more than one texture at a time on a polygon. For instance, a light map texture may be used to light a surface as an alternative to recalculating that lighting every time the surface is rendered. Another multitexture technique is bump mapping, which allows a texture to directly control the facing direction of a surface for the purposes of its lighting calculations; it can give a very good appearance of a complex surface, such as tree bark or rough concrete, that takes on lighting detail in addition to the usual detailed coloring. Bump mapping has become popular in recent video games as graphics hardware has become powerful enough to accommodate it in real time.
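A minimal sketch of the light-map case mentioned above, assuming both textures have already been sampled to the same resolution (the array names are illustrative, not from the article):

import numpy as np

# albedo:    (H, W, 3) base color texture, values in [0, 1]
# light_map: (H, W)    precomputed lighting, values in [0, 1]
# Multitexturing combines them per texel instead of re-evaluating the lights:
def apply_light_map(albedo, light_map):
    return albedo * light_map[..., np.newaxis]   # modulate color by stored lighting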

The way the resulting pixels on the screen are calculated from the texels (texture pixels) is governed by texture filtering. The fastest method is to use nearest-neighbour interpolation, but bilinear interpolation or trilinear interpolation between mipmaps are two commonly used alternatives which reduce aliasing or jaggies. In the event of a texture coordinate being outside the texture, it is either clamped or wrapped.
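For example, bilinear filtering, one of the alternatives named above, can be sketched as follows (a minimal NumPy version using the clamp addressing mode; names are illustrative):

import numpy as np

def sample_bilinear(texture, u, v):
    """Sample a (H, W, C) texture at normalized coordinates u, v in [0, 1]."""
    h, w = texture.shape[:2]
    # Clamp addressing mode (the wrap mode would use modulo instead).
    x = min(max(u * (w - 1), 0.0), w - 1.0)
    y = min(max(v * (h - 1), 0.0), h - 1.0)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    # Weighted average of the four nearest texels.
    top = (1 - fx) * texture[y0, x0] + fx * texture[y0, x1]
    bot = (1 - fx) * texture[y1, x0] + fx * texture[y1, x1]
    return (1 - fy) * top + fy * bot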

Perspective correctness
Texture coordinates are specified at each vertex of a given triangle, and these coordinates are interpolated using an extended Bresenham's line algorithm. If these texture coordinates are linearly interpolated across the screen, the result is affine texture mapping. This is a fast calculation, but there can be a noticeable discontinuity between adjacent triangles when these triangles are at an angle to the plane of the screen. Perspective correct texturing accounts for the vertices' positions in 3D space, rather than simply interpolating a 2D triangle. This achieves the correct visual effect, but it is slower to calculate. Instead of interpolating the texture coordinates directly, the coordinates are divided by their depth (relative to the viewer), and the reciprocal of the depth value is also interpolated and used to recover the perspective-correct coordinate. This correction makes it so that in parts of the polygon that are closer to the viewer the difference from pixel to pixel between texture coordinates is smaller (stretching the texture wider), and in parts that are farther away this difference is larger (compressing the texture).

Affine texture mapping directly interpolates a texture coordinate $u_\alpha$ between two endpoints $u_0$ and $u_1$:

    $u_\alpha = (1 - \alpha)\, u_0 + \alpha\, u_1$, where $0 \le \alpha \le 1$,

whereas perspective correct mapping interpolates after dividing by depth $z$, then uses the interpolated reciprocal of the depth to recover the correct coordinate:

    $u_\alpha = \dfrac{(1 - \alpha)\, u_0 / z_0 + \alpha\, u_1 / z_1}{(1 - \alpha) / z_0 + \alpha / z_1}$

All modern 3D graphics hardware implements perspective correct texturing.
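A small sketch of the difference, interpolating a single texture coordinate across a scanline (the helper below is hypothetical, not from the article):

def interpolate_scanline(u0, z0, u1, z1, steps):
    """Return (affine, perspective_correct) texture coordinates per pixel."""
    affine, correct = [], []
    for i in range(steps + 1):
        a = i / steps
        affine.append((1 - a) * u0 + a * u1)             # linear in screen space
        u_over_z = (1 - a) * (u0 / z0) + a * (u1 / z1)   # interpolate u/z ...
        one_over_z = (1 - a) / z0 + a / z1               # ... and 1/z ...
        correct.append(u_over_z / one_over_z)            # ... then divide back
    return affine, correct

# e.g. interpolate_scanline(0.0, 1.0, 1.0, 4.0, 8) shows the affine result
# drifting away from the perspective-correct one as depth varies.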

[Figure: Doom renders vertical spans (walls) with affine texture mapping. Screen-space subdivision techniques; top left: Quake-like, top right: bilinear, bottom left: const-z.]

Classic texture mappers generally did only simple mapping with at most one lighting effect, and the perspective correctness was about 16 times more expensive. To achieve two goals - faster arithmetic results, and keeping the arithmetic mill busy at all times - every triangle is further subdivided into groups of about 16 pixels. For perspective texture mapping without hardware support, a triangle is broken down into smaller triangles for rendering, which improves details in non-architectural applications. Software renderers generally preferred screen subdivision because it has less overhead. Additionally, they try to do linear interpolation along a line of pixels to simplify the set-up (compared to 2D affine interpolation) and thus again the overhead (also, affine texture mapping does not fit into the low number of registers of the x86 CPU; the 68000 or any RISC is much more suited). For instance, Doom restricted the world to vertical walls and horizontal floors/ceilings. This meant the walls would be a constant distance along a vertical line and the floors/ceilings would be a constant distance along a horizontal line. A fast affine mapping could be used along those lines because it would be correct. A different approach was taken for Quake, which would calculate perspective correct coordinates only once every 16 pixels of a scanline and linearly interpolate between them, effectively running at the speed of linear interpolation because the perspective correct calculation runs in parallel on the co-processor. The polygons are rendered independently, hence it may be possible to switch between spans and columns or diagonal directions depending on the orientation of the polygon normal to achieve a more constant z, but the effort seems not to be worth it.

Another technique was subdividing the polygons into smaller polygons, like triangles in 3D space or squares in screen space, and using an affine mapping on them. The distortion of affine mapping becomes much less noticeable on smaller polygons. Yet another technique was approximating the perspective with a faster calculation, such as a polynomial. Still another technique uses the 1/z value of the last two drawn pixels to linearly extrapolate the next value. The division is then done starting from those values so that only a small remainder has to be divided, but the amount of bookkeeping makes this method too slow on most systems. Finally, some programmers extended the constant distance trick used for Doom by finding the line of constant distance for arbitrary polygons and rendering along it.
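A rough sketch of the 1/z extrapolation idea mentioned above, with the remainder correction and per-span bookkeeping that make it costly omitted (the function and variable names are purely illustrative):

def next_perspective_u(u_over_z_next, inv_z_minus2, inv_z_minus1):
    """Estimate the next pixel's texture coordinate from extrapolated 1/z."""
    inv_z_guess = 2.0 * inv_z_minus1 - inv_z_minus2   # linear extrapolation of 1/z
    return u_over_z_next / inv_z_guess                # approximate perspective divide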

In computer graphics, environment mapping, or reflection mapping, is an efficient image-based lighting technique for approximating the appearance of a reflective surface by means of a precomputed texture image. The texture is used to store the image of the distant environment surrounding the rendered object. Several ways of storing the surrounding environment are employed. The first technique was sphere mapping, in which a single texture contains the image of the surroundings as reflected on a mirror ball. It has been almost entirely surpassed by cube mapping, in which the environment is projected onto the six faces of a cube and stored as six square textures or unfolded into six square regions of a single texture. Other projections that have some superior mathematical or computational properties include the paraboloid mapping, the pyramid mapping, the octahedron mapping, and the HEALPix mapping.

The reflection mapping approach is more efficient than the classical ray tracing approach of computing the exact reflection by tracing a ray and following its optical path. The reflection color used in the shading computation at a pixel is determined by calculating the reflection vector at the point on the object and mapping it to the texel in the environment map. This technique often produces results that are superficially similar to those generated by raytracing, but is less computationally expensive, since the radiance value of the reflection comes from calculating the angles of incidence and reflection followed by a texture lookup, rather than from tracing a ray against the scene geometry and computing the radiance of the ray, simplifying the GPU workload.
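For reference, the reflection vector mentioned above is the standard mirror reflection of the viewing direction about the surface normal (the formula is not spelled out in the article itself):

    $R = I - 2\,(N \cdot I)\,N$

where I is the unit vector from the viewer to the surface point, N is the unit surface normal, and R is the direction used to look up the texel in the environment map.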

However, in most circumstances a mapped reflection is only an approximation of the real reflection. Environment mapping relies on two assumptions that are seldom satisfied:
1) All radiance incident upon the object being shaded comes from an infinite distance. When this is not the case, the reflection of nearby geometry appears in the wrong place on the reflected object, and no parallax is seen in the reflection.
2) The object being shaded is convex, such that it contains no self-interreflections. When this is not the case, the object does not appear in the reflection; only the environment does.
Reflection mapping is also a traditional image-based lighting technique for creating reflections of real-world backgrounds on synthetic objects.

Environment mapping is generally the fastest method of rendering a reflective surface. To further increase the speed of rendering, the renderer may calculate the position of the reflected ray at each vertex. Then, the position is interpolated across polygons to which the vertex is attached. This eliminates the need for recalculating every pixel's reflection direction. If normal mapping is used, each polygon has many face normals (the direction a given point on a polygon is facing), which can be used in tandem with an environment map to produce a more realistic reflection. In this case, the angle of reflection at a given point on a polygon will take the normal map into consideration. This technique is used to make an otherwise flat surface appear textured, for example corrugated metal or brushed aluminium.

Types of reflection mapping
Sphere mapping represents the sphere of incident illumination as though it were seen in the reflection of a reflective sphere through an orthographic camera. The texture image can be created by approximating this ideal setup, by using a fisheye lens, or via prerendering a scene with a spherical mapping. The spherical mapping suffers from limitations that detract from the realism of resulting renderings. Because spherical maps are stored as azimuthal projections of the environments they represent, an abrupt point of singularity (a "black hole" effect) is visible in the reflection on the object where texel colors at or near the edge of the map are distorted due to inadequate resolution.
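For reference, one common convention for indexing a sphere map from an eye-space reflection vector R = (r_x, r_y, r_z) is the following (this formula is an assumption added here; the truncated text does not give it):

    $m = 2 \sqrt{ r_x^2 + r_y^2 + (r_z + 1)^2 }$
    $(s, t) = \left( r_x / m + 1/2,\; r_y / m + 1/2 \right)$

The singularity described above corresponds to the direction r = (0, 0, -1), where m approaches zero and all of the surrounding directions are squeezed onto the rim of the map.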
