Portal field news


🎵 | Yamato (.S) releases ED theme "Fire" MV for TV anime "SCARLET NEXUS"!


Photo "Fire" MV


 
To summarize the contents roughly
Based on 3DCG and motion graphics, the video tells the story leading up to the meeting of Yamato and .S, expressed in a "neo-cyberpunk" world view.
 

The alternative unit Yamato (.S) has released the music video for its new song "Fire" on YouTube. ... → Continue reading

 OKMusic

A music site that collects the latest information from various artists.
Information for music fans, including the latest music news, artist interviews, live reports, recommendations by well-known writers, and introductions to Japanese and Western music masterpieces!


Wikipedia related words

Where no explanation is given, there is no corresponding article on Wikipedia.

Three-dimensional computer graphics

Three-dimensional computer graphics (English: three-dimensional computer graphics) is a technique for producing images that have a sense of depth (a three-dimensional effect) by using a computer to convert virtual three-dimensional objects placed in three-dimensional space into two-dimensional (flat) information. It is often abbreviated as 3DCG (3D CG). Thanks to the rapid development and performance improvement of computer technology since the end of the 20th century, high-definition, high-quality 3D images that could previously be obtained only by large companies or large graduate schools can, as of the beginning of the 21st century, be obtained in real time even on personal computers (PCs), game consoles, and smartphones.

Every summer, the CG festival SIGGRAPH is held in the United States; many of the latest CG papers from around the world are presented there, and the technology is continuously updated.

Use

3DCG allows users to manipulate the virtual viewpoint and change objects and obtain instantly updated images. It can be roughly divided into three types: moving images processed in real time, such as CAD-like simulations and computer games; moving images that the producer spends time generating in advance, such as CG movies; and still images. With sufficiently advanced technology, images of inanimate objects can be made indistinguishable from live-action footage, but human figures often take on an inorganic quality peculiar to CG; because of the uncanny valley phenomenon, CG is generally considered weak at depicting human facial expressions, although it is well suited to inorganic subjects such as mechanical robots.

Video (real-time processing)

It refers to the process of dynamically generating video on the spot in response to observed parameters and input information, such as user operations and the passage of time.

The typical application of real-time video generation is computer games. Moving images using 3DCG are used on PCs, stationary game machines (home game consoles and the arcade game machines used commercially in arcades), handheld game consoles, and mobile phones (smartphones and some feature phones).

In industrial use, it is employed at the product design stage in CAD/CAM for fitting parts together and drawing completed renderings of products, and in architecture for drawing perspectives (an architectural perspective lets you check the appearance of a building from its design drawings). It is also used for rendering, for example, photorealistic restoration models of ancient ruins, and for 3D maps (so that terrain undulations and large-scale building shapes can be viewed from various viewpoints). Reproducing real-world movement and surroundings by computer simulation also enables effective training; drive simulators and flight simulators are examples of real-time 3DCG technology. Real-time 3DCG technology is also used to observe arbitrary cross sections after reconstructing 3D data from large numbers of tomographic images, as in X-ray CT and MRI.

Real-time processing in video generation puts priority on producing a convincing image in real time rather than on the accuracy of the image, so a simple local illumination model and low-polygon models are required. Ingenuity has gone into reducing temporal and spatial costs by approximating phenomena as much as possible, simplifying arithmetic processing, and pre-computing, for example by expressing surface detail with texture maps. The GPU has appeared as a dedicated IC for generating 3DCG video for PCs at high speed by parallel computing. Since the appearance of programmable shaders, real-time ray tracing and global illumination have been under development, but these are still developing areas.

For in-game movie scenes that require no user interaction, high-quality video generated in advance with production rendering software and high-definition models may be played back (demo scenes), or the assets (materials) used in the game may be used as they are and rendered in real time.

Video (non-real-time processing: pre-rendered)

Movies made with 3DCG are the typical application of "non-real-time" video generation. In many movies the purpose is to create photorealistic images; conversely, it is also used to produce deliberately unrealistic images such as cartoon-like animation, and 3DCG technology is used in some form in most commercial movies, including composites with live-action footage. SF movies that use a lot of VFX, anime movies, and the like may require long 3DCG sequences, and in such cases moving images are generated over periods of months at server facilities called "render farms", consisting of large numbers of computers dedicated to 3DCG computation.

3DCG video for advertising is also produced in environments similar to movie production, within advertising production companies or the manufacturers themselves (the automobile industry is representative of 3DCG video advertising, but companies in other industries whose design processes require computer simulation, such as aerospace, military, and shipbuilding, also use 3DCG technology for image display alongside simulation of physical phenomena).

Still image

Still images using 3DCG are produced for advertising, art, and all kinds of illustration applications.

Principle

The basic principle of 3DCG is to perform a perspective projection that places an object having three-dimensional point coordinates onto a virtual screen with two-dimensional coordinates. (Drawing that handles two-dimensional image information from the start only has to consider, at most, the overlap between sheet-like rectangular surfaces, and does not need to calculate differences in scale due to depth or differences in shading due to lighting; in 3DCG, because the object is three-dimensional, countless coordinate transformations and pixel-filling operations must be performed.)

[Fig. 1: 3DCG Zu1.png] [Fig. 2: 3DCG Zu2.png]

First, consider the three-dimensional coordinate system shown in [Fig. 1] on the display. How does point A, which has three-dimensional coordinates, appear from a viewpoint placed at the origin?

When the screen is placed between the origin and point A as in [Fig. 2], the projected coordinates of point A on the screen plane are h = x * (s / z) and v = y * (s / z). As z increases, the projection of point A on the screen approaches the screen origin; in other words, distant objects look small. The larger the coordinate s at which the screen is placed, the looser the perspective, and the smaller s is, the tighter the perspective, so the angle of view of a lens can be expressed. This is the principle of perspective projection; if points with three-dimensional coordinates are connected with lines, the image becomes a wire frame, and if surfaces are formed from the connected points, it can be expressed with polygons.
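
As a minimal illustration of the formulas above, the following Python sketch projects a 3D point onto a screen placed at distance s from the viewpoint; the point coordinates and screen distance are arbitrary example values.

    def project(point, s):
        """Perspective-project a 3D point (x, y, z) onto a screen at distance s.
        Returns screen coordinates (h, v) using h = x * (s / z), v = y * (s / z)."""
        x, y, z = point
        if z <= 0:
            raise ValueError("point must be in front of the viewpoint (z > 0)")
        return (x * s / z, y * s / z)

    # Example: the same point lands closer to the screen origin as z grows,
    # i.e. distant objects look small.
    print(project((2.0, 1.0, 5.0), s=1.0))   # (0.4, 0.2)
    print(project((2.0, 1.0, 10.0), s=1.0))  # (0.2, 0.1)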

Production process

The production of 3DCG can be divided into the following steps.

  1. Modeling
  2. Scene layout settings
  3. Rendering
  4. Editing / retouching

Modeling

Modeling (English: modeling) is the work of creating the shape of an individual object in a virtual three-dimensional space. In much 3DCG software, a shape is expressed as a set of surfaces, each a triangle or a quadrilateral. Much software can handle only triangles (because quadrilaterals and higher-order faces can be non-planar). These faces are called polygons (meaning "polygon" in English), and each shape is represented as a set of polygons. The shape created by modeling is called a model or an object.

Another method of defining surfaces is the free-form surface. By constructing curved surfaces with NURBS curves, spline curves, or Bezier curves, a smoother and more accurate shape can be obtained than with a shape modeled only from polygons. Modeling with polygons only is called polygon modeling and is sometimes distinguished from modeling with free-form surfaces.

Once the shape has been created, a material is set for the object. If no material is set, the object is a homogeneous body that simply reflects light uniformly. Much 3DCG software offers setting items such as color, transparency, reflection, refractive index, self-illumination, bump, and displacement.
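
As a rough sketch of how the ideas above might look as data, here is a minimal polygon-mesh and material representation in Python; the field names and default values are illustrative assumptions, not the format of any particular 3DCG package.

    from dataclasses import dataclass, field

    @dataclass
    class Material:
        # A few of the typical material settings mentioned above; values are examples.
        color: tuple = (0.8, 0.8, 0.8)   # RGB in 0..1
        transparency: float = 0.0
        reflectivity: float = 0.0
        ior: float = 1.0                 # index of refraction
        self_illumination: float = 0.0

    @dataclass
    class Mesh:
        vertices: list = field(default_factory=list)  # list of (x, y, z) tuples
        faces: list = field(default_factory=list)     # each face = vertex indices (triangles)
        material: Material = field(default_factory=Material)

    # A single triangle with the default (uniform, light-gray) material.
    tri = Mesh(vertices=[(0, 0, 0), (1, 0, 0), (0, 1, 0)], faces=[(0, 1, 2)])
    print(len(tri.faces), tri.material.color)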

Scene layout settings

The objects created by modeling are placed in the virtual 3D space. As in the real world, nothing is displayed unless a light source is placed (a solid black image is output). In addition, the viewpoint is set by placing a virtual camera. The virtual stage in which these are arranged and configured is called a scene.

Rendering

Rendering (English: rendering) is the process of generating the image that the virtual camera should capture from the scene set up so far. The computer calculates the shapes and positions of objects, how light strikes them, and so on, and the final image is generated. There are many rendering algorithms with different processing speeds and quality, and they are used according to the purpose. Once the various settings are complete and rendering has started, the creator has nothing to do until rendering is finished. Rendering generally takes a great deal of time; it can take hours or even days if there are many shapes in the scene or if advanced rendering algorithms are used. When rendering must happen in real time, as in a game, there are major restrictions, such as using simple and fast rendering algorithms and reducing the total number of polygons in the scene. At large-scale production sites such as movie studios, the computation time may be shortened by having many computers perform the rendering simultaneously.

Depending on the rendering method, perspective and the scattering of light by air are also calculated. Rendering that performs such complicated calculations is often carried out by a dedicated circuit (GPU). This form is used in games because it provides high interactivity.

Retouch

Retouching (English: retouch) is the work of reworking the image. The image obtained by rendering is not always exactly what the creator intended, so in some cases contrast and color are adjusted with photo retouching tools such as Photoshop or Adobe After Effects.

Production technique

Texture mapping

Pasting an image onto a 3DCG model is called texture mapping (English: texture mapping), and the pasted image is called a texture. By pasting textures, detailed color information and surface textures that are difficult to express with modeling and shaders alone can be given to the model surface.

A texture can be applied simply by projecting it onto the model from the camera direction, or by dividing up the two-dimensional image area of the texture with UV coordinates and projecting it onto the model surface.

There are also reflection maps that set the intensity of reflection, bump maps and normal maps that pseudo-express small irregularities, transparency maps that set transparency, and so on. By adding image information to the surface of a shape, surface patterns and textures are expressed, resulting in a more realistic image. There is also a method of dynamically generating real uneven geometry based on the image information.

In computer games in particular, 3DCG characters must be drawn in real time, so a technique is used in which textures containing detail and shadows are pasted onto models built with as few polygons as possible (low-polygon models).
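
The sketch below shows the basic idea of UV texture lookup in Python: a (u, v) coordinate in [0, 1] is mapped to a texel of a small texture image. The texture here is a tiny checkerboard invented for the example.

    # A tiny 4x4 "texture": 0 = dark, 1 = light (a checkerboard for illustration).
    TEXTURE = [[(x + y) % 2 for x in range(4)] for y in range(4)]

    def sample_texture(u, v, texture=TEXTURE):
        """Nearest-neighbour lookup: map UV coordinates in [0, 1] to a texel."""
        h = len(texture)
        w = len(texture[0])
        x = min(int(u * w), w - 1)
        y = min(int(v * h), h - 1)
        return texture[y][x]

    # A surface point with UV (0.1, 0.1) lands on a dark texel, (0.4, 0.1) on a light one.
    print(sample_texture(0.1, 0.1), sample_texture(0.4, 0.1))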

Bump mapping

A technology that expresses unevenness in a pseudo manner by changing the direction of the normal on the surface of the model. The height relative to the original shape is defined in a grayscale image. It has the advantage that fine shadows can be expressed realistically with few polygons, but because the surface has no actual three-dimensional unevenness, the image looks odd when zoomed in or when the surface is viewed from the side.

In recent years, normal mapping, which directly defines the direction of the normal (as a 3D vector), has also come into use. Since it is difficult to create a normal map by hand, the usual approach is to convert the details of a high-definition model into a normal map and apply it to a simplified model.
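
A minimal sketch of the idea, assuming a simple Lambert (diffuse) term: the normal read from the map replaces the geometric surface normal before lighting is computed. Vectors here are plain tuples and the light direction is an arbitrary example.

    import math

    def normalize(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def lambert(normal, light_dir):
        """Diffuse intensity: cosine of the angle between normal and light direction."""
        return max(0.0, dot(normalize(normal), normalize(light_dir)))

    geometric_normal = (0.0, 0.0, 1.0)          # flat surface facing the camera
    mapped_normal = normalize((0.3, 0.0, 1.0))  # normal read from a normal map (tilted)
    light = (0.0, 0.0, 1.0)

    # The flat surface is fully lit; the perturbed normal darkens it slightly,
    # which is what creates the illusion of relief without extra polygons.
    print(lambert(geometric_normal, light), lambert(mapped_normal, light))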

Displacement mapping

A technology that expresses unevenness by actually moving the vertices of a 3D model up and down relative to the surface. Compared with bump mapping, the result looks more natural because the unevenness is genuinely three-dimensional, but the number of polygons increases according to the unevenness to be expressed. In the field of real-time 3DCG, geometry shaders were standardized in Direct3D 10 and OpenGL 3.2 and tessellation was standardized in Direct3D 11 / OpenGL 4, so displacement mapping on the GPU is now possible.
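
The following sketch shows the core operation of displacement mapping: each vertex is moved along its normal by a height value read from a grayscale map. The mesh, normals, and height values are toy examples.

    def displace(vertices, normals, heights, scale=1.0):
        """Move each vertex along its (unit) normal by its sampled height value."""
        displaced = []
        for (vx, vy, vz), (nx, ny, nz), h in zip(vertices, normals, heights):
            displaced.append((vx + nx * h * scale,
                              vy + ny * h * scale,
                              vz + nz * h * scale))
        return displaced

    # A flat strip of three vertices, all facing +z, displaced by a bump in the middle.
    verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
    norms = [(0.0, 0.0, 1.0)] * 3
    bump = [0.0, 0.5, 0.0]   # values as if sampled from a grayscale height map
    print(displace(verts, norms, bump))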

Hypertexture

Whereas bump mapping expresses unevenness by pseudo-shading, and displacement mapping expresses unevenness only by moving the vertices of the 3D model itself, hypertexture is a technology that, by multiplying density functions inside the 3D model, can express not only small irregularities but also large structures such as deep grooves and through holes.

Particles

Because polygons are planar faces, they are not suited to models that have no clear surface, or to huge volumes of smoke or flame with irregular movement. Things such as hair and vegetation would also require enormous labor and resources if expressed with polygons, simply because of their quantity. Particles are a technology that solves these problems: they express such things as a collection of minute points whose movement and shape are handled with probabilistic models. They can be handled by advanced modeling and rendering software. When rendering them, technologies such as metaballs are used.
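
A minimal particle-system sketch in Python: each particle has a position, a velocity with a random component, and a lifetime, and an update step advances them frame by frame. The parameters (spread, gravity, lifetime) are arbitrary illustration values.

    import random

    class Particle:
        def __init__(self):
            # Emit from the origin with a random upward velocity (a "smoke" puff).
            self.pos = [0.0, 0.0, 0.0]
            self.vel = [random.uniform(-0.2, 0.2), 1.0, random.uniform(-0.2, 0.2)]
            self.life = random.uniform(1.0, 2.0)  # seconds

        def update(self, dt, gravity=-0.5):
            for i in range(3):
                self.pos[i] += self.vel[i] * dt
            self.vel[1] += gravity * dt   # slight downward pull
            self.life -= dt

    particles = [Particle() for _ in range(100)]
    for _ in range(30):                  # simulate 30 frames
        for p in particles:
            p.update(dt=1 / 30)
        particles = [p for p in particles if p.life > 0]   # remove dead particles
    print(len(particles), "particles still alive")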

Subdivision surface (subdivision surface)

A subdivision surface (English: subdivision surface) is a technique that subdivides a roughly modeled polygon mesh in memory to create a smooth, seamless shape. Because the shape can be expressed smoothly with a small number of polygons, editing and deformation are easy. However, it cannot be used where high precision of the shape is required, as in industrial CAD.

Boolean

A technology that computes set operations on multiple objects. It can combine one shape with another (union), carve one shape out of another (difference), and extract only the overlapping part as a shape (intersection).

Metaball

A technology that sets density distributions centered on multiple points in three-dimensional coordinates and takes a density threshold as the surface of the shape. There are fusions, in which spherical shapes appear to attract each other, and inverted fusions, in which they appear to repel each other. It is difficult to make precise shapes, but it is suited to making organic shapes with few control points. It is not a concept peculiar to 3DCG, but it is sometimes used for 3D image representation. Initially, as the name implies, it was based on spheres, but improvements later made shapes other than spheres available, and it is used as a technology for modeling organic shapes.

Besides modeling, it is also used to express flowing liquids. Its advantage was that the amount of calculation and the memory required for rendering are small; now that these resources are abundant and methods for computing fluid dynamics have advanced, cheap-looking metaballs are rarely used in video production.
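
A sketch of the basic metaball idea described above: a density field is summed from several centers and the surface is taken where the field crosses a threshold. The field function (an inverse-square falloff) and the threshold are simple illustrative choices, not any particular product's formula.

    def density(point, centers, strength=1.0):
        """Sum of inverse-square contributions from each metaball center."""
        total = 0.0
        for cx, cy, cz in centers:
            d2 = (point[0] - cx) ** 2 + (point[1] - cy) ** 2 + (point[2] - cz) ** 2
            total += strength / (d2 + 1e-6)   # small epsilon avoids division by zero
        return total

    def inside(point, centers, threshold=1.0):
        """A point is 'inside' the blobby surface if the density exceeds the threshold."""
        return density(point, centers) >= threshold

    balls = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0)]
    # The midpoint between two nearby balls is inside: the blobs merge smoothly.
    print(inside((0.75, 0.0, 0.0), balls), inside((5.0, 0.0, 0.0), balls))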

Inverse kinematics (IK)

Inverse kinematics (English: inverse kinematics) is not originally a term of 3D computer graphics. It comes from the field of mechanics, and robotics and related fields are its original home.

In animals with many joints, such as humans, the position of the end of a joint chain always depends on the positions and angles of its parent joints. Normally, therefore, the position of the end is found by calculating the joint angles in order from the center of the model toward the end; this is the "forward" direction. If the calculation is done in that direction, however, it becomes troublesome to realize, for example, "a movement like stroking a desk with the palm", because every change in the position of the end would require recalculating all the complicated joint angles forward from the center of the model, which is very inefficient. To solve this, one can instead decide the position of the end first and then obtain the angles of the parent joints that realize that end position by solving a kind of inverse problem.

As can be seen from the explanation above, this is one of the inverse-problem ideas that can be considered quite generally in physical kinematics.

Assume a crotch-knee-foot chain and consider creating an animation in which the pedals of a bicycle rotate while the soles of the feet stay attached to them. Rather than changing the angles of the crotch, then the knee, then the foot to match the rotation of the pedals, a more natural animation can be created by following the movement of the foot and determining the movement of each joint in the order foot-knee-crotch.
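
As a small concrete illustration of the "inverse problem" described above, here is an analytic two-joint IK solver in 2D (think hip-knee-ankle in a plane): given a target for the end of the chain, it returns the two joint angles via the law of cosines. The segment lengths and the target are example values, and a real rig would add joint limits and a choice of bend direction.

    import math

    def two_joint_ik(target, l1, l2):
        """Return (base_angle, joint_angle) so a 2-segment chain reaches `target` (2D)."""
        tx, ty = target
        d = math.hypot(tx, ty)
        # Clamp to the reachable range so acos() stays valid.
        d = max(abs(l1 - l2) + 1e-9, min(l1 + l2 - 1e-9, d))
        # Law of cosines on the triangle (root, middle joint, target).
        cos_joint = (l1 ** 2 + l2 ** 2 - d ** 2) / (2 * l1 * l2)
        joint_angle = math.pi - math.acos(cos_joint)        # bend at the middle joint
        cos_alpha = (l1 ** 2 + d ** 2 - l2 ** 2) / (2 * l1 * d)
        base_angle = math.atan2(ty, tx) - math.acos(cos_alpha)
        return base_angle, joint_angle

    # Reach for a point with two segments of length 1; forward kinematics checks the result.
    base, joint = two_joint_ik((1.2, 0.8), l1=1.0, l2=1.0)
    end_x = math.cos(base) + math.cos(base + joint)
    end_y = math.sin(base) + math.sin(base + joint)
    print(round(end_x, 3), round(end_y, 3))   # close to (1.2, 0.8)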

Lighting

Setting light sources in 3D space is called lighting (English: lighting). Light sources make the model visible. There are the following types of light sources (a small shading sketch follows the list).

  • Point light source: A light source that radiates light in all directions from a single point, like a light bulb. The light becomes weaker with distance from the light source.
  • Spotlight: A variant of the point light source that emits light only within a limited angle.
  • Parallel (directional) light source: A light source that simulates light from infinity, such as the sun. The sun is not exactly at infinity, but seen from the earth it is almost a parallel light source. Unlike a point light source, the light intensity does not change with distance but is constant.
  • Ambient light: A light source that illuminates all objects uniformly, expressing indirect light in a pseudo manner. Processing is fast, but it is unnatural in that shaded areas have uniform brightness.
  • A light source that emits light from the entire virtual celestial sphere, like a clear sky. It resembles ambient light, but natural indirect light can be expressed by combining it with radiosity.
  • IBL (Image-Based Lighting): A method of surrounding the entire scene with a two-dimensional image and using that image as the light source. By also using the same image as the background, a photorealistic image in which the object blends in very well with its surroundings can be created. HDRI images are commonly used for this.
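
The sketch below illustrates the difference between a point light (inverse-square falloff) and a parallel/directional light (constant intensity) using a bare Lambert diffuse term; all vectors and intensities are illustrative values.

    import math

    def normalize(v):
        n = math.sqrt(sum(c * c for c in v))
        return tuple(c / n for c in v)

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def point_light(surface_pos, normal, light_pos, intensity=1.0):
        """Diffuse shading from a point light: falls off with the square of distance."""
        to_light = tuple(l - p for l, p in zip(light_pos, surface_pos))
        dist2 = sum(c * c for c in to_light)
        return intensity * max(0.0, dot(normalize(normal), normalize(to_light))) / dist2

    def directional_light(normal, light_dir, intensity=1.0):
        """Diffuse shading from a parallel (directional) light: no distance falloff."""
        return intensity * max(0.0, dot(normalize(normal), normalize(light_dir)))

    n = (0.0, 1.0, 0.0)                            # surface facing straight up
    print(point_light((0, 0, 0), n, (0, 2, 0)))    # 1 / 4 = 0.25
    print(point_light((0, 0, 0), n, (0, 4, 0)))    # 1 / 16 = 0.0625: weaker when farther
    print(directional_light(n, (0, 1, 0)))         # 1.0 regardless of distance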

Tessellation and polygon mesh

Some 3DCG software treats simple objects (primitives) such as spheres and cylinders with numerical values such as center point, radius, and height instead of polygons. To edit their details or render them, they must be converted into a polygon mesh. This is called tessellation (to tessellate means to lay out a mosaic pattern). However, the abstract representation the object originally had, such as "sphere" or "cone", is lost.

Reflection and shading model

Currently in 3DCG, when a simplified lighting/reflection model is used for reasons such as speed, the Phong reflection model is adopted in many cases[1]. The Phong reflection model is an empirical model and a typical example of local illumination. To draw more realistic scenes, optically and physically correct lighting and reflection models that support global illumination, such as the radiosity method described later, are used, but rendering then takes time, because simulating the real world involves very complex and enormous calculations. The rendering equation describes the propagation of light based on conservation of energy and is the basic theory of physically based rendering.

The reflection model also depends on the nature of the object. In computer graphics, the properties of an object are defined and abstracted as a material, and to reproduce the appearance of plastic, metal, skin, or hair accurately, an appropriate reflection model for each material must be used. The color of an object arises from differences in the reflection and absorption coefficients for each RGB component of light, and the color and shape of specular highlights are also affected by the roughness of the surface and the characteristics of the light source. To reproduce metallic luster and diffraction patterns, the physical and chemical properties and the surface characteristics of the substance must be taken into account.

When reproducing the refraction of light in computer graphics, the refractive index is an important factor as a property of the substance. In most 3DCG software, the index of refraction is abbreviated as IOR.

Shading is the calculation of the light and shade of an object. In a broad sense it includes calculating the intensity of reflected light with a reflection model; in a narrow sense it refers to the shading interpolation techniques described below.
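
A minimal sketch of the Phong reflection model mentioned above (ambient + diffuse + specular terms), for a single light and a grayscale intensity; the coefficients are arbitrary example values, and this is local illumination only.

    import math

    def normalize(v):
        n = math.sqrt(sum(c * c for c in v))
        return tuple(c / n for c in v)

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def reflect(light_dir, normal):
        """Mirror the (unit) light direction about the (unit) surface normal."""
        d = dot(light_dir, normal)
        return tuple(2 * d * n - l for l, n in zip(light_dir, normal))

    def phong(normal, light_dir, view_dir, ka=0.1, kd=0.7, ks=0.4, shininess=32):
        """Phong reflection: ambient + diffuse + specular, returned as one intensity."""
        n = normalize(normal)
        l = normalize(light_dir)   # direction from the surface toward the light
        v = normalize(view_dir)    # direction from the surface toward the camera
        diffuse = max(0.0, dot(n, l))
        specular = max(0.0, dot(reflect(l, n), v)) ** shininess if diffuse > 0 else 0.0
        return ka + kd * diffuse + ks * specular

    # Looking straight down the mirror direction gives a bright highlight.
    print(round(phong((0, 0, 1), (0, 0, 1), (0, 0, 1)), 3))
    # Viewing from the side keeps the diffuse term but loses the highlight.
    print(round(phong((0, 0, 1), (0, 0, 1), (1, 0, 1)), 3))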

Shading interpolation

In the process of generating a 2D image from a polygon model, there are the following types of shading interpolation (a small comparison sketch follows the list).

  • Flat shading: The color of each polygon is calculated from the angle between the polygon's normal vector and the light source, and one polygon is filled with a single color. Because the algorithm is simple, calculation is fast, but the result does not look smooth because the color changes discontinuously at every polygon seam. Also called constant shading.
  • Gouraud shading: The normal vector at each vertex of the object is obtained, and pixel colors are calculated by linearly interpolating between the vertices. Seams between polygons become less noticeable. Named after its inventor, Gouraud.
  • Phong shading: The normal at each pixel is obtained by linear interpolation of the normals at the object's vertices, and the brightness of the final pixel is calculated from that normal. It improves on the unnatural highlights of Gouraud shading. Named after its inventor, Phong.
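
To make the difference concrete, the sketch below interpolates across one triangle with barycentric weights: Gouraud shading interpolates already-lit vertex values, while Phong shading interpolates the vertex normals and lights the pixel afterwards. Lighting is a bare Lambert term and all values are illustrative.

    import math

    def normalize(v):
        n = math.sqrt(sum(c * c for c in v))
        return tuple(c / n for c in v)

    def lambert(normal, light=(0.0, 0.0, 1.0)):
        n = normalize(normal)
        return max(0.0, sum(a * b for a, b in zip(n, normalize(light))))

    def lerp3(values, weights):
        """Barycentric interpolation of scalars or 3-vectors over a triangle."""
        if isinstance(values[0], tuple):
            return tuple(sum(w * v[i] for w, v in zip(weights, values)) for i in range(3))
        return sum(w * v for w, v in zip(weights, values))

    vertex_normals = [(0.0, 0.0, 1.0), (1.0, 0.0, 1.0), (-1.0, 0.0, 1.0)]
    weights = (1 / 3, 1 / 3, 1 / 3)          # a pixel at the triangle's centroid

    # Gouraud: light the vertices first, then interpolate the resulting intensities.
    gouraud = lerp3([lambert(n) for n in vertex_normals], weights)
    # Phong shading: interpolate the normals first, then light the pixel.
    phong = lambert(lerp3(vertex_normals, weights))
    print(round(gouraud, 3), round(phong, 3))   # the two results differ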

Z sort method

One of the hidden surface removal methods. Based on a representative coordinate of each polygon (usually its center point), all polygons are drawn in order from the back of the screen (the polygon farthest from the viewpoint). Because it basically only requires drawing polygons, without special processing such as the Z-buffer method described later, it has the advantages of being easy to implement, consuming little memory, and being very fast to process. It was used throughout 3DCG in the old days, until the Z-buffer method became widespread, and it was commonly used in real-time 3DCG on home video game consoles. However, as the number of polygons increases, the cost of sorting the polygons grows and the fill rate becomes enormous, so the speed advantage over the Z-buffer method disappears.

There is the drawback that intersecting polygons cannot be displayed correctly; as a workaround, the polygons may be subdivided statically or dynamically so that they do not intersect each other.

Unlike the Z-buffer method, drawing of semi-transparent polygons can be handled almost correctly, except when polygons intersect.
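
A small sketch of the Z sort (painter's) approach: polygons are sorted by the depth of their center point and drawn back to front, so nearer polygons simply overwrite farther ones. The "drawing" here just prints the order.

    def center_depth(polygon):
        """Average z of a polygon's vertices, used as its sort key."""
        return sum(v[2] for v in polygon) / len(polygon)

    def painters_order(polygons):
        """Return polygons in draw order: farthest (largest depth) first."""
        return sorted(polygons, key=center_depth, reverse=True)

    # Three triangles at different depths (z grows away from the viewer).
    near = [(0, 0, 1), (1, 0, 1), (0, 1, 1)]
    mid = [(0, 0, 5), (1, 0, 5), (0, 1, 5)]
    far = [(0, 0, 9), (1, 0, 9), (0, 1, 9)]

    for poly in painters_order([near, far, mid]):
        print("draw polygon at depth", center_depth(poly))   # 9, then 5, then 1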

Z-buffer method

One of the hidden surface removal methods. When a large number of polygons overlap, a polygon that should be behind may end up drawn in front. To prevent this, when each polygon is drawn, the distance from the viewpoint is recorded for every pixel, and a pixel is drawn only if it is closer than the currently recorded depth. Unlike the Z sort method, rendering is usually done starting from the polygon closest to the viewpoint (because drawing of polygons hidden behind can then be skipped by the Z buffer test).

The Z buffer is the memory area that stores the depth. The Z-buffer method has the advantage of being easy to implement in hardware because the algorithm is simple, but it consumes more memory than the Z sort method because of the memory needed for the Z buffer. Semi-transparent polygons cannot be processed correctly by the Z-buffer method alone, because whether to paint a polygon's pixel is decided simply per pixel from the depth (in that case, opaque polygons are first drawn with the Z-buffer method, and semi-transparent polygons are then drawn with the Z sort method). Furthermore, for polygons that are very close to each other or intersect at a shallow angle, a phenomenon called Z-fighting occurs, in which hidden surfaces are not removed correctly, depending on the precision of the depth recorded in the Z buffer.

It is often used for real-time drawing, such as in games and in the preview displays of CAD software.
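
A minimal Z-buffer sketch: for each pixel, a fragment writes its color only if its depth is smaller than the value already stored. The framebuffer size and the fragments are toy values.

    WIDTH, HEIGHT = 4, 4
    depth_buffer = [[float("inf")] * WIDTH for _ in range(HEIGHT)]
    color_buffer = [["."] * WIDTH for _ in range(HEIGHT)]

    def write_fragment(x, y, z, color):
        """Draw the fragment only if it is closer than what is already stored."""
        if z < depth_buffer[y][x]:
            depth_buffer[y][x] = z
            color_buffer[y][x] = color

    # A far fragment ("B") drawn after a near one ("A") fails the depth test.
    write_fragment(1, 1, z=2.0, color="A")
    write_fragment(1, 1, z=5.0, color="B")
    write_fragment(2, 1, z=5.0, color="B")

    for row in color_buffer:
        print("".join(row))   # pixel (1, 1) stays "A"; (2, 1) becomes "B"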

Scan line

Scanline rendering (English: scanline rendering) is a method that divides the screen into horizontal lines and renders by calculating the depth for each line. Transparency can be expressed, and shadows can also be expressed by combining it with shading. A scan line is a scanning line. It is relatively fast, but the quality of the resulting image is basically inferior to ray tracing.

Ray tracing

Ray tracing (English: ray tracing) is a rendering method that traces light from the viewpoint back toward the light sources. A ray is extended from the viewpoint in the direction of each pixel to be drawn, and whether it intersects an object is determined mathematically. Illuminance is calculated from the direction vector toward the light source. Reflection and refraction are traced recursively based on the reflectance and the refractive index, and the calculation ends when a ray no longer intersects any object. This enables expressions such as reflection and refraction that cannot be obtained with scanline rendering. While photorealistic images can be obtained, rendering takes a lot of time, so it is common to simplify or limit the refraction calculation. In the field of real-time 3DCG, real-time ray tracing is being attempted as GPUs develop. A ray-traced 3D renderer was also adopted in Adobe After Effects CC.
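
A very small ray tracing sketch: for each pixel a ray is cast from the viewpoint and tested against a single sphere, and hits are shaded with a Lambert term toward one light. Recursion for reflection and refraction is omitted; the scene values are arbitrary.

    import math

    def normalize(v):
        n = math.sqrt(sum(c * c for c in v))
        return tuple(c / n for c in v)

    def ray_sphere(origin, direction, center, radius):
        """Nearest positive hit distance along a unit-length ray, or None if it misses."""
        oc = tuple(o - c for o, c in zip(origin, center))
        b = 2 * sum(d * o for d, o in zip(direction, oc))
        c = sum(o * o for o in oc) - radius * radius
        disc = b * b - 4 * c
        if disc < 0:
            return None
        t = (-b - math.sqrt(disc)) / 2
        return t if t > 0 else None

    eye = (0.0, 0.0, 0.0)
    sphere_center, sphere_radius = (0.0, 0.0, 5.0), 1.0
    light_dir = normalize((1.0, 1.0, -1.0))      # direction from the surface toward the light

    # " " = miss, "." = dark side, "+" = dim, "#" = bright
    rows = []
    for y in range(-5, 6):
        row = ""
        for x in range(-5, 6):
            ray = normalize((x * 0.08, y * 0.08, 1.0))
            t = ray_sphere(eye, ray, sphere_center, sphere_radius)
            if t is None:
                row += " "
            else:
                hit = tuple(e + r * t for e, r in zip(eye, ray))
                n = normalize(tuple(h - c for h, c in zip(hit, sphere_center)))
                shade = max(0.0, sum(a * b for a, b in zip(n, light_dir)))
                row += "#" if shade > 0.5 else "+" if shade > 0.0 else "."
        rows.append(row)
    print("\n".join(rows))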

Radiosity

Radiosity (English: radiosity) is a technology that expresses indirect light (the soft wrapping of light around objects) by assigning each polygon an amount of light energy and computing the mutual reflection between surfaces. It is a typical example of global illumination. The calculation takes an enormous amount of time, but in a scene composed of perfectly diffuse surfaces, once the calculation of light reflection between objects is complete, the result can be reused for rendering from other angles as long as the objects and light sources do not move. It applies technology developed in the field of lighting engineering to 3DCG rendering.

Photon mapping

Photon mapping (English: photon mapping) is a method in which photons modeling light are scattered from the light sources, and rendering is then performed by applying ray tracing to the resulting photons. It can express the texture and translucency of objects and media while keeping the amount of calculation down. As with radiosity, the calculation results can be reused.

Path tracing

As in ordinary ray tracing, rays are shot from the camera, and a larger number of secondary rays are shot starting from the points where they intersect objects. The colors and brightnesses obtained are averaged to give the color at that point. This technique is path tracing (English: path tracing). It can reproduce the diffuse reflection of light on object surfaces, but noise tends to appear in scenes with large differences in brightness.
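
A toy sketch of the averaging idea in path tracing: the brightness at a surface point is estimated by shooting many random rays into the hemisphere above it and averaging what they "see" (here, a made-up environment that is bright on one side and dark on the other). With few samples the estimate is noisy; with more samples it converges.

    import math
    import random

    def environment(direction):
        """A fake surrounding scene: bright toward +x, dark toward -x."""
        return 1.0 if direction[0] > 0 else 0.1

    def random_hemisphere_direction():
        """Uniform random direction on the hemisphere above a surface facing +z."""
        z = random.random()
        phi = 2 * math.pi * random.random()
        r = math.sqrt(max(0.0, 1.0 - z * z))
        return (r * math.cos(phi), r * math.sin(phi), z)

    def estimate_brightness(samples):
        total = 0.0
        for _ in range(samples):
            total += environment(random_hemisphere_direction())
        return total / samples

    random.seed(0)
    # Few samples: noisy estimate. Many samples: converges toward the true average (0.55).
    print(round(estimate_brightness(8), 3), round(estimate_brightness(10000), 3))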

Surface model and voxels

In contrast to the pixel, the smallest unit of a two-dimensional image, the smallest unit assembled on three-dimensional coordinates is called a voxel. Whereas much 3DCG software uses surface models that handle only the surface of an object, a volume model made of voxels also has interior content. It is mainly used in fluid calculations for liquids, smoke, and the like; nowadays expressions such as flames, explosions, lava, and hair are also possible. With a voxel model, the voxel density must be increased to obtain an accurate shape, which requires a large amount of memory.

Voxels may also be used to screen out the objects needed for rendering so that rendering can be processed efficiently. This is called voxel division.

Among open source projects, OsiriX is well known.

Cloth

A technology that enables a wide range of expressions of cloth, beginning with clothing. It simulates changes in the shape of the cloth caused by the movement of the character wearing the clothes and by the influence of wind, reducing the burden on designers of animating clothes by hand. The eventual aim is to make it possible to simulate all kinds of phenomena, including human skin.

Cloth (English: cloth) gives mass to the nodes of a mesh and links them with pseudo springs so that the cloth stretches and contracts under constraint conditions, reproducing the stretch and elasticity of fabric. Engineers have proposed various calculation methods for reproducing this texture.
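
A bare-bones mass-spring sketch of the idea in this paragraph: two point masses connected by one spring (a single cloth "link") are integrated with explicit Euler under gravity. The stiffness, damping, and time step are arbitrary illustrative values; a real cloth solver uses a whole grid of such links plus collision and constraint handling.

    GRAVITY = (0.0, -9.8, 0.0)

    class Node:
        def __init__(self, pos, pinned=False):
            self.pos = list(pos)
            self.vel = [0.0, 0.0, 0.0]
            self.pinned = pinned    # pinned nodes do not move (e.g. where the cloth is attached)

    def spring_force(a, b, rest_length, stiffness=50.0):
        """Hooke's law along the line between two nodes (force acting on node a)."""
        delta = [bb - aa for aa, bb in zip(a.pos, b.pos)]
        length = sum(d * d for d in delta) ** 0.5 or 1e-9
        magnitude = stiffness * (length - rest_length)
        return [magnitude * d / length for d in delta]

    def step(nodes, links, dt=0.01, damping=0.98):
        forces = {id(n): list(GRAVITY) for n in nodes}
        for a, b, rest in links:
            f = spring_force(a, b, rest)
            for i in range(3):
                forces[id(a)][i] += f[i]
                forces[id(b)][i] -= f[i]
        for n in nodes:
            if n.pinned:
                continue
            for i in range(3):
                n.vel[i] = (n.vel[i] + forces[id(n)][i] * dt) * damping
                n.pos[i] += n.vel[i] * dt

    # One pinned node and one free node hanging from it by a unit-length spring.
    top = Node((0.0, 1.0, 0.0), pinned=True)
    bottom = Node((0.0, 0.0, 0.0))
    links = [(top, bottom, 1.0)]
    for _ in range(500):
        step([top, bottom], links)
    print([round(c, 2) for c in bottom.pos])   # settles near (0, -0.2, 0): the spring sags under gravity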

The first movie in which cloth simulation was used extensively is the CG mouse film Stuart Little.

As production methods for putting clothes on a character, there are roughly two approaches: creating pseudo pattern pieces, joining them together, and draping them over the character (a technique implemented in Maya), and modeling the clothes like an ordinary model and then converting them into cloth. Maya now includes functions for this kind of workflow as standard.

There is also a cloth simulator that is a further development of the cloth simulation developed in the project for Square's film Final Fantasy. It is very fast and stable, and it rarely produces the strange simulation results seen in Maya, in which pieces of cloth repel each other and go out of control.

API for 3DCG

Real-time use

Real-time 3DCG is used for visualization of scientific simulations and for interactive applications such as simulators and 3D CAD operation, and 3DCG is also becoming common in computer games (video games, PC games). APIs dedicated to 3DCG mainly provide programmers with a way to access, through an abstraction layer, the graphics hardware (GPUs, graphics chips, video cards / graphics cards) that accelerates drawing, chiefly in PC games, thereby reducing the burden on programmers. The following APIs are used especially often on devices that require a generalized interface, such as personal computers, smartphones, and other mobile devices.

OpenGL
An open-standard API. It is supported by various OSes and has excellent portability. It is widely used not only for games but also for applications such as 3DCG creation tools and CAD. OpenGL ES exists as a subset for mobile and embedded environments, and WebGL as a subset for web browsers.
Direct3D
One of the components of DirectX, Microsoft's multimedia API for its OSes. It is especially well suited to 3D game development, thanks to vendors' hardware optimization of consumer GPU products.

By supporting these generalized APIs on their respective graphics hardware, hardware vendors make it possible to run the same program on different hardware. For dedicated game consoles, generalization and abstraction are not always necessary, so in most cases a proprietary low-level API optimized for each device is provided.

OpenGL 1.5 / Direct3D 8.0 and later support programmable shaders, allowing programmers to customize the shading process with a shading language. Together with improvements in hardware performance, programmable shaders have dramatically improved the quality of real-time 3DCG.

In addition, since the appearance of AMD's Mantle, APIs that emphasize drawing efficiency have emerged that allow low-level hardware control by lowering the degree of hardware abstraction, much like the APIs for dedicated game consoles: Apple's Metal, Microsoft's Direct3D 12, and the Khronos Group's Vulkan.

For production use

RenderMan Interface Specification
A software interface for production rendering developed by Pixar. It can be customized with the RenderMan Shading Language.

3DCG in various countries

The country where research and practical application of 3DCG are most advanced is the United States. In addition to active research, such as SIGGRAPH sponsored by the ACM, the Hollywood movie industry provides a backbone: computer science there has pioneering researchers, production companies such as Pixar produce 3DCG animation in large quantities, and 3DCG technology is actively used in live-action works. Particularly important research achievements in the United States include the head-mounted display by Ivan Sutherland (1966); texture mapping and the Z buffer (both 1974) and subdivision surfaces (1978) by Edwin Catmull; environment mapping (1976), the Blinn-Phong reflection model (1977), and bump mapping (1978) by Jim Blinn; the geometry engine by James Clark (1980); recursive ray tracing by Turner Whitted (1980); and the rendering equation and path tracing by Jim Kajiya (1986). In 1995 the first full-3DCG feature film, Toy Story, was produced.

The Frenchman Pierre Bezier invented the Bezier curve (1970), and Henri Gouraud devised Gouraud shading (1971).

The Vietnamese-born Bui Tuong Phong devised the Phong reflection model and Phong shading (1975).

In Canada, the first full-3DCG 30-minute TV animation series, ReBoot (1994), was produced. In France, the likewise full-3DCG TV anime series Insektors (1994) was released.

The Dane Henrik Wann Jensen devised photon mapping (1996).

In Japan, Koichi Omura and colleagues at Osaka University put metaballs to practical use (1982), and Tomoyuki Nishita of Fukuyama University and colleagues devised radiosity (1985) at almost the same time as Michael F. Cohen and colleagues.

In computer games, computers are the mainstream platform in the United States, which makes it easy to respond to technological innovation; in that respect the United States ended up overtaking Japan, the leader of the world video game industry (Japan adopted 3DCG early, with Sega's Virtua Racing, the Virtua Fighter series, the PlayStation, and so on, but much of the accumulated technology is said to have depended on dedicated arcade boards and home game consoles whose hardware performance stays fixed for several years).

In Japanese anime, the theatrical film Golgo 13 and the TV anime Fawn Story (both 1983) were among the earliest works in the world to partially introduce 3DCG. The CG part of Golgo 13 was produced by Toyo Links with a 3DCG system developed by a team including Koichi Omura of Osaka University; domestic systems were being developed at the time, but such efforts gradually died out. Among full-3DCG works in Japan, the TV movie VISITOR (1998) and, for anime-style imagery, the theatrical film Appleseed (2004) are cited as the first feature-length examples. Most full-3DCG works for TV are short pieces of a few minutes, but a small number of 30-minute TV series, such as SD Gundam Force (2004), have also been produced.

Against the background of its manga culture, Japan has long been accustomed to line-art expression, and since the latter half of the 1990s, expressions in which animators blend 3DCG into hand-drawn animation have developed[2]. 3DCG is often used for backgrounds, mechanical robots, crowd scenes, and other parts that require a great deal of drawing effort, and in recent years, as the expressive power of toon rendering has improved, works in which character depiction is partly done in 3DCG (such as the Precure series, from the 2009 series onward) have also appeared.

Footnotes

  1. ^ The Phong reflection model is a different technology from Phong shading.
  2. ^ "Will full CG animation take root in Japan?". CGWORLD.jp / Enhanced-Endorphin, Born Digital / Toei Animation (2012-2013). Retrieved November 13, 2019.


External links

  • CGWORLD.jp -- CG / video specialized information site (Born Digital)
  • Enhanced-Endorphin -- CG animation information site. A list of major 3DCG works in Japan and the history of 3DCG video development are posted. (Toei Animation)

 
