Software

ACVT
Submitted by Administrator on Tuesday, 01/10/2007
Q&A: ACVT
Anton van den Hengel

Oct. 2007
 

"So VideoTrace is designed to work with whatever video you need to grab an object from, rather than specifically shot image sets. [...] This means that you can create an accurate model from a video in minutes with VideoTrace, whereas using PhotoModeller requires that you provide much more information manually, making it a much more arduous process. "

 

VideoTrace

  
Q1: Can you introduce the VideoTrace project and its purpose?
A1: VideoTrace is a system for interactively generating realistic 3D models of objects from video: models that might be inserted into a video game, a simulation environment, or another video sequence. The user interacts with VideoTrace by tracing the shape of the object to be modelled over one or more frames of the video. By interpreting the sketch drawn by the user in light of 3D information obtained from computer vision techniques, a small number of simple 2D interactions can be used to generate a realistic 3D model. Each of the sketching operations in VideoTrace provides an intuitive and powerful means of modelling shape from video, and executes quickly enough to be used interactively. Immediate feedback allows the user to rapidly model those parts of the scene which are of interest, to the level of detail required. The combination of automated and manual reconstruction allows VideoTrace to model parts of the scene that are not visible, and to succeed in cases where purely automated approaches would fail.
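The core geometric step behind this kind of tracing, lifting a traced 2D vertex into 3D using the camera poses a tracker recovers, can be illustrated with standard linear triangulation. The sketch below is not VideoTrace's actual algorithm, just a minimal NumPy illustration: a vertex traced in two frames, combined with the two 3x4 projection matrices from a camera tracker, pins down a single 3D point.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation.

    P1, P2 : 3x4 camera projection matrices recovered by a camera tracker.
    x1, x2 : the same traced vertex as (u, v) pixel coordinates in two frames.
    Returns the 3D point minimising the algebraic reprojection error.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],   # u1 * (row 3) - (row 1) = 0
        x1[1] * P1[2] - P1[1],   # v1 * (row 3) - (row 2) = 0
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                   # null-space direction of A
    return X[:3] / X[3]          # dehomogenise

# Tracing each vertex of a polygon in two frames and triangulating them
# one by one yields the 3D face that the 2D sketch implies.
```

The interview's point is that the system interprets the user's sketch in light of exactly this kind of recovered 3D information, so a few 2D strokes suffice to fix a 3D shape.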
  
Q2: What is the difference between VideoTrace and existing image-based 3D reconstruction tools such as Realviz ImageModeler?
A2: VideoTrace allows the user to model arbitrary objects in normal video. Pretty much any video that you can apply a camera tracker to can be used as input to VideoTrace. That makes it a lot more flexible than something like ImageModeler, which requires that you take a set of photographs of the object you want to model under a very controlled (and constrained) set of circumstances. So VideoTrace is designed to work with whatever video you need to grab an object from, rather than specifically shot image sets. There are some similarities with PhotoModeler, but the interaction in VideoTrace is much more intuitive, and more powerful. This means that you can create an accurate model from a video in minutes with VideoTrace, whereas PhotoModeler requires that you provide much more information manually, making it a much more arduous process.
The flexibility and power of the VideoTrace modelling process means that it can be used to generate models for whatever purpose you have in mind, or per-pixel depth maps for compositing, from whatever video you need to use it on. No special cameras are needed, no laser scans, no tape measures, just a simple intuitive tracing process.
  
Q3: The VideoTrace demo is very impressive. How long has the system been in development?
A3: The system has been in active development for 2 years, but it builds on work that the group has carried out in this area for more than 10 years. We've just (as in minutes ago) released the first beta version to a limited set of testers, so it's certainly a very usable system, and we're hoping to keep developing it for a few years to come.
  
Q4: Such an application must be very CPU-hungry. Can you tell us the hardware required to run VideoTrace? Does its architecture take advantage of multi-core processors?
A4: It's certainly not going to run on your cell phone any time in the near future, but you really don't need all that powerful a machine to use it. Most of the work I've done with it has been on my laptop, which really isn't anything special. All you really need is a graphics card with OpenGL support, and it doesn't even need to be the latest version.
  
Q5: In which format(s) does VideoTrace export the 3D models?
A5: VideoTrace exports in a few formats, but the most useful one is VRML. Most packages import VRML, and there are translators for pretty much every other relevant format you can think of. The current beta has a limited set of import formats, but the next version will import from more of the camera trackers. We've implemented the functionality already; it's really just a question of documenting it.
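For readers who haven't met VRML: the exported files are plain text and easy to generate or inspect by hand. As a rough, hypothetical illustration (not part of VideoTrace's exporter), here is a small Python helper that writes a mesh as a VRML 2.0 IndexedFaceSet, the node typically used for polygon models.

```python
def write_vrml(path, points, faces):
    """Write a polygon mesh as a minimal VRML 2.0 (VRML97) file.

    points : list of (x, y, z) vertex coordinates.
    faces  : list of tuples of vertex indices; VRML terminates each
             face's index list with -1.
    """
    with open(path, "w") as f:
        f.write("#VRML V2.0 utf8\n")
        f.write("Shape {\n  geometry IndexedFaceSet {\n")
        f.write("    coord Coordinate { point [\n")
        f.write(",\n".join(f"      {x} {y} {z}" for x, y, z in points))
        f.write("\n    ] }\n    coordIndex [\n")
        f.write(",\n".join(
            "      " + ", ".join(str(i) for i in face + (-1,))
            for face in faces))
        f.write("\n    ]\n  }\n}\n")

# A single unit triangle:
write_vrml("tri.wrl", [(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)])
```

Most 3D packages will open the resulting .wrl file directly, which is exactly why VRML makes a convenient interchange format.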
  
Q6: How did researchers from the Australian Centre for Visual Technologies and the Oxford Brookes Computer Vision Group come together on this project?
A6: We've known each other for a surprisingly long time, but really the primary motivator was that Phil Torr (from Oxford Brookes) partly supervised the PhD of Anthony Dick (from Adelaide). The collaboration has been extremely positive, and one for which we've just secured another 3 years of research funding.

  
Q7: Does VideoTrace make use of existing software libraries? Can you tell us which ones?
A7: It uses SSL and Qt, but that's about it.
  
Q8: VideoTrace technology could provide a tremendous addition to existing 3D modeling packages. Have you already been contacted by open source or commercial software providers?
A8: Yes, we're negotiating with quite a large number of companies about the future of VideoTrace, but we haven't decided on anything yet. It seems to take a while to get these things organised.
  
Q9: What improvements are planned for the VideoTrace system? Can you give us a brief roadmap?
A9: We've got a lot planned for improving the fidelity and flexibility of VideoTrace over the next few years. We're looking initially at using interactive dense matching (a technique from computer vision) to improve the way we handle curved surfaces. We're looking at how we might interactively de-light and re-light objects that are cut and pasted between video sequences. We may look at interactive camera tracking; there's a long list.
Serious Games
Submitted by Administrator on Tuesday, 01/10/2007
Q&A: Virginie Vega,
Project Manager, Serious Games Sessions Europe,

Oct. 2007
 

"En plus des traditionnels secteurs de la défense et de la sécurité civile, les Serious Games sont désormais utilisés par les professionnels de la santé, de la Science, par les collectivités publiques mais aussi par de nombreux organismes de formation ou toute autre industrie généraliste."

Serious Games 2006

  
Q1: Who is Serious Games Sessions Europe aimed at?
A1: First of all, it is a trade event that will take place on 3 December 2007 in Lyon (Cité des Congrès).
The event primarily targets the video game sector. It also concerns a broad audience from traditional or non-technological sectors such as military administrations, healthcare, civil security, local authorities and businesses.
  
Q2: What are the event's main strengths?
A2: The objective of Serious Games Sessions: to exchange, debate, discover and test solutions that use video game technologies.
The event will build on the strengths that have made it a success: a series of conferences led by the sector's foremost experts, an exhibition space open to all, and demonstrations accessible throughout the event. The goal is to offer visitors as much information as possible. Besides discovering a wide range of applications, visitors and participants will have the opportunity to meet the sector's leading professionals.
  
Q3: We know the historical applications of serious gaming (simulators). What are the new applications?
A3: Indeed, the idea is not new. Widely used by the US Army to make the defence sector more attractive, and by the aeronautics and automotive industries through virtual simulators, Serious Games are no longer limited to these applications.
In addition to the traditional defence and civil-security sectors, Serious Games are now used by healthcare and science professionals, by public authorities, and also by many training organisations and other mainstream industries. Classic educational games, which appeared in the 1970s and 1980s, were aimed at children and teenagers. Serious Games, by contrast, offer genuine training aimed not only at children but at a much broader audience, including the corporate world. Note that more than 40% of the US training market will use simulation in 2008, according to estimates from the research firm International Data Corporation. It is also important to point out that serious gaming is no longer limited to simulation: the use of online multiplayer worlds for training and learning is the main illustration of this.

  
Q4: Are video game publishers interested in this market?
A4: In reality, no. Serious Games have nothing to do with the business of publishers, who serve entertainment and the general public. Remember that Serious Games are bespoke products aimed at professionals: the software is developed at the request of companies, for their employees. Development studios can reposition themselves on serious gaming and see growth opportunities in it, but publishers are not concerned by this market.

  
Q5: Which players will be present at this edition?
A5: Two major French Serious Games studios will take part in this 3rd edition. Accompanied by their clients (AXA and L'Oréal respectively), Daesign and Net Division will explore the stakes of serious gaming and, through demonstrations, present the results obtained by end users of Serious Games. The two companies will attempt to show how serious gaming has met their clients' training expectations and how they intend to pursue this new approach. ESC Chambéry will also be present to share its research work on this topic.

  
Q6: Is Serious Games Sessions a national or an international event?
A6: Given the success of previous editions, Lyon Game decided to grow the event and extend it to all national and European developers. Today serious gaming attracts a large number of national and international players. With this openness in mind, a call for projects was launched to recruit the best international speakers. This year many experts will be present, such as Doug Wathley (United States), Kam Memarzia (UK), Per Backlund (University of Skövde, Sweden) and Wi Jong-Hyun (Chung-Ang University, South Korea). Serious Games Sessions 2007 will be placed under the sign of diversity!
  
Q7: Are "enriched 3D" and "virtual reality" applications Serious Games? How is this new concept defined?
A7: Virtual reality applications that use technologies and know-how from video games (whether Flash or otherwise) can indeed be considered Serious Games. However, some virtual reality applications do not rely on these technologies, so not all of them belong to the world of serious gaming.
KWorld
Submitted by Administrator on Thursday, 01/08/2007
Q&A: KWorld
Petr
August 2007
 

 

 

 

"You can create virtual 3D presentations of various environments, tools, working sets, etc. You can present your car or your kitchen in 3D as executable presentation, screensaver or even in html page."

 

K-World editor

  
Q1: Could you please give a brief description of KWorld?
A1: KWorld is a real-time 3D scene editor oriented towards small interactive environments. It is a scene editor, not a model editor: you import models, create effects, build the project logic, and present the result.
  
Q2: What kind of 3D content can be created with KWorld?
A2: You can create virtual 3D presentations of various environments, tools, working sets, etc. You can present your car or your kitchen in 3D as an executable presentation, a screensaver, or even in an HTML page.
  
Q3: Is it user-friendly? What is the learning curve of KWorld?
A3: It is targeted at non-programmer users. I hope that everyone can make a nice scene themselves after completing the tutorials on the web page.
  
Q4: How can a graphic artist create interactions with objects?
A4: All interaction and scene logic is done visually in KWorld, so the artist doesn't need to know any scripting or programming language. You simply drag and drop the 3D object into the logic view in KWorld and connect the appropriate connectors between entities to create the requested interaction.
  
Q5: KWorld 3D presentations can be embedded in a web page. Do you plan to support browsers such as Firefox?
A5: I was not able to make it work in Firefox. If you can get ActiveX working in Firefox, your KWorld presentation should work. This page could help: http://www.iol.ie/~locka/mozilla/plugin.htm
  
Q6: KWorld has an open plugin interface. Does that mean it comes with a full SDK?
A6: I am planning to publish the source code of all the plugins, but first I have to clean some nasty words out of it ;) A lot of things in KWorld are done by plugins, and I hope this feature can help many people with their own specialized 3D presentation needs.
  
Q7: KWorld imports geometry as DirectX files. Does it support characters with animated bones?
A7: Yes, it supports character import and animations. It does not support bone editing.
  
Q8: What are the main features of KWorld regarding 3D rendering (antialiasing, shaders...)?
A8: These features are mainly provided by plugins. Antialiasing is not supported by default but can be enabled in a config file. The main current features include a particle system, shaders, skyboxes, volume textures and PRT. The 3D sound object is also a nice effect.
  
Q9: KWorld is freeware. Can it be used for commercial projects?
A9: Sure, but I would like to know about it :)
AMD: Making of Ruby
Submitted by Administrator on Monday, 01/07/2007
Q&A: Callan McInally, AMD
ATI/AMD Ruby Demo
July 2007
 

"Our demo engine, called the Sushi Engine, was designed and developed in house. Creating our own engine allows us to target and focus on the cutting edge of real-time computer graphics hardware (unlike game engines that have to support multiple generations) and also allows us to gain valuable insights into the fine art of graphics engine architecting and development. "


Q1: The latest ATI Radeon Ruby: Whiteout demo pushes the boundaries of ultra realism. What is ATI's aim in producing such technology demos?
A1: Get to know the challenges that next-gen game developers will be facing, solve some of the problems that current-gen game developers are already facing, and showcase the power and features of our latest GPUs. But most of all, we do it because we love graphics and it's just too much fun to pass up.

  
Q2: How much time did the making of the demo take?
A2: About 2 years (Spring 2005 to Spring 2007), but this also includes developing a new engine and new art tools (which we will re-use for the next few years/demos/etc.).

 

  
Q3: Does the demo make use of a home-made, ATI-internal 3D engine? Or does it leverage a game engine such as the Unreal Engine or the CryEngine?
A3: Our demo engine, called the Sushi Engine, was designed and developed in-house. Creating our own engine allows us to target and focus on the cutting edge of real-time computer graphics hardware (unlike game engines that have to support multiple generations) and also allows us to gain valuable insights into the fine art of graphics engine architecting and development. We can experiment with new rendering techniques in a relatively low-risk environment (a game production environment is often hectic, and it can be difficult to find time to try new things that may not work out in the end). The lessons we learn become valuable information for our hardware and software architects as well as external game developers.
  
Q4: In the demo, the snow's rendering is quite astonishing. How did you reach such a quality level?
A4: First, we have fantastic artists who aren't scared of diving into the shader code to make adjustments if they see fit. Second, the HD 2xxx series provides enormous amounts of compute power, which enabled us to create highly configurable, procedural shaders that give the mountains a very natural and non-repetitive look. If the mountains were to be hand-painted, the quality would have been much lower; there simply isn't enough time in the day to paint all that detail into all those mountains. This kind of technique is exactly how such a virtual landscape would be created for a feature film.

In order to achieve the right look, we used subsurface scattering techniques to simulate the complex interactions that occur between snow and light. In addition to subsurface scattering, more advanced lighting models were used. Any time you are lighting something outdoors it is important to take sky light into account. When light from the sun reaches Earth, it scatters due to the gases and other particles that make up our atmosphere. So when you place an object outdoors, there's light coming at it from all different directions (not just the direction of the sun), and so the sky itself acts as a giant blue-ish area light source.
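As a rough illustration of that last point, real-time renderers often approximate sky light as a hemisphere light: a blend between a sky colour and a ground-bounce colour, driven by how much the surface normal points upward, added on top of the direct sun term. The snippet below is a generic textbook sketch of the idea, not AMD's actual shader code; all constants are made up.

```python
import numpy as np

SKY    = np.array([0.45, 0.60, 0.90])   # blue-ish dome colour (hypothetical)
GROUND = np.array([0.25, 0.22, 0.20])   # bounce light from the ground
SUN    = np.array([1.00, 0.95, 0.85])   # direct sunlight colour

def outdoor_lighting(normal, sun_dir, albedo):
    """Sun (directional) plus sky (hemisphere) lighting.

    normal  : surface normal, y-up.
    sun_dir : unit vector pointing from the surface toward the sun.
    albedo  : surface reflectance per colour channel.
    """
    n = normal / np.linalg.norm(normal)
    direct = SUN * max(float(np.dot(n, sun_dir)), 0.0)  # Lambert sun term
    w = 0.5 * (n[1] + 1.0)                 # 1 = facing the sky, 0 = facing down
    ambient = (1.0 - w) * GROUND + w * SKY  # the sky as one huge area light
    return albedo * (direct + ambient)
```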

  
Q5: The skin and lips of the Ruby character seem to come right out of an animated feature film. Is it difficult to achieve?
A5: Animated characters are always a challenge. The Ruby character uses many of the same animation techniques that are employed by the film industry. Our artists created an enormous set of face shapes (which you can think of as poses or expressions) and then picked and chose from these shapes, blending in a smile here and a wink there to build up each and every frame of animation. To speed up the process we partnered with ImageMetrics, a company that captures the facial performance of real actors using digital video and then uses complex algorithms (similar to facial recognition) to pick and choose from the artist-created face shapes and blend them together to mimic our actress's performance. In addition to all this, we also developed a technique that allows artists to animate facial wrinkles on Ruby. Though it may seem like a minor detail, facial wrinkles are an important part of facial expression. A wrinkled brow is all that's needed to let you know that a person is deep in thought, and a small crease on Ruby's cheek helps let you know that her smile is genuine.
  
Q6: Some methods used to be available only in pre-calculated rendering, e.g. subsurface scattering, ambient occlusion, environment reflection. Are they now available to real-time content makers?
A6: Yes, in fact all three of your examples are used in the Whiteout demo. These technologies are available to real-time content makers, but the techniques are also constantly improving. For example, we are on our 4th generation of subsurface scattering technology for human skin (Ruby 1 to Ruby 4). All three of those techniques fall under the "global illumination" umbrella (where the lighting calculation performed at the surface of an object depends on the global, surrounding scene), and this area of real-time computer graphics continues to be a hot area of research, with new advances being developed all the time.
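One of the simplest members of that family, often used as a first, cheap approximation of subsurface scattering for translucent materials such as skin or snow, is "wrap" lighting. The sketch below is the generic textbook version, not the technique used in any of the Ruby demos.

```python
def wrap_diffuse(n_dot_l, wrap=0.5):
    """Diffuse term with 'wrapped' falloff.

    Instead of cutting light off where the surface turns away from it
    (plain Lambert, wrap=0), the lighting wraps past the terminator,
    mimicking light that scatters beneath a translucent surface.
    wrap=1 lights the entire sphere.
    """
    return max((n_dot_l + wrap) / (1.0 + wrap), 0.0)

# A point just past the terminator (n_dot_l < 0) still receives light:
print(wrap_diffuse(-0.2))   # ~0.2 rather than 0.0
```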
  
Q7: Which DCC tools were used in the making of this demo?
A7: Maya, ZBrush, Modo, World Machine, Photoshop, and some of our own custom tools such as CubeMapGen, ATI Normal Mapper, and Tootle.
  
Q8: Can this demo run in real time on a Radeon HD 2900XT?
A8: Yes.
  
Q9: Can we expect games and real-time productions to reach such visual excellence? What will be left to the pre-calculated 3D field?
A9: Absolutely. There are still plenty of problems for offline simulation to tackle. Rendering is one aspect of offline CG; there is also physics and other forms of simulation. Interestingly, one shift we are seeing in the offline rendering field is that large-scale CPU-based render farms are being replaced by arrays of GPUs. This makes sense because of the vast compute power offered by GPUs.
MADLIX
Submitted by Administrator on Wednesday, 01/05/2007
Q&A: Khashayar Farmanbar
Chief Executive Officer
Agency9
May 2007
 

 

 

"The idea is to let users insert 3D into their own spaces, such as web pages, google pages, blogs, portfolio pages, fan pages and more."

 

MADLIX.com

  
Q1: What is MADLIX?
A1: The idea is to let users insert 3D into their own spaces, such as web pages, Google pages, blogs, portfolio pages, fan pages and more. We want the whole internet community to be able to take advantage of the solution; hence we made sure to find a way of connecting 3D artists with end users.
MADLIX consists of a 3D player that runs smoothly inside all Java-enabled browsers with no need for a custom plug-in or application installation. It uses OpenGL to ensure high performance, but also offers a fallback to software rendering if the hardware doesn't support 3D acceleration.
The MADLIX gallery at www.madlix.com is the heart of the product: 3D artists can submit their artwork to it, and everyone is free to insert the 3D content of their choice into their web space.
  
Q2: How can I publish my 3D models on the web?
A2: We provide a simple and powerful tool for artists to publish their 3D artwork on MADLIX. It is available for download at the MADLIX website.
The exporter tool handles COLLADA files and also contains a plug-in for Autodesk Maya. The tool also includes a standalone viewer that handles MADLIX files (.mlx) as well as COLLADA files (.dae).
We've chosen to use the MADLIX file format, which is secured and encrypted in order to ensure file integrity and security for artists who do not want to see their artwork wander around. Gradually we will add more functionality; one addition will probably be the option of publishing 3D artwork in the COLLADA format.
  
Q3: Can I publish the 3D models on my blog?
A3: Yes. That is one of the main features of MADLIX.

We discovered that many web sites were like a candy display window: you can look at the content, but you cannot take it with you. MADLIX is the candy shop where you can take what you see with you to any web space. We're constantly adding support for communities and sites, and you can easily insert the artwork by copying the embed tag and pasting it into the HTML code of any web page.

  
Q4: Why use COLLADA?
A4: We want to offer as easy and reliable a way as possible for 3D artists to work with MADLIX. Over the last year COLLADA has gained massive support, and there are tools available for almost all major 3D DCC packages to export content to the COLLADA format. Agency9 was one of the first companies to fully commit to COLLADA. We believe that standards available to anyone help create a healthy industry.
  
Q5: I'm using Maya. What are the steps for publishing 3D?
A5: 1. Download and install the MADLIX export wizard.

2. Create your 3D model, animations, etc. Click the preview button to see what the result will look like; you can preview the whole scene or only selected objects.

3. When you are satisfied, click the export button; again you can choose to export all or export selected. Follow the instructions on the screen.
  
Q6: Can I upload very large files?
A6: Yes, we have no limitations on file size. The only drawback is that the model's download time gets longer, and lower-end machines might have trouble rendering the model at decent frame rates. Generally we recommend keeping the size around 0.5-1 MB.
  
Q7: Is it possible to publish animated characters?
A7: Yes, character skinning and animation are fully supported. Characters and meshes that move might wander off the screen, of course. For now we have not implemented tracking, so users might have to keep track of the characters themselves.
  
Q8: What will be the extended features of MADLIX Pro?
A8: The feature set for the coming MADLIX Professional is not final yet. It will be decided and finalized based on the feedback we receive from users and partners who use MADLIX. One feature we do know of is the possibility to publish 3D content without submitting it to the MADLIX content gallery. We would appreciate feedback on what features professional users would like to see in the coming MADLIX Pro.
  
Q9: About AgentFX: is it possible to create online games with modern features (physics, shaders, shadows, AI)?
A9: Yes, definitely. AgentFX v3 has built-in support for most major shading languages such as Cg and GLSL, as well as other high-end features such as shadows, HDR and water rendering.
AgentFX is a graphics engine, and for features that lie outside the graphics domain you may need to use other third-party tools. As an example, AgentFX has been successfully used with both PhysX and ODE for physics, as well as with Matlab for more advanced simulations.
  
Q10: AgentFX is based on Java. Are development costs lower than for C++ engines?
A10: From our own experience, developing in Java is much faster than in traditional languages like C/C++. Development in a memory-managed environment tends to be much less error-prone and makes debugging easier.
  
Q11: Is it possible to publish AgentFX content in a web page (applet), like MADLIX?
A11: Yes, with AgentFX v3, which powers MADLIX.