Golden Week
This week is Golden Week, so we will be taking a break from the blog, but do not worry, we will come back next week with more interesting articles for you to read.
By Takeshi Sasaki
Going back to explaining a little of what we do here in the lab, we have a great introduction to RT Middleware, along with a short video, by our resident researcher.
Recently, OMG (Object Management Group, http://www.omg.org/) adopted the RTC (Robotic Technology Component) Specification, which defines a component model for the development of robotic systems.
In module- or component-based systems, independent functional elements (modules or components) are developed first, and systems are then built by combining them.
This modularization increases the maintainability and reusability of the elements.
Moreover, flexible and scalable systems can be realized, since a system can be reconfigured by adding or replacing only the relevant components.
OpenRTM-aist (http://www.is.aist.go.jp/rt/OpenRTM-aist/) is a middleware that complies with the standard.
It also includes a template code generator, which produces the source code of a component from that component's specification (e.g. the number of I/O ports), and a system design tool, which provides a graphical user interface for changing the connections between components and starting/stopping the system.
The attached video shows an example of system construction using OpenRTM-aist.
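To give you a feel for what such a component looks like in code, here is a minimal Python sketch modeled on the ConsoleIn example distributed with OpenRTM-aist. The metadata values are placeholders, and exact names may differ slightly between middleware versions, so treat it as illustrative rather than definitive.

```python
#!/usr/bin/env python
# Minimal OpenRTM-aist (Python) component sketch, modeled on the
# ConsoleIn example that ships with the middleware. It reads a number
# from the console and writes it to an OutPort on each execution cycle.
import sys

import RTC            # IDL-generated data types (e.g. TimedLong)
import OpenRTM_aist   # the middleware itself

# Component specification: the kind of metadata the template code
# generator fills in for you (the values below are placeholders).
consolein_spec = ["implementation_id", "ConsoleIn",
                  "type_name",         "ConsoleIn",
                  "description",       "Console input component",
                  "version",           "1.0",
                  "vendor",            "example",
                  "category",          "example",
                  "activity_type",     "DataFlowComponent",
                  "max_instance",      "1",
                  "language",          "Python",
                  "lang_type",         "script",
                  ""]


class ConsoleIn(OpenRTM_aist.DataFlowComponentBase):
    def __init__(self, manager):
        OpenRTM_aist.DataFlowComponentBase.__init__(self, manager)

    def onInitialize(self):
        # Declare one output data port named "out" carrying a TimedLong.
        self._data = RTC.TimedLong(RTC.Time(0, 0), 0)
        self._outport = OpenRTM_aist.OutPort("out", self._data)
        self.addOutPort("out", self._outport)
        return RTC.RTC_OK

    def onExecute(self, ec_id):
        # Called periodically while the component is active.
        self._data.data = int(sys.stdin.readline())
        self._outport.write()
        return RTC.RTC_OK


def MyModuleInit(manager):
    profile = OpenRTM_aist.Properties(defaults_str=consolein_spec)
    manager.registerFactory(profile, ConsoleIn, OpenRTM_aist.Delete)
    manager.createComponent("ConsoleIn")


if __name__ == "__main__":
    mgr = OpenRTM_aist.Manager.init(sys.argv)
    mgr.setModuleInitProc(MyModuleInit)
    mgr.activateManager()
    mgr.runManager()
```

A second component with a matching InPort could then be wired to this one, and the whole system started and stopped, from the graphical system design tool mentioned above.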
The mere sound of the word robotics makes us think of wires, electronics, and artificial intelligence. But what would happen if we were to extend our imagination beyond these concepts?
Theo Jansen calls himself a kinetic sculptor and creates a variety of creatures that use wind to move.
Presented at TED in 2007 and featured in a BMW advertisement, his work is inspiring and drives us to search for answers in unlikely places.
Labels: Robotics
Do any of you remember the movie Minority Report? Do you remember the user interface that was used to get information out of the computer?
A lab at MIT is working jointly with some companies on developing such an interface, which was inspired precisely by this laboratory's work.
We hope to create something similar, though maybe using a different user interface system. Who knows, maybe we are on the brink of something great.
The system's name is G-Speak.
Labels: AugmentedReality, Media
Hello readers:
This time we bring a new infomercial from a company called Festo. The idea is not to promote them, but to show what they have managed to build in a controlled environment.
The concept is very cool, and it reminds me of the movie Fantasia 2000; just watch it and, with that in mind, try to expand your imagination.
Ahh, and by the way, we have our semi-weekly meeting tomorrow. We will be sure to keep you posted.
Labels: Robotics
Yokohama port was one of the first five ports opened to foreign trade, and it is currently celebrating its 150th anniversary.
A lot of events are taking place, but of special interest is a giant robotic spider that has been crawling the streets for the past three days.
The spider is human-controlled, but it would be great if it could be controlled automatically via sensors and a path recognition system.
I leave you with a video.
Labels: Robotics
Recently, over Facebook, a friend showed me a link of great interest: http://academicearth.org/. This site also contains a large number of free lectures, presented in what appears to be a more orderly and clean way than the ones previously posted.
The other link is YouTube's very own YouTubeEdu (http://www.youtube.com/edu), where they compile all the content that different universities have on the site.
So now you know: if you have some free time, or want to learn a little more, give yourself the luxury of taking classes at some of the best universities.
Jennifer Raymond at Stanford University has a deep interest in how the brain works; she is currently developing an electrical model with which we will be able to simulate the way it works.
Should this attempt prove successful, we will finally be able to understand how we remember someone's name or address, and how we record the way we move and process information.
Stanford's online talks bring us a brief explanation of this topic.
By Adam Csapo
This time we will introduce one of the research topics in our lab.
When a remotely teleoperated robot touches a surface, the generally widespread solution today is to use haptic feedback devices to convey the haptic sensations to the user. However, even state-of-the-art haptic feedback devices have limitations in terms of portability and in terms of levels of operation. This is in contrast with the human auditory system, which is highly sensitive to even minute variations in sound, and which can be put to use through simple, commercially available headsets. The question arises: to what extent can we substitute haptic sensations through audio? If such substitution were possible, it would be useful not only because tactile experience is important in our daily lives when we manipulate objects, but also because the ideas can hopefully be generalized and extended to other types of feedback parameters, not just those derived from surface textures.
The goal of this research in the long term is to find out more about the limits of conveying tactile feedback information using sound. To this end, we intend to create a hardware-software framework, referred to as HaptaSone, which will allow users to experiment with different kinds of sounds that are meaningful to them, and test whether or not they can be used to provide percepts comparable to those perceived when touching surfaces. In a way similar to Bach-y-Rita's findings in the haptic feedback of visual information [1], such percepts can hopefully be achieved thanks to the extremely high plasticity of the brain, once users perceive the audio output of HaptaSone as a direct consequence of their own actions (i.e., touching different surfaces).
HaptaSone is intended to be an interactive framework which will allow users to attach measuring devices such as laser profilometers and infrared temperature sensors to their hands and listen to audio feedback generated based on texture and temperature properties of the surface they are touching. The block diagram of HaptaSone can be seen in figure 1. As can be seen in the diagram, after the acquisition of the sensory data, soft computing methods such as artificial neural networks will be used to infer the perception that the user would get when directly touching the given surface. Ideally, this inference will be based on training data acquired from actual psychophysical experiments in which users will be asked to compare different surfaces in terms of hardness, roughness, sharpness, and temperature.
Based on the acquired surface descriptors, the Matching and Pairing module will select a set of sounds from the sound database (which contains predefined sounds as well as labeled sounds that can be supplied by the user) and pair each surface parameter with a certain auditory cognitive communication channel. This pairing is not arbitrary and should be supported by psychophysical models of human auditory perception.
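To make this pipeline a little more concrete, here is a small, purely hypothetical Python sketch of the two stages just described (descriptor inference and matching/pairing). The simple heuristics standing in for the soft-computing stage and the particular parameter-to-channel pairing are illustrative assumptions only, not the actual HaptaSone design.

```python
# Hypothetical sketch of the HaptaSone pipeline described above.
# The heuristics (standing in for a trained neural network) and the
# parameter-to-channel pairing are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class SurfaceDescriptors:
    """The four surface parameters compared in the psychophysical
    experiments, each normalized to the range 0.0..1.0."""
    hardness: float
    roughness: float
    sharpness: float
    temperature: float


def infer_descriptors(profile_mm, temperature_c):
    """Stand-in for the soft-computing stage: in the real framework, a
    neural network trained on psychophysical data would map raw sensor
    readings (profilometer trace, IR temperature) to percepts."""
    n = len(profile_mm)
    mean = sum(profile_mm) / n
    var = sum((s - mean) ** 2 for s in profile_mm) / n
    max_step = max(abs(b - a) for a, b in zip(profile_mm, profile_mm[1:]))
    return SurfaceDescriptors(
        hardness=min(1.0, 1.0 / (1.0 + 50.0 * var)),    # flat trace ~ hard
        roughness=min(1.0, 10.0 * var),                 # bumpy trace ~ rough
        sharpness=min(1.0, 5.0 * max_step),             # abrupt steps ~ sharp
        temperature=max(0.0, min(1.0, (temperature_c - 10.0) / 40.0)),
    )


def pair_with_audio(d):
    """Matching-and-Pairing stage: each surface parameter is assigned
    to one auditory channel. The pairing here is arbitrary; as noted
    above, the real one should follow psychophysical models."""
    return {
        "pitch_hz":   220.0 + 660.0 * d.temperature,    # warmer -> higher pitch
        "noise_mix":  d.roughness,                      # rougher -> noisier timbre
        "attack_ms":  5.0 + 95.0 * (1.0 - d.hardness),  # harder -> sharper attack
        "brightness": d.sharpness,                      # sharper -> brighter tone
    }


if __name__ == "__main__":
    # A toy profilometer trace (surface heights in mm) and a temperature.
    trace = [0.00, 0.02, 0.15, 0.03, 0.01, 0.18, 0.02]
    print(pair_with_audio(infer_descriptors(trace, temperature_c=32.0)))
```

In the real framework, the heuristic functions would be replaced by a model trained on the psychophysical comparison data, and the output parameters would drive sounds selected from the sound database.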
Submitted by Lazlo Jeni
Emotiv is a company devoted to building a very good user interface that allows us to control video games with our minds.
Now, imagine what we could do if we built this interface into an Intelligent Space such as ours; quite interesting applications could be achieved.
Do you have any ideas? Please put them in the comments section.
Labels: AugmentedReality, LAZLO, Media
Submitted by Lazlo Jeni
Robotics is not always just about making the best interaction or the best interface. Sometimes it is also about getting people to react to a robot in some way.
At NYU, an art student built a rather simple robot and made it walk through the streets of New York. It did not have any kind of sensor; it only walked in a straight line.
The objective was to watch people's reactions when they saw the robot encounter an obstacle: would they help it, or just stand by?
Thank you all! In one week we have reached 100 views. Let's hope we gather more readers, and to that end we will keep putting up interesting stories for you.
Submitted by Peshala
We know we have not talked much about augmented reality, and it is because of this that this time we are going to present something on the subject.
Augmented Reality has always been a hot topic not only to those who conduct research at Hashimoto lab but also to those crazy minds out there who dream about future technology.
It seems this is not far from reality, as some students at the MIT Media Lab have developed a wearable computing system that turns any surface into an interactive display screen, thereby providing the user with all the information he needs.
A picture is worth a thousand words. And a video, of course, more than that.
So, let's have a look at this awesome presentation.
Labels: AugmentedReality, Media, PESHALA