Thursday, December 17, 2009

Some Perspective



I think I will end this blog with this post, for it is a good way to finish: with some perspective. With all this technology, I think we sometimes forget the full perspective of things. We are just the smallest of things in a very grand universe. There is just so much out there to be explored.

We shouldn't waste time complaining about why our iPhone application isn't streamlined enough, but instead spend it doing great things - like helping with great causes like the one in my prior post.

Technology is an incredibly great interest of mine, and through technology I think great things can happen.

It is just up to me, up to all of us, to make it so.

Thank You Sincerely For Reading,
-brandon

Project For Awesome

The Project for Awesome, originally organized by the vlogbrothers on YouTube, officially starts today. I've attached a few of the many submission videos below.







What makes the Project for Awesome so... well... awesome is that it is able to bring so many people together in a digital community. Through shared morale alone, hundreds of thousands of video makers and subscribers are able to work together for a wonderful cause.

Basically, the project works by having everyone rate and comment on all of the Project for Awesome videos posted on the same day - today, Thursday, December 17, 2009. Through this rating and commenting, the videos, all of which revolve around helping make the world a more awesome place, are able to completely take over the YouTube home page, where millions of others can see them.

The Project for Awesome is an amazing example of how the digital world is helping to connect us all.

Everyone, please go rate, comment, and subscribe. But more importantly, help the world become a more awesome place.

nyti.ms



I really don't understand this shortened-URL business. Recently, the startup bit.ly has become the go-to service for acquiring shortened URLs - basically, the trick is to register URLs in other countries (which you can do) and then forward them to your regular URL.
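The forwarding itself is nothing exotic. Conceptually, a shortener is just a lookup table plus an HTTP redirect, as in this minimal sketch (the names, the base-62 scheme, and the nyti.ms prefix are illustrative assumptions, not how bit.ly or the Times actually implement it):

```python
# Minimal sketch of a URL shortener's core: a short-code -> long-URL
# table plus a lookup. Illustrative only, not any real service's code.
import string

ALPHABET = string.ascii_letters + string.digits  # 62 characters

def encode_id(n: int) -> str:
    """Turn a numeric database ID into a short base-62 code."""
    if n == 0:
        return ALPHABET[0]
    code = []
    while n:
        n, rem = divmod(n, 62)
        code.append(ALPHABET[rem])
    return "".join(reversed(code))

table = {}  # short code -> original URL

def shorten(url: str) -> str:
    """Register a long URL and hand back the short form."""
    code = encode_id(len(table) + 1)
    table[code] = url
    return "http://nyti.ms/" + code

def resolve(short_url: str) -> str:
    """What the shortener's server does: look up the code and redirect."""
    code = short_url.rsplit("/", 1)[-1]
    return table[code]
```

The short code saves characters precisely because it is arbitrary - which is also why it is impossible to remember.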

Now, I understand the point of shortening URLs for ease of use, but does this not contradict the ease of remembering? Everyone knows where google.com, facebook.com, apple.com, nytimes.com, cnn.com, etc. are going to take us - primarily because we use them all the time, but also because it's usually just... the name of the company... then .com.

Now, if we were to try to find nyti.ms, would it not be equally plausible to search for nytim.es? The problem that has arisen is that companies think we want shortened URLs just for the heck of it... and because of Twitter's character limit. Yet of two things I am certain: Twitter's character limit will be expanded, and URLs spread across an array of country domains are hard as heck to remember!

Shortening a URL just for the sake of making it shorter is kind of silly, and honestly, the result is not that much shorter. In a world of shortened text-speak, I understand fully what these major corporations are thinking. But honestly, I've not once felt that nytimes.com was just too much to type. If I go to a website often, I just bookmark it and call it a day; otherwise it's pretty much a guessing game anyway. And on an ironic note, the only companies trying to shorten their names are the ones everyone already knows!

Am I alone on the absurdity of this new trend?

Oh 3D...



Now I know I seem to be bringing 3D up a lot lately, but the fact of the matter is, it is like the next new big thing - only not that new, nor that great, nor that big, just kinda... okay.

Regardless, here is yet another misuse of 3D. The picture above is from Cowboys Stadium, where all fans were given 3D glasses to experience "feeling like you're right in the game"... while watching the game. Ha!

Now don't get me wrong, I understand the push for the technology is relentless, but who really thought this was a good idea?! Wearing 3D glasses may give a little nausea-inducing depth to their huge television, but at what cost, you may ask? Watching the very game in front of you!

The problem with technology is that sometimes we fail to realize that it, too, has specific purposes. In many cases it tries to bring the "real-life" effect to non-real-life activities (I am still working on my impending haptic technologies write-up). Yet bringing "real-life" to real life?! This just makes no sense to me whatsoever.

If the technology were implemented without a need for glasses (thus allowing for simultaneous watching of the actual game - actually live, right in front of you), then I would have considered this idea somewhat respectable.

But alas, after six minutes of booing fans, maybe HDLogix learned this the hard way.

Mag+ Digital Magazine Concept

Mag+ from Bonnier on Vimeo.



The magazine concept described above touches on quite a few interesting topics. Most centrally, the discussion falls on the understanding, or lack thereof, of how a person interacts with a magazine. When trying to replicate this interaction in a digital manner, Bonnier chose not to simply 'digify' the magazine-reading process, but instead to bring out the key distinctions that make a magazine a magazine, while taking advantage of a central digital construct.

One of the continuing failures in many electronic magazine concepts is the misconception that a new magazine must be just like the old - but why?! As discussed in the video above, they looked at that which is most important: the integration of images and text. A magazine's central role is of course the focus on its many photos - yet these photos too are supplied with ample text.

Text, as the internet has shown, is easiest to read in a scrolling manner. Sure, flipping pages makes it feel like the real thing, yet ultimately it's nothing more than an unneeded nuance. If instead one were to look at how the text and images interact - that is, we typically look at the photos first, find what is interesting, then read the heading information, and if we are interested we then dive into the supplemental text - we could find a much more efficient design for a digital magazine.

The concept above brings up these points and finds solutions that not only seem very efficient, but also intuitive.

As technology comes to terms with this realization, I question whether e-book readers themselves will be forced to make a transition as well. Sure, e-books are supposed to be just like books, but as of this writing, the refresh rate upon pressing the next-page button is terribly slow. Granted, scrolling is, as of now, not yet readily possible.

Regardless, the folks over at Bonnier seem to be looking at technology from the right viewpoint. As the digital age continues to take over all of our old media platforms, we must take advantage of an opportunity to improve upon these platforms - not just move them to LCD screens.

Gustave Eiffel



I ran across an interesting article yesterday while scanning the internets. Apparently Wednesday, December 16, commemorated the birth and life of Gustave Eiffel, born in 1832. As is probably already apparent, Eiffel was the architect and designer of such amazing structures as the Garabit viaduct, the iron framework of the Statue of Liberty (Liberty Enlightening the World), and of course the Eiffel Tower. That he accomplished all of these feats before the turn of the 20th century, creating landmarks that remain standing over a century later, is amazing to me.

As technology continually progresses, we see the lifespans of enormously complex systems dwindle from a few years to sometimes just one or less. Sure, some people may carry such devices around for a few years thereafter, but for a device to last for over a century? Impossible.

Don't get me wrong, designing a device to be sold on the market as opposed to designing a building are two very different tasks with two very different purposes.

It's just amazing that a single person could design such very different things with his very hands. In 100 years, one thing may never be thought of again, while another stands in the minds of millions.

Yet I suppose this is the continual endeavor of art.

Technology is not art, though technology can be used in artistic ways, as I have analyzed on a number of occasions. Regardless, the point of this post is to bring up the memory of an incredible designer, and just as importantly, the idea of time in what we create. Tech niches will only last for a very short period of time, whether we are referring to the iPhone or Facebook, but designs with a much deeper, more significant meaning may be our only means of standing the test of time.

Wednesday, December 16, 2009

Google Phone



As the perpetual rumor mill continues, it seems quite an interesting product has been designed by none other than Google itself. The device you see above is, ironically enough, a "Google Phone" dubbed the Nexus One. The irony comes from the fact that Google designed Android, its open-source operating system, as just that - an open-source option for a multitude of other handset makers. Though the idea of Google bringing its own phone to the market may sound interesting, consider its resounding effects.

Bringing such a phone to market would directly contradict the purpose of Android. If this phone were to run the same software, then what advantage would Google have in selling it over existing handsets - especially if it is going to be offered as an unsubsidized, unlocked option, as discussed?

If this were the case, then the only successful model for selling such a phone would require Google to make its device somewhat distinct from the rest of the competition - that is, to incorporate qualities not yet found in other devices. Incorporating such technologies would directly contradict what the Open Handset Alliance and Android are all about!

I hope, as is rumored, this is just a developer phone for testing software and nothing more. If not, I foresee a very unstable Android market in the near future.

Keystick: Collapsing Keyboard Concept



I love this design. Honestly, there have been a number of failed attempts at designing an easily portable keyboard - the most successful being the foldable (rollable) soft keyboard (which bears very little feedback resemblance to a normal keyboard) and the laser-based projection keyboard (which has no haptic feedback whatsoever and is quite expensive). This design manages to combat both of these issues in a wonderfully original form factor.

The only problem is the market for portable keyboards. Nowadays, who really needs them? A few years back the need was a little greater, for resistive touchscreens weren't very accurate, capacitive touchscreens were still on the horizon, and, well, most people didn't have smartphones. Back then, such a device might be used with a PDA to take notes or something. Yet is there a huge need now? Honestly, I think not.

One could buy a netbook for a couple hundred dollars, a smartphone for nearly nothing on contract, and how often do you find yourself using a computer that has no keyboard?

Again, I love this design. It's a very clever concept, but will this technology go anywhere? I'm afraid not. But could it be adopted into other technologies?! Exactly. Given the foldability of OLED displays, could you imagine the possibilities of a portable screen - if we could just fan it out when we use it, and otherwise carry around nothing more than an electronic stick?

Reinventing the Wheel



Researchers at MIT have done the inevitable: they have attempted to reinvent the wheel - or more specifically, the bike wheel. The tech behind this wheel is actually quite exciting, including sensors for detecting speed, distance, and direction, with data sent via Bluetooth to the rider's iPhone. The system also has a built-in lock that sends the owner a text when tampered with. As fun as this technology is, what makes the Copenhagen Wheel most marketable is its KERS, or Kinetic Energy Recovery System. This mechanism recovers energy from braking to give riders help when going up hills or riding particularly fast through traffic.
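For a rough sense of what such a hub can recapture, the physics is just kinetic energy scaled by a recovery efficiency. The masses, speeds, and the 30% figure below are illustrative assumptions on my part, not Copenhagen Wheel specs:

```python
# Back-of-the-envelope sketch of braking-energy recovery.
# All numbers are illustrative, not the Copenhagen Wheel's actual figures.

def recoverable_energy_j(mass_kg: float, speed_ms: float, efficiency: float) -> float:
    """Kinetic energy at braking (1/2 * m * v^2), scaled by recovery efficiency."""
    kinetic = 0.5 * mass_kg * speed_ms ** 2
    return kinetic * efficiency

# Rider + bike at ~90 kg, braking from 20 km/h (converted to m/s),
# assuming the hub recaptures ~30% of the kinetic energy.
energy = recoverable_energy_j(90, 20 / 3.6, 0.30)
print(round(energy, 1), "joules recovered per stop")
```

A few hundred joules per stop doesn't sound like much, but over a stop-and-go city commute those boosts add up.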

Building such high-tech implementations into your everyday bicycle wheel is quite a feat. The KERS system in particular could greatly help in gathering new adopters in the currently somewhat slim market (granted, I'm sure plenty of bicycle purists out there aren't too happy about it). Yet such a system would definitely have its benefits, especially when the bicycle is a primary means of transportation, as in many cities.

I have started to note an abundance of electric and kinetic-energy type bicycles in the last few months. Hopefully the technology is able to keep the sector alive.

It is definitely a healthy, and environmentally friendly, alternative.

Tuesday, December 15, 2009

Digital 3D Video



Given the release of the new Avatar film on Friday, I felt it a good time to bring up 3D. 3D technology has definitely progressed a long way from the original days of red-and-blue glasses, yet it still has its limitations.

Regardless, many producers have finally started jumping onto the 3D bandwagon - from sports segments to video games. Now don't get me wrong, I understand the point, the want, the need for 3D - I just feel that the illusion is being produced in the wrong manner.

Current technologies typically use polarized lenses - only allowing each eye to detect specific parts of the overall image. These parts are then combined in your brain, resulting in a final 3D composite. Yet the most common complaint is the inefficiency of the system. The images are not separated exactly how one's eyes would separate them; thus many complain of eye strain, headaches, discomfort, etc.
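To see where the strain comes from, consider the simple geometry behind the trick. For a viewer with eye separation e at distance D from the screen, a point meant to appear at depth z needs an on-screen left/right offset of roughly e·(z − D)/z. This is an idealized sketch assuming a single, centered viewer (the numbers are illustrative):

```python
# Idealized stereo parallax: how far apart the left-eye and right-eye
# images of a point must sit on screen for it to appear at a chosen depth.
# Assumes one centered viewer; real theaters are messier, which is part
# of why the illusion strains the eyes.

def screen_parallax_cm(eye_sep_cm: float, screen_dist_cm: float, depth_cm: float) -> float:
    """Positive = point appears behind the screen, negative = in front of it."""
    return eye_sep_cm * (depth_cm - screen_dist_cm) / depth_cm

EYE_SEP = 6.5      # typical interocular distance, cm
SCREEN = 1000.0    # viewer sitting 10 m from the screen

print(screen_parallax_cm(EYE_SEP, SCREEN, 1000.0))  # at the screen plane
print(screen_parallax_cm(EYE_SEP, SCREEN, 2000.0))  # behind the screen
print(screen_parallax_cm(EYE_SEP, SCREEN, 500.0))   # popping out in front
```

Everyone in the theater gets the offsets computed for that one ideal seat, and every eye separation differs from the assumed 6.5 cm - hence the mismatch, and the headaches.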

Replicating 3D, regardless of medium, is always going to run into this problem. Yet there is of course a competing method - motion tracking. As I discussed earlier, this is a method of altering the image with respect to your position, resulting in a 3D illusion.

I think motion tracking is our answer for gaming, for gaming typically calls for the active participation of the user - most users would not mind having to duck to dodge bullets, etc.

But for the film and television industries, such an implementation does not seem quite as plausible. On top of the inability to actually move from your seat in a movie theatre, there are practical limitations to video render speeds. Also, motion tracking only really works for a single viewer at a time.

Ultimately, the answer for film seems to continue to elude us. Some of the techniques in Avatar attempt to better replicate the eyes - typically focusing techniques, such as moving the virtual cameras closer together and farther apart with respect to the distance of the object in focus. Regardless, one inevitable problem remains: if you cannot control exactly where the viewer is looking and focusing at all times, how can you guarantee the effect?

This problem is inherent, and currently without a good answer. Video games have an option... Microsoft and Sony see this - thank goodness. The film industry, I still question.

Avatar is a huge push towards these technologies. I openly welcome anything that will better draw the viewer into the film - yet I also wildly disagree with any technology that impedes the viewing process - i.e., headaches, annoying glasses, etc.

The final verdict? It remains unknown. Currently I feel that 3D has been pushed as, and remains, a gimmick yet to be developed to its full potential. But maybe, just maybe, Avatar's impact will push the industry towards more streamlined technologies.

We need someone studying eye tracking. The equations have been developed - I know of them firsthand, after all, as a biomedical engineering major. Now let us build these real human systems into our cameras... maybe then the experience will be a bit closer.

Gesture Based Computing on LCD



The students at MIT are at it again, turning your everyday LCD screen into a low-cost, 3D gestural computing system.

As you can see, users can control the system not only by touching the screen along its x and y axes, but also along the z. That is, your hand can be tracked based on its relative distance from the screen! Though such technologies have been implemented before, using a variety of tracking cameras or special gloves, a low-cost system had yet to be devised.
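In software terms, that z axis just becomes one more input value alongside x and y. A minimal sketch of how a sensed hand-to-screen distance might be turned into a usable axis - the range values are my own illustrative assumptions, not how MIT's system actually works internally:

```python
# Sketch: normalize a sensed hand distance into a third input axis.
# The near/far range is an illustrative assumption, not MIT's design.

def z_axis_input(distance_cm: float, near_cm: float = 2.0, far_cm: float = 30.0) -> float:
    """0.0 = at the nearest tracked point, 1.0 = at the far edge of the volume."""
    clamped = max(near_cm, min(far_cm, distance_cm))
    return (clamped - near_cm) / (far_cm - near_cm)

# An application could map this value to, say, zoom level or layer depth.
print(z_axis_input(2.0), z_axis_input(16.0), z_axis_input(30.0))
```

The clamping matters: a tracked volume has edges, and hands wander out of it constantly, so raw sensor readings need taming before an interface can rely on them.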

Ultimately, this technology could prove quite a promising alternative to our everyday user interface needs. Particularly, the availability of such technologies on our smaller devices, such as laptops and cellphones, would greatly increase usability.

The human body is obviously not confined to a flat plane, thus designing our devices around just two-dimensional input seems inefficient. If instead we were to design more devices that allow for a third dimension of input, a whole new foundation of digital input could be laid.

It is not just that we won't have our fingers on the screens anymore; the dimension of depth opens up a whole new opportunity for immersive technology interfaces. I will include a deeper exploration of such possibilities in my impending haptic technology article.

Source

OLED Dress



The dress in the video above was completely designed out of flexible OLED (organic light emitting diode) displays. The video itself brings up a lot of very important ideas in design. With our increases in technology, are we becoming frivolous, wasteful?

Personally, I love and hate this design. The impact upon walking onto the floor - quite impressive. The idea of designing and powering such a device, a dress, is truly an impressive feat in itself.

But just because we can use flexible screens as cloth, should we?! This question of course has no succinct answer. From a tech geek's point of view, of course! All advances in technology are welcome. And true, we probably won't be seeing many OLED dresses in the near future, but who is to say we won't start seeing OLED technologies incorporated into shirt fronts? Ha! Imagine having the ability to change that clever comment on your favorite T whenever you like!

Yet even with such "cool" opportunities, I question the necessity. Even with advances in technology, most lasting technology has to meet some kind of need (or at least introduce a need we weren't aware of - like social applications such as Facebook and Twitter). With limited resources, designing such a dress must have cost thousands upon thousands of dollars. Is such money well spent?

This question is highly debatable, but the bottom line is this.

All artistic endeavor imparts some sort of thought on the audience. This dress, as awesome or wasteful as it is, imparts quite a few questions.

Great Gadget Design



Three books have recently been published by Phaidon, encompassing 999 great gadget designs in history (Pioneers, Mass Production, and New Technologies). One of my favorite designs on their list is the early-1910s prototype camera for what is assumed to be the precursor to the Leica I of the 1920s.

When I saw this image, my first thought was... beautiful. I don't know what it is about classic technologies like this, but then again, maybe this is why 'steampunk' design has grown so popular as technology ever evolves.

The most amazing thing about this image... is how similar cameras still look! Take a really old computer, for instance - it filled an entire room, or more. But a camera... just about the same. As I referenced in an earlier post, camera design was largely defined by the need for film. Yet now, with the advent of digital sensors, the question arises of whether this design will... after nearly a century... finally pass.

A part of me insists that technology must continually evolve, to be more efficient. But a small part of me wishes we could remember the past - and not just through "retro" redesigns meant to be trendy. Don't get me wrong, technology has allowed for some incredible opportunities in digital design.

I remember, only five years ago, while posing for a group photo for one of my scholarships, the photographer commented, "It's crazy we can see what we took instantly now. Sure makes my life a whole lot easier." It's astonishing, really. In just five short years we've gone from the relatively new technology of DSLR LCDs and live view to its being nothing more than an expectation. Heck, some manufacturers are even attempting to nix buttons altogether (as I referenced a couple posts back)!

Ultimately, technology will evolve, as it always does. Some designs we should trash, yet we should never forget those original designs, and likewise their purpose.

New DSLRs designed as if they were film cameras honestly don't make a whole lot of sense. But the initial designs, built as they were meant to be, were great - because they met the needs of their time in the manner necessary.

We must always keep purpose in mind when making new designs. Looking at some of these classic designs is a wonderful way to remind ourselves of where innovation takes place: not in continual redesigns and adaptations of older products, but in a truly original piece.

Steorn's Orbo "Technology"

Take a look at an ad by Steorn for its Orbo, involving an impending breakthrough technology:



You can find a better explanation here

Now, I'm not one to judge too quickly. And honestly, I'm a quite gullible person. Yet this ad has fraud written all over it... literally. Aside from debuting the concept amid its barrage of "haters," I find a lack of true conceptual reasoning to be at the heart of this idea's future failure. The head of the company claims to have found a loophole in magnetism that makes this possible. Don't get me wrong, the idea of producing energy from nothing sounds pretty awesome, yet it defies the law of conservation of energy. Energy can be neither created nor destroyed, only converted into different states. This ad claims to produce something from nothing, which, well, sounds pretty friggin' outlandish.

Regardless of whether we accept this idea as plausible, its resounding effects (if it is even possible) make me worry even more. As the CEO referenced on their website, the creation of this energy is offset by a constant dissipation of thermal energy.

Now, if we were to produce energy from nothing, and likewise produce heat from nothing, would this not add to the overall heat of the system? For these purposes, let us call our planet the solitary system. If heat were to increase, would that not impart effects similar to those of global warming? Granted, global warming is about energy trapped because light radiation cannot exit the atmosphere, but the overall effect is quite similar. And granted, the sun operates on a much greater scale - and, well, this "technology" seems quite... questionable.

Ultimately, my point is this. Sure, producing something from nothing sounds like an awesome concept - think about it, never having to charge your phone again! But as we know it, our very environment depends on this balance of energy (granted, this balance as we know it is being pushed and pulled at this very moment by global warming... but that is a topic I will leave for another time).

Wednesday, December 9, 2009

Avatar

An interesting behind the scenes look at the new Avatar film:




Personally, I am very excited to see this film this Friday. Granted, it has been hyped to no end, and, well, I find 3D quite a gimmick. Yet the other technologies used in this film... awesome! The idea of creating a digital world, and then creating a film within this world, instead of the typical reverse, is quite a revolution in thinking. Of course there are many costs... like over $500,000,000.00, to be precise!

Regardless, I admire James Cameron's brave departure from the safe and tried methods.

The different techniques discussed above ultimately result in a film that is less a computer-simulated world overlaid on top of the real world than the real world overlaid on top of a computer-simulated world. I hope the results will be as astonishing as they seem.

A film is meant to effectively communicate a message to the audience. The story is the typical means by which this communication is achieved. Yet with advances in digital technologies, often the story, the acting, and the depth of connection between character and audience fall short.

James Cameron is trying to narrow this gap by forcing real-life characteristics into this artificial world.

Success may lead to a whole new genre of film - a genre that not only lives on creating new worlds (science fiction) but also remains as connected with its audience as a documentary or drama. Films such as District 9 have attempted this as well.

I am deeply excited to watch film continue its perpetual march towards truly immersing the viewer in the story - not only visually, but also emotionally.

Friday, December 4, 2009

A Guide to Haptics



BACKGROUND

So what exactly is haptic technology? According to the International Society for Haptics, haptics in psychology and physiology refers to one's ability to actively explore the outside world - typically through touch. Though touch is not the only haptic feedback possible, "haptic touch" has often become synonymous with the term.

Today this term has evolved to mean "the science of touch in real and virtual environments." This includes not only the ideas of touch in relation to an organism, but also virtual environments currently being researched.

In essence, haptic technology is attempting to become a metaphor for the real environmental feedback we encounter each and every day.

THE PROBLEM

The idea of touch integration in technology parallels the need for touch sensation in our everyday lives. Think about it for a moment: how efficiently can you walk when your leg "goes to sleep"? Now imagine if your entire body were to "go to sleep" - exactly. That feedback is not only convenient, but necessary. Haptic feedback is so common in our lives that we often fail to notice or appreciate its overwhelming necessity. Without it, such things as walking, handling objects - pretty much any physical action whatsoever - would be greatly debilitated, if possible at all. Without feedback, our muscles do not know how to compensate.

I will stop now to reiterate that haptic research is not about designing tactile sensors. Tactile sensors only sense touch and then encode it into a digital sequence. Haptics is concerned with the reverse process - in essence, the computer touching you back. Without this reversal of signal, we are running our technology on a one-way path.

In technology, the need for haptics is growing, for as its use becomes more and more prevalent in our everyday lives, a means of increasing efficiency must be found. Relying on one's eyes alone is horribly inefficient.

Think about it. Why are our computer keyboards still purely mechanical? The technology exists for us all to have touch-sensitive keyboards - imagine how easy they would be to clean, and their increased reliability! But the cost is much too great. Without feedback to our fingers, we are uncertain whether we actually hit the key intended. Obviously, this feedback happens with little to no thought and no visual inspection; instead we rely solely on neural feedback loops and our Meissner's corpuscles for sensing.

True, people have become increasingly efficient at ignoring this need altogether - take, for instance, the rapidity of some people's text messages via iPhone. Yet even this isn't quite what it seems. Instead, people are training the palms of their hands and the reach of their fingers as signals for correct finger placement. Which, yes, does work to some extent, but the sensitivity of our fingertips compared to many other body parts is exponentially greater.

Here is a physical representation of our sensory distribution:




The reality is, there are limitations to our current strategies, which rely predominantly on sight. These limitations could be overcome if we were to just better employ this oh-so-elegant nervous system we already have!

Consider the blind for a moment, please. Even without vision, much of the blind community can efficiently read with just their fingertips. If we can process and read information with just our fingertips (and brains, of course), then why aren't we using such abilities to our advantage? As of now, our fingers are stylus replacements. Developers make "touch-friendly" graphical user interfaces and we poke our way around, as it becomes more and more difficult to see where we are actually poking.

(consider the micro display I discussed in a prior blog: here)

THE SOLUTION

Researchers should be developing an alternative to standard touch inputs - take, for instance, the blossoming multi-touch technology.

Multi-Touch Gesturing



Granted, even though this technology is making huge strides, it still treats our fingers as if they were mice, not the complex input-output operators that they are. Ironically enough, it would not be much of a stretch to say that even gaming controllers are slightly ahead of this curve, for those technologies enable the user to be completely engulfed in the task on the screen.

Regardless, as of now the easiest solution seems to be to keep our mechanical feedback found in our keyboards and controllers - which of course is how we've been getting by. Yet these inputs too could be improved. A number of companies are working on mechanical force feedback systems to give you a better sense of what you are feeling. I've attached two example videos below:

3D Controller Demo



Telepresence Demo




Yet even these designs have their limitations. The longevity of a mechanical device is almost always the limiting factor in a design - hence why all mechanical devices are rated by number of cycles. Even if such designs were made more durable through lubrication techniques and better machining, they would still remain the limiting factor. And even if we were to streamline manufacturing techniques enough to reach reliability spans of years, one more major problem would still remain - size.

Mechanical feedback devices are often quite large - way too large to be used in a practical manner. The realm of microelectronics is growing at a much faster rate than micromachining. Consider the very computer you are reading this on - imagine if it still used all the mechanical parts its grandfather had!

So how do we combat size? Well, we make our devices completely electronic, of course. As of now, one of the most widely adopted haptic feedback mechanisms is the vibration that occurs (on some phones) when you press a button. Sure, this is a stride... well, maybe baby step would be more accurate.

Whole-device vibration upon pressing a button conveys about as much information as a single button itself. Which, yes, is better than no feedback at all (pressing an imaginary button on your LCD screen) - but just one signal? This haptic vibration also relies upon hardware your phone already contains (which is, ironically, mechanical as well).

Regardless, there is no usable method for localizing vibration to specific areas of such a uniform surface. And as long as we have LCDs, this will remain an issue.
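To make the localization problem concrete: even if a device somehow carried a small grid of vibration motors under its screen, the best it could do is fire whichever motor sits nearest the touch. The sketch below is an entirely hypothetical hardware layout of my own, just to illustrate how coarse that "localization" would be:

```python
# Hypothetical: a phone with a 2x3 grid of vibration motors behind the
# screen. 'Localizing' feedback means firing the motor nearest the touch
# point - still vastly coarser than per-key, let alone per-pixel, touch.

SCREEN_W, SCREEN_H = 320, 480   # pixels, a 2009-era phone screen
GRID_COLS, GRID_ROWS = 2, 3     # assumed motor layout

def nearest_motor(x: int, y: int) -> tuple:
    """Return the (col, row) of the motor closest to the touch point."""
    col = min(GRID_COLS - 1, x * GRID_COLS // SCREEN_W)
    row = min(GRID_ROWS - 1, y * GRID_ROWS // SCREEN_H)
    return (col, row)
```

Six motors against a full on-screen keyboard: every key in a motor's region feels identical, which is exactly why a uniform vibrating slab can never tell your finger which key it hit.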

Another option, branching from my neck of the woods (biomedical engineering), is electronic feedback through neural stimulation. With advances in electrostimulation techniques, small controlled current fields may be sent to one's fingers, with results somewhat similar to touching sandpaper.

A video of this work may be found below:



Yet even after seeing the advances made, the technology is still very much in its infancy. The resolution of these 'tixels,' as Senseg CEO Ville Makinen calls them, is still quite coarse - each is the size of a single QWERTY button. He also noted an important point: the resolution of our eyes is much greater than that of our fingers, yet we must rely upon our fingers to do the inputting for us. So why not maximize the input capacities of our current technologies?
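For a rough sense of the gap: fingertip two-point discrimination is commonly cited at around 2-3 mm, while a QWERTY-key-sized tixel is on the order of 6 mm across. A quick back-of-the-envelope comparison - both figures are my own approximations, not Senseg specs:

```python
# How many distinguishable touch points fit inside one 'tixel'?
# Both figures are rough, commonly cited approximations, not Senseg specs.

FINGERTIP_RESOLUTION_MM = 2.5   # approx. two-point discrimination threshold
TIXEL_SIZE_MM = 6.0             # roughly an on-screen QWERTY key width

points_per_side = TIXEL_SIZE_MM / FINGERTIP_RESOLUTION_MM
points_per_tixel = points_per_side ** 2
print(round(points_per_tixel, 1))  # distinguishable skin points per tixel area
```

In other words, the skin can resolve several distinct points inside the area of a single tixel - the hardware, not our fingers, is the bottleneck for now.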

Ultimately, I feel that this technology, using electrostimulation, will be the future of haptic technologies. There has yet to be developed any other convenient, portable, way to bring this technology to an array of devices.

THE EFFECT

So what will this research bring for our future?

The answer is actually quite complicated. Succinctly, it will completely alter the way we interact with our electronic devices - resulting in much more efficient and immersive computing experiences.

Let us consider games. With such technologies in our games, these artificial worlds we've created can become that much closer to real life. We will be able to truly engulf our audience in the action. They can feel the rifle kick or the grass scrape their legs. (Granted, in such circumstances a true haptic suit would need to be developed to provide such an array of stimulations - which again reiterates the need for future electrostimulation research.)

Similarly, this technology could be employed in video. Much like a Disney World show, you could feel the wind blowing or the spider brushing against your legs. The technology could again completely immerse the viewer in the experience.

Consider virtual reality, and similarly augmented reality, applications. As I discussed here, augmented reality is at the forefront of immersing our two realities into one. Could you imagine feeling a brick, yet similarly feeling a digital image of a brick right beside it?!

If augmented reality were to incorporate such advances, our Internet world and our real world would become ever more intertwined. Whether this is a good thing is always debatable. Yet to have the choice to take advantage of such an opportunity, to explore imaginary and real worlds alike - yes, of course, I think this could be a good thing.

Time. Time could prove a quite peculiar point of interest when such technology finally arrives. In essence, we could feel another time - a created time, a simulated time, or even time at a readjusted rate. For instance, haptics could be used to literally slow down actions that are too fast for us to normally see or feel. It could also be used in reverse - to slow or speed up our very own actions.

For instance, take medicine - surgeons in particular. Doctors could wear special gloves that control a robotically operated scalpel. Yet these gloves could be calibrated to operate at half the velocity of the surgeon's hands - thus literally slowing time for any given action. Such an ability would greatly aid precision techniques and operations.
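The velocity-scaling idea above can be sketched in a few lines. This is a toy 1-D model, not any real surgical system; the function name and the 0.5 scale factor are illustrative assumptions:

```python
# Toy sketch of velocity scaling for a teleoperated tool: the tool
# tracks the operator's hand, but moves at a fraction of the hand's
# velocity. Positions are 1-D samples; everything is illustrative.
def scaled_positions(hand_positions, scale=0.5, start=0.0):
    """Integrate scaled hand motion into tool positions (1-D)."""
    tool = [start]
    for prev, cur in zip(hand_positions, hand_positions[1:]):
        # Each hand displacement is shrunk by `scale` before being
        # applied to the tool, slowing the motion proportionally.
        tool.append(tool[-1] + scale * (cur - prev))
    return tool

# A 10 mm hand movement becomes a 5 mm tool movement at scale=0.5
hand = [0.0, 4.0, 10.0]
print(scaled_positions(hand))  # [0.0, 2.0, 5.0]
```

The same mapping run with `scale` above 1.0 would speed the tool up instead, which is the "in reverse" case mentioned earlier.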

Staying with medicine, haptic technology will also be employed for nerve-damaged patients. Through prosthetics, artificial limbs, and neural stimulator implants, people may regain their sense of touch through very similar mechanisms. A computer, and not the outside world, will be responsible for the final sensation the user receives.

This increase in accuracy could also be used to improve our overall daily computing efficiency. Though our muscles are not optimized for using a keyboard, our nerves are perfectly suitable for doing so. Yet more important than our ability to interact with our computers is our computers' ability to interact with us. So how exactly does an interface feel? What about writing a Word document, or watching a video? These are very difficult questions - some with no answers yet, though some with promising ones.

Art, for instance. Much artistic endeavor has moved to digital means for its editing capabilities. Also, the precision and control possible on computers is much greater than that physically achievable by the human hand and eye. Yet as of now, most of these technologies have one drawback - no force feedback. Drawing tablets such as the Intuos have become wildly popular among illustrators, for the sensation is quite similar to drawing on a piece of paper. Yet what about the painters, sculptors, and musicians? There are so many artistic endeavors that could benefit from such a feedback system. Imagine digital sculpting, or digital painting. Writing and performing digital music with something other than a piano keyboard.

I now wish to touch on the idea of sound. As I have discussed above, haptic technology is most readily being applied to the sense of touch, though the broader idea of machine-to-human feedback spans all the senses. In that loose sense, the very monitor from which you are reading this is a feedback device for your sight - so what about the other senses?

Touch is quite important, and one of the most obvious sectors to attack. Sound is typically transmitted with ease through playback technologies such as speakers, headphones, and amplifiers. Yet there is still a demand for such technologies in the medical world.

Cochlear implants attempt to transmit sound as neural encodings sent to the brain. Such a device is, in essence, a computer attempting to communicate with our brains through sound. This feedback mechanism is far too often taken for granted, though improvements in the sector continue to come.

On a final note, such feedback technologies should one day include smell and taste too. The ability to smell and taste the environment you are in is truly a wondrous idea - granted, the obstacles that arise fall again within the limits of our physiological understanding. How the body encodes touch is relatively simple compared to how it encodes tastes and smells in the brain.

TO THE FUTURE

Haptic advancements will prove pivotal in our futures - from efficiency in our everyday computing, to time-slowing medical procedures, to new, immersive realities. The applications are extraordinary. Though inevitably, this too is just a stepping-stone, for our senses themselves have limitations compared to the processing abilities of the human brain.

Thursday, December 3, 2009

Say It Ain't So!



Now I'm not one to typically complain about the evolution of our everyday gadgetry, but a touchscreen DSLR!? This just makes no sense to me. Don't get me wrong, I completely understand the applicability of a touchscreen as a supplementary input, but a touchscreen alone cannot encompass all the input features necessary on high-end digital imaging devices. What makes a DSLR so different from your typical point-and-shoot (aside from a much larger sensor) is the viewfinder. With the viewfinder, accuracy is upheld - which is of utmost importance in photography.

To add a touchscreen input would mean sacrificing viewfinder use, or developing some alternative that allows us to view the image and use input controls simultaneously. If such technologies were employed in a vastly different design (such as the concept I posted a few days ago), then I could see this as a viable inclusion. But without some very necessary structural changes, the incorporation is, in essence, pushing the evolution of the device backward.

Don't get me wrong - with or without such technologies, pros will continue to use the device with phenomenal results, but the growing population of amateur photographers will find and use easier, less accurate methods to take their photographs. Granted, I do see such technology as a good stepping stone from your typical point-and-shoot; I just find it more a consumer-friendly option than a true improvement in design.

Though haptic feedback is not always necessary, as the iPhone has shown us, there are limitations - particularly in gaming, for instance. As successful as games are on the iPhone, it is the type of game the design allows that constrains them. Yes, buttons are a thing of the past, something that will probably one day be replaced with fewer moving parts, meaning fewer complications down the road. But tactile feedback is still needed; we cannot remove the sense of touch by compensating with more visuals. There comes a point where all of our senses should be accounted for in digital design.

Digital doesn't mean only visual, yet for some reason many companies are migrating toward these visual-only designs - given the slimmer form factors and customization abilities they allow. Yet the need for haptic feedback remains!

I did a little research on haptics, and it seems a number of concepts are currently being developed by various companies to combat this problem. I will be posting a new entry with an overview and my thoughts on the issue in the near future.

Wednesday, December 2, 2009

AR (Augmented Reality)



I've been trying to avoid mentioning this topic given its quite overwhelming publicity over the past few months, yet alas, I found myself reading a quite interesting cover story from Nikkei Electronics Asia via Tech-On. I considered going into a drawn-out ramble about what AR is and its many uses, but figured instead I would summarize it quickly and discuss a quite interesting point made in the article. AR is basically the layering of digital information onto our sensed world (i.e. touch, taste, sight, etc.).

Augmented Reality is in a way a midpoint between reality and VR (virtual reality), yet what seems to make it truly stand apart, and what will drive its adoption in the years to come, is its "ratio of real versus virtual information." As we all know, VR never truly caught on. Sure, it's a pretty enjoyable novelty, yet the need to create entire worlds from scratch to engulf the user - and, similarly, to convince the user to leave their own world behind - has proven quite a daunting task.

Instead, AR attempts to incorporate itself into our present world.

It actually makes quite a bit of sense when one realizes the initial applications for AR were to be used as replacements for tangible directions.

AR will succeed where VR failed, for it opens a number of new doorways into altering our digital lives while leaving our real lives intact. In essence, AR is just replacing the interface. No longer will we have to push buttons to change the channel - instead we'll make hand motions, or look in certain areas, or push imaginary buttons, or heck, just speak the button.

Augmented Reality is just that - an augmentation of our current reality. Widely adopted as the internet has been, there still remains that line between our reality and the reality of the internet worlds behind our computer LCDs. Yet as technology continually progresses, this divide will slowly become obsolete.

It isn't something that will occur overnight, but it is through AR that many of these transitions will occur. Think about it - what need is there for our tangible digital devices when we can incorporate them into some standalone augmentation of our lives? Televisions could be projected onto our glasses, controlled via voice command. Simultaneously, phone calls, music, and shopping could all be integrated in a similar, centralized manner.

As I've discussed before, this centralized device is increasingly becoming the "cloud" - ever-pushing our wireless lives.

Tuesday, December 1, 2009

Vanish



I highly suggest reading this article in Wired Magazine:


Evan Ratliff Tried to Vanish: Here's What Happened.


To summarize the story, for all those who wish to forgo 20 minutes of technology-filled manhunt awesomeness: a writer by the name of Evan Ratliff challenged readers of Wired Magazine to find him within one month. The winner would receive $5,000.00. Ratliff told no one where he was going, gave himself a new identity, a fake credit card, and fake business cards, sold his car, and completely reinvented his life. Yet unlike the typical hide-out-in-the-woods story, Ratliff chose to live a new life as a new person. What made this story most interesting is that he continued to use the technology we use every day - i.e. Facebook and Twitter.

The story takes you through his very well-planned escape - involving the use of double-masked IP addresses, lots of untraceable gift cards, and some very well-placed misdirection.

The story itself is quite interesting, yet the resounding difficulty of becoming lost in our technology-filled world remains. Given a few mistakes on Ratliff's part, and a quite large group of online contributors, he was eventually found.

It's somewhat frightening, though, how easy it was for people to dig up information about Ratliff. In a matter of days they knew everything from his food allergies and soccer interests to recent credit card purchases and license plate numbers. In such an information-driven world, it is becoming so hard to disassociate yourself from exactly that - your information.

It's ironic, really. Since the Internet's introduction and growth, we've been reminded of its anonymity - yet is anything that private anymore?

Ratliff explored an interesting side effect of our technology-driven lifestyles. And though I'm not in any rush to absolutely disappear, it's a bit disconcerting knowing just how difficult it really would be.

Video Chat, Coming to a Phone Near You?



Much media attention (well, at least in the tech world) has recently surfaced regarding a front-facing camera, or lack thereof, on the iPhone 3GS. The debates that follow bring up a good question: is video chatting on our phones, like in those classic sci-fi movies, coming in the near future, or is it a niche market never to see mainstream use?

Initially, I thought, sure. Of course it will be adopted. I mean, why wouldn't it be? New technologies are often adopted, and video streaming capabilities and corresponding compression codecs are constantly being improved. Yet the resounding question still remains... would I use it?

iChat, a video chat messaging client provided on all Macs for quite a few years now, has yet to truly take over communication in computing. I have iChat and Skype, and I've used video chat before, yet honestly, there seem to be far more instances in which I would rather not use it than use it.

Of course I would love those face-to-face conversations with loved ones and friends, but on the phone with the AT&T guy?! When I'm in my underwear? After I just woke up? Etc. The list of circumstances in which I would prefer not to use it is obviously quite long... thus reiterating that resounding question... would I use it?

If it were to become commonplace, would we not feel as though we had to use it, even if we wished we didn't have to?

As technology progresses, it seems that most people are gravitating toward efficiency. Widespread tweets to all your friends about something or another. Sure, individual phone calls are made, yet even those are being replaced by quicker, more efficient texts.

I'll be honest, I would much rather e-mail one of my professors than speak to them face to face. And definitely much more than talking to them on the phone!

So would this technology be adopted?.... hmmm...

After much thinking, I feel it is not on the immediate horizon. I do imagine it will eventually be adopted, yet possibly in a manner different from the one-on-one conversations we expect. Instead, I imagine it will be used in large-scale conference situations in which "real-time" interaction is optimal. I could also see such technology being incorporated into a real-time chat medium in social networking utilities such as Facebook.

Regardless, I think the technology is coming... and will come... but in the next iPhone iteration? Probably not. Do we have the technology? Well... yes... (provided AT&T's network were as stable as Verizon's)... but do we have the medium from which to use it as efficiently as we oh-so-desire? No... not yet, anyway...

Wikipedia



On the topic of free knowledge, I sought out the largest database of everything imaginable - Wikipedia (as of this writing, over 14,000,000 articles have been published, and the count is ever-increasing, making it the largest encyclopedic work ever compiled). Wikipedia's motive is as follows:

Wikipedia seeks to create a summary of all human knowledge: all topics covered by a conventional print encyclopedia plus any other "notable" (therefore verifiable by published sources) topics, which are permitted by unlimited disk space.

Such a goal, though seemingly impossible to meet, could not have been even considered just over 10 years ago.

Yet now, honestly, I search for pretty much everything on Wikipedia. And yes, though the accuracy of articles is sometimes questionable, a number of studies have found its reliability to be similar to that of the Encyclopaedia Britannica. Regardless of how one argues for or against the site's reliability, Wikipedia has vastly enabled the sharing of knowledge. It is an ever-growing community of shared facts and ideas. There are thousands of discussion pages in which editors clarify ideas, revise posts, etc.

Wikipedia is becoming a knowledge base that I imagine will one day compete with that of our schools. Heck, as of now I probably look for homework help about 50% of the time through Wikipedia articles.

As this knowledge base continues to grow and be refined, I start to question how exactly the perception of knowledge is going to change. Before, those highly regarded in society had a seemingly endless stream of facts in their heads. But with the world's information instantaneously at our fingertips, what need is there to memorize everything?

Instead I expect a gradual shift in the education process. No longer is it so important what you know, as what you can know - how fast you can learn it, and how fast you can adapt to and incorporate change.

On a final note - and horrifying as it may seem - I imagine that as much as Wikipedia is a growing contribution of shared knowledge, it is exactly what it claims: "a summary of all human knowledge." When our civilization is gone one day, will Wikipedia be our Rosetta Stone?

Royal Society Papers Available Online



The Royal Society, the world's oldest science academy, marks its 350th anniversary in 2010. In celebration, the society has devised a quite monumental contribution in the form of an interactive timeline website called "Trailblazing." Trailblazing gives one the opportunity to explore major contributions in science, and the corresponding publications by the society, over the past 350 years. As a perk for all knowledge-hungry explorers out there, the Royal Society has made available 60 monumental published works. And when I say monumental, I mean digital copies of the original articles: Isaac Newton's theory on light, Benjamin Franklin's experiments with a kite in an electrical storm, the development of penicillin by Alexander Fleming!

The availability of such inspiring works, such incredible revelations in thought, is quite exciting! But more than merely interesting, this society is breaking through a huge and ongoing wall in the sharing of knowledge.

Don't get me wrong, I understand why emerging technologies are often kept private under layers of patent law, yet once these patents expire, should the ideas not be available to everyone? Of course, there will always be conflict in this debate, yet this small website exemplifies a unifying dream I hope comes true someday.

With the dawn of the internet age, knowledge is no longer something unattainable without thousands of dollars, connections, and a high IQ, but instead something readily available to anyone with a computer and a web browser.

Yet in this transition we often find major faults - particularly questionable accuracy in information, necessitated degrees, etc.

This website represents that ever-present march toward a fully graspable knowledge base. Sure, many people couldn't care less what exactly Fleming said about his newfound antibiotic, but I imagine many others would find his real words inspiring.

Everyone needs inspiration from time to time.

"The Edge of Science and Art"



As with all technology, what makes it most exciting is oftentimes an application outside of what was originally expected. Take, for example, this rendering by paleoartist Victor Deak. Deak uses various 3D modeling programs to bring the skulls anthropologists study to life. Some of his most recent work may be found in the "Becoming Human" documentary, recently aired on PBS.

As Deak put it, "they look realer to me... for a couple seconds, people might say, 'What's that a photo of? Where'd you get that picture? There's that moment of belief when they're not looking at it as a painting or sculpture, but as a living thing."

Deak was able to combine his passion for visualization and animation with his work in paleoanthropology. In doing so he has created a career for himself that continues to make very important contributions - yet now these contributions are heard not only throughout academia, but also seen by everyday people. Deak's work is important, for the imaging techniques he currently uses will likely form a basis for future craniofacial reconstruction renderings. In improving the accuracy of computer-aided imaging, we may get that much closer to truly "seeing" the past.

Such technologies could prove immeasurably helpful, not just in anthropology, but in all reconstructive work - such as identification, biometrics, historical databasing, etc.

Source

Saturday, November 28, 2009

Nova DSLR Concept



The Nova DSLR Concept takes the standard, age-old SLR (and likewise DSLR) camera enclosure and asks the oh-so-important question - why do DSLRs have to be shaped like SLRs? And the answer is, they don't! Aside from the sensor and the lens itself, the camera enclosure need look nothing like its film counterparts, yet year after year we see revisions of the same old design.

Erin Fong has sought to change this stagnation of thinking. Instead he asks the question, what camera design would be most appropriate for a variety of holding styles? What design would make changing settings, zooming, etc. all readily available, at the tips of one's fingers?

The design above attempts to find a truly original answer to these ongoing questions. True, this concept is nothing more than that - just a concept - yet such "out of the box" thinking is quite promising. I'd never even considered the true necessity of my current DSLR's design. As with many consumer electronics, we assume they are shaped the way they are out of necessity, but we often fail to realize that, with continual advances in technology, such barriers are constantly being removed.

Who said designing a new product actually had to mean designing a new device? How about redesigning an existing product to make it more efficient? Sure, engineers should think of these things, but designers need to as well! Looking at what is already there provides many more doors to open than the single door into the truly unknown.

Source

Friday, November 20, 2009

Google Chrome OS

I suggest watching the video below about Google's impending Chrome OS. I will discuss thereafter.



When I saw this video and finally, completely understood what Google has been, and continues, so diligently working on, my jaw dropped. After reading a number of viewer comments on Engadget, Gizmodo, and Google's blog, I found resounding opposition. So many people seem to be saying "Everything operated via internet connection... FAIL!"

Yet I see it as entirely the opposite. It is anything but a failure - it's a huge stride toward where things are going. Don't get me wrong, I completely understand some people's opposition. The "power users," if you will (who typically hang out quite a lot on the tech blog websites) - those that eat Photoshop, Final Cut, and Crysis for breakfast - are opposing an idea with complete disregard for its target market.

Chrome OS is not (well, at least currently) attempting to replace your desktop computer, but instead trying to replace your netbook. Netbooks, as you may or may not know, are one of the fastest-growing sectors in the computer industry. As a whole, everyone (aside from the mysterious Apple, of course) has jumped on the netbook bandwagon. And what are netbooks for... well... the Internet of course!

Sure, they can do a number of other tasks, yet when it comes down to it, most netbooks are used for about three things - email, web browsing, and maybe some word processing. This being the case, Chrome OS can do all of this - in the browser!

The idea here is what I find so interesting. Ultimately, Google is stretching from the world of hard drives, with their ever-present limitations, to that of "the cloud." Which, sure, also consists of a bunch of hard drives - yet as internet streaming capabilities continue to increase, we may see smaller and smaller devices. Is it not the hard drive that determines the size of most of our devices? And even if you were to counter with flash memory, for example, think about power consumption. If we were to put everything in the cloud, a streaming internet connection would of course be needed - but that's it! Your computer would no longer be running a continuous 50 background applications.

I find this concept quite exciting when you think of future applications. When internet access truly does become all-inclusive (which may be a while, though LTE promises a not-so-distant future), one could effectively reduce one's everyday browsing devices significantly.

Granted, as smartphones continually progress, I imagine the netbook market, though currently blossoming, will quickly become obsolete - given the impracticality of having a netbook for web browsing, a phone for web browsing, and finally a computer for some hardcore functionality.

I would hope, as many reports have suggested, that this cloud transition will extend to most of our wireless everyday devices - as both Apple and Microsoft have confirmed is their future focus. The idea of moving to the cloud could open up a multitude of computing frontiers, thus I must give praise to Google - for it has taken that first leap needed to push technology ever further.

Google Blog

Friday, November 13, 2009

Sleepbox



The idea behind Sleepbox - a 2 x 1.4 x 2.3 m cubicle outfitted with an LCD screen, power outlets, and a bed with automatically changing sheets - is that people could pay (in 15-minute chunks) for access to their very own tiny office or hotel room. The idea at first seems quite ludicrous, though it starts to make just a little bit of sense when you analyze the target audience.

The manufacturer plans to place these sleep cubicles in airport terminals - thus giving those waiting for, or between, their flight an alternative to crowded airport lounges.

It actually seems like a plausible idea (provided the cubicles are reasonably soundproof, of course).

Ultimately, this concept conveys our cultural evolution. As our lives become ever busier, sleep is fading from a necessity to a convenience - to be had when time permits in our bustling lives. It may sound scary, though granted, I'm all for an anti-sleep movement. Sleeping a third of every day seems just so... wasteful.

Regardless, the idea of a Sleepbox is quite intriguing. If it were to become commonplace, I imagine many more would be needed. In effect we would be building tiny micro-hotels all around major airports! It's a tiny step closer to the sleeping chambers of sci-fi movies, yet one problem still persists: amount of sleep is not as important as quality. If technology could find a way to improve the quality of sleep, even at minimal durations, think of the changes in society itself that could occur!

Our days are ruled by the day-night cycle. Our circadian rhythms are calibrated by the sun (yet run slightly long, at roughly 25 hours, if you want to be technical). What if we could tap into this calibration?

Bustling cities at 3:00am! Mail on weekends! Movies on Wednesday nights!

The possibility of freeing our very restricted night and day cycle could lead to huge increases in productivity, while simultaneously greatly expanding one's free time to explore and experience in this great world.

Yet I've digressed...


Sleep Chamber Website

Information about Circadian Rhythms

Thursday, November 12, 2009

Metal-Air Battery



A metal-air ionic liquid battery is being designed at Arizona State University under the guidance of Professor Cody Friesen. The design uses ionic liquids as the electrolyte - an electrolyte that would not evaporate, thus providing much longer battery life. Another issue, dendritic growth, which decreases a battery's charging capacity over time, has reportedly been resolved as well.

As research on this battery continues, what I am most excited about are its possible applications. At an expected "11 times more energy than lithium-ion," our battery technologies will have finally approached a level similar to that of our electronics.

The introduction of such a better battery would be immediately felt in the electric car sector. With such a greater total charge, driving somewhere and back solely on battery power would become quite plausible.

These batteries would hugely impact the medical device market as well. As of now, many problems with medical devices arise because there is no easy way to charge, for instance, a pacemaker, a prosthetic limb, a vagal nerve stimulator, or a visual prosthetic. If batteries with such greater charge were designed, smaller batteries could be made and implanted in the human body - resulting in fewer battery changes, fewer chances of surgical complications, and of course an expanded opportunity for devices that have yet to find an answer to the eternal power-source question.

Such batteries could remarkably extend the battery life of our everyday devices. Currently, cellphones are not limited by processor technology, but by the power needed to run those processors. Photo editing on my phone could be possible! Yet it is not necessarily the availability of these advances that is so important as the advancement of wireless technology as a whole. Laptops could be used outside for more than just a couple of hours at a time; the hardwired idea of a business setting could be completely rethought.

These batteries could also have a very positive impact on the environment. If they were able to hold such long charges while providing continual, consistent use, natural energy-gathering methods (solar panels, windmills, hydroelectric, etc.) could be used to charge a spare over the two or more days that one is using the original - then switch. Such a method could greatly improve global electricity use and efficiency, while simultaneously lessening its negative impact on the environment.
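To see roughly what an "11 times more energy" claim would mean for runtime, here is a quick calculation. The baseline capacity and power draw are illustrative assumptions; only the 11x multiple comes from the reported claim:

```python
# Rough illustration of an "11x energy" battery's effect on runtime,
# all else equal. Baseline capacity and draw are assumed figures.
baseline_wh = 5.0       # hypothetical phone battery capacity (Wh)
multiple = 11           # claimed metal-air improvement over Li-ion
avg_draw_w = 0.5        # hypothetical average power draw (W)

hours_now = baseline_wh / avg_draw_w             # runtime today
hours_new = baseline_wh * multiple / avg_draw_w  # runtime at 11x
print(f"{hours_now:.0f} h -> {hours_new:.0f} h")
```

Equivalently, the same runtime could be had from a battery roughly one-eleventh the size, which is the angle that matters for implants and tiny devices.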

Vercorin Gallery, Switzerland


(click image for larger version)

What you are seeing is not a trick of Photoshop, but instead a work of art by Felice Varini called "Cercle et suite d'éclats," in the town of Vercorin, Switzerland. Ultimately, the entire project is an optical illusion that, when viewed from a certain vantage point, converges with a quite surreal effect.
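The geometry behind the illusion is ordinary perspective projection: a mark painted on a surface farther from the chosen viewpoint must be proportionally larger and higher, so that every mark subtends the same angle from that one spot. A toy 1-D sketch of the idea, with all numbers purely illustrative (nothing here is taken from Varini's actual installation):

```python
# Toy model of an anamorphic painting: marks on surfaces at
# different depths are placed so they subtend the same angle
# from a single viewpoint at the origin.

def mark_height(apparent_slope, depth):
    """Height to paint a mark on a surface at `depth` so that,
    from a viewpoint at the origin, it appears at `apparent_slope`."""
    return apparent_slope * depth

slope = 0.2                      # intended apparent slope of the shape
depths = [5.0, 8.0, 12.0]        # three walls/roofs at different depths
heights = [round(mark_height(slope, d), 6) for d in depths]

# Seen from the viewpoint, every mark collapses back to the same
# slope, so the scattered marks read as one continuous shape.
recovered = [round(h / d, 6) for h, d in zip(heights, depths)]
print(heights)    # marks get taller with depth
print(recovered)  # identical apparent slope from the viewpoint
```

Step even slightly away from the viewpoint and the depths no longer cancel, which is why the circle shatters into fragments from everywhere else in town.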

As impressive an undertaking as this project was, what amazes me most is the support from so many locals. I cannot imagine readily agreeing to let an artist paint part of a circle across my roof - given only the assurance that it'll look really neat when all put together!

One can only dream of the future projects that could be developed if people were so willing to allow for artistic endeavors. I mean, think about it - a local artist could paint all of the roofs of a town to converge into an image when viewed from a helicopter. Or cut all the grass in the town in a specific pattern to produce a specific shape. What if we were to build our towns and cities not as a group of individuals, but instead as a representation of a culminating whole? Yet in this argument I find quite the irony: in such artistic endeavor, in such acts of creativity, we would need the conformity of many, many others.

Regardless, what I like most about this artistic undertaking is that it so well represents an act of camaraderie. Sure, a single artist designed it, but it was with widely held support that the final breathtaking effect was produced.

Digital Candy



Flickr has reprioritized the means of basic Internet usage. It isn't quite a networking site, nor is it a homepage into your soul. Flickr blurs the line between simply displaying your images and integrating them with your "contacts." What makes Flickr so unique is that it isn't a site for advertising, yet it is most definitely a way to have your voice, or maybe more specifically your vision, heard. Flickr is the YouTube of photography - it is open to everyone, everywhere. There are loads of professionals bombarding the website with truly magnificent nature shots, and 15 year olds alike taking pictures of their puppy with their iPhone. Flickr is an endless database of our culture's history - uploads in the thousands every minute! Just the other day I found myself browsing photos from the 1930's - free for my perusal!

Flickr, like YouTube, and even Facebook and Myspace for that matter, is not a means to make huge incomes, but instead a means for people to reach out to one another. A way to share. A way to integrate our everyday lives with our digital lives. Photo sharing has long seemed an integral part of the American lifestyle (think of just about every family reunion, Thanksgiving, Christmas, etc.)

My point is that people LOVE to share photos. And similarly, people LOVE to look at photos. Flickr gives us such an opportunity... wherever we may be (provided, of course, an internet connection is within reach!)

With the age of digital technology, the idea of printing photos has faded, and continues to fade. Back in the day (three years ago) I worked in a photography lab. In just my six months there I noticed quite a rise in online photo prints as compared to film - honestly, the only film I ever processed came from 35mm disposables, and the remaining few from those ever-so irritating APS formats.

Regardless, we did get quite a few digital orders, yet with the advent and continual improvement of digital photo frames, I imagine the day will come when 4x6's are more of a memory (or a commonly used OLED format!). When that day finally comes, Flickr, as it is slowly becoming now, will be a centralized database for photo sharing. I will stream my photos to my phone, to my television, to my computer, to my digital photo frame, to my friends, etc.

I imagine Flickr has quite a stable life ahead of it. As much as I love the quality of a print, digital RAW data files are in fact lossless - and uploads onto Flickr... well... are acceptable.

The day internet databases allow for lossless photo uploads will be the day storage constraints stop holding us back. We'll see whether Flickr jumps on that bandwagon, or fades into memory.

I suppose similar things could be said for music. Even though mp3s are popular and widely distributed, the older formats remain. They remain because, at the end of the day, the true sound, the true data, needs to remain intact.

Extreme Sheep LED Art



Granted, the act of herding a bunch of scared sheep around just because one can seems a little immoral, but I have to give credit to the creativity of these Welsh natives. With some very careful planning, time lapse photography, some battery powered LEDs, and a geeky outlook, a truly unique art form has been created.

I scoured the interwebs to see if I could find more acts of such brilliance. As of this writing... nothing. This single viral video is, in itself, the sole contribution to the art form.

This video, though arguably entertaining, raises a number of issues. The most significant, of course, is that of morality. Granted, none of the animals are physically hurt, but they are being frightened into formation. I imagine similar tactics used on humans would be... well... banned. Yet in a similar light, what if we were to use such ideas to create a human scale artistic expression?

Well, such expression would probably be accepted as a dance. Drill charts for a symphony of movement. And when you really consider it, is that not what this Extreme Sheep Art is? It's a beautiful synchronized dance... without... well... free will.

Thursday, October 22, 2009

Erosion - by Michael Aranda



Erosion is one of the most successful video production pieces I have stumbled across on the YouTubes. Michael Aranda, a self-proclaimed nerdfighter and host of the channel Arandavision, has combined his interests in video production, sound production, editing, and musical talent in a number of creative endeavors.

This is a wonderful example of creating a piece specifically for a target audience. At 2:31, the duration is quite appropriate for the YouTube community's attention span - typically anything over 3:00 gets significantly fewer views due to people not wanting to "waste 4:00 minutes of my life on such a(n) [insert slanderous remark] video." In contrast to most YouTube videos, the production value is quite high. It appears that Michael does not shoot for the typical YouTube video, but instead strives to provide his subscribers with a quality, designed experience through a convenient medium. In doing so, Michael is able to set himself apart, yet remain within the confines of the self-made conventions of the YouTube community.

Michael's ability to so beautifully integrate natural sounds into the soundtrack of a wonderfully shot exploration of the unknown has earned my greatest admiration.

Source

Marco Tempest: Augmented Reality Magician



Marco Tempest takes a wonderful creative leap in taking one of the oldest of magic tricks - the everyday card trick - and combining it with some very cutting edge technology: real time 3D graphics through augmented reality. Marco is known as a "virtual magician" or a "multimedia magician," for he continuously uses new technologies - from cell phone cameras, to stop motion, 3D graphics, and iPhone applications - to subvert our expectations in new and creative ways.

The story he tells is wonderful. The trick is very well planned and structured, yet it feels as though he is telling it to you around a campfire. We are wowed by the realtime graphics, yet he counterbalances this quite complex technology with a simple story and a meaningful message. Ultimately, though, it is from a magician's old box of tricks that he truly catches us when we least expect it.

I find it ironic, for much of Marco's magic uses the "magic" of digital graphics to give the trick depth and interest, yet it is still old-style sleight of hand with which he successfully performs his magic. Our immediate assumption would be that he, like most magicians who use camera trickery, editing, and graphics, is using it as a crutch to complete his task.

Marco realizes how to embrace new technologies, while simultaneously holding firmly to his past interests. He uses these technologies to capture his ever-evolving target audience's wants - and in doing so has found his niche in the entertainment community.

Source

A Twist to Closet Space

I found this to be a quite original idea. Closet space has always been of limited volume - and according to quite a large percentage of the population, much too limited for one's taste. The prototype above, designed by Irina Alexandru, attempts to improve our closet packing efficiency. Inspired by the packing efficiency of the two strands of DNA, Alexandru designed this... well, more coat hanger than closet... in the shape of a helix - thus maximizing the number of hangers that can be hung in a given area.

Granted, even though Alexandru is correct in stating that one can hang a lot more hangers, the idea of hanging clothes - which have varying lengths - greatly complicates the matter. If everything were short - say, hats or skirts - then this design would work wonderfully, but if we were to hang, say... a coat... efficiency would be minimal at best.
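To get a rough feel for the "more hangers in the same footprint" claim, here's a minimal back-of-the-envelope sketch in Python. All the dimensions are my own made-up assumptions (the article gives none): it just compares a straight closet rod against a helical rail of the same vertical footprint, using the standard helix arc length formula.

```python
import math

def hangers_on_rod(length_cm, spacing_cm=5):
    """Hangers that fit on a straight closet rod at a fixed spacing."""
    return int(length_cm // spacing_cm)

def hangers_on_helix(radius_cm, pitch_cm, height_cm, spacing_cm=5):
    """Hangers along a helical rail.

    Arc length of one turn of a helix with radius r and pitch p is
    sqrt((2*pi*r)^2 + p^2); multiply by the number of turns that fit
    in the available height.
    """
    turns = height_cm / pitch_cm
    arc_per_turn = math.sqrt((2 * math.pi * radius_cm) ** 2 + pitch_cm ** 2)
    return int(turns * arc_per_turn // spacing_cm)

# Illustrative (assumed) dimensions: a 100 cm straight rod vs. a helix
# of 30 cm radius and 25 cm pitch occupying 150 cm of vertical space.
print(hangers_on_rod(100))            # straight rod
print(hangers_on_helix(30, 25, 150))  # helical rail
```

With those made-up numbers the helical rail packs in roughly an order of magnitude more hangers than the straight rod - which matches the intuition, while saying nothing about whether a coat hung at the top would drape over everything below it.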

I cannot deny that this was a wonderfully original idea, yet the reality of its targeted use seems completely impractical. Hat hanger... sure. Clothes hanger... probably not.

This is yet another reminder that even with greatest design intent and creativity, one must never forget the ultimate purpose of the product - its use.

Source