Dr. Hariharan Ramamurthy, M.D. (please check www.indiabetes.net), Big Spring, TX 79720. ALL THINGS INTERESTING
Thursday, February 14, 2008
universal translator
I hope the NIIIT will start working on a good MT system, not the laughable Telugu-to-English MT they have now.
I saw this blog post, which is interesting:
http://blogoscoped.com/archive/2005-05-22-n83.html
Google Translator: The Universal Language
At the end of the 19th century, L. L. Zamenhof proposed Esperanto; it was intended as a global language to be spoken and understood by everyone. The inventor hoped that a common language could resolve the global problems that lead to conflict. Esperanto as a planned language might have had some success, but today, English is much more universal. Thirty countries have it as an official language, and in many other countries it is taught in school and understood fairly well. The internet will likely increase the adoption of English further.
Still, many people can’t speak English. The collected, shared knowledge that makes up the web is therefore only partly accessible to them. The reverse, of course, is true as well. When you surf the web, you will sometimes come across languages and characters you don’t understand – like Chinese, Arabic, Korean, French, German, Italian, Spanish, or Japanese. If you were able to read these languages fluently, those sites wouldn’t be a dead end for you. You would discover a wealth of knowledge and, more importantly, opinions. If you’re a US citizen, how many Arabic, German or French sources do you read to get a good understanding of how the world sees the US? How many blogs do you read in foreign languages? Probably not many, unless you’re fluent in those languages.
At the recent webcast of the Google Factory Tour, researcher Franz Och presented the current state of Google’s machine translation systems. He compared translations from the current Google translator with the latest results from the Google Research Lab. The improvement was highly impressive. A sentence in Arabic that the current translator renders as the nonsensical “Alpine white new presence tape registered for coffee confirms Laden” is translated by the Research Lab’s system as “The White House Confirmed the Existence of a New Bin Laden Tape.”
How do they do that? It’s certainly complex to program such a system, but the underlying principle is easy – so easy, in fact, that the researchers enabled the system to translate from Chinese to English without any of them being able to speak Chinese. To the translation system, every language is treated the same, and there is no manually created rule set of grammar, metaphors and such. Instead, the system learns from existing human translations. Google relies on a large corpus of texts which are available in multiple languages.
This is the Rosetta Stone approach to translation. Let’s take a simple example: if a book is titled “Thus Spoke Zarathustra” in English, and the German title is “Also sprach Zarathustra”, the system can begin to understand that “thus spoke” can be translated as “also sprach”. (This approach would even work for metaphors – surely, Google researchers will take the longest available phrase which has high statistical matches across different works.) All it needs is someone to feed the system the two books and to tell it that the two are translations from language A to language B, and the translator can create what Franz Och called a “language model.” I suspect it’s crucial that the body of text is immensely large, or else the system would stumble upon too many unlearned phrases while translating. Google used United Nations documents to train their machine, feeding it 200 billion words in all. This is brute-force AI, if you will – it works on statistical learning theory alone and has little real “understanding” of anything but patterns.
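The statistic this paragraph describes can be sketched in a few lines of Python. This is a toy illustration, not Google's actual system: it simply counts how often short phrases co-occur across aligned sentence pairs, which is the raw signal a phrase-based language model is built from.

```python
from collections import Counter
from itertools import product

def phrase_pair_counts(parallel_sentences, max_len=2):
    """Count how often each source phrase co-occurs with each target
    phrase across aligned sentence pairs (toy co-occurrence statistics)."""
    counts = Counter()
    for src, tgt in parallel_sentences:
        s, t = src.split(), tgt.split()
        src_phrases = [" ".join(s[i:i + n]) for n in range(1, max_len + 1)
                       for i in range(len(s) - n + 1)]
        tgt_phrases = [" ".join(t[i:i + n]) for n in range(1, max_len + 1)
                       for i in range(len(t) - n + 1)]
        for pair in product(src_phrases, tgt_phrases):
            counts[pair] += 1
    return counts

corpus = [
    ("thus spoke zarathustra", "also sprach zarathustra"),
    ("thus spoke the king", "also sprach der koenig"),
]
counts = phrase_pair_counts(corpus)
# "thus spoke" co-occurs with "also sprach" in both sentence pairs:
print(counts[("thus spoke", "also sprach")])  # → 2
```

With a corpus of billions of words instead of two titles, the consistently high co-occurrence counts are exactly the "high statistical matches" the paragraph mentions.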
One can expect Google to release their new translation system soon (possibly this year or next). The question is: what will they do with it, where will they integrate it, and what side effects would it have? If via Google we get our universal language, would that resolve many global problems by fostering cross-cultural understanding, as Zamenhof hoped? Here is a speculative list of translation applications Google might implement; the key is auto-translation.
The Google Translation Service
This one is the most obvious: Google will still allow you to translate any document from their search results at the click of a link. What might be less obvious is that they might enable you to search foreign languages in your native language. All translating would be done behind the scenes, so that when you search for “thus spoke”, you might also get results which only contain “also sprach.”
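A minimal sketch of such behind-the-scenes query translation. The phrase table here is hand-written for illustration; a real system would draw on the learned language model and a full search index rather than substring matching:

```python
# Hypothetical phrase table; in a real system this would come from
# the trained statistical model, not a hand-written dictionary.
phrase_table = {"thus spoke": "also sprach"}

documents = [
    "also sprach zarathustra ist ein buch",
    "thus spoke zarathustra is a book",
    "an unrelated document",
]

def cross_language_search(query, docs):
    """Search with both the original query and its translation, so
    foreign-language documents match a native-language query."""
    queries = {query, phrase_table.get(query, query)}
    return [d for d in docs if any(q in d for q in queries)]

# Matches the English document AND the German one:
print(cross_language_search("thus spoke", documents))
```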
The Google Browser
If Google ever releases their own browser, they could seamlessly integrate translations of foreign languages; the user would just have to define which languages she reads fluently. It would be the Google Auto-translator (and surely it would be attacked using arguments similar to those brought forth against Google’s auto-linking). And if it’s not a Google Browser, it would be a Google Toolbar feature.
Now imagine this: you specified you speak English only. What does the Google Browser do when it encounters a Japanese page? It will show you an English version of it. You wouldn’t even notice it’s Japanese, except for text contained within graphics or Flash, and a little icon Google might show that indicates Auto-translation has been triggered. After a while, you might even forget about the Auto-translation. To you, the web would just be all-English. Your surfing behavior could drastically change because you’re now reading many Japanese sources, as well as the ones in all other languages.
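The browser behavior just described could be sketched as follows. The script-range heuristic and the translate callback are illustrative assumptions, not a real Google API; the second return value stands in for the little indicator icon:

```python
def looks_japanese(text):
    # Heuristic: any Hiragana, Katakana, or common CJK codepoints.
    return any('\u3040' <= ch <= '\u30ff' or '\u4e00' <= ch <= '\u9fff'
               for ch in text)

def auto_translate(page_text, translate):
    """Return (text, was_translated): translate only when the page is
    not in the reader's language, and flag it so the browser can show
    a small Auto-translation indicator."""
    if looks_japanese(page_text):
        return translate(page_text), True
    return page_text, False

# Stub translator standing in for the real MT backend:
fake_translate = lambda t: "<English translation of the page>"
text, flagged = auto_translate("こんにちは、世界", fake_translate)
print(flagged)  # → True
```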
You are now enabled to get a better understanding of cultures outside your own country. Would there be any negative side-effects? Well, one for sure: people would have less incentive than before to learn foreign languages. And as soon as they encounter a foreign speaker in real life, they’d be just as lost as before.
The Google Instant Messenger
The GIM (Google’s Instant Messenger) could be a chat application – web-based, of course – which automatically translates from and to any language. You could then chat all around the world and get to know people where before you’d have run into the language barrier.
The Google Babelfish
This would be the most advanced implementation of the Google Translator. It would be a smart device you plug into your ear, with speech recognition and Auto-translation built in. You could then visit a foreign country and understand people who talk to you in languages you never learned. For them to understand you as well, either they would also have a Google Babelfish, or you would need a second gadget you speak into, which then translates what you said. While the needed text-to-speech and speech-to-text technologies are far from perfect at the moment, they are still realistic possibilities.
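The device described is essentially a three-stage pipeline: recognize speech, translate the text, synthesize speech in the listener's language. A sketch with stub stages (the lambdas are placeholders for real ASR, MT and TTS engines):

```python
def babelfish(audio, speech_to_text, translate, text_to_speech):
    """Chain the three stages: recognize, translate, synthesize."""
    text = speech_to_text(audio)        # ASR: audio → source-language text
    translated = translate(text)        # MT:  source text → target text
    return text_to_speech(translated)   # TTS: target text → audio

# Stub stages for illustration only:
stt = lambda audio: "wo ist der bahnhof"
mt = lambda text: {"wo ist der bahnhof": "where is the station"}[text]
tts = lambda text: f"<audio: {text}>"

print(babelfish(b"...", stt, mt, tts))  # → <audio: where is the station>
```

Note that errors compound across the stages, which is why the paragraph's caveat about imperfect speech technologies matters more here than in text-only translation.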
[Thanks to Alex Ksikes and Dominik Schmid for our brainstorming session.]
Thursday, July 19, 2007
Now You're Talking
Voice-recognition technology is no longer stuttering - and that means huge opportunities for established players and newcomers alike.
By Jeanette Borzo, Business 2.0 Magazine
March 15 2007: 12:17 PM EDT
(Business 2.0 Magazine) -- As man-vs.-machine classics go, it had the crucial elements: The brash young champion. The new-and-improved computing powerhouse. That the champ was 17-year-old Ben Cook, anointed by the Guinness Book of World Records as the world's fastest text messager, and the machine was not a supercomputer but a cell phone, didn't detract from the drama - at least not to the crowd gathered at an Orlando voice-recognition software conference last fall.
Which would be faster at converting an elaborate sentence into text: Cook's flying thumbs or the elegant algorithms of new speech software from Nuance Communications? The harrowing test phrase - "The razor-toothed piranhas of the genera Serrasalmus and Pygocentrus are the most ferocious freshwater fish in the world. In reality they seldom attack a human" - flashed on a screen. Cook thumbed furiously. A Nuance staffer calmly dictated the phrase into a cell phone. It was a blowout: Nuance's software converted the phrase flawlessly in 16 seconds. Cook trudged home in 48 and was left mumbling in a dazed tone, "I don't know how you do that."
Loud and clear: TuVox VP Martin sells her system by showing call centers the flaws in theirs.
They did it with Nuance's recently launched Mobile Dictation software, which will be available through carriers as early as the first half of this year. There's also a broader explanation: Voice recognition, long ridiculed as one of those perpetually just-around-the-corner technologies like the personal jet pack or the Dick Tracy wristwatch, has finally arrived.
Advances in processing power, new software algorithms, and even better microphones have enabled established players like Nuance and a raft of startups to design systems that work - often at near 100 percent accuracy rates. And they're creating explosive potential for growth in markets for everything from handheld dictation devices to mobile phones to auto parts to battlefield translators.
The overall market for voice-recognition technology topped $1 billion for the first time in 2006, a 100 percent increase in just two years. Within that broad market, there are numerous subsectors that are likewise surging: The market for server-based voice-recognition technology to power call centers and the like reached nearly $600 million in 2006 and is expected to double by 2009, according to Opus Research.
The market for speech technology embedded in devices such as phones and auto dashboards - worth about $125 million in 2006, according to research firm Datamonitor - is expected to quadruple to $500 million by 2010, powered by the rapid spread of voice-command features on phones and cars with increasing levels of "talking electronics," from music players to navigational systems. Ultimately, some experts say, voice-recognition systems are likely to be built into almost every gadget, appliance and machine that people use.
The surge in demand is already triggering investment from established voice players and newcomers alike. In 2006, Nuance (Charts) bought Dictaphone to enhance its presence in the health-care industry, even as Nuance's sales grew 20 percent to more than $300 million.
Microsoft's (Charts) new Vista operating system comes with voice technology that, after suffering embarrassing glitches, is now winning kudos from reviewers. Google (Charts) has said that it's studying technology to enable search-by-voice. Venture capitalists, meanwhile, are lining up to fund entrepreneurs with voice-recognition ideas all over Silicon Valley and beyond. "Speech technology," says Datamonitor analyst Daniel Hong, "is finally transitioning from a cool technology to a business solution."
The next generation
Voice-recognition technology dates to 1952, when Bell Labs researchers cobbled together a primitive system that could recognize numbers spoken over a telephone. Progress since has been halting, but with the advent of far more powerful computing components and years of plain old trial and error, systems today have finally reached the point where they can cope with innumerable accents, dialects and quirks of speech.
VoiceBox Technologies, a startup in Bellevue, Wash., in 2004 unveiled a prototype whose components had to be carried in a steamer trunk. Today roughly the same system fits on a device the size of a credit card and could be the brains of Toyota's voice-command dashboard systems (see correction below).
VoiceBox systems are now so sophisticated that they can analyze context to, say, figure out if the command "traffic" refers to road congestion, tunes from Steve Winwood's old band or a dope-smuggling film starring Michael Douglas.
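That kind of context analysis can be illustrated with a toy rule-based sketch. The keyword sets below are invented for illustration; VoiceBox's actual system is far more sophisticated than keyword matching:

```python
def interpret_traffic(context_words):
    """Toy disambiguation of the spoken command 'traffic': route to
    navigation, music, or movies based on surrounding words."""
    ctx = set(context_words)
    if ctx & {"road", "route", "highway", "commute"}:
        return "road congestion"
    if ctx & {"play", "song", "band", "album"}:
        return "the band Traffic"
    if ctx & {"movie", "film", "watch"}:
        return "the film Traffic"
    return "ask for clarification"

print(interpret_traffic(["play", "some"]))  # → the band Traffic
```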
Today's systems also have powerful capacity to essentially teach themselves. Tellme Networks, a startup in Mountain View, Calif., makes voice-recognition software used for corporate call centers and telecoms' 411 information systems. Tellme's platform captures some 10 billion utterances annually and constantly analyzes them, improving the system's precision literally every day. "Voice recognition is all about pattern recognition," says Tellme executive Jeff Kunins. "The more data you have, the better the recognition gets."
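Kunins's point, that more data yields better recognition, can be shown with a toy model that breaks ties between acoustically similar candidates by how often each word has actually been heard on the platform. This is a deliberate simplification of what a real language model does:

```python
from collections import Counter

class RecognizerModel:
    """Toy 'more data, better recognition': pick among acoustically
    similar words by observed frequency in past utterances."""
    def __init__(self):
        self.freq = Counter()

    def observe(self, utterance):
        # Each captured utterance updates the word statistics.
        self.freq.update(utterance.split())

    def pick(self, candidates):
        # Prefer the candidate users actually say most often.
        return max(candidates, key=lambda w: self.freq[w])

model = RecognizerModel()
for u in ["call billing", "call billing", "call sales"]:
    model.observe(u)

# "billing" has been observed; the misspelling never has:
print(model.pick(["billing", "billing"]))  # → billing
```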
And the more valuable voice recognition becomes as a customer tool. Call centers and customer service departments are notorious for the infuriating "Press or say 1" purgatories that older speech-recognition technologies created, but customer outrage isn't the only penalty: The average call-center call costs $5 if handled by an employee but 50 cents with a self-service, speech-enabled system, according to Datamonitor.
Online brokerage E-Trade Financial uses Tellme to field about 50,000 calls a day; half never go to an E-Trade employee. The company says Tellme's system is saving it at least $30 million annually.
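The E-Trade figures can be sanity-checked with simple arithmetic using the per-call costs quoted above; the result is consistent with the "at least $30 million" claim:

```python
# Rough check of the article's numbers: 50,000 calls/day, half
# self-served, $5 per agent-handled call vs. $0.50 automated.
calls_per_day = 50_000
automated = calls_per_day // 2
saving_per_call = 5.00 - 0.50
annual_saving = automated * saving_per_call * 365
print(f"${annual_saving:,.0f}")  # → $41,062,500
```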
Startup TuVox is also racking up customers in the call-center and corporate markets. Its VP for marketing, Azita Martin, has her team dial a call center and record the typically torturous, multistep efforts to, say, reach the billing department. Then they create an audio file that reveals what the interaction could sound like if Martin's target used TuVox's software for routing calls with advanced voice-recognition technology. She e-mails the two interchanges to the CEO of the company using the call center. The contrast has helped Martin sign up numerous clients during the past few months - one reason TuVox's annual revenue is growing at double-digit rates and its customer base has quadrupled in 12 months. Telecom New Zealand, one of its new customers, reports a tripling of call-center customer satisfaction since it installed a TuVox system.
While call centers and autos are expected to continue to be growth markets for voice recognition, the real bonanza will likely come in improved systems for all manner of mobile devices. Start with cell phones: Telecom companies think consumers will pay for a host of additional services such as dictating e-mail or searching for a restaurant if there's an easy-to-use voice interface on mobile phones. Indeed, Opus Research says telecoms expect to earn an additional $5 to $15 per month from every customer who opts for a voice-enabled phone.
Numerous startups are scrambling to provide that technology, including Promptu. Founded in 2000 by speech-technology veterans, the Menlo Park, Calif., startup has developed a package of voice-recognition features that will be offered through several carriers later this year. "The telecoms are calling us now," says Brady Bruce, a Promptu senior vice president. "I love that."
Creative uses
Other startups are developing voice features for everything from MP3 players to handheld GPS devices to laptops. Pluggd, founded last February by former Microsoft and Amazon engineer Alex Castro, has created a search engine that combines speech recognition with semantic analysis to, for instance, find the exact spot in a cooking podcast where soufflé techniques are discussed.
Vocera Communications, whose founders grew up on Star Trek reruns and named the conference rooms at their Silicon Valley headquarters after Capt. Kirk and other characters, got some attention two years ago when it unveiled a communicator badge inspired by the show that combines voice-recognition and wireless technologies. The device produced some snickers at the time but has found a growing following; among its customers are medical workers, who use it to search through a hospital directory by voice and find the right person to help with a patient problem or look up medical records.
Vocera expects to turn profitable early next year. VoxTec International's Phraselator, a handheld gadget about the size of a checkbook, listens to requests for a phrase and then spits out a translation in any of 41 specified languages; it's currently being used by U.S. troops in Iraq and Afghanistan to provide on-the-fly translations in Arabic, Pashto and other local tongues. The Annapolis, Md., company, whose technology was originally developed for the Department of Defense back in 1997, won't disclose specific figures but says sales are way up.
Many experts expect voice technology to become almost ubiquitous someday, as speech recognition supplants typing, tapping, texting and touching as the primary interface with our machines. Rob Chambers, head of Microsoft's voice-recognition efforts, even foresees a day when the technology becomes powerful enough to correct mistakes in word choice or grammar - a kind of spell check for voice.
That may be decades away, but the technological improvements are nonetheless coming fast and furious, as was driven home in Orlando last fall.
The Nuance software that dusted the champion texter is roughly 25 percent more accurate than the company's best versions from a year ago, and Nuance researchers say next-generation products due to hit the market in just one year could produce 20 percent fewer errors than today's best systems. "Ben Cook is pretty incredible in how quickly he can type on his phone," says Nuance VP for worldwide marketing Peter Mahoney. "But this technology is just going to continue to get better and better."
Jeanette Borzo is a writer in San Francisco.
Correction: An earlier version of this story incorrectly stated that the Voicebox technology will be the brains in Toyota's new in-car navigation systems.
________________
Voice-recognition technology is no longer stuttering - and that means huge opportunities for established players and newcomers alike.
By Jeanette Borzo, Business 2.0 Magazine
March 15 2007: 12:17 PM EDT
(Business 2.0 Magazine) -- As man-vs.-machine classics go, it had the crucial elements: The brash young champion. The new-and-improved computing powerhouse. That the champ was 17-year-old Ben Cook, anointed by the Guinness Book of World Records as the world's fastest text messager, and the machine was not a supercomputer but a cell phone, didn't detract from the drama - at least not to the crowd gathered at an Orlando voice-recognition software conference last fall.
Which would be faster at converting an elaborate sentence into text: Cook's flying thumbs or the elegant algorithms of new speech software from Nuance Communications? The harrowing test phrase - "The razor-toothed piranhas of the genera Serrasalmus and Pygocentrus are the most ferocious freshwater fish in the world. In reality they seldom attack a human" - flashed on a screen. Cook thumbed furiously. A Nuance staffer calmly dictated the phrase into a cell phone. It was a blowout: Nuance's software converted the phrase flawlessly in 16 seconds. Cook trudged home in 48 and was left mumbling in a dazed tone, "I don't know how you do that."
Loud and clear: TuVox VP Martin sells her system by showing call centers the flaws in theirs.
They did it with Nuance's recently launched Mobile Dictation software, which will be available through carriers as early as the first half of this year. There's also a broader explanation: Voice recognition, long ridiculed as one of those perpetually just-around-the-corner technologies like the personal jet pack or the Dick Tracy wristwatch, has finally arrived.
Advances in processing power, new software algorithms, and even better microphones have enabled established players like Nuance and a raft of startups to design systems that work - often at near 100 percent accuracy rates. And they're creating explosive potential for growth in markets for everything from handheld dictation devices to mobile phones to auto parts to battlefield translators.
Mobile's newest killer app: Voice
The overall market for voice-recognition technology topped $1 billion for the first time in 2006, a 100 percent increase in just two years. Within that broad market, there are numerous subsectors that are likewise surging: The market for server-based voice-recognition technology to power call centers and the like reached nearly $600 million in 2006 and is expected to double by 2009, according to Opus Research.
The market for speech technology embedded in devices such as phones and auto dashboards - worth about $125 million in 2006, according to research firm Datamonitor - is expected to quadruple to $500 million by 2010, powered by the rapid spread of voice-command features on phones and cars with increasing levels of "talking electronics," from music players to navigational systems. Ultimately, some experts say, voice-recognition systems are likely to be built into almost every gadget, appliance and machine that people use.
The surge in demand is already triggering investment from established voice players and newcomers alike. In 2006, Nuance (Charts) bought Dictaphone to enhance its presence in the health-care industry, even as Nuance's sales grew 20 percent to more than $300 million.
Microsoft's (Charts) new Vista operating system comes with voice technology that, after suffering embarrassing glitches, is now winning kudos from reviewers. Google (Charts) has said that it's studying technology to enable search-by-voice. Venture capitalists, meanwhile, are lining up to fund entrepreneurs with voice-recognition ideas all over Silicon Valley and beyond. "Speech technology," says Datamonitor analyst Daniel Hong, "is finally transitioning from a cool technology to a business solution."
The next generation
Voice-recognition technology dates to 1952, when Bell Labs researchers cobbled together a primitive system that could recognize numbers spoken over a telephone. Progress since has been halting, but with the advent of far more powerful computing components and years of plain old trial and error, systems today have finally reached the point where they can cope with innumerable accents, dialects and quirks of speech.
VoiceBox Technologies, a startup in Bellevue, Wash., in 2004 unveiled a prototype whose components had to be carried in a steamer trunk. Today roughly the same system fits on a device the size of a credit card and could be the brains of Toyota's voice-command dashboard systems (see correction below).
VoiceBox systems are now so sophisticated that they can analyze context to, say, figure out if the command "traffic" refers to road congestion, tunes from Steve Winwood's old band or a dope-smuggling film starring Michael Douglas.
Today's systems also have powerful capacity to essentially teach themselves. Tellme Networks, a startup in Mountain View, Calif., makes voice-recognition software used for corporate call centers and telecoms' 411 information systems. Tellme's platform captures some 10 billion utterances annually and constantly analyzes them, improving the system's precision literally every day. "Voice recognition is all about pattern recognition," says Tellme executive Jeff Kunins. "The more data you have, the better the recognition gets."
11 companies changing the world
And the more valuable voice recognition becomes as a customer tool. Call centers and customer service departments are notorious for the infuriating "Press or say 1" purgatories that older speech-recognition technologies created, but customer outrage isn't the only penalty: The average call-center call costs $5 if handled by an employee but 50 cents with a self-service, speech-enabled system, according to Data-monitor.
Online brokerage E-Trade Financial uses Tellme to field about 50,000 calls a day; half never go to an E-Trade employee. The company says Tellme's system is saving it at least $30 million annually.
Startup TuVox is also racking up customers in the call-center and corporate markets. Its VP for marketing, Azita Martin, has her team dial a call center and record the typically torturous, multistep efforts to, say, reach the billing department. Then they create an audio file that reveals what the interaction could sound like if Martin's target used TuVox's software for routing calls with advanced voice-recognition technology. She e-mails the two interchanges to the CEO of the company using the call center. The contrast has helped Martin sign up numerous clients during the past few months - one reason TuVox's annual revenue is growing at double-digit rates and its customer base has quadrupled in 12 months. Telecom New Zealand, one of its new customers, reports a tripling of call-center customer satisfaction since it installed a TuVox system.
While call centers and autos are expected to continue to be growth markets for voice recognition, the real bonanza will likely come in improved systems for all manner of mobile devices. Start with cell phones: Telecom companies think consumers will pay for a host of additional services such as dictating e-mail or searching for a restaurant if there's an easy-to-use voice interface on mobile phones. Indeed, Opus Research says telecoms expect to earn an additional $5 to $15 per month from every customer who opts for a voice-enabled phone.
Numerous startups are scrambling to provide that technology, including Promptu. Founded in 2000 by speech-technology veterans, the Menlo Park, Calif., startup has developed a package of voice-recognition features that will be offered through several carriers later this year. "The telecoms are calling us now," says Brady Bruce, a Promptu senior vice president. "I love that."
Creative uses
Other startups are developing voice features for everything from MP3 players to handheld GPS devices to laptops. Pluggd, founded last February by former Microsoft and Amazon engineer Alex Castro, has created a search engine that combines speech recognition with semantic analysis to, for instance, find the exact spot in a cooking podcast where soufflé techniques are discussed.
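Pluggd's approach can be sketched in miniature: once speech recognition has turned audio into a timestamped transcript, finding the "exact spot" where a topic comes up reduces to searching the transcript segments. The code below is a hypothetical illustration of that idea only; the function names and data layout are invented here and are not Pluggd's actual API, and real semantic analysis would match related terms, not just literal keywords.

```python
# Hypothetical sketch of the Pluggd-style idea: search a timestamped
# transcript (as a speech recognizer might produce) for a topic keyword
# and return where in the audio it occurs. Names are illustrative.

def find_topic(transcript, keyword):
    """Return (start_seconds, text) for segments mentioning the keyword."""
    keyword = keyword.lower()
    return [(start, text) for start, text in transcript
            if keyword in text.lower()]

# A toy transcript: (start time in seconds, recognized text)
podcast = [
    (0,   "welcome to the cooking show"),
    (95,  "today we talk about souffle techniques"),
    (240, "next week: knife skills"),
]

print(find_topic(podcast, "souffle"))
```

Running this returns the segment starting at 95 seconds, which is the "jump straight to the soufflé discussion" behavior the article describes.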
Vocera Communications, whose founders grew up on Star Trek reruns and named the conference rooms at their Silicon Valley headquarters after Capt. Kirk and other characters, got some attention two years ago when it unveiled a communicator badge inspired by the show that combines voice-recognition and wireless technologies. The device produced some snickers at the time but has found a growing following; among its customers are medical workers, who use it to search through a hospital directory by voice and find the right person to help with a patient problem or look up medical records.
Vocera expects to turn profitable early next year. VoxTec International's Phraselator, a handheld gadget about the size of a checkbook, listens to requests for a phrase and then spits out a translation in any of 41 specified languages; it's currently being used by U.S. troops in Iraq and Afghanistan to provide on-the-fly translations in Arabic, Pashto and other local tongues. The Annapolis, Md., company, whose technology was originally developed for the Department of Defense back in 1997, won't disclose specific figures but says sales are way up.
Many experts expect voice technology to become almost ubiquitous someday, as speech recognition supplants typing, tapping, texting and touching as the primary interface with our machines. Rob Chambers, head of Microsoft's voice-recognition efforts, even foresees a day when the technology becomes powerful enough to correct mistakes in word choice or grammar - a kind of spell check for voice.
That may be decades away, but the technological improvements are nonetheless coming fast and furious, as was driven home in Orlando last fall.
The Nuance software that dusted the champion texter is roughly 25 percent more accurate than the company's best versions from a year ago, and Nuance researchers say next-generation products due to hit the market in just one year could produce 20 percent fewer errors than today's best systems. "Ben Cook is pretty incredible in how quickly he can type on his phone," says Nuance VP for worldwide marketing Peter Mahoney. "But this technology is just going to continue to get better and better."
Jeanette Borzo is a writer in San Francisco.
Correction: An earlier version of this story incorrectly stated that the Voicebox technology will be the brains in Toyota's new in-car navigation systems.
________________
Wednesday, July 18, 2007
Random thought
Doctor Ramamurthy has a recurring problem with diabetic patients who are non-compliant with their medical regimen; he tries to spend as much time as possible educating each patient on the need for a routine to be followed every day without fail. Good diabetic education material in the Indian regional languages would improve compliance among patients in the rural areas of India.
So far there has been no encouragement from anyone regarding this aspect of diabetes education, which is sadly lacking.
All the necessary Internet technologies are available; what one needs is some manpower to translate the material from English into the regional languages and key it in.
Monday, July 16, 2007
stone age diet for DM2
There is some discussion as to why the rate of type 2 diabetes (DM2) is going up in the present-day world; one contention is that the cause is diet.
So someone has started a trial.
Tuesday, June 05, 2007
the wonder of technology
Sunday, May 27, 2007
dhanwantri
Dhanwantri is an organisation started with the narrow aim of uplifting the Brahmin community.
You may ask why I am a member of this community and why I am writing about this.
For a very long time my friends and I used to argue about the merit of achieving good ends through less-than-good means.
Maybe I am a supporter of this concept if some educational institutions come up on the 1,000 acres they have been able to collect through donations (actually, if everyone donated half, that would be about 500 acres).
It is for the good of everyone.
so go ahead and check it out
http://hariwarangal.blogspot.com/
warangal-website_oldposts
It has been more than a year since I visited the Warangal blog. I am planning a trip to India and will be going to Warangal; I hope to revive the blog, but I am thinking of merging everything into one blog.
The old Warangal posts can be accessed here:
http://warangal.blogspot.com/