New York, New York – Reported by Elite Traveler, the private jet lifestyle magazine

London's luxury fragrance house, Jo Malone, is setting up shop in New York's vibrant West Village with a new boutique on Bleecker Street. The label's latest outpost combines classic and contemporary design elements for a unique, intimate shopping experience. The complete range of Jo Malone fragrances—including new scents like Wild Bluebell and old favorites like Lime Basil & Mandarin—will be available, as well as must-have bath and body products and the company's signature candles.

Visit www.jomalone.com.
April 18, 1997 – Two-man roller hockey in the Vaults.
That's the latest iteration of the CTC-iChip, created by a team of researchers from Massachusetts General Hospital and Harvard Medical School led by Mehmet Toner. The "chip" is about two inches long, one inch wide, and paper thin. It's designed to capture what are known as circulating tumor cells (CTCs) to give doctors a way to diagnose and track cancer that is less invasive, cheaper, and more informative than a biopsy.

CTCs are shed into the bloodstream by tumors, and their isolation and analysis could lead to early detection of invasive cancers—which is important, because the earlier a patient is diagnosed, the better his or her chances of survival—and help doctors develop better and more personalized treatment regimens. The problem is that these cells are rare, typically just 1 to 10 CTCs per billion blood cells, and isolating them has proven difficult over the years. The new CTC-iChip combines multiple technologies, such as size separation (which takes advantage of the fact that CTCs are larger and stiffer than blood cells) and magnetic-tag separation (which involves tagging white blood cells with magnetic beads so they can be discarded using a magnetic field after the sample is run), to isolate individual CTCs.

Isolating individual CTCs this way allows scientists to perform single-cell genomic analysis. And that's important. Consider a cancer biopsy. You can look at the sample and see that the cells are different from one another, yet the way researchers further analyze the sample is by grinding up the tissue and examining a smear of the genetic signatures of all the different cells. This provides a rough average of the genetics of all the cells in the sample, but it masks critical differences. For example, the genetics of the metastatic cells are quite different from those of the cells that won't spread the disease; with conventional methods of analysis you can't see that, so you won't be able to understand what makes the cancer go from a dangerous to a deadly state. By employing the single-cell analysis that's facilitated by this microfluidic chip, physicians can develop a better understanding of the disease, which could lead to more effective personalized treatments.

Pretty cool. But it doesn't stop there. Another new and particularly interesting effort in the area of microfluidics is a play on the well-known system-on-a-chip (SoC) technology from the world of computers. It can be described as human-organs-on-a-chip, and it could eventually become an invaluable tool that leads to a more efficient drug-discovery process. The idea is not to make replacement organs for transplant, but to replicate enough of an organ's functions to make the chips useful in testing substances for toxic and therapeutic effects.

That has immediate applicability, because a major part of the preclinical phase of drug development involves assessing safety and biological activity in the laboratory—especially in animal studies. (It's difficult to come by reliable figures, but it's safe to say that billions of dollars a year are spent on animal tests.) The problem with these animal models—without even touching on the various potential ethical issues involved—is that, although they have historically been one of the most trusted tools in drug development, they are not actually all that predictive of the human situation. Not only do animal models fail to identify numerous drugs that are toxic to humans, they also derail drugs that would have been efficacious. Of course this makes sense.
Different animals evolved differently and have different biologies. Nevertheless, we continue to rely on expensive, time-consuming, and unreliable animal models in the drug-development process because they're the best we have. But thanks to advancements in microfluidic technologies, human organs on chips could be a better way.

The breakthrough in this area came in mid-2010, when researchers from the Wyss Institute for Biologically Inspired Engineering at Harvard announced they had successfully developed a lung-on-a-chip. The device, which is about the size of a rubber eraser and is made using human lung and blood-vessel cells, actually mimics a living, breathing human lung. It's essentially a porous membrane with human cells from the lung's air sac on one side and human capillary blood-vessel cells on the other. Air flows through the channel on the lung side, and a medium (like blood) carrying human blood cells flows through the channel on the capillary side. The whole thing stretches and relaxes like our lungs do when we breathe. And it does a good job replicating the natural responses of living lungs to various stimuli.

Just as the living lung-blood interface recognizes invaders such as inhaled bacteria or toxins and activates an immune response, so too does the lung-on-a-chip. The researchers tested this by introducing E. coli bacteria into the air channel on the lung side of the device while concurrently adding white blood cells to the channel on the blood-vessel side. The lung cells detected the bacteria and, through the porous membrane, activated the blood-vessel cells, which in turn triggered an immune response that ultimately caused the white blood cells to move to the air chamber and destroy the bacteria.

Lung-on-a-chip was just the beginning. The Wyss Institute also has kidney-on-a-chip, bone-marrow-on-a-chip, and gut-on-a-chip—a silicone polymer device about the size of a flash memory stick that mimics complex 3D features of the human intestine. All could prove to be valuable diagnostic tools in the development of safe and effective new therapeutics.

We're on the cusp of a revolution in life-science research. This revolution promises to bring with it better ways to detect cancer and other diseases, as well as a more efficient drug-discovery process. And it promises these benefits on the cheap—thanks in large part to what's known as microfluidics.

Let's back up for a moment… back to December 29, 1959. It was then that physicist Richard Feynman gave his now-famous lecture titled "There's Plenty of Room at the Bottom," during which he essentially anticipated what we now call nanotechnology. Feynman actually never mentioned the word "nanotechnology" in his talk—and it wasn't until the 1980s that nanotech researchers began regularly citing his lecture—but what he did do at that time was posit the amazing possibilities afforded by miniaturization, including "miniaturizing the computer." He foresaw that the clunky "computing machines" of his day would be infinitely more useful if they could be shrunk. At the time of Feynman's talk, although transistors were beginning to replace vacuum tubes, computers were still huge and grossly inefficient. The IBM Stretch computer of 1959 managed to fit a mere 150,000 transistors into its 33-foot length. Meanwhile, Feynman was talking about wires "that should be 10 or 100 atoms in diameter" and circuits that "should be a few thousand angstroms across." (One thousand angstroms is equal to 100 nanometers.)
By 2011, Intel was mass producing processors with 32-nanometer technology that contained 2.6 billion transistors, and Intel's Xeon server chip that's due to be released this year has 4.31 billion transistors. Consider, too, that one of today's smartphones has significantly more computing power than all of NASA circa 1969, when it sent Neil Armstrong and Buzz Aldrin to the moon. While it's true that we don't yet have the capabilities Feynman envisioned—of building "a billion tiny factories, models of each other, which are manufacturing simultaneously" from the bottom up, atom by atom—his miniaturization-of-computers idea was clearly spot on.

So what if we applied his approach to a different area of scientific study… say, biology? After all, much of biology today is similar to where electronics was yesterday—except instead of vacuum tubes and cable wires, you have arrays of test tubes and hoses. What if all that "plumbing" used to study biological systems could be shrunk—would we reap the same benefits as we did in electronics? Turns out the answer is yes. And that's where microfluidics comes in.

Microfluidics is the science of fluid dynamics on the micro scale (i.e., millionths of a meter). We'll spare you the details of the fluid mechanics at this scale—where things like laminar flow, diffusion, capillary effects, and surface tension dominate—and boil things down to one simple idea: microfluidics and its applications are all about conducting biological experiments and tests with really small plumbing. How small is the plumbing we're talking about? The channels through which the fluids travel in today's devices are roughly the width of a human hair, and sometimes smaller.

If you think you're not already acquainted with the world of microfluidics, think again. Two very recognizable examples of microfluidic technologies are the glucometer, used to measure blood-sugar levels, and the home pregnancy test. Basically, we're talking about precisely manipulating fluids—to do things like blood screening for diseases and single-cell genomic analysis—using a microscale device built with technologies that were first developed by the semiconductor industry and later extended to fluidics because of the benefits that accrue from shrinking things. For starters, miniaturization means lower costs, since researchers require much smaller volumes of samples and reagents to conduct experiments and run tests. There's also the potential for running multiple experiments in parallel and cutting down on the number of steps required to run them. But microfluidic technologies also make novel tasks possible, like giving us the ability to interact with individual cells. Let's look at another example to explain further.

Lung-on-a-chip (top) and gut-on-a-chip (bottom)

The Wyss team's ultimate goal is to build 10 different human-organs-on-chips and link them together on an automated instrument to mimic whole-body physiology. This could eventually lead to personalized chips that could predict a specific individual's drug response. The bottom line: in theory, since these microfluidic human-organs-on-chips use human cells and mimic both the mechanics and the biology of the organs they represent, they would be more predictive than animal models, so drug failure rates would be lower. Modeling with these chips would cut costs and reduce the time involved in the drug-discovery process. It's still too early to tell how successful this field of research will be… but the prospects are exciting.
Microfluidic technologies for applications like these are still at a relatively early stage, but the above examples demonstrate how microfluidics should play an increasingly important role in disease detection and could ultimately disrupt the drug-discovery process for the better. This kind of game-changing technology is what we at Casey Extraordinary Technology specialize in finding and investing in. From cutting-edge biotech drug companies and molecular-diagnostic innovators to the firms that created the 3D printing industry and those that are building the smart grid, the track record of our investment recommendations stands out among our competitors and speaks for itself, with an average gain of 66% per closed position during 2013 and 2014. To become part of this track record of success, simply sign up for a 90-day, risk-free trial of Casey Extraordinary Technology.
Statements submitted to MPs have provided further evidence of widespread dishonesty among healthcare professionals who carry out disability benefit assessments, but their inquiry has had to be abandoned because of the prime minister's decision to call a general election.

Despite its inquiry into the personal independence payment (PIP) assessment process having to be scrapped, the Commons work and pensions select committee has published written evidence it has received from PIP claimants and disability organisations.

The committee held an urgent evidence session about the assessment process in March, a hearing partly triggered by a Disability News Service (DNS) investigation, before seeking further written evidence.

DNS had provided the committee with substantial evidence of widespread dishonesty among PIP assessors in the reports they prepare for government decision-makers.

The DNS investigation revealed that assessors working for the outsourcing companies Capita and Atos – most of them nurses – had repeatedly lied, ignored written evidence and dishonestly reported the results of physical examinations.

DNS has now collected nearly 200 examples of cases in which PIP claimants have said that healthcare professionals working for Capita and Atos produced dishonest assessment reports.

DWP has consistently claimed that there is no dishonesty at all among its outsourced healthcare assessors.

Inclusion London, the pan-London disabled people's organisation, provided the most detailed written evidence of all the individuals and groups that contributed to the committee's inquiry.

It said in its evidence: "Again and again Disabled people are reporting that assessors have ignored written and verbal evidence and that reports do not reflect what occurred in the assessment."

Inclusion London quoted widely from evidence compiled by DNS, and concluded: "The extent to which false information is included in assessment reports cannot be attributed to one or two negligent assessors but indicates systemic failings with the current PIP assessment process."

It called for all assessments to be recorded, and for "a clear and accessible system for Disabled people to file complaints against assessors with an independent body and for complaint statistics to be made public".

It also called for a new PIP assessment, based on the social model of disability and created in co-production with disabled people, which focuses on "barriers and the impact of impairment on daily life rather than functionality".

Other written evidence submitted to the committee appears to confirm the conclusions of the DNS investigation.

Among those who responded to a survey by Disability Rights UK (DR UK) was a healthcare professional with a first-class degree in physiotherapy.

They said they had been "shocked by the level of errors, inaccuracies, omissions and, quite possibly, lies" in the assessment report compiled for their PIP claim, according to DR UK's evidence to the committee.

The respondent concluded that "the musculoskeletal assessment conducted was appalling and could not have provided sufficient information upon which a decision regarding my physical capabilities to carry out work for any period of time could be made.

"Lies were also told about the content of the musculoskeletal assessment – data was recorded for tests which were not conducted."

Another DR UK survey respondent described how PIP decisions were often overturned on appeal due to "assessors making inaccurate statements, assessors making false statements, assessors incorrectly interpreting things the claimant said or did".
In its evidence to the committee, the mental health charity Rethink said that respondents to its own survey on PIP "felt that there was a discrepancy between what was discussed at the assessment and the content of the subsequent written report.

"We received several examples of PIP applicants claiming that assessors had deliberately misinterpreted them and in… some cases included complete fabrications in their reports."

But the evidence compiled by the committee may now end up being discarded, because the decision by Theresa May to call a general election on 8 June means that parliament was dissolved this week, leading to some committee inquiries having to be abandoned.

Mark Lucas, a PIP claimant who has spoken out repeatedly about the "shockingly poor and dishonest" assessment system, and who has given evidence to an inquiry into PIP assessments set up by Stoke-on-Trent City Council, said the decision to call an election was "another setback at the end of many setbacks".

He said: "Clearly the health professionals have been dishonest and the government has gone to great lengths to ensure the PIP scam is kept quiet for as long as possible.

"Everyone knows what has gone on is wrong but only few have voiced their concerns.

"I am sure if we continue to have the same government the rights of persons with disabilities will be further abused."

A spokeswoman for the committee said the PIP investigation was "one of the inquiries that fell with the announcement of the election".

When the committee is re-formed in the new parliament – probably in September – it could choose to relaunch the inquiry, but will be under no obligation to do so; if it does, it could choose to "keep and use the evidence they have now", she said.
Google May Have Violated Wiretap Laws
By Ray Hennessey | September 27, 2013

In a victory for online privacy advocates but a blow to advertisers, a federal judge in California has ruled Google may have violated wiretapping laws by scanning and reviewing users' Gmail messages.

Google has long scanned Gmail messages in order to target advertising to its users. The company has argued the practice is perfectly within the confines of both federal and state eavesdropping laws because Gmail users give up their privacy as part of Gmail's Terms of Service contract.

U.S. District Judge Lucy Koh disagreed, saying those Terms of Service "did not explicitly notify Plaintiffs that Google would intercept users' emails for the purposes of creating user profiles or providing targeted advertising."

What's more, even if Gmail account holders consented to having their emails searched, the people with whom those users are communicating didn't. Google has claimed that users of, say, Microsoft's Outlook should know that Google will view their mail when it is sent to a Gmail account.

Related: Google Looking Beyond 'Cookies' to Track People Online

Koh was unconvinced, saying she "cannot conclude that any party — Gmail users or non-Gmail users — has consented to Google's reading of email for the purposes of creating user profiles or providing targeted advertising."

The ruling, part of a proposed class action against Google, is a big win for privacy advocates, who have complained that technology companies have too much access to personal information and are not overt enough in explaining how customer information and data are used.

The chorus for more protections has only gotten louder since it was revealed that companies like Google shared information with the U.S. government as the National Security Agency spied on Americans' emails, texts and phone calls.

Still, companies have long found that there is a potentially high value proposition for advertisers in targeting marketing toward users based on their interests. Google, for instance, has long tied advertising to users' search results. Gmail, it argues, is an extension of that.

But Google has found itself more and more in the crosshairs of the privacy-protection crowd. Earlier this month, the company found out that its capture of data over open Wi-Fi routers could also violate federal wiretapping laws. Google captured the data through cars sent around the country to record images for its Google Street View maps. It has said it did so to improve its location-services features, but broader content was captured by the cars.

Google is not alone. In theory, Judge Koh's ruling could affect other companies that mine free email for information to match with advertisers. Yahoo Mail, for instance, has a Terms of Service that allows for broader data capture.
Uber Teams Up With Spotify So Passengers Can Play 'Backseat DJs'
By Laura Entis | November 17, 2014

It's official: after rumblings on Friday that Uber was teaming up with Spotify, the ridesharing service officially announced the partnership this morning in a blog post.

"We've joined forces with Spotify, a world leader in streaming music, to enable you to remotely control the music that plays through your Uber's speakers," the company's senior product manager wrote. "Whether you're starting the night with your pre-party mix or unwinding with a chill playlist on your way home, the choice is now yours with Uber."

Related: Lyft Says Former COO Took Confidential Files With Him to Uber

To play "backseat DJ," you first need to connect your Spotify account with your Uber profile; when you request a ride, a music bar will appear at the bottom of the Uber app and you can select a playlist from your Spotify account while you wait for your car to show up (although this only works if you get a music-enabled car, a detail that's apparently stressing out Uber drivers with older vehicles who fear they'll be penalized for not offering the service). Get in the car, and voilà – your soundtrack will automatically start playing.

For both companies, the partnership makes sense. On Uber's part, it's a way to differentiate itself from archrival Lyft, while Spotify (currently caught up in a tiff of its own with… Taylor Swift) gets increased exposure and, presumably, new customers.

The feature is slated to launch this Friday in 10 major cities, including London, Los Angeles, Mexico City, New York, San Francisco and Sydney, with additional rollouts coming in the next few weeks.

Related: Uber, Lyft Find Ally in New York's Attorney General
How is Artificial Intelligence Improving Advertising in 2019
By Sebastien Filion | April 30, 2019

Artificial Intelligence (AI) is quickly becoming a mainstay in numerous industries across the world, enabling more efficient manufacturing, safer transportation, and faster problem-solving and data processing. While these practical applications are visible almost daily, AI is also being applied in ways we don't see at first glance, namely in the world of advertising.

How is AI Assisting Full-Scale Advertising Campaigns

As AI evolves and improves, it will provide numerous benefits for advertisers, publishers, and marketers, especially in programmatic advertising. In programmatic advertising today, Artificial Intelligence is used to develop profiles of online and in-app viewers, which are then used to tailor appropriate, relevant ads for each unique user. This allows advertisers to target large-scale audiences more effectively and publishers to offer ad experiences tailored to their editorial content, without having to blanket entire sets of websites in the hope that the right viewers will visit.

AI is also assisting full-scale campaigns, analyzing entire datasets against KPIs to determine what is working and what needs to be changed. While not particularly innovative in the wider industry context, AI's entry here provides valuable support for advertisers, publishers, and marketers by significantly reducing the capacity for human error, which is always a possibility when comparing and contrasting any amount of performance data.

Read More: Why Measurement is the Secret to Agency Success

Is AI actually solving real digital advertising problems?

First, by expanding on the development of audience profiles, Artificial Intelligence systems are being used to evaluate and identify the most eye-catching and relevant advertisement to use for any online individual. This process, often known as Dynamic Creative Optimization (DCO), begins with the system identifying a unique user and their specific profile. With this information, the system can then look through any number of creative items in a campaign and, based on the user's profile, select the creative that will work best for them.

For example, if a campaign is running hundreds of different images to drive traffic, it will have numerous color combinations. If the campaign begins to see a trend of stronger positive response to creative assets featuring a red call-to-action than to blue or green ones, the DCO system will select and display the pieces of creative that match that profile. The purpose is to present viewers with an advertisement similar in characteristics to ones they have connected with in the past.

Read More: Demystifying Predictive Analytics

AI in programmatic digital advertising

Moving forward, Artificial Intelligence is becoming an ever more valuable tool in programmatic digital advertising through the process of Supply Path Optimization (SPO). Through this process, marketers can maximize every advertising dollar.
SPO selects each possible avenue for delivering an advertisement to a given website or application, evaluates and compares the options offered by each ad publisher, and makes the choice that gives the buyer the best bang for their buck. If an advertiser wants to run an ad on a specific site, that ad should be optimized across different times, different prices, and different audiences. Instead of choosing randomly or working through each possibility manually, an AI system can run through the possibilities quickly and determine the best choice for each ad.

Looking ahead to what is next for Artificial Intelligence and its application in advertising, ad exchange platforms are at the forefront of development. On these exchanges, where multiple publishers put forth potential avenues for an advertiser to get in front of an audience, the best choice comes down to the best price-to-performance ratio of any given option.

Using an AI system, a company can build a pool of information based on previous sales for a certain target site or app, at a specific time, for any given audience; in this pool of information would be the most recent prices any given ad has sold for. With this information at hand, predicting the price an advertising space will sell for becomes a possibility, providing a valuable competitive edge for specific publishers. Further, this could create a more competitive marketplace in the future as prices are predicted earlier and more accurately, giving a window to price slightly above or below the trend. A minimal sketch of how these selection steps might look in code appears at the end of this article.

While these are only a few of the ways Artificial Intelligence is moving into and affecting the world of digital advertising, especially as programmatic advertising becomes a primary driver in the industry, one thing is quickly becoming clear: AI is only going to become more important to effective advertising.

Read More: Why Senior Business Leaders Should Care About CX Data
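To make the ideas above concrete, here is a minimal, hypothetical sketch of the two selection steps this article describes: a DCO-style choice of creative based on historical response for a user's segment, and an SPO-style choice of supply path based on recent clearing prices. Every data structure, number, and name below is invented for illustration; production systems would rely on far richer features and learned models rather than simple averages.

from statistics import mean

# Hypothetical historical data (illustrative only).
# Click-through rates observed per (creative colour, audience segment):
CREATIVE_CTR = {
    ("red", "sports_fans"): [0.031, 0.028, 0.035],
    ("blue", "sports_fans"): [0.019, 0.022, 0.020],
    ("green", "sports_fans"): [0.017, 0.015, 0.016],
}
# Recent clearing prices (CPM) seen on each supply path for the same site/hour/audience:
RECENT_PRICES = {"exchange_A": [2.10, 2.25, 2.05], "exchange_B": [1.80, 1.95, 1.90]}
# A crude stand-in for expected performance on each path (e.g., viewability):
PATH_VIEWABILITY = {"exchange_A": 0.72, "exchange_B": 0.55}

def pick_creative(segment):
    # DCO-style choice: show the variant with the best historical response for this profile.
    candidates = {colour: mean(ctrs)
                  for (colour, seg), ctrs in CREATIVE_CTR.items() if seg == segment}
    return max(candidates, key=candidates.get)

def pick_path():
    # SPO-style choice: predict the next clearing price from recent sales (here, a naive
    # average) and pick the path with the best expected performance per predicted dollar.
    scores = {path: PATH_VIEWABILITY[path] / mean(prices)
              for path, prices in RECENT_PRICES.items()}
    return max(scores, key=scores.get)

print(pick_creative("sports_fans"))  # "red": the colour with the strongest response
print(pick_path())                   # "exchange_A": best viewability per predicted CPM

In a real system, the simple averages would be replaced by trained models, the handful of dictionary keys by millions of user and inventory features, and the final selection by a real-time bidding decision made in milliseconds.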
Microsoft, Alibaba AI programs beat humans in a Stanford reading test
January 19, 2018 | The Mercury News (San Jose, Calif.), distributed by Tribune Content Agency

First, they beat us at chess. Then it was Go. Now it's basic reading comprehension. The robots are coming.

Two artificial intelligence programs created by Chinese e-commerce giant Alibaba and by Microsoft beat humans on a Stanford University reading comprehension test, Alibaba said recently. Alibaba took the honors as creator of the first program ever to beat a human on the test, scoring 82.44 out of a possible 100 and narrowly edging past the human benchmark of 82.304. A different program built by Microsoft scored higher than Alibaba's, at 82.605, on the same test, but that score was finalized a day later, according to Bloomberg.

The test, known as the Stanford Question Answering Dataset, or SQuAD for short, asks contestants—human and machine—to provide exact answers to more than 100,000 questions drawn from more than 500 Wikipedia articles. It is designed to see whether artificial intelligence can process large amounts of information and then supply precise answers to questions about it.

The Wikipedia articles from which questions were drawn covered a wide range of topics, from Super Bowl 50 ("Where did Super Bowl 50 take place?" Answer: Santa Clara.) to Doctor Who ("What planet is Doctor Who from?" Answer: Gallifrey.).

"These kinds of tests are certainly useful benchmarks for how far along the AI journey we may be," Microsoft spokesperson Andrew Pickup told CNN. "However, the real benefit of AI is when it is used in harmony with humans."

Major technology companies in the United States and China have invested billions of dollars in artificial intelligence to gain a foothold in what may be the next technological frontier. The Chinese government has outlined a plan to create a $150 billion AI industry by 2030 in partnership with private companies such as Alibaba and Tencent.

Microsoft in December announced its "AI for Earth" project to help the planet become more environmentally sustainable using the company's in-house AI infrastructure. Microsoft will invest $50 million over the next five years, according to Microsoft president Brad Smith.

"At Microsoft, we believe artificial intelligence is a game changer," said Smith. "As we enter the world's Fourth Industrial Revolution, a technology-fueled transformation, we must not only move technology forward, but also use this era's technology to clean up the past and create a better future."

With comprehension skills now arguably better than a human's, Alibaba's chief scientist said the breakthrough will be applied to helping human customers. "The technology underneath can be gradually applied to numerous applications such as customer service, museum tutorials and online responses to medical inquiries from patients, decreasing the need for human input in an unprecedented way," Luo Si, chief scientist for natural language processing at Alibaba's Institute of Data Science and Technologies, told Bloomberg.
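For readers curious what this kind of extractive question answering looks like in code, here is a small example using the open-source Hugging Face Transformers library and a publicly available model fine-tuned on SQuAD. This is not the Alibaba or Microsoft system described above; it simply illustrates the task format: a question plus a passage go in, and an exact answer span comes out.

from transformers import pipeline  # pip install transformers

# A small, publicly available model fine-tuned on SQuAD (not the systems in the article).
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

context = ("Super Bowl 50 was an American football game played on February 7, 2016, "
           "at Levi's Stadium in the San Francisco Bay Area at Santa Clara, California.")
result = qa(question="Where did Super Bowl 50 take place?", context=context)

# The model returns the answer span it extracted from the passage plus a confidence score.
print(result["answer"], result["score"])  # e.g. an answer such as "Santa Clara, California"

SQuAD scores the predicted spans against human-written answers, which is how the 82.44, 82.605, and 82.304 figures quoted above are produced.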
Robots reading feelings
April 5, 2019 | Provided by Case Western Reserve University

"Woody," a low-cost social robot, reads the emotions on a human's face based on algorithms being developed by researcher Kiju Lee and her team at Case Western Reserve University. (Image credit: Case Western Reserve University)

Robots are getting smarter—and faster—at knowing what humans are feeling and thinking just by "looking" into their faces, a development that might one day allow more emotionally perceptive machines to detect changes in a person's health or mental state.

Researchers at Case Western Reserve University say they're improving the artificial intelligence (AI) that now powers interactive video games and that will soon enhance the next generation of personalized robots likely to coexist alongside humans. And the Case Western Reserve robots are doing it in real time.

New machines developed by Kiju Lee, the Nord Distinguished Assistant Professor in mechanical and aerospace engineering at the Case School of Engineering, and graduate student Xiao Liu are correctly identifying human emotions from facial expressions 98 percent of the time—almost instantly. Previous results from other researchers had achieved similar accuracy, but the robots often responded too slowly.

"Even a three-second pause can be awkward," Lee said. "It's hard enough for humans—and even harder for robots—to figure out what someone feels based solely on their facial expressions or body language. All of the layers and layers of technology—including video capture—to do this also unfortunately slow down the response."

Lee and Liu accelerated the response time by combining two pre-processing video filters with another pair of existing programs to help the robot classify emotions based on more than 3,500 variations in human facial expression.

But that's hardly the extent of our facial variation: humans can register more than 10,000 expressions, and each person also has a unique way of revealing many of those emotions, Lee said. "Deep-learning" computers, however, can process vast amounts of information once those data are entered into the software and classified. And, thankfully, the most common expressive features among humans are easily divided into seven emotions: neutral, happiness, anger, sadness, disgust, surprise and fear—even accounting for variations among different backgrounds and cultures.

Applications now and future

This recent work by Lee and Liu, unveiled at the 2018 IEEE Games, Entertainment, and Media Conference, could lead to a host of applications when combined with advances by dozens of other researchers in the AI field, Lee said. The two are also now working on another machine-learning-based approach to facial emotion recognition, which so far has achieved over 99 percent accuracy with even higher computational efficiency.

Someday, a personal robot may be able to accurately notice significant changes in a person through daily interaction—even to the point of detecting early signs of depression, for example.

"The robot could be programmed to catch it early and help with simple interventions, like music and video, for people in need of social therapies," Lee said.
"This could be very helpful for older adults who might be suffering from depression or personality changes associated with aging."

Lee is planning to explore the potential use of social robots for social and emotional intervention in older adults through a collaboration with Ohio Living Breckenridge Village. Senior residents there are expected to interact with a user-friendly, socially interactive robot and help test the accuracy and reliability of the embedded algorithms.

Another future possibility: a social robot that learns the more subtle facial changes in someone on the autism spectrum—and that helps "teach" humans to accurately recognize emotions in each other.

"These social robots will take some time to catch on in the U.S.," Lee said. "But in places like Japan, where there is a strong culture around robots, this is already beginning to happen. In any case, our future will be side by side with emotionally intelligent robots."
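As a rough illustration of the kind of pipeline described above, the sketch below defines a tiny, untrained convolutional network that maps a cropped grayscale face image to scores over the seven emotion categories named in the article. It is not the Case Western Reserve system or its algorithms; the architecture, input size, and all numbers are assumptions chosen only to show the overall shape of such a classifier.

import torch
import torch.nn as nn

# The seven emotion categories mentioned in the article.
EMOTIONS = ["neutral", "happiness", "anger", "sadness", "disgust", "surprise", "fear"]

class EmotionCNN(nn.Module):
    """Toy CNN: a 48x48 grayscale face crop in, seven emotion scores out (untrained)."""
    def __init__(self, num_classes=len(EMOTIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 48 -> 24
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 24 -> 12
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 12 -> 6
        )
        self.classifier = nn.Linear(64 * 6 * 6, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = EmotionCNN().eval()
face = torch.rand(1, 1, 48, 48)  # stand-in for a detected and cropped face image
with torch.no_grad():
    probs = torch.softmax(model(face), dim=1)[0]
print({emotion: round(float(p), 3) for emotion, p in zip(EMOTIONS, probs)})

A real system would train such a network on labeled facial-expression data, add the video pre-processing filters the researchers describe, and run face detection first so that only the cropped face reaches the classifier; the speed problem Lee mentions comes from stacking all of those stages in real time.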
Indian Overseas Congress, Overseas Friends of the BJP to hold car rallies on Saturday
London | Published on March 13, 2019

Ahead of the Lok Sabha elections, the overseas wings of the main political parties have swung into action to garner support from the sizable Indian diaspora in the UK. Both the Indian Overseas Congress and the Overseas Friends of the BJP are set to hold car rallies across parts of the UK this coming Saturday, among other events.

The Overseas Friends of the BJP has already kicked off its events, including a motorcycle rally earlier this week. Car rallies will take place across the UK, including in Birmingham, Glasgow and central London, on Saturday, with a convoy of over 100 cars, while a "flash mob" event is planned for central London in early April. Later that month, they are planning a "call for Modi" drive, to encourage members of the diaspora to make calls to friends and family in India and encourage them to support the Prime Minister. "We have 2,700 people who have registered to say they may want to travel to India to campaign for the Prime Minister," said the organisation's UK head, Kuldeep Shekhawat, of a recently launched programme. He said their awareness-raising campaign would focus on development, corruption-free administration and "image building of the country."

The IOC is set to hold a road show — involving over 100 cars — that will pass through some of the major Indian diaspora areas around London, Leicester, Birmingham, Coventry and Slough, said Sudhakar Rangula, an IOC UK spokesperson. "We are going to encourage people to come out and support us and expect a prominent leader from within the party to join us," he said. Further events are set to take place within the various UK branches of the organisation dotted across the country, while the student wing is set to hold events of its own. "Our message will be around the failures of the government and all the promises that were made but not delivered on… not least the threat to our democracy posed by them. We want to motivate all of the diaspora irrespective of their political inclination to support us in this."

Totalling around 1.4 million — including British-born Indians and Indian citizens — the British Indian diaspora is the largest in Europe and among the wealthiest and most influential globally, with many members occupying senior positions across sectors from politics to business, making it an important focus area in the campaign.

Both party leaders have visited the UK in the past year. During a visit to London for the Commonwealth Heads of Government Summit and bilateral meetings, Modi attended a large diaspora event in central London, while Rahul Gandhi visited in August, speaking at a diaspora event in West London.
WB: ASI allegedly thrashes shopkeeper for filing complaint
Asian News International | Duttapukur | July 13, 2019 (updated 19:14 IST)

Raju Dey (in pic) said he was called to the police station, where he was brutally beaten by ASI Anich Ali Khan. (Photo: ANI)

A shopkeeper was allegedly brutally beaten up by an assistant sub-inspector (ASI) at the Bidhannagar South Police Station here for filing a complaint against a person who had threatened to kill him. The complainant, Raju Dey, a resident of Duttapukur, claimed he was threatened by a local man, Arjun Mondal.

According to Dey's family, Mondal parked his bike in front of Dey's shop on Friday, passed lewd comments and threatened his life. Following this, Dey filed a complaint with the Bidhannagar South Police Station. Later, Dey said, he was called to the police station, where he was brutally beaten by ASI Anich Ali Khan, who questioned him about his complaint.

On Saturday, Dey lodged a complaint against the ASI, demanding immediate action against him. He was shifted to a government hospital, where he is presently undergoing treatment after allegedly being beaten up by the policeman.

TMC councillor Nirmal Dutta condemned the incident and told ANI that Bidhannagar police have assured strict action against the ASI if he is found guilty.

Also Read | Man arrested for posing as police officer in Delhi's Karol Bagh
Also Read | 5 cops suspended after old video of them thrashing woman with belt goes viral

Posted by Chanchal Chauhan