Tuesday 23 February 2016

Can optogenetics restore sight to blind people?


Optogenetics is the new hotness in neuroscience research: it affords the ability to control neurons by shining light on them. We’ve successfully used it in vivo to record neural activity patterns with millisecond-scale precision, and to create a “wireless router for the brain.” The 2014 Nobel Prize in Physiology or Medicine went to researchers who used optogenetics to map the function of newly discovered types of brain cells.
Now a team of researchers supported by RetroSense Therapeutics, a startup from Ann Arbor, is diving straight into the therapeutic uses of this emerging technology by trying to cure one type of blindness. They’re using a clever application of optogenetics to take on retinitis pigmentosa: an incurable genetic disease that causes inexorable blindness as it destroys rods and cones in the eye.

The team’s strategy is simple, as much as anything is simple in bleeding-edge medical research. At a clinic in Texas, scientists will inject a non-pathogenic virus into the eyes of a group of subjects, hoping it will infect nerve cells called ganglion cells, which transmit signals from the retina to the brain. The virus has been engineered to carry the gene for channelrhodopsin, a light-sensing protein from algae that responds to a narrow band of wavelengths. The idea is that ganglion cells expressing channelrhodopsin will become sensitive to light themselves, giving back some vision to those afflicted by the progressive disease.
Usually you have to implant fiber-optic wires in the brain to do anything optogenetic, because you need light to turn on optogenetically enhanced nerves, but light doesn’t penetrate well through the skull. (This is not an accident.) But because the eye is naturally exposed to light, it’s the perfect venue for a trial like this one, which seeks to switch the photoreceptive burden from the compromised rods and cones to ganglion cells deeper in the retina.

Since there’s image processing at every cellular layer in the eye, and the ganglion cells are deeper than the rods and therefore receive fewer photons, it isn’t clear exactly what visual granularity can be achieved here. If the experiment succeeds, the researchers expect that the experimental cohort will gain monochromatic vision at very low resolution. RetroSense CEO Sean Ainsworth told the MIT Tech Review he hopes the treatment will allow patients to “see tables and chairs” or even read large letters. In experiments at the Institut de la Vision in Paris, blind mice treated with optogenetics would move their heads to follow an image and, when kept in a dark box, would move to avoid bright light.
Grainy, low-resolution monochromatic vision might not sound like much compared with what humans normally perceive, but these efforts are important steps on the road to long-term vision restoration. Rough shapes and grayscale projection are a far better alternative to total blindness.

Adobe brings Raw photo workflow to Android with Lightroom 2.0


One of the remaining speed bumps in the way of broader adoption of smartphones as the camera of choice among photo enthusiasts has been a lack of end-to-end support for shooting in Raw. Basic support for capturing Raw (DNG format) images has been rolling out in Android — for some phones running Android 5.0 or later — but processing and storing them has remained awkward. Google’s Snapseed added support for developing Raw images last year, but many have been waiting for a solution that would integrate with their Adobe-centric workflow.
Today, Adobe updated its Lightroom for Android application to version 2.0, and it now includes full support for Raw images through its in-app Camera — assuming your phone can shoot Raw to begin with. It further ups the ante with the ability to preview various presets in real time while you are shooting.
You can see how much difference Raw post-processing can make in the featured split image for this article. The left side is a portion of the JPEG as shot with a Nexus 5, and the right side is the Raw version of the same image post-processed using Adobe’s Lightroom. Pros and others who are serious about image quality have long relied on shooting Raw for the widest possible post-processing latitude; now Adobe fully supports doing the same from a smartphone. Just double-check that your phone can capture DNG images before getting too excited, as Adobe’s support only works on models that already have that capability.

Adobe’s in-app Camera feature is pretty cool

Lightroom 2.0 for Android features a well-designed in-app camera, making it trivial to capture images and have them available for editing and uploading. In addition to the usual set of camera controls, the in-app camera features five “shoot-through” presets that you can preview on your phone’s screen while you are composing and shooting. These special presets are non-destructive, in keeping with Lightroom’s editing mantra, so you can change or remove their effects later. You can further process your DNGs using Lightroom (or Snapseed) on your mobile device, but Lightroom also now syncs the DNG file to your desktop Lightroom, so it will automatically be available for you to work on from your main computer.

Dehaze and split toning also added to Lightroom for Android


If you can live with the limits of a phone-sized screen, Lightroom for Android now lets you do local adjustments.

In addition to support for Raw files, Adobe has continued to port some of its most popular image-processing features from its desktop software into Lightroom for Android. Version 2.0 adds support for Adobe’s popular Dehaze filter and for Split Toning. You can now also set specific points when you apply a Tone Curve, as well as set curves separately for each color channel. For those who want ultimate control, Adobe has also added targeted adjustments, so you can control which portions of your image are affected by the adjustments you add in Lightroom.
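If you’re curious what a per-channel tone curve actually does, here is a minimal Python sketch. It is not Adobe’s implementation (Lightroom fits smooth splines, while this uses straight-line interpolation between control points), but the principle is the same: each control point maps an input brightness to an output brightness, and applying different curves to the red, green, and blue channels shifts the color balance.

    import numpy as np

    def apply_tone_curve(channel, points):
        # points: (input, output) pairs; e.g. (128, 150) lifts the midtones.
        xs, ys = zip(*points)
        lut = np.interp(np.arange(256), xs, ys).astype(np.uint8)
        return lut[channel]   # look up every pixel through the curve

    # Hypothetical example: warm an image by lifting the red midtones and
    # pulling the blue midtones down. `img` is an H x W x 3 uint8 array.
    img = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)
    img[..., 0] = apply_tone_curve(img[..., 0], [(0, 0), (128, 150), (255, 255)])
    img[..., 2] = apply_tone_curve(img[..., 2], [(0, 0), (128, 110), (255, 255)])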

Adobe has also upgraded Lightroom’s sharing capabilities, and is working to build a community feel with the #lightroom hashtag. The new version of Lightroom also integrates directly with Adobe’s clever Clip mobile video editor, so you can very quickly and easily create professional-looking photo stories from your images. Lightroom 2.0 for Android is free, and available now for download from the Google Play store.

A simple wireless technology promises to make driving much safer


Hariharan Krishnan hardly looks like a street racer. With thin-rimmed glasses and a neat mustache, he reminds me of a math teacher. And yet on a sunny day last September, he was speeding, seemingly recklessly, around the parking lot at General Motors’ research center in Warren, Michigan, in a Cadillac DTS.

I was in the passenger seat as Krishnan wheeled around a corner and hit the gas. A moment later a light flashed on the dashboard, there was a beeping sound, and our seats started buzzing furiously. Krishnan slammed on the brakes, and we lurched to a stop just as another car whizzed past from the left, its approach having been obscured by a large hedge. “You can see I was completely blinded,” he said calmly.

The technology that warned of the impending collision will start appearing in cars in just a couple of years. Called car-to-car or vehicle-to-vehicle communication, it lets cars broadcast their position, speed, steering-wheel position, brake status, and other data to other vehicles within a few hundred meters. The other cars can use such information to build a detailed picture of what’s unfolding around them, revealing trouble that even the most careful and alert driver, or the best sensor system, would miss or fail to anticipate.

Already many cars have instruments that use radar or ultrasound to detect obstacles or vehicles. But the range of these sensors is limited to a few car lengths, and they cannot see past the nearest obstruction.

Car-to-car communication should also have a bigger impact than the advanced vehicle automation technologies that have been more widely heralded. Though self-driving cars could eventually improve safety, they remain imperfect and unproven, with sensors and software too easily bamboozled by poor weather, unexpected obstacles or circumstances, or complex city driving. Simply networking cars together wirelessly is likely to have a far bigger and more immediate effect on road safety.

Creating a car-to-car network is still a complex challenge. The computers aboard each car process the various readings being broadcast by other vehicles 10 times every second, each time calculating the chance of an impending collision. Transmitters use a dedicated portion of wireless spectrum as well as a new wireless standard, 802.11p, to authenticate each message.
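GM has not published its algorithms, but the heart of that computation is easy to sketch. In the hypothetical Python below (all names and thresholds are invented for illustration), a car takes the position and velocity another vehicle just broadcast, projects both trajectories forward, and raises an alert if the closest approach is both soon and close:

    import math

    def time_to_closest_approach(p1, v1, p2, v2):
        # Positions in metres, velocities in m/s; runs on each of the
        # ~10 broadcast updates received per second.
        rx, ry = p2[0] - p1[0], p2[1] - p1[1]   # relative position
        vx, vy = v2[0] - v1[0], v2[1] - v1[1]   # relative velocity
        vv = vx * vx + vy * vy
        t = 0.0 if vv == 0 else max(0.0, -(rx * vx + ry * vy) / vv)
        dx, dy = rx + vx * t, ry + vy * t
        return t, math.hypot(dx, dy)            # (seconds, metres)

    # A car 20 m ahead and 30 m to our right, closing from the side at 15 m/s:
    t, d = time_to_closest_approach((0, 0), (10, 0), (20, -30), (0, 15))
    if t < 3.0 and d < 2.0:   # hypothetical alert thresholds
        print("Collision warning: flash light, buzz seats")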

Krishnan took me through several other car-to-car safety scenarios in the company’s parking lot. When he started slowly pulling into a parking spot occupied by another car, a simple alert sounded. When he attempted a risky overtaking maneuver, a warning light flashed and a voice announced: “Oncoming vehicle!”

More than five million crashes occur on U.S. roads alone every year, and more than 30,000 of those are fatal. The prospect of preventing many such accidents will provide significant impetus for networking technology.

Just an hour’s drive west of Warren, the town of Ann Arbor, Michigan, has done much to show how valuable car-to-car communication could be. There, between 2012 and 2014, the National Highway Traffic Safety Administration and the University of Michigan equipped nearly 3,000 cars with experimental transmitters. After studying communication records for those vehicles, NHTSA researchers concluded that the technology could prevent more than half a million accidents and more than a thousand fatalities in the United States every year. The technology stands to revolutionize the way we drive, says John Maddox, a program director at the University of Michigan’s Transportation Research Institute.

Shortly after the Ann Arbor trial ended, the U.S. Department of Transportation announced that it would start drafting rules that could eventually mandate the use of car-to-car communication in new cars. The technology is also being tested in Europe and Japan.

There will, of course, also be a few obstacles to navigate. GM has committed to using car-to-car communication in a 2017-model Cadillac. Those first Cadillacs will have few cars to talk to, and that will limit the value of the technology. It could still be more than a decade before vehicles that talk to each other are commonplace.



Bitcoin could help cut power bills

The technology behind the Bitcoin virtual currency could help cut electricity bills, suggests research.
A blockchain-based smart plug that can adjust power consumption minute-by-minute has been created by technologists at Accenture.
The blockchain is the automated ledger that underpins Bitcoin and tracks where the coins are spent and swapped.
The plug shops for different power suppliers and will sign up for a cheaper tariff if it finds one.
Accenture said the smart plug could help people on low incomes who pay directly for power.

Smart contract

The smart plug modifies the basic Bitcoin blockchain technology to make it more active, said Emmanuel Viale, head of the Accenture team at the firm's French research lab that worked on the plug.
Instead of just resolving and confirming transaction records, the Accenture work has changed the blockchain to let it negotiate deals on behalf of its owner.

"It's about how we put more business behaviour or logic into the blockchain," said Mr Viale, adding that this essentially embeds a "smart contract" into the digital ledger.
The smart plug prototype works with other gadgets in the house that monitor power use. When demand is high or low it searches for energy prices and then uses the modified blockchain to switch suppliers if it finds a cheaper source.
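Accenture has not released the prototype's code, but the "business behaviour" Mr Viale describes can be sketched in a few lines of Python. The supplier names and prices below are invented; in the real prototype, the switch would be recorded on the modified blockchain as a smart contract rather than printed to the screen:

    TARIFFS = {"SupplierA": 0.142, "SupplierB": 0.128, "SupplierC": 0.135}  # £/kWh

    def choose_supplier(current, tariffs, min_saving=0.005):
        # Switch only if another supplier beats the current price by a
        # margin, to avoid flapping between near-identical tariffs.
        best = min(tariffs, key=tariffs.get)
        if tariffs[best] + min_saving < tariffs[current]:
            return best
        return current

    print(choose_supplier("SupplierA", TARIFFS))   # -> SupplierB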
So far, said Mr Viale, the Accenture system was just a proof of concept, but it could help many people on lower incomes who pay for their power via a meter.
Being able to quickly shift suppliers could save this group more than £660m in the UK annually, suggests Accenture research.
A blockchain-based system that can act on behalf of its owner might also prove useful as the Internet of Things becomes more ubiquitous, said Mr Viale.

Managing many different gadgets might be tricky without a more centralised system, he said.
Martin Garner, a mobile services expert at analyst firm CCS Insight, said blockchains were starting to crop up in many different areas including share trading, fishing rights databases and land registry claims.
They had two chief attractions for the Internet of Things, he said.
"They avoid dependence on any one supplier or ecosystem - some users have concerns about the potential dominance of key internet players creating, for example, the Google-of-Things or the Amazon-of-Things," he said.
"The second attraction is as a way of enabling autonomous trading between things, such as the appliances in your house being set up to re-order supplies from a pre-approved list of suppliers," he added.

John McAfee offers to unlock killer's iPhone for FBI


Anti-virus software creator John McAfee has said he will break the encryption on an iPhone that belonged to San Bernardino killer Syed Farook.
Mr McAfee made the offer to the FBI in an article published by Business Insider.
Apple has refused to comply with a court order asking it to unlock the device, dividing opinion over whether the firm should be compelled to do so.
Mr McAfee said he and his team would take on the task "free of charge".
The offer came as Mr McAfee continues his campaign as a US presidential candidate for the Libertarian Party.

"It will take us three weeks," he claimed in his article.
Security expert Graham Cluley told the BBC he was sceptical of Mr McAfee's claims.
"The iPhone is notoriously difficult to hack compared to other devices," he said.

'Dead men's tales'

For instance, Mr Cluley cast doubts on Mr McAfee's idea that he could use "social engineering" to work out the pass-code of Farook's locked iPhone.
This is a process by which hackers try to find out login credentials by tricking people into giving them away.
"In a nutshell, dead men tell no tales," said Mr Cluley. "Good luck to Mr McAfee trying to socially engineer a corpse into revealing its pass-code."
"The FBI isn't interested anyway, they want to set a precedent that there shouldn't be locks they can't break," he added.

In his article, Mr McAfee stated that he was keen to unlock the device because he didn't want Apple to be forced to implement a "back door" - a method by which security services could access data on encrypted devices.
Apple chief executive Tim Cook had previously said in a statement that the firm did not want to co-operate.
He argued that introducing a back door would make all iPhones vulnerable to hacking by criminals.

'I would eat shoe'

Mr McAfee believes that it would be possible to retrieve data from the phone by other means - though he did not give many details of how it would be done.
"I would eat my shoe on the Neil Cavuto [television] show if we could not break the encryption on the San Bernardino phone," he added.
Some, including the Australian Children's eSafety Commissioner, who spoke to tech website ZDNet, have said that Apple would not necessarily have to introduce a back door, arguing that the firm is only being asked to provide access to a single device.

Tech firms' support

Other tech firms have rallied behind Apple following a few days of debate over how it should respond to the FBI's request.
Google boss Sundar Pichai had already expressed his support for Mr Cook, and yesterday Twitter chief executive Jack Dorsey added his approval via a tweet.
In a statement, Facebook said it condemned terrorism and had solidarity with the victims of terror, but would continue its policy of opposing requests to diminish security.
"We will continue to fight aggressively against requirements for companies to weaken the security of their systems," it said.
"These demands would create a chilling precedent and obstruct companies' efforts to secure their products."

Thursday 18 February 2016

Whoa! Mind-Controlled Arm Lets Man Move Prosthetic Fingers


A new mind-controlled prosthetic arm let a patient wiggle the device's fingers simply by thinking about it, and required very little training on the patient's part, according to a new study.

The research, though still in its nascent stages, could potentially help people who have lost arms due to injury or disease regain some mobility, the researchers said.

"We believe this is the first time a person using a mind-controlled prosthesis has immediately performed individual digit movements without extensive training," study senior author Dr. Nathan Crone, a professor of neurology at the Johns Hopkins University School of Medicine, said in a statement. "This technology goes beyond available prostheses, in which the artificial digits, or fingers, moved as a single unit to make a grabbing motion, like one used to grip a tennis ball." [Body Beautiful: The 5 Strangest Prosthetic Limbs]

However, the man in the experiment was not missing an arm or a hand. He was at the hospital for epilepsy treatment, and was already scheduled to undergo brain mapping so that doctors could determine where the seizures started in his brain, the researchers said. 
Doctors surgically implanted electrodes into the man's brain to track his seizures. But they also mapped and found the specific areas of his brain that move each finger, from the thumb to the pinkie.

That was no easy feat. A neurosurgeon carefully placed an array of 128 electrode sensors — all on a rectangular film the size of a business card — on the region of the man's brain that controls hand and arm movements. Each sensor covered a small, circular spot on the brain that measured 0.04 inches (1 millimeter) in diameter.
After the implantation, researchers asked the man to wiggle different fingers. The team noted which parts of his brain "lit up" when the sensors detected neural electrical activity from each finger movement.

The team also noted which parts of the brain were involved in feeling touch. They gave the man a glove that vibrated at the tip of each finger. Again, the researchers identified the different areas of the brain that "lit up" when the man felt the vibrations on his fingers.
After collecting the motor (movement) and sensory data, the researchers programmed the prosthetic arm, which was developed at the Johns Hopkins University Applied Physics Laboratory. Whenever a certain part of the man's brain expressed electrical activity, the prosthetic would move a corresponding finger.
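The study's actual decoder was more sophisticated, but the mapping step can be illustrated with a deliberately simplified Python sketch. The channel indices and threshold here are hypothetical: each finger is assigned the electrodes that "lit up" for it during mapping, and at run time the finger whose electrodes show the strongest activity drives the prosthesis.

    import numpy as np

    FINGER_ELECTRODES = {                 # hypothetical channel indices
        "thumb": [12, 13], "index": [20, 21], "middle": [28],
        "ring": [35, 36], "pinkie": [36, 37],   # ring/pinkie overlap
    }

    def decode_finger(power, threshold=2.0):
        # power: z-scored activity, one value per electrode (128 channels).
        scores = {f: power[chs].mean() for f, chs in FINGER_ELECTRODES.items()}
        best = max(scores, key=scores.get)
        return best if scores[best] > threshold else None

    activity = np.zeros(128)
    activity[[20, 21]] = 3.5              # strong activity on "index" channels
    print(decode_finger(activity))        # -> index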

This turned the electrode sensors into the ultimate mind-reading machine. Researchers connected the electrodes to the prosthesis, and asked the man to think about moving his fingers one at a time. Within moments of the man moving his real fingers, the corresponding fingers on the prosthetic arm moved, too.

"The electrodes used to measure brain activity in this study gave us better resolution of a large region of cortex than anything we've used before and allowed for more precise spatial mapping in the brain," said Guy Hotson, a graduate student and lead author of the study. "This precision is what allowed us to separate the control of individual fingers." [Bionic Humans: Top 10 Technologies]

Handy accuracy
At first, the mind-controlled arm was accurate just 76 percent of the time. But then, researchers coupled the ring and pinkie fingers together, which increased the accuracy to 88 percent, they said.
"The part of the brain that controls the pinkie and ring fingers overlaps, and most people move the two fingers together," Crone said. "It makes sense that coupling these two fingers improved the accuracy."
Moreover, the device is easy to use, and doesn't require extensive training, the researchers said.

Yet the technology is still years away from clinical use, and it will likely be expensive, the researchers said. But it would undoubtedly help many people. There are more than 100,000 people living in the United States with amputated hands or arms, according to the Amputee Coalition of America, a Virginia-based nonprofit organization that represents people who have experienced limb loss or amputation.

There are already myriad technologies designed to help people with missing limbs. For instance, advances in prosthetic limbs and artificial skin are helping to restore a sense of touch for people, even if they've lost extremities. 

HoloLens 'Teleports' NASA Scientist to Mars in TED Talk Demo


Something amazing happened at the TED2016 conference today: HoloLens developer Alex Kipman "teleported" a NASA scientist onto the stage, and onto the surface of Mars.
Jeff Norris of NASA's Jet Propulsion Laboratory was physically across the street from the auditorium in Vancouver, Canada, but with the HoloLens cameras, a hologram of him (a three-dimensional, talking hologram, which is made entirely of light) was beamed onto the stage where a virtual Mars surface was waiting.


"I'm actually in three places," Norris said. "I'm standing in a room across the street, while I'm standing on the stage with you, while I'm standing on Mars a hundred million miles away." [See Photos of the HoloLens Experience and Teleported Scientist]

Kipman demoed the HoloLens for the audience and, for the first time, revealed this new holographic teleportation aspect of the technology.
"I invite you to experience, for the first time anywhere in the world, here on the TED stage a real-life holographic teleportation…," Kipman said. When Norris, wearing a NASA T-shirt and baseball cap appeared onstage (his hologram, that is), Kipman was ecstatic. "Woo. That worked," he said.

The alien scape on which Norris stood was a holographic replica of the planet created from data collected by NASA's Curiosity rover.

To infinity and beyond
Kipman sees the technology as a game-changer for the world. Today, he says, humans are limited by our two-dimensional interaction with the world, through monitors and other screens.

"Put simply I want to create a new reality," Kipman said. "A reality where technology brings us infinitely closer to each other, a reality where people, not devices, are at the center of everything. I dream of a reality where technology senses what we see, touch and feel, a reality where technology no longer gets in the way but instead embraces who we are."
Enter the HoloLens: "This is the next step in the evolution. This is Microsoft HoloLens, the first fully untethered holographic computer," said Kipman. "I'm talking about freeing ourselves from the 2D confines of traditional computing." [Here's How the Microsoft HoloLens Works]

The technology relies on a fish-eye camera lens, loads of sensors and a holographic processing unit, according to Microsoft.
And to allow viewers to walk around their own environment overlaid with various holograms, the device maps your home or any surroundings in real time. "The HoloLens maps in real-time at about five frames per second with this technology we call spatial mapping. So in your home, as soon as you put it on, holograms will start showing up and you'll start placing them, you'll start learning your home," Kipman said.
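Microsoft has not detailed its algorithm, but the gist of spatial mapping can be sketched: fuse each incoming depth frame, arriving about five times a second, into a voxel occupancy grid that holograms can later be anchored against. A toy Python version, with all dimensions invented:

    import numpy as np

    VOXEL = 0.05                                     # 5 cm voxels
    grid = np.zeros((200, 200, 60), dtype=np.uint8)  # a 10 x 10 x 3 m room

    def integrate_frame(points):
        # points: N x 3 array of (x, y, z) surface hits from the depth
        # camera, in metres; called roughly 5 times per second.
        idx = (points / VOXEL).astype(int)
        ok = np.all((idx >= 0) & (idx < grid.shape), axis=1)  # clip to room
        grid[tuple(idx[ok].T)] = 1                   # mark voxels as surface

    integrate_frame(np.array([[1.0, 2.0, 1.5], [1.05, 2.0, 1.5]]))
    print(grid.sum())                                # -> 2 occupied voxels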

For the demo, where Kipman's headset was wirelessly linked to big screens, the HoloLens relied on stored information. "In a stage environment where we're trying to get something on my head to communicate with something over there with all of the wireless connectivity that usually brings all conferences down we don't take the risk of trying to do this live," Kipman said. "So what we do is we pre-map the stage at five frames per second with the same spatial mapping technology that you'll use with the product at home and then we store it."

Demoing more of the HoloLens experience, Kipman showed the audience what he saw through the headset as he dialed his world from reality toward the imaginary, turning people in the audience, for instance, into elves with wings.

Exploring with HoloLens
The technology is already being put to good use in the scientific and consumer realm.
Medical students at Case Western Reserve University are using HoloLens to learn about medicine and the human body in an augmented-reality world; Volvo has developed a partnership with Microsoft to use the HoloLens both in the design of its cars and as a way to enhance consumers' experiences with its vehicles and brand.

And Kipman's "personal favorite" — NASA is using the technology to let scientists explore planets holographically, a partnership dubbed OnSight.
"Today a group of scientists on our mission are seeing Mars as never before, an alien world made a little more familiar because they are finally exploring it as humans should," Norris said of the ability to use HoloLens to experience the planet as if one were there. "But our dreams don't have to end with making it just like being there. If we dial this real world to the virtual, we can do magical things. We can see in invisible wavelengths or teleport to the top of a mountain. Perhaps some day we'll feel the minerals in a rock just by touching it."

Astronauts aboard the International Space Station have HoloLens headsets so that scientists on Earth can assist them as if both were in the same place.

'See' What You Breathe with New Air-Quality Monitor


People typically think about clean or dirty air only when they're outside, but air quality can be a significant problem even indoors. And now, using a new gadget, people can identify pollutants — some smaller than the width of a hair — in their homes, and this could help ward off some illnesses, the device's creators said.
AirVisual — a global team of scientists, engineers and others — is producing the gadget, called the AirVisual Node. The Node's bright, colorful screen displays readings for pollution, temperature, humidity and stuffiness, both indoors and outdoors. The team hopes to change the approach to air-quality collection, said Yann Boquillod, co-founder of AirVisual.

People generally have some understanding of what they're breathing outdoors, because most governments actively monitor the air, Boquillod said. Indoor air, on the other hand, is a "big unknown," he told Live Science. "You spend 80 to 90 percent of your time indoors, so if you are able to actually control your indoor air quality," then you can protect your own and your family's health, Boquillod said. [In Photos: World's Most Polluted Places]

Using this monitor, "I have the visibility of how much pollution my children are breathing," he said.
Indoor air pollution can come from stove tops, fireplaces and wood products, among other sources, according to the U.S. Environmental Protection Agency (EPA). Burning food, especially, can release contaminant-laden smoke into the air, Boquillod said. The Node can identify these contaminants, which include microscopic particles, or particulate matter, known as PM2.5. The "2.5" refers to the particles' diameter of 2.5 micrometers. "It's a very tiny particle, much smaller than a hair," Boquillod said.

The Node can measure particles up to 10 micrometers (PM10) in diameter, which includes dust. Particles smaller than PM10 can be inhaled into the lungs and get past the body's normal defense systems, eventually entering the bloodstream, Boquillod said. This can give rise to health issues like eye, nose and throat irritation, he added. The smallest particles can wedge deeply into the lungs, causing respiratory infections, bronchitis and even lung cancer, according to the EPA.
The Node is able to measure the particles using laser technology, the company said. Inside the Node, there is a fan that sucks in ambient air, a laser that shoots a sharp, precise beam, and a photo-sensor under the laser. "Whenever particulate matter passes in front of the photo-sensor, it breaks the laser beam," causing interference that is picked up by the photo-sensor, Boquillod said. "The photo-sensor counts how many times the laser beam is broken."

The device relies on an algorithm that identifies the size and number of particles in each intake and extrapolates data from successive intakes to determine overall air pollution, Boquillod said. In addition to examining particles, the device also measures carbon dioxide levels, which can indicate how well a room is ventilated: the higher the carbon dioxide concentration, the stuffier a room tends to be. [The 10 Most Pristine Places on Earth]
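AirVisual has not published the exact scaling its algorithm applies, but most monitors turn a raw PM2.5 concentration into a single air-quality number using piecewise-linear breakpoints. Here is a sketch in Python using the US EPA's published PM2.5 breakpoints:

    BREAKPOINTS = [  # (conc_low, conc_high, index_low, index_high), ug/m3
        (0.0, 12.0, 0, 50), (12.1, 35.4, 51, 100), (35.5, 55.4, 101, 150),
        (55.5, 150.4, 151, 200), (150.5, 250.4, 201, 300), (250.5, 500.4, 301, 500),
    ]

    def pm25_to_aqi(conc):
        # Linearly interpolate within whichever band the reading falls into.
        for c_lo, c_hi, i_lo, i_hi in BREAKPOINTS:
            if conc <= c_hi:
                return round((i_hi - i_lo) / (c_hi - c_lo) * (conc - c_lo) + i_lo)
        return 500   # the index is capped

    print(pm25_to_aqi(35.0))   # -> 99, the upper end of "Moderate"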

When carbon dioxide levels get too high, "you feel like you are not at the most of your cognitive power," Boquillod said. The Node can measure carbon dioxide concentrations of 400 parts per million (ppm) to 10,000 ppm. When carbon dioxide reaches 1,000 ppm, the environment is confined and needs some fresh air, and when the level rises to 1,500 ppm, people will start to feel poorly, he said. When the level soars to 2,000 ppm, it's time to ventilate and exit, Boquillod said.

The best place to gather air-quality data is wherever you spend the most time, Boquillod said, which could be the bedroom or living room. The Node can also be used to measure air pollution outdoors, though the device needs to be in the shade, away from wind and shielded from rain. The Node can connect to the Internet to send outdoor air-quality measurements to AirVisual, which is planning to consolidate and share the data worldwide.

Revenue generated by the Nodes, which are selling on the crowdfunding site Indiegogo, will help fund AirVisual's social project to map air pollution around the world. Although governments already collect air-quality data in a number of countries, many other nations are poorly monitored, compromising the health of citizens in those places, Boquillod said.

AirVisual currently offers an app and website that share and forecast global air quality. The group has the same goals as a nongovernmental organization, but wants to be self-funded to increase its efficiency in collecting and distributing data, Boquillod said.
The AirVisual Node sells for $149 and has collected $25,500, or 255 percent of its initial $10,000 goal, on Indiegogo. There are 18 days left in the crowdfunding campaign, and the AirVisual team plans to deliver the gadget in April, Boquillod said.

Tuesday 16 February 2016

New satellites could bring 1 terabit of internet bandwidth to remote regions


Delivering internet access to remote areas is challenging, as the traditional method of running lines from connected regions is extremely expensive. There are a few approaches to doing this wirelessly — for example, Google’s Project Loon balloons. However, a company called ViaSat is teaming up with Boeing to provide super-fast internet access to remote areas from space. The just-announced ViaSat-3 satellite will have a terabit of available bandwidth. Yes, a terabit per second.

ViaSat has made this announcement a little early, though. It has yet to launch its second-generation satellite, the ViaSat-2. That platform is supposed to head into orbit on a SpaceX Falcon 9 rocket in a few months. While the ViaSat-2 is no slouch, it will only have one-third of the available bandwidth of the planned ViaSat-3. Once its new generation of satellites is in orbit, ViaSat claims its platform could double the network capacity of the roughly 400 commercial communications satellites already circling the globe.

The 1Tbps satellites will provide fast connections, but those on the ground obviously won’t be able to suck down the full 1Tb of bandwidth. ViaSat plans to offer residential connections of about 100Mbps, which is still faster than many people in US cities can get. When you consider many of the regions ViaSat expects to serve have no broadband service at all, I don’t think anyone will complain about “only” getting 100Mbps. Users will still have to contend with the limitations of satellite internet, including line-of-sight requirements and higher latency than terrestrial wired connections. Any real-time applications like video chat will probably be unworkable despite the incredible speeds.
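That latency point is worth quantifying. ViaSat's satellites sit in geostationary orbit, roughly 35,786 km up, so physics imposes a delay floor that no amount of bandwidth can fix. A quick back-of-the-envelope calculation:

    ALTITUDE_KM = 35_786     # geostationary orbit
    C_KM_S = 299_792         # speed of light

    # A request and its response each traverse ground -> satellite -> ground,
    # so the signal covers roughly four times the orbital altitude.
    round_trip_ms = 4 * ALTITUDE_KM / C_KM_S * 1000
    print(f"{round_trip_ms:.0f} ms")   # ~477 ms before any network overhead

Terrestrial wired connections, by comparison, typically round-trip in tens of milliseconds.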


Residential service is only one part of what ViaSat wants to do with its space-based connections. A more robust version of the service, with speeds of up to 1Gbps, will be made available to corporate installations in remote areas (like oil and gas platforms). Commercial jets might also be able to use ViaSat’s connections as a faster version of the internet service they already offer.
The company says that work is already underway on two ViaSat-3 satellites, and Boeing expects them to be ready for launch by the end of 2019. That would put ViaSat a few years ahead of Elon Musk’s tentative plan to get thousands of micro-satellites into orbit in order to deliver high-speed internet to the globe. Whoever makes it work is immaterial to people who lack sufficient bandwidth, but help is on the way.

Google finally kills Picasa desktop client, Web service


Google announced today that it’s ceasing development on Picasa, in favor of the new Google Photos. The writing has been on the wall for Picasa for a while — when Google launched its new photo service last year, it was clear that it wouldn’t make much sense for the company to keep two applications running side by side.
According to Anil Sabharwal, head of Google Photos, the phase-out is designed to give users time to migrate their photos — in fact, if you used Picasa Web Albums, you’ll apparently now find your content mirrored on Google Photos already. Google is apparently sensitive to the fact that people might like some of the features formerly provided by Picasa, or might not want to lose captions or metadata. Sabharwal writes:
However, for those of you who don’t want to use Google Photos or who still want to be able to view specific content, such as tags, captions or comments, we will be creating a new place for you to access your Picasa Web Albums data. That way, you will still be able to view, download, or delete your Picasa Web Albums, you just won’t be able to create, organize or edit albums (you would now do this in Google Photos).
If you use the Picasa Web Albums feature, you’ll have until May 1, 2016 to use it as normal. The changes above and the migration to Google Photos will begin after that date.
Users who prefer the desktop application have fewer options. Google has stated that it won’t support the desktop application after March 15, 2016. It also states: “For those who have already downloaded this—or choose to do so before this date—it will continue to work as it does today, but we will not be developing it further, and there will be no future updates.” This seems to imply that Google will pull the download altogether, rather than leave it available in its current state.
I haven’t installed the desktop version of Picasa in years, so I downloaded it and gave it a whirl, more out of curiosity than anything else.
I was somewhat surprised to find it looks more-or-less the same as it did 10 years ago. I’m not sure if Google ever made any substantive changes to the software, but the UI appears nearly frozen in time.
If you liked Picasa because of its desktop features, Google Photos is no replacement at all. It’s a cloud management tool for photos, not a desktop application. If you aren’t interested in uploading your images to Google’s servers, you’ll have to look elsewhere for photo management options.

Sunday 14 February 2016

3D solar towers offer up to 20 times more power output than traditional flat solar panels


While we’ve looked at the development of solar cell technologies that employ nanoscale 3D structures to trap light and increase the amount of solar energy absorbed, MIT researchers have now used 3D on the macro scale to achieve power output that is up to 20 times greater than that of traditional fixed flat solar panels with the same base area. The approach developed by the researchers involves extending the solar cells upwards in a three-dimensional tower or cube configuration to enable them to better capture the sun’s rays when it is lower on the horizon.


Solar panels placed flat on a rooftop are most effective at harnessing solar energy when the sun is close to directly overhead, but quickly lose their efficiency as the angle of the sun’s rays hitting the panel increases – during the mornings, evenings, in the cooler months and in locations far from the equator. It is exactly in these situations that the researchers’ vertical solar modules provided the biggest boosts in power output.
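The geometry behind that is simple cosine math. In the back-of-the-envelope Python sketch below (not the MIT team’s model), a panel’s output scales with the cosine of the angle between the sun and the panel’s normal, so a flat panel tracks the sine of the solar elevation while a sun-facing vertical surface tracks its cosine:

    import math

    def relative_output(sun_elevation_deg):
        sun = math.radians(sun_elevation_deg)
        flat = max(0.0, math.sin(sun))       # horizontal panel, normal points up
        vertical = max(0.0, math.cos(sun))   # sun-facing vertical face
        return flat, vertical

    for elev in (10, 30, 60, 90):            # morning through noon
        flat, vert = relative_output(elev)
        print(f"sun at {elev:2d} deg: flat {flat:.2f}, vertical {vert:.2f}")
    # With the sun 10 degrees up, the vertical face collects roughly
    # 5.7 times what the flat panel does.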
After exploring a variety of possible 3D configurations using a computer algorithm and testing them under a range of latitudes, seasons and weather with specially developed analytic software, the team built three different individual 3D modules and tested them on the MIT lab building roof for several weeks. The results showed a boost in power output ranging from double to more than 20 times that of fixed flat solar panels with the same base area.
By going vertical and collecting more sunlight when the sun is closer to the horizon, the team’s 3D modules were able to generate a more uniform output over time. This uniformity extended over the course of each day, the seasons of a year, and even when accounting for blockage from clouds and shadows.
The researchers say this increase in uniformity could overcome one of the biggest hurdles facing solar energy – predictability of electricity supply that currently makes it difficult to integrate solar power sources into the grid.
They add that this uniformity, as well as the much higher energy output for a given area, would help offset the increased cost of the 3D modules, which currently cost more per unit of energy generated than conventional flat solar panels.
While the team’s computer modeling showed complex shapes – such as a cube with each face dimpled inward – would offer a 10 to 15 percent improvement in power output when compared to a simpler cube, these would be difficult to manufacture. In their rooftop tests, the team studied both simpler cube modules as well as more complex accordion-like shapes that could be shipped flat for unfolding on site.
This accordion-like tower was the tallest structure the team tested and such a design could be installed in a parking lot to provide a charging station for electric vehicles, according to Jeffrey Grossman, the senior author of the study and the Carl Richard Soderberg Career Development Associate Professor of Power Engineering at MIT.
Grossman and his colleagues believe the fall in the cost of solar cells in recent years – to the point where they have become less expensive than their supporting structures and the outlay for the land upon which they are placed – makes it an ideal time to examine the benefits of different solar cell configurations.
“Even 10 years ago, this idea wouldn’t have been economically justified because the modules cost so much,” Grossman says. But now, he adds, “the cost for silicon cells is a fraction of the total cost, a trend that will continue downward in the near future.”
Buoyed by the success of the tests on the individual 3D modules, the team now plans to study a collection of solar towers that will enable them to examine the effects that one tower’s shadow will have on another as the sun moves across the sky over the course of a day.
While the team believes its 3D solar cells could offer big advantages in flat-rooftop installations or urban environments where space is limited, they say they could also be used in larger-scale applications, such as solar farms, once a configuration that minimizes the shading effects between towers has been developed.
The results of the MIT team’s computer modeling and rooftop testing of real modules appear in the journal Energy and Environmental Science.

Windstalk is a wind farm without the turbines


Wind turbines are an increasingly popular way to generate clean energy, with large-scale wind farms springing up all over the world. However, many residents near proposed wind farm sites have raised concerns over the aesthetics and the low frequency vibrations they claim are generated by wind turbines. An interesting Windstalk concept devised by New York design firm Atelier DNA could overcome both these problems while still allowing a comparable amount of electricity to be generated by the wind.

Devised as a potential clean energy generation project/tourist attraction for Abu Dhabi’s Masdar City, the Windstalk concept consists of 1,203 carbon fiber reinforced resin poles, which stand 55 meters (180 feet) high and are anchored to the ground in concrete bases that range between 10 and 20 meters (33-66 ft) in diameter. The poles, which measure 30cm (12 in.) in diameter at the base, tapering up to a diameter of 5cm (2 in.) at the top, are packed with a stack of piezoelectric ceramic discs. Between the discs are electrodes that are connected by cables that run the length of each pole – one cable connects the even electrodes, while another connects the odd ones.

So, instead of relying on the wind to turn a turbine to generate electricity, when a pole sways in the wind, the stack of piezoelectric discs is compressed, generating a current through the electrodes. In a nice visual way to indicate how much – if any – power the poles are generating, the top 50cm (20 in.) of each pole is fitted with an LED lamp that glows and dims relative to the amount of power. So when the wind stops, the LEDs go dark.


As a way to maximize the amount of electricity the Windstalk farm would generate, the concept also places a torque generator within the concrete base of each pole. As the poles sway, fluid is forced through the cylinders of an array of current-generating shock absorbers to convert the kinetic energy of the swaying poles into electrical energy.
Because the electricity generation capabilities of a Windstalk field site would depend on the wind, the designers have devised a way to store the energy. Below the field of poles would be two large chambers located on top of each other and shaped like the bases of the poles, but inverted. When the wind is blowing, part of the electricity generated is used to power a set of pumps that moves water from the lower chamber to the upper one. Then, when the wind dies down, the water flows from the upper chamber down to the lower chamber, turning the pumps into generators.
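To get a rough feel for how much energy such chambers could hold, the standard pumped-storage formula applies: stored energy equals water mass times gravity times the height difference. The chamber volume and head below are invented for illustration; the designers haven’t published the actual dimensions.

    RHO = 1000          # kg/m^3, density of water
    G = 9.81            # m/s^2, gravitational acceleration
    volume_m3 = 2000    # hypothetical upper-chamber volume
    head_m = 15         # hypothetical height difference between chambers

    energy_j = RHO * volume_m3 * G * head_m      # E = m * g * h
    print(f"{energy_j / 3.6e6:.0f} kWh stored")  # ~82 kWh, before pump losses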
The Windstalk project is still only a concept, so the designers haven’t determined the optimal shape for the stalks, saying computer simulations could be used to devise the profile that best maximizes each pole’s movement. Even so, the design team estimates that the overall electricity output of the concept would be comparable to that of a conventional wind turbine array because, even though a single wind turbine that is limited to the same height as the poles may produce more energy than a single Windstalk, the Windstalks can be packed in much denser arrays.
The Atelier DNA Windstalk concept took out second prize in this year’s Land Art Generator Initiative (LAGI) competition, which asked entrants to “design a series of land/environmental art installations that uniquely combine aesthetic intrigue and artistic concept with clean energy generation.”