lingkanra

lingkanra is from Cyprus, has been a member for 11 years and last logged in 11 years ago.


Trade Policy:

The trip by Christopher Nixon Cox, 100 years after President Richard M. Nixon’s birth, was organized to lionize both the president’s and the Communist Party’s accomplishments.

The play by Jesse Eisenberg, starring Vanessa Redgrave, is one of the hottest Off Broadway tickets.

A situation in Wise County illustrates how many family physicians are caught in a growing divide between rural and urban health care markets.

Samuel Deduno and four relievers held Puerto Rico to three hits in the final as the Dominican Republic won all eight of its games in the World Baseball Classic.

David Lynch doesn't want you to call him a musician. He's a renowned filmmaker and self-taught improv[...]

“Assaulted: Civil Rights Under Fire” features Ted Nugent, gun store owners and clips from Fox News.

Q: DEAR TIM: Are all heating systems the same? Mine is running constantly and can maintain a temperature of only 67 degrees in my home. It's below zero outdoors. But still, I would expect the house to be comfortable even if the temperature outside is bitterly cold.

The Chicago Fire hold off David Beckham and the L.A. Galaxy to set up a first-round series with injury-riddled D.C. United, a pairing that will open Thursday in Illinois.

Samsung, the all-things-CDMA handset maker du jour these days, will be introducing new handsets to CDMA carriers Alltel and Verizon soon. The two new models -- the R510 and U540 -- look identical to existing handsets offered by other carriers, like the T-Mobile Trace and the Sprint A500. Both of these are packing CDMA innards in ultra-slim form factors. [via Engadget Mobile]

Lawmakers examined agency cost-saving plans and members of both parties accused each other of having things backward in the sequester blame game during a pair of hearings on Tuesday for the House oversight committee.

President Obama used a congratulatory call with China’s new president to discuss the loss of American intellectual property from cyberattacks.

Josh Fox’s “Gasland” movies grew out of a company’s effort to pay him for exploration rights to his land, which lies above the Marcellus Shale formation.

Greek yogurt, now produced under a number of name brands, is catching on in restaurants as an ingredient that’s indulgent and easy to use.
MIT professor Stephen J. Lippard, who is widely acknowledged as one of the founders of the field of bioinorganic chemistry, has been named recipient of the 2014 Priestley Medal, the highest honor conferred by the American Chemical Society (ACS). According to ACS, Lippard is being recognized “for mentoring legions of scientists in the course of furthering the basic science of inorganic chemistry and paving the way for improvements in human health.”

“It’s an honor to join the very distinguished list of Priestley Medal recipients,” Lippard said in an interview with Chemical & Engineering News. “It also makes me very proud of my postdocs, graduate students, and collaborators, without whose work none of this would have happened. ‘Professor’ stands for ‘professional student.’ The best part about being a professor is that you’re constantly learning from the students in your classes as well as from your lab members.”

The annual Priestley Medal is intended to recognize distinguished service and commemorate lifetime achievement in chemistry. Lippard, the Arthur Amos Noyes Professor of Chemistry, has spent his career studying the role of inorganic molecules, especially metal ions and their complexes, in critical processes of biological systems. He has made pioneering contributions in understanding the mechanism of the cancer drug cisplatin and in designing new variants to combat drug resistance and side effects. His research achievements include the preparation of synthetic models for metalloproteins; structural and mechanistic studies of iron-containing bacterial monooxygenases, including soluble methane monooxygenase; and the invention of probes to elucidate the roles of mobile zinc and nitric oxide in biological signaling and disease.
Many of the students Lippard has mentored — including more than 110 PhD students, 150 postdocs and hundreds of undergraduates — have gone on to become prominent scientists and teachers. Robert Langer, Institute Professor at MIT and the 2012 Priestley Medal recipient, says, “I'm delighted to see Steve receive the Priestley Medal. He richly deserves it for all the excellent research he has done and for being such a wonderful mentor and collaborator.”

MIT colleague JoAnne Stubbe, the Novartis Professor of Chemistry and a professor of biology, says, “He [Lippard] is an inspiration to us all as a scientist and mentor. I stand in awe at his continual ability to identify and move into exciting new fields, and bring a new perspective and change thinking in the field. There is no university/college untouched by a Lippard trainee. We are all very proud.”

Fellow inorganic chemist and MIT colleague Christopher Cummins, a professor of chemistry, says, “Steve has attracted so many talented individuals to study with him because he selects important research problems and works to solve them with creativity, boundless energy, optimism, and contagious enthusiasm.” “His work,” Cummins adds, “has pushed back the frontiers of the basic science known as inorganic chemistry, even as it has paved the way for improvements in human health and the conquering of disease. Steve is an educator and a role model par excellence!”

Vanessa Redgrave plays a Polish survivor of the Holocaust in “The Revisionist,” written by her co-star, Jesse Eisenberg.

It’s a question that arises with virtually every major new finding in science or medicine: What makes a result reliable enough to be taken seriously? The answer has to do with statistical significance — but also with judgments about what standards make sense in a given situation.

The unit of measurement usually given when talking about statistical significance is the standard deviation, expressed with the lowercase Greek letter sigma (σ). The term refers to the amount of variability in a given set of data: whether the data points are all clustered together, or very spread out. In many situations, the results of an experiment follow what is called a “normal distribution.”
For example, if you flip a coin 100 times and count how many times it comes up heads, the average result will be 50.
But if you do this test 100 times, most of the results will be close to 50, but not exactly: you’ll get almost as many 49s and 51s as 50s. You’ll get quite a few 45s or 55s, but almost no 20s or 80s. If you plot your 100 tests on a graph, you’ll get a well-known shape called a bell curve that’s highest in the middle and tapers off on either side. That is a normal distribution.

The deviation is how far a given data point is from the average. In the coin example, a result of 47 has a deviation of three from the average (or “mean”) value of 50. The standard deviation is just the square root of the average of all the squared deviations. One standard deviation, or one sigma, plotted above or below the average value on that normal distribution curve, would define a region that includes 68 percent of all the data points. Two sigmas above or below would include about 95 percent of the data, and three sigmas would include 99.7 percent.

So when is a particular data point — or research result — considered significant? The standard deviation can provide a yardstick: if a data point is a few standard deviations away from the model being tested, this is strong evidence that the data point is not consistent with that model.
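To make those percentages concrete, here is a minimal Python simulation of the coin-flip example. It is a sketch added for illustration, not part of the article; the experiment count is arbitrary, and the fractions come out slightly above 68/95/99.7 percent because heads counts are whole numbers.

```python
# Repeat the 100-flip experiment many times, then check how many
# results land within one, two, and three standard deviations of
# the mean. Standard library only.
import math
import random

NUM_EXPERIMENTS = 100_000
FLIPS = 100

# Number of heads in each 100-flip experiment.
results = [sum(random.random() < 0.5 for _ in range(FLIPS))
           for _ in range(NUM_EXPERIMENTS)]

# Standard deviation: the square root of the average squared deviation.
mean = sum(results) / len(results)
sigma = math.sqrt(sum((x - mean) ** 2 for x in results) / len(results))
print(f"mean = {mean:.2f}, sigma = {sigma:.2f}")  # roughly 50 and 5

for k in (1, 2, 3):
    within = sum(abs(x - mean) <= k * sigma for x in results)
    # Roughly 68%, 95%, 99.7%, nudged upward by the discrete lattice.
    print(f"within {k} sigma: {within / len(results):.1%}")
```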
However, how to use this yardstick depends on the situation. John Tsitsiklis, the Clarence J. Lebel Professor of Electrical Engineering at MIT, who teaches the course Fundamentals of Probability, says, “Statistics is an art, with a lot of room for creativity and mistakes.” Part of the art comes down to deciding what measures make sense for a given setting.

For example, if you’re taking a poll on how people plan to vote in an election, the accepted convention is that two standard deviations above or below the average, which gives a 95 percent confidence level, is reasonable. That two-sigma interval is what pollsters mean when they state the “margin of sampling error,” such as 3 percent, in their findings. That means if you asked an entire population a survey question and got a certain answer, and then asked the same question to a random group of 1,000 people, there is a 95 percent chance that the second group’s results would fall within two sigma of the first result. If a poll found that 55 percent of the entire population favors candidate A, then 95 percent of the time, a second poll’s result would be somewhere between 52 and 58 percent.

Of course, that also means that 5 percent of the time, the result would be outside the two-sigma range. That much uncertainty is fine for an opinion poll, but maybe not for the result of a crucial experiment challenging scientists’ understanding of an important phenomenon — such as last fall’s announcement of a possible detection of neutrinos moving faster than the speed of light in an experiment at the European Center for Nuclear Research, known as CERN.
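Where does a roughly 3 percent margin for 1,000 respondents come from? Here is a quick back-of-the-envelope calculation, assuming a simple yes/no question; this is our illustration, not from the article.

```python
# Two-sigma margin of sampling error for a poll. For a question answered
# "yes" by a fraction p of the population, the standard deviation of a
# sample proportion over n respondents is sqrt(p * (1 - p) / n).
import math

n = 1000   # respondents
p = 0.55   # assumed true share favoring candidate A

sigma = math.sqrt(p * (1 - p) / n)   # one standard deviation, ~1.6%
margin = 2 * sigma                   # two-sigma interval, ~95% confidence

print(f"one sigma = {sigma:.1%}")
print(f"two-sigma margin = {margin:.1%}")  # ~3.1%: the familiar "plus or minus 3"
```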
Six sigmas can still be wrong

Technically, the results of that experiment had a very high level of confidence: six sigma. In most cases, a five-sigma result is considered the gold standard for significance, corresponding to about a one-in-a-million chance that the findings are just a result of random variations; six sigma translates to one chance in a half-billion that the result is a random fluke. (A popular business-management strategy called “Six Sigma” derives from this term, and is based on instituting rigorous quality-control procedures to reduce waste.)
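For reference, the sigma levels quoted in this article can be converted to two-sided tail probabilities of the normal distribution. This short check is our addition and uses SciPy.

```python
# Convert sigma levels to the probability of a fluke at least that
# extreme (two-sided), using the normal distribution's survival function.
from scipy.stats import norm

for sigmas in (2.0, 2.3, 5.0, 6.0):
    p = 2 * norm.sf(sigmas)
    # 2 sigma -> ~1 in 22; 2.3 sigma -> ~1 in 50;
    # 5 sigma -> ~1 in 1.7 million; 6 sigma -> ~1 in 500 million.
    print(f"{sigmas} sigma -> p = {p:.2e} (about 1 in {1 / p:,.0f})")
```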
But in that CERN experiment, which had the potential to overturn a century’s worth of accepted physics that has been confirmed in thousands of different kinds of tests, that’s still not nearly good enough.
For one thing, it assumes that the researchers have done the analysis correctly and haven’t overlooked some systematic source of error.
And because the result was so unexpected and so revolutionary, that’s exactly what most physicists think happened — some undetected source of error.

Interestingly, a different set of results from the same CERN particle accelerator were interpreted quite differently. A possible detection of something called a Higgs boson — a theorized subatomic particle that would help to explain why particles weigh something rather than nothing — was also announced last year. That result had only a 2.3-sigma confidence level, corresponding to about one chance in 50 that the result was a random error (a 98 percent confidence level). Yet because it fits what is expected based on current physics, most physicists think the result is likely to be correct, despite its much lower statistical confidence level.

Significant but spurious

But it gets more complicated in other areas. “Where this business gets really tricky is in social science and medical science,” Tsitsiklis says. For example, a widely cited 2005 paper in the journal PLoS Medicine — titled “Why most published research findings are false” — gave a detailed analysis of a variety of factors that could lead to unjustified conclusions. However, these are not accounted for in the typical statistical measures used, including “statistical significance.”

The paper points out that by looking at large datasets in enough different ways, it is easy to find examples that pass the usual criteria for statistical significance, even though they are really just random variations. Remember the example about a poll, where one time out of 20 a result will just randomly fall outside those “significance” boundaries? Well, even with a five-sigma significance level, if a computer scours through millions of possibilities, then some totally random patterns will be discovered that meet those criteria.
When that happens, “you don’t publish the ones that don’t pass” the significance test, Tsitsiklis says, but some random correlations will give the appearance of being real findings — “so you end up just publishing the flukes.”

One example of that: Many published papers in the last decade have claimed significant correlations between certain kinds of behaviors or thought processes and brain images captured by magnetic resonance imaging, or MRI. But sometimes these tests can find apparent correlations that are just the results of natural fluctuations, or “noise,” in the system. One researcher in 2009 duplicated one such experiment, on the recognition of facial expressions, only instead of human subjects he scanned a dead fish — and found “significant” results. “If you look in enough places, you can get a ‘dead fish’ result,” Tsitsiklis says.
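A toy demonstration of that “look in enough places” effect: test thousands of purely random data series against a two-sigma cutoff, and a steady few percent of them will look “significant.” This sketch is our illustration, not the 2009 fish study; the voxel and sample counts are arbitrary.

```python
# Multiple-comparisons demo: scan many pure-noise "voxels" and count
# how many pass a two-sigma significance cutoff by chance alone.
import random

random.seed(0)
NUM_VOXELS = 10_000   # independent locations tested
N = 30                # measurements per voxel
THRESHOLD = 2.0       # two-sigma cutoff

false_positives = 0
for _ in range(NUM_VOXELS):
    # Each voxel's data is nothing but noise.
    data = [random.gauss(0, 1) for _ in range(N)]
    mean = sum(data) / N
    var = sum((x - mean) ** 2 for x in data) / (N - 1)
    se = (var / N) ** 0.5
    if abs(mean) / se > THRESHOLD:   # looks "significantly" nonzero
        false_positives += 1

# Roughly 5 percent of noise-only voxels pass the cutoff.
print(f"{false_positives} of {NUM_VOXELS} noise-only voxels look significant")
```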
Conversely, in many cases a result with low statistical significance can nevertheless “tell you something is worth investigating,” he says. So bear in mind: just because something meets an accepted definition of “significance,” that doesn’t necessarily make it significant. It all depends on the context.

Pablo Sandoval hit his first home run since May 21, and All-Star Madison Bumgarner settled down to allow just four hits in seven innings as the San Francisco Giants beat the San Diego Padres 4-2 Thursday night in a matchup of the worst teams in the weak NL West.

HOUSTON - Federal investigators on Thursday grilled BP Senior Vice President Kent Wells on the company's history of safety problems in the Gulf of Mexico and demanded more clarity on who at the company is ultimately responsible, and accountable, for drilling operations.

As if U.S. airlines don't have enough to worry about, with rising fuel prices, mergers and bankruptcies, a safety-inspection crackdown and countless disgruntled customers. Now along comes a federal requirement to upgrade the drinking water on planes.

In June 2011, on the fifth anniversary of its video series, TED released a list of the 20 most-watched TEDTalks to date, as seen on all the platforms they tracked — TED.com, YouTube, iTunes, embed and download, Hulu and more.
Included in those videos were two from MIT's School of Architecture + Planning — featuring Pattie Maes, head of the Media Lab’s Fluid Interfaces Group, and Pranav Mistry, inventor of SixthSense — and this year those two videos are still among the 20 most popular. It made us curious to know how many TEDTalks have been given in total over the years by researchers and innovators linked to SA+P, so we did a search and came up with an unofficial count of 22. Perhaps we missed a few, but you can find descriptions of those we found, along with their number of viewings as of this writing. All of them are worth watching.

In computer science, the buzzword of the day is “big data.”
The proliferation of cheap, Internet-connected sensors — such as the GPS receivers, accelerometers and cameras in smartphones — has meant an explosion of information whose potential uses have barely begun to be explored.
In large part, that’s because processing all that data can be prohibitively time-consuming.

Most computer scientists try to make better sense of big data by developing ever-more-efficient algorithms. But in a paper presented this month at the Association for Computing Machinery’s International Conference on Advances in Geographic Information Systems, MIT researchers take the opposite approach, describing a novel way to represent data so that it takes up much less space in memory but can still be processed in conventional ways. While promising significant computational speedups, the approach could be more generally applicable than other big-data techniques, since it can work with existing algorithms.

In the new paper, the researchers apply their technique to two-dimensional location data generated by GPS receivers, a very natural application that also demonstrates clearly how the technique works. As Daniela Rus, a professor of computer science and engineering and director of MIT’s Computer Science and Artificial Intelligence Laboratory, explains, GPS receivers take position readings every 10 seconds, which adds up to a gigabyte of data each day. A computer system trying to process GPS data from tens of thousands of cars in order to infer traffic patterns could quickly be overwhelmed.

But in analyzing the route traversed by a car, it’s generally not necessary to consider the precise coordinates of every point along the route.
The essential information is the points at which the car turns; the path between such points can be approximated by a straight line. That’s what the new algorithm does.
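The paper describes its own segmentation method, but the basic idea of keeping only the turn points can be illustrated with a classic line-simplification routine. The Python sketch below uses Ramer-Douglas-Peucker simplification as a stand-in, not the authors' algorithm; the tolerance value and the sample route are made up.

```python
# Keep "turn" points; drop points that lie near the straight line
# between them (Ramer-Douglas-Peucker, for illustration only).
import math

def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * px - dx * py + bx * ay - by * ax) / math.hypot(dx, dy)

def simplify(points, tolerance):
    """Recursively keep only points farther than `tolerance` from the
    segment joining the endpoints."""
    if len(points) < 3:
        return points
    # Find the interior point farthest from the endpoint-to-endpoint line.
    dists = [point_line_distance(p, points[0], points[-1])
             for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] <= tolerance:
        return [points[0], points[-1]]   # the whole run is nearly straight
    # Otherwise split at the farthest point and simplify each half.
    left = simplify(points[:i + 1], tolerance)
    right = simplify(points[i:], tolerance)
    return left[:-1] + right

# A car driving east, turning, then driving north, with slight GPS jitter.
route = [(0, 0), (1, 0.01), (2, -0.02), (3, 0), (3.01, 1), (3, 2)]
print(simplify(route, tolerance=0.1))   # keeps roughly the three turn points
```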
A key aspect of the algorithm, explains Dan Feldman, a postdoc in Rus’ group and lead author on the new paper, is that it can compress data on the fly.
For instance, it could compress the first megabyte of data it receives from a car, then wait until another megabyte builds up and compress again, then wait for another megabyte, and so on — and yet the final representation of the data would preserve almost as much information as if the algorithm had waited for all the data to arrive before compressing.
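The article doesn't spell out the streaming mechanics, but a common pattern for this kind of on-the-fly compression is to summarize each block as it arrives and repeatedly merge summaries of equal rank. The sketch below is a generic merge-and-recompress skeleton under that assumption; compress() is a crude placeholder, not the paper's actual summary.

```python
# Generic streaming-compression skeleton: compress each incoming block,
# then merge and recompress summaries of equal "rank" (like binary carry
# propagation), so only a logarithmic number of summaries stay in memory.
def compress(points):
    # Placeholder summary: keep every 10th point plus the last one. In the
    # real system this would be the segments-plus-samples representation.
    return points[::10] + points[-1:]

def stream_compress(point_stream, block_size=1000):
    summaries = []   # stack of (rank, summary) pairs, oldest first
    block = []
    for p in point_stream:
        block.append(p)
        if len(block) == block_size:
            rank, summary = 0, compress(block)
            block = []
            # Merge equal-rank summaries until ranks are all distinct.
            while summaries and summaries[-1][0] == rank:
                _, prev = summaries.pop()
                summary = compress(prev + summary)
                rank += 1
            summaries.append((rank, summary))
    if block:                               # leftover partial block
        summaries.append((0, compress(block)))
    merged = []
    for _, s in summaries:                  # oldest data first
        merged = compress(merged + s) if merged else s
    return merged
```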

Drawing the line

In some sense, Feldman says, the problem of approximating pathways between points is similar to the problem solved by regression analysis, a procedure common in statistics that finds the one line that best fits a scatter of data points. One major difference, however, is that the researchers’ algorithm has to find a series of line segments that best fit the data points.

As Feldman explains, choosing the number of line segments involves a trade-off between accuracy and complexity.
“If you have N points, k” — the number of line segments — “is a number between 1 and N, and when you increase k, the error will be smaller,” Feldman says.
“If you just connect every two points, the error will be zero, but then it won’t be a good approximation. If you just take k equal to 1, like linear regression, it will be too rough an approximation.” So the first task of the algorithm is to find the optimal trade-off between number of line segments and error.
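As a toy illustration of that first task, one could scan k upward until the fitting error falls under a budget; fit_error here is a hypothetical helper, not something from the paper.

```python
# Pick the smallest k whose best-fit error is under budget. fit_error(points, k)
# is assumed to return the error of the best piecewise fit with k segments;
# by the trade-off above, it shrinks as k grows.
def choose_k(points, fit_error, error_budget):
    for k in range(1, len(points)):
        if fit_error(points, k) <= error_budget:
            return k             # smallest k that is accurate enough
    return len(points) - 1       # connecting every pair: zero error
```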
The next step is to calculate the optimal set of k line segments — the ones that best fit the data. The step after that, however, is the crucial one: In addition to storing a mathematical representation of the line segment that best fits each scatter of points, the algorithm also stores the precise coordinates of a random sampling of the points. Points that fall farther from the line have a higher chance of being sampled, but the sampled points are also given a weight that’s inversely proportional to their chance of being sampled. That is, points close to the line have a lower chance of being sampled, but if one of them is sampled, it’s given more weight, since it stands in for a larger number of unsampled points.
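That sampling scheme is a form of importance sampling: pick points with probability proportional to how badly the line fits them, then weight each pick by the inverse of its selection probability so that weighted statistics stay unbiased. A minimal sketch follows; the distance floor and the independent keep-or-drop rule are our assumptions, not details from the paper.

```python
# Importance-sampling sketch: sample points with probability proportional
# to their residual from the fitted segment, weighting each sampled point
# by 1 / Pr[sampled] so it stands in for the unsampled points near it.
import random

def weighted_sample(points, distances, num_samples):
    """points: list of (x, y); distances: each point's residual from its
    fitted segment. Returns a list of (point, weight) pairs."""
    eps = 1e-9                                # floor so no probability is zero
    total = sum(d + eps for d in distances)
    probs = [(d + eps) / total for d in distances]
    sampled = []
    for p, q in zip(points, probs):
        # Keep each point independently; the expected number kept is
        # controlled by scaling the per-point probability.
        keep_prob = min(1.0, num_samples * q)
        if random.random() < keep_prob:
            sampled.append((p, 1.0 / keep_prob))   # weight = 1 / Pr[kept]
    return sampled
```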
It’s this combination of linear approximations and random samples that enables the algorithm to compress data on the fly. On the basis of the samples, the algorithm can recalculate the optimal line segments, if needed, as new data arrives.

Satisfaction guaranteed

During compression, some information is lost, but Feldman, Rus, and graduate student Cynthia Sung, the paper’s third author, were able to provide strict mathematical guarantees that the error introduced will stay beneath a low threshold. In many big-data contexts, a slightly erroneous approximation is much better than a calculation that can’t be performed at all.

In principle, the same approach could work with any type of data, in many more dimensions than the two recorded by GPS receivers.
For instance, with each GPS reading, a car could also record temperature and air pressure and take a snapshot of the road ahead of it.
Each additional measurement would just be another coordinate of a point in multiple dimensions.
Then, when the compression was performed, the randomly sampled points would include snapshots and atmospheric data.
The data could serve as the basis for a computer system that, for instance, retrieved photos that characterized any stretch of road on a map, in addition to inferring traffic patterns.

The trick in determining new applications of the technique is to find cases in which linear approximations of point scatters have a clear meaning. In the case of GPS data, that’s simple: Each line segment represents the approximate path taken between turns.
One of the new applications that Feldman is investigating is the analysis of video data, where each line segment represents a scene, and the junctures between line segments represent cuts. There, too, the final representation of the data would automatically include sample frames from each scene.

According to Alexandre Bayen, an associate professor of systems engineering at the University of California at Berkeley, the MIT researchers’ new paper “pioneers the field” of “extracting repeated patterns from a GPS signal and using this data to produce maps for streaming GPS data.” In computer science parlance, Bayen explains, a reduced data set that can be processed as if it were a larger set is called a “coreset.” “The coreset is a good solution to big-data problems because they extract efficiently the semantically important parts of the signal and use only this information for processing,” Bayen says. “These important parts are selected such that running the algorithm on the coreset data is only a little bit worse than running the algorithm on the entire data set, and this error has guaranteed bounds.”
With an expansionary push in recent years among entrepreneurs, Brazil is rapidly becoming one of the more important investors in Latin America and is making its presence felt as far away as the United States, Africa and Europe.
Right-hander Ervin Santana and the Los Angeles Angels agreed to a four-year, $30 million contract yesterday, a day after their scheduled arbitration hearing was


Latest Sheets (0)

Member has not submitted any sheets yet.


Latest Requests (0)

Member has not requested any sheets yet.


Latest Friends (0)

Member has not added anyone as a friend yet.