Friday, August 14, 2015

Super-Scary Android Flaw Found

Stagefright, which processes several popular media formats, is implemented in native code -- C++ -- which is more prone to memory corruption than memory-safe languages such as Java, according to Zimperium.
Stagefright has several remote code execution vulnerabilities that can be exploited using various methods, Zimperium said.
The worst of them doesn't require any user interaction.
The vulnerabilities critically expose 95 percent of Android devices -- about 950 million, by Zimperium's count.
"Users of Android versions older than 4.1 are at extreme risk," Zimperium researcher Joshua Drake told LinuxInsider.

The No-Touch Flaw

Attackers need nothing more than a victim's mobile phone number to exploit the most dangerous Stagefright flaw, Zimperium said.
They can send a specially crafted media file delivered as an MMS message.
A fully weaponized, successful attack could delete the message before the user sees it, leaving only a notification that the message was received.
The victim wouldn't need to take any action for the attack to be successful.
Zimperium reported the vulnerability to Google and submitted patches, which Google applied within 48 hours.

Who's Safe

Users of Silent Circle's Blackphone have been protected against these problems since the release earlier this month of PrivatOS version 1.1.7, Zimperium reported. Mozilla's Firefox for mobile, aka "Fennec," includes fixes for these issues in v38 and later versions.
Google is coordinating with members of the Open Handset Alliance to get the issues addressed in official Android-compatible devices.
"We thank Joshua Drake for his contributions," said Google spokesperson Elizabeth Markman. "The security of Android users is extremely important to us, and so we responded quickly -- and patches have already been provided to partners that can be applied to any device."

What's Happening Now

If you're an Android device user, expect nothing and prepare for trouble.
"Many carriers and manufacturers prefer to push patches out to customers themselves, if at all," said Ken Westin, security analyst for Tripwire.
That means "even well after the patches are made public, more than half [of users] will still be vulnerable," he told LinuxInsider.
Further, this vulnerability goes back to Android 2.2, which was released five years ago, Westin pointed out, so "some of these devices may not have patches available through their carriers as they are too old and are no longer supported."
"This problem doesn't show any signs of going away," Drake said. "Even Nexus devices remain without a patch today, presumably because of this very problem."
Tripwire so far has not seen any exploits of the Stagefright flaw in the wild, although "this can change very quickly now that the vulnerability has been exposed," Westin said.

Android's General Safety Overview

Most Android devices, including all newer devices, "have multiple technologies that are designed to make exploitation more difficult," Google's Markman told LinuxInsider. Android devices "also include an application sandbox designed to protect user data and other applications on the device."
However, the jury's still out on whether sandboxes can fully protect devices.
Bluebox last year discovered an Android design error it dubbed "Fake ID," which let malware sneak by Android's app sandbox and take control of other apps.
Google removed the Android webview Flash flaw from Android 4.4 KitKat, but 82 percent of devices couldn't update to the new version of the OS because mobile carriers and manufacturers delayed or did not deliver the update, Bluebox said.
Sandboxes have failed to stop advanced cyberattacks, according to FireEye.

Staying Safe in the Malware Storm

Applying strong authentication to critical apps could help Android users remain safe, Secure Channels CEO Richard Blech told LinuxInsider. Also, login credentials should not be kept on the device.
"Always use a currently supported mobile device," Zimperium's Drake suggested, and "keep your device updated to the latest version at all times." If an update isn't available, "manually install an OS like CyanogenMod that supports older devices for a longer period of time."

Tuesday, August 11, 2015

heuristic

A heuristic is a commonsense guideline used to increase the probability of solving a problem by directing one's attention to things that are likely to matter. The word is derived from the Greek "heurisko," which simply means "I find." The exclamation "eureka," meaning "I found it!", shares roots with "heuristic."

Just as the "Eureka!"-screaming forty-niners of California's Gold Rush did not have secret knowledge or tools to tell them exactly where all the gold was buried, software testers don't know exactly where the bugs are going to hide. However, both software testers and gold miners know where bugs and gold have been found before. We can use that knowledge of past discoveries and the nature of what we seek to create heuristics that help us narrow in on the areas most likely to contain the treasure.

Gold miners and testers can find treasure by accident. However, intentional exploration for bugs and gold is more likely to produce results than aimless wandering. That last statement is a heuristic. It is true most of the time, but sometimes it can be proven false. Sometimes wandering testers and miners stumble into something very important. I just don't want to do all my testing by accident.
Heuristic-based testing may not give us concrete answers, but it can guide us to the important things to test. Heuristics can also be used in automation to provide information to guide human testers.
There was a time that I told developers and project managers that I could not test their products when the requirements did not include straightforward "testable" criteria. I thought that I could not test without being able to report "pass" or "fail" for each test.
A good example was a requirement that stated something like "the user shall not have to wait an unacceptable amount of time". As a good quality school tester working in a factory school organization, I demanded to know how long was acceptable before I could start testing. I wanted to quantify "unacceptable". In this case, the truth is that "unacceptable" will vary based on the user. There were no contractual SLAs to satisfy. I may not be able to report that the requirement is met, but I can provide useful information to the project team to assist in determining if the performance is acceptable.
I have since learned that answers to heuristic questions are useful. It was in that same project that I started applying heuristics to automated data validation. Even without concrete requirements, we testers can provide information that is useful in answering important testing questions -- especially the qualitative questions. (As a side note, I am now amazed at how much we who call ourselves "Quality Assurance" like to focus on quantitative requirements and metrics.)
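To make the wait-time example concrete, here is a minimal sketch of a heuristic performance check in Python. Rather than asserting pass/fail against a quantified requirement, it reports information a human tester can weigh in context. The threshold and the sample response times are invented for illustration.

```python
def heuristic_wait_report(response_times_ms, suspicion_threshold_ms=2000):
    """Flag responses that *might* feel unacceptably slow.

    Returns data for human judgment instead of a hard pass/fail,
    since "unacceptable" varies by user and context.
    """
    slow = [t for t in response_times_ms if t > suspicion_threshold_ms]
    return {
        "samples": len(response_times_ms),
        "max_ms": max(response_times_ms),
        "suspiciously_slow": slow,
        "slow_fraction": len(slow) / len(response_times_ms),
    }

report = heuristic_wait_report([180, 450, 2600, 90, 3100])
print(report["suspiciously_slow"])  # [2600, 3100]
```

The output is a report, not a verdict: the project team, not the script, decides whether 2.6 seconds is acceptable for this user base.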
To use heuristics in testing, create a list of open-ended questions and guidelines. This will not be a pass/fail list of test criteria. It will not be a list of specific test steps. Instead it can be used to guide your test scripting and exploration for bugs. You will likely develop general heuristics that you can apply to all your testing and specific heuristics that apply to specific applications.
We need to be careful to apply heuristics as heuristics and not as enforceable rules. For example, most people involved in testing web applications have heard the heuristic that every page should be within three clicks of any other page. Applying this "rule" to web application design usually results in better usability. However, it does not always improve usability. Sometimes making every page within three clicks of another is not reasonable. Adding too many links is likely to confuse users more than it helps. (Another heuristic?) Complex workflows often require that pages be more than three clicks away from others. Common sense needs to be applied to heuristics to ensure they are applied only when they fit the context.
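The three-click heuristic is easy to check mechanically, which makes it a good candidate for automation that informs rather than enforces. This sketch computes click distances with a breadth-first search over a hypothetical site map (the link graph is invented for the example) and reports pages that exceed the heuristic, leaving the judgment call to a human.

```python
from collections import deque

# Hypothetical site map: page -> pages it links to.
site = {
    "home":     ["products", "about"],
    "products": ["widget", "gadget"],
    "about":    ["contact"],
    "widget":   ["specs"],
    "gadget":   [],
    "contact":  [],
    "specs":    [],
}

def clicks_from(start, graph):
    """Return the minimum number of clicks from `start` to each reachable page."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        page = queue.popleft()
        for link in graph[page]:
            if link not in dist:
                dist[link] = dist[page] + 1
                queue.append(link)
    return dist

over = [p for p, d in clicks_from("home", site).items() if d > 3]
print(over)  # pages more than three clicks from home -- here, none
```

A report like this flags candidates for review; whether a deep page is a usability problem or a reasonable step in a complex workflow is a context decision, not a build failure.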
Happy bug prospecting.

Computer program fixes old code faster than expert engineers

Last year, MIT computer scientists and Adobe engineers came together to try to solve a major problem that many companies face: bit-rot.
A good example is Adobe’s successful Photoshop photo editor, which just celebrated its 25th birthday. Over the years Photoshop had accumulated heaps of code that had been optimized for what is now old hardware.
“For high-performance code used for image-processing, you have to optimize the heck out of the software,” says Saman Amarasinghe, a professor at MIT and researcher at the Computer Science and Artificial Intelligence Laboratory (CSAIL). “The downside is that the code becomes much less effective and much more difficult to understand.”
This results in what Amarasinghe describes as “a billion-dollar problem”: companies like Adobe having to devote massive manpower to going back into the code every few years and, by hand, testing out a bunch of different strategies to try to patch it.
But what if there were a computer program that could automatically fix old code so that engineers can focus on more important tasks, such as actually dreaming up new software?
Enter Helium, a CSAIL system that revamps and fine-tunes code without ever needing the original source, in a matter of hours or even minutes.
The team started with a simple building block of programming that’s nevertheless extremely difficult to analyze: binary code that has been stripped of debug symbols, which represents the only piece of code that is available for proprietary software such as Photoshop.
A particular type of computational kernel popular for such software is the “stencil kernel,” which allows you to perform operations over entire areas of pixels. Stencil kernels are especially important to update because they use huge amounts of memory and compute power, and their performance degenerates quickly as new hardware becomes available.
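For readers unfamiliar with the term, a stencil computes each output pixel as a fixed function of its neighborhood. The toy sketch below shows the idea with a 3x3 box blur on a tiny grayscale image; it is an illustration of the general pattern, not code from Helium or Halide.

```python
def box_blur(img):
    """3x3 box-blur stencil; edge pixels are left unchanged for simplicity."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Each interior output pixel averages its 3x3 neighborhood --
            # the same computation repeated at every position.
            total = sum(img[y + dy][x + dx]
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = total // 9
    return out

img = [[9, 9, 9],
       [9, 0, 9],
       [9, 9, 9]]
print(box_blur(img)[1][1])  # 8  (72 // 9)
```

It is precisely this "same computation over and over again" regularity, mentioned by Mendis below, that lets Helium accumulate enough data from a stripped binary to recover the original algorithm.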
With Helium, the researchers are able to lift these kernels from a stripped binary and restructure them as high-level representations that are readable in Halide, a CSAIL-designed programming language geared towards image-processing.
Going from binary to high-level languages was a big leap that the team originally didn’t think was doable, according to lead author Charith Mendis.
“The order of operations in these optimized binaries are complicated, which means that they can be hard to disentangle,” says Mendis, a graduate student at CSAIL. “Because stencils do the same computation over and over again, we are able to accumulate enough data to recover the original algorithms.”
From there, the Helium system then replaces the original bit-rotted components with the re-optimized ones. The net result: Helium can improve the performance of certain Photoshop filters by 75 percent, and the performance of less optimized programs such as Microsoft Windows’ IrfanView by 400 to 500 percent.
“We’ve found that Helium can make updates in one day that would take human engineers upwards of three months,” says Amarasinghe. “A system like this can help companies make sure that the next generation of code is faster, and save them the trouble of putting 100 people on these sorts of problems.”
The research was presented in a paper accepted to the Association for Computing Machinery SIGPLAN conference on Programming Language Design and Implementation (PLDI 2015), which took place June 13-17 in Portland, Oregon.
The paper was written by Mendis, fellow graduate students Jeffrey Bosboom and Kevin Wu, research scientist Shoaib Kamil, postdoc Jonathan Ragan-Kelley PhD '14, Amarasinghe, and researchers from Adobe and Google.
“We are in an era where computer architectures are changing at a dramatic rate, which makes it important to write code that can work on multiple platforms,” says Mary Hall, a professor at the University of Utah's School of Computing. “Helium is an interesting approach that has the potential to facilitate higher-level descriptions of stencil computations that could then be more easily ported to future architectures.”
One unexpected byproduct of the work is that it lets researchers see the different tricks that programmers used on the old code, such as archaeologists combing through computational fossils.
“We can see the ‘bit hacks’ that engineers use to optimize their algorithms,” says Amarasinghe, “as well as better understand the larger context of how programmers approach different coding challenges.”


GRAPH MATCHING

An implementation of a dual decomposition technique for the graph matching optimization problem described in
    Feature Correspondence via Graph Matching: Models and Global Optimization.
    Lorenzo Torresani, Vladimir Kolmogorov and Carsten Rother.
    In European Conference on Computer Vision (ECCV), October 2008.

SRMP

An implementation of the "SRMP" algorithm described in
    A new look at reweighted message passing.
    Vladimir Kolmogorov.
    In IEEE Transactions on Pattern Analysis and Machine Intelligence, May 2015.

The New Approach on Fuzzy Decision Forest

The introduction of decision trees in the 1980s led to thriving research through the 1990s. However, handling data that are not in crisp form remained a problem to be solved, since computers were designed to calculate only crisp data. This paper presents a method that allows decision trees to handle fuzzy data. For better accuracy, it also introduces decision forests: bundles of decision trees that decide each case together. Additionally, we conducted an experiment to evaluate the performance of the new algorithm by comparing it with the support vector machine, which is widely regarded as one of the strongest algorithms in the data mining field.
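The two ideas in the abstract can be sketched in a few lines: a triangular membership function softens crisp thresholds into degrees, and a forest averages the degrees returned by several trees before defuzzifying. The "trees" here are trivial threshold rules invented purely for illustration; the paper's actual induction algorithm is not shown.

```python
def triangular(x, a, b, c):
    """Fuzzy membership: 0 outside (a, c), rising to 1 at x == b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def forest_vote(x, trees):
    """Average the fuzzy degrees from each tree, then defuzzify at 0.5."""
    score = sum(tree(x) for tree in trees) / len(trees)
    return ("positive" if score >= 0.5 else "negative"), score

trees = [
    lambda x: triangular(x, 0, 5, 10),   # tree 1: prefers mid-range x
    lambda x: 1.0 if x > 3 else 0.0,     # tree 2: a crisp rule
    lambda x: triangular(x, 2, 6, 12),   # tree 3: a slightly shifted view
]
label, score = forest_vote(6, trees)
print(label)  # each tree fires strongly at x = 6, so "positive"
```

The point of the ensemble is visible even at this scale: a single crisp rule flips abruptly at its threshold, while the averaged fuzzy score changes gradually as x moves.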

New Algorithm for Component Selection to Develop Component-Based Software with X Model

Component-Based Software Engineering (CBSE) is an approach used to enhance reusability by developing component-based software from preexisting software components or from components developed from scratch. A new algorithm is proposed for component selection using best-fit and first-fit strategies through the X model, which is used to develop component-based software with two approaches: development for reuse and development with reuse. When reusing a preexisting software component through development with reuse, component selection plays an important role. Component selection for Component-Based Software Development (CBSD) is a very challenging field for researchers and practitioners. This paper also presents two component selection problems, viz. the Simple Component Selection Problem (SCSP) and the Criteria Component Selection Problem (CCSP). On the basis of these two problems, the paper presents a new algorithm for optimal component selection from repositories. Lastly, the paper summarizes the factors the algorithm uses for optimal selection of components, with the help of X model repositories, to fulfill client requirements under SCSP and CCSP.
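The first-fit and best-fit strategies the abstract names can be illustrated with a small sketch. First-fit returns the first repository component whose features cover the requirements; best-fit returns the covering component with the least excess functionality. The repository entries and feature names below are invented for the example and are not from the paper.

```python
# Hypothetical repository: component name -> set of features it provides.
repository = {
    "AuthLite":  {"login", "logout"},
    "AuthPlus":  {"login", "logout", "sso", "audit"},
    "AuthExact": {"login", "logout", "sso"},
}

def first_fit(required, repo):
    """Return the first component that covers all required features."""
    for name, features in repo.items():
        if required <= features:
            return name
    return None

def best_fit(required, repo):
    """Return the covering component with the fewest unneeded features."""
    candidates = [(len(features - required), name)
                  for name, features in repo.items() if required <= features]
    return min(candidates)[1] if candidates else None

need = {"login", "logout", "sso"}
print(first_fit(need, repository))  # AuthPlus  (first cover found)
print(best_fit(need, repository))   # AuthExact (covers with no extras)
```

The difference matters in practice: first-fit is cheaper to evaluate over a large repository, while best-fit avoids pulling in surplus functionality that adds integration and maintenance cost.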