Computer scientist Lior Shamir has developed a technique that uses knee MRI scans as a form of biometric identification. He published a research paper in the International Journal of Biometrics detailing the results of his research. Studying knee MRI images from 2,686 different patients, analyzed using the wndchrm image classification scheme, he concluded that:
“Experimental results show that the rank-10 identification accuracy using the MRI knee images is ~93% for a dataset of 100 individuals, and ~45% for the entire dataset of 2,686 persons.”
Although the accuracy drops to roughly 45% for the entire dataset of 2,686 people, Mr. Shamir has demonstrated that internal body structures imaged with biomedical imaging devices can also enable biometric identification of individuals.
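For readers unfamiliar with the metric, here is a minimal sketch of how rank-k identification accuracy is typically computed: a probe counts as correctly identified if its true identity appears among the k most similar gallery entries. The similarity scores, IDs, and helper function below are hypothetical and not taken from Shamir's paper.

```python
import numpy as np

def rank_k_accuracy(similarity, true_ids, gallery_ids, k=10):
    """Fraction of probes whose true identity appears among the k most
    similar gallery entries (rank-k identification accuracy)."""
    hits = 0
    for i, true_id in enumerate(true_ids):
        # indices of gallery entries, sorted from most to least similar
        top_k = np.argsort(-similarity[i])[:k]
        if true_id in gallery_ids[top_k]:
            hits += 1
    return hits / len(true_ids)

# Toy example: 5 probes matched against a gallery of 100 enrolled subjects
rng = np.random.default_rng(0)
similarity = rng.random((5, 100))          # hypothetical similarity scores
gallery_ids = np.arange(100)               # one enrolled image per subject
true_ids = np.array([3, 17, 42, 64, 99])   # ground-truth identities of the probes
print(rank_k_accuracy(similarity, true_ids, gallery_ids, k=10))
```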
It should be noted that Mr. Shamir also pointed out in his research paper that since MRI images “are used for the purpose of imaging internal parts of the body, this approach of biometric identification can potentially offer high resistance to deception.” Even so, the likelihood of this modality reaching the point where it could be feasibly deployed remains low.
What should be noted, though, is that researchers and scientists are increasingly uncovering new ways to capture and identify people through their biometric information. Just in the last few years, we have seen advances in new biometric modalities like gait, gesture, heartbeat, ears, body odor, and even the soles of your shoes (Wired magazine actually published an article today entitled “11 Body Parts Defense Researchers Will Use to Track You” that’s worth a read). Computer game manufacturers have already introduced innovative biometric features to their gaming consoles that track and capture end-user nuances as part of the interactive experience. Even biometric law enforcement applications continue to evolve, demonstrated by the recent announcement of “biometric handcuffs” that can automatically deliver shocks and injections to unruly detainees.
Innovation and creativity continue to be hallmarks of the biometrics research and development community, and we can expect to see new and different biometric modalities sprouting up as we move through 2013 and beyond. What is important to remember is that whether you support the biometrics industry or are opposed to it, biometric identification will clearly play a part in your life (if it has not already) in the years to come. The question is, which biometric modality will it be?
Before we talk about 4G or GSM, you must know about WCDMA, which stands for Wideband Code Division Multiple Access. It is a third-generation (3G) network standard developed by NTT Docomo, building on CDMA technology from the 2G era. Almost all cellular carriers are now based on WCDMA for 3G, with Sprint as the main exception (though they are slowly converting). WCDMA was not a revolutionary upgrade to the network; because the standard is based on 2G GSM/EDGE, it allows seamless switching between GSM and EDGE and is compatible with the rest of the UMTS family.
A GSM (Global System for Mobile Communications) SIM slot supports 2G features with voice and data services (GPRS and EDGE), with data transfer rates up to about 500 kbps. It is the most popular mobile technology across the world. The most commonly used frequency bands are 900 MHz and 1800 MHz.
CDMA (Code Division Multiple Access) can be considered the 2G counterpart of GSM, only it uses a different technology. The frequency bands used by this technology are 800 MHz and 1900 MHz.
WCDMA (Wideband CDMA) and UMTS (Universal Mobile Telecommunications System) are used interchangeably these days. Both are essentially 3G technologies, providing voice and data services at a higher data rate than traditional 2G.
SIM slots supporting GSM services do not support CDMA, because the two use different communication technologies, and vice versa.
However, 3G SIM slots are backward compatible, meaning they support 2G too. Similarly, 4G LTE slots are also backward compatible.
Earlier, mobile phones were available with either GSM or CDMA service. Now dual-SIM phones are available on the market, supporting GSM as well as CDMA SIMs in different slots.
CDMA – Code Division Multiple Access
CDMA is a channel access method used by various radio communication technologies. It should not be confused with the mobile phone standards called cdmaOne, CDMA2000 (the 3G evolution of cdmaOne) and WCDMA (the 3G standard used by GSM carriers), which are often referred to simply as CDMA and use CDMA as an underlying channel access method.
One of the concepts in data communication is the idea of allowing several transmitters to send information simultaneously over a single communication channel. This allows several users to share a band of frequencies; it is the multiple access, or MA, in CDMA. This is also why Verizon has much better penetration than AT&T or T-Mobile.
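As a rough, hypothetical illustration of that multiple-access idea (a toy direct-sequence scheme, not any particular cellular standard), the sketch below spreads two users’ bits with orthogonal Walsh codes, sums their chips on a single shared channel, and recovers each user’s bits by correlating against that user’s own code.

```python
import numpy as np

# Hypothetical 4-chip orthogonal Walsh codes for two users
code_a = np.array([ 1,  1,  1,  1])
code_b = np.array([ 1, -1,  1, -1])

def spread(bits, code):
    """Map each bit (0/1) to +/-1 and multiply by the user's chip code."""
    symbols = 2 * np.array(bits) - 1            # 0 -> -1, 1 -> +1
    return np.concatenate([b * code for b in symbols])

def despread(signal, code):
    """Correlate the combined channel signal against one user's code per symbol."""
    chips = signal.reshape(-1, len(code))
    correlation = chips @ code / len(code)
    return (correlation > 0).astype(int)

bits_a = [1, 0, 1]
bits_b = [0, 0, 1]

# Both users transmit at the same time on the same channel
channel = spread(bits_a, code_a) + spread(bits_b, code_b)

print(despread(channel, code_a))  # -> [1 0 1]
print(despread(channel, code_b))  # -> [0 0 1]
```

Because the two codes are orthogonal, each receiver’s correlation cancels out the other user’s contribution, which is the essence of sharing one channel among many transmitters.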
However, there is a disadvantage to CDMA: the phone does not use a subscriber identity module (SIM) to access cell towers, it uses an electronic serial number instead, which is why Verizon and Sprint phones usually don’t have a SIM card. With WCDMA being deployed, phones for all carriers now have a SIM. Still, if the phone does not have a GSM (2G) antenna, a (W)CDMA phone will not be compatible with GSM carriers.
GSM – Global System for Mobile Communications
Cellular networks divide into four generations:
1G – Analog Cellular Network
2G – GSM or CDMA
3G – WCDMA
4G – LTE, HSPA+ (based on WCDMA), WiMAX
After the first generation, carriers chose either the GSM or the CDMA standard for their 2G network. The two networks are not compatible with each other. However, a WCDMA phone will be compatible with a GSM network if it has a SIM slot and a GSM antenna.
GSM carrier frequencies are 900, 1800, 1900, and 2100 MHz. That is why older iPhones are only capable of 2G service on T-Mobile’s network.
Phone Compatibility
GSM/CDMA compatible phone – iPhone 4S
GSM phone compatible carriers – AT&T, T-Mobile, Solavei (based on WCDMA and GSM)
GSM phone possibly compatible carriers – Verizon or Sprint, if it is an iPhone 4S
CDMA-only carriers – U.S. Cellular and Sprint
What are Project Baselines?
A baseline is used to analyze current performance against the expected level for a specific activity in an established time phase.
In project management, a baseline refers to the accepted and approved plans and their related documents. Project baselines are generally approved by the project management team and are used to measure and control project activities.
Though baselines are outputs of the planning stage, they are referenced and updated during the executing and monitoring & controlling process groups.
Baselines give the project manager the best way to understand project progress (by analyzing baseline vs. actual) and to forecast the project outcome.
Baselines are important inputs to a number of project processes, and the outputs of many processes raise change requests against these baselines.
Project baselines include, but are not limited to:
- Schedule baseline
- Cost baseline
- Scope baseline
- Quality baseline
Baselines are prepared for the triple-constraint management areas: scope, time, and cost (plus quality). All of the above are considered components of the project management plan. Often the scope, schedule, and cost baselines are combined into a performance measurement baseline that is used as an overall project baseline against which project performance can be measured. The performance measurement baseline is used for earned value measurements.
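As a minimal sketch of how earned value measurement works against such a baseline (the figures below are invented for illustration), the standard variances and indices can be computed as follows:

```python
# Hypothetical figures for a project at a given status date
planned_value = 50_000   # PV: budgeted cost of work scheduled (from the baseline)
earned_value  = 42_000   # EV: budgeted cost of work actually performed
actual_cost   = 46_000   # AC: actual cost of the work performed

schedule_variance = earned_value - planned_value   # SV = EV - PV
cost_variance     = earned_value - actual_cost     # CV = EV - AC
spi = earned_value / planned_value                 # SPI = EV / PV
cpi = earned_value / actual_cost                   # CPI = EV / AC

print(f"SV = {schedule_variance}, CV = {cost_variance}")
print(f"SPI = {spi:.2f}, CPI = {cpi:.2f}")  # values below 1.0 mean behind schedule / over budget
```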
Benchmark, standard, guideline and baseline are distinct terms that are often used interchangeably in management.
There is a need to adequately master “SMALL DATA” design, structures, standards and functions if we want to effectively and sustainably handle the challenges of Big Data. Currently, our national database structures are perceived to be weak and vulnerable!
The hype in and around the concept and acclaimed deliverables of Big Data is so overwhelmingly tempting that it currently blinds developing countries and economies as to how to march forward with their plans for engaging the 21st-century information technology ecosystem.
However, that does not in any way remove or diminish the fundamental need for, and importance of, appropriate and secure storage of data at a local data centre. Thanks to Mainone, which has professionally pointed the right way forward by establishing a formidable and globally certified Tier 3 data centre in Nigeria.
Reliable Internet resources have revealed that the world’s technological per-capita capacity to store information has roughly doubled every 40 months since the 1980s. As of 2012, 2.5 exabytes (2.5×10^18 bytes) of data were created every day. As of 2014, 2.3 zettabytes (2.3×10^21 bytes) of data were created every day by super-power high-tech corporations worldwide. As far as available data can lead us, no Nigerian corporate enterprise is currently in that Big Data league!
The convergence in the global ICT ecosystem has revealed the importance and strength of embedded systems. This phenomenal trend has led to what is now known as Softwareization. Due to this complex trend of “Softwareization of Things” (SoT), every nation is looking inwards for strategies to adequately address the challenges and deliver the solutions required to respond to current and emerging desires in ICT. Nigeria must rethink her IT strategy, organize her information technology ecosystem and master the design, processes, retrieval and storage of SMALL DATA as a reliable gateway, veritable tool and migration strategy towards Big Data.
Simply defined, Softwareization is the Internetization arising from the convergence of information technology, telecommunications and broadcasting. These converged technologies have led to the monumental inflation of content on the Internet and compelled mankind to migrate from IPv4 to IPv6, giving birth to the “Internet of Things” (IoT), where all connectable devices will acquire a recognized and secured IP address.
The power and significance of software as the blood that flows through the digital world becomes more evident by the day. Today it represents the blanket with which we wrap the earthly cold of our planet, keeping it warm for sustainable development.
Big data is “massively parallel software running on tens, hundreds, or even thousands of servers”. It is an all-encompassing term for any collection of data sets so large and complex that they become difficult to process using traditional data processing applications.
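As a toy illustration of that “massively parallel” idea, shrunk from thousands of servers down to a handful of worker processes on one machine (the records below are invented), the sketch splits data across workers, counts words in parallel, and merges the partial results, the same map-and-reduce pattern that big data platforms scale out across clusters:

```python
from collections import Counter
from multiprocessing import Pool

def count_words(chunk):
    """Map step: count words in one chunk of records."""
    counts = Counter()
    for record in chunk:
        counts.update(record.lower().split())
    return counts

def merge(partials):
    """Reduce step: merge the partial counts from all workers."""
    total = Counter()
    for partial in partials:
        total.update(partial)
    return total

if __name__ == "__main__":
    # Invented stand-in for a huge collection of records
    records = ["big data is parallel", "data is large", "parallel software"] * 1000
    n_workers = 4
    chunks = [records[i::n_workers] for i in range(n_workers)]

    with Pool(n_workers) as pool:
        partial_counts = pool.map(count_words, chunks)

    print(merge(partial_counts).most_common(3))
```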
The challenges include analysis, capture, curation,
search, sharing, storage, transfer, visualization and privacy violations.
The trend to larger data sets is due to the
additional information derivable from analysis of a single large set of related
data, as compared to separate smaller sets with the same total amount of data,
allowing correlations to be found to spot business trends, prevent diseases,
combat crime and so on. Global examples, according to Internet resources, include but are not limited to the following: Walmart handles more than 1 million
customer transactions every hour, which are imported into databases estimated
to contain more than 2.5 petabytes (2560 terabytes) of data – the
equivalent of 167 times the information contained in all the books in the US
Library of Congress. Facebook handles 50 billion photos from its user base.
FICO Falcon Credit Card Fraud Detection System protects 2.1 billion active
accounts world-wide.
The volume of business data worldwide, across all
companies, doubles every 1.2 years, according to estimates. Windermere Real
Estate uses anonymous GPS signals from nearly 100 million drivers to help new
home buyers determine their typical drive times to and from work at various times of the day. Scientists regularly encounter limitations due to
large data sets in many areas, including meteorology, genomics, connectomics,
complex physics simulations, biological and environmental research, and in
e-Science in general.
Without any fear of contradiction, Big Data is critical and a very huge emerging market, running into many thousands of dollars and deserving of being looked into by corporate giants in developing economies.
However, that adventure should not be plunged into blindly. There is a critical need to look inwards and restructure our
national Software architecture with formidable standards – ensuring that we
build local capacities, capabilities and smart skills that can conquer and
master small data and effectively domesticate Big Data for global
competitiveness.
STOPPING HARDWARE TROJANS IN THEIR TRACKS
A FEW ADJUSTMENTS COULD PROTECT CHIPS AGAINST MALICIOUS CIRCUITRY
By Subhasish Mitra, H.-S. Philip Wong & Simon Wong
Long ago, the story goes, Greek soldiers tried for 10 years
to conquer the city of Troy. Eventually, they departed, leaving behind a large
wooden horse, apparently as a gift. The Trojans pulled the beautiful tribute inside.
Later, a group of Greek soldiers slipped out of the horse and opened the
gates for their compatriots, who easily sacked the sleeping city.
Nowadays, some 3,000 years on, a Trojan is a
seemingly innocuous piece of software that actually contains malicious code.
Security companies are constantly developing new tests to check for these
threats. But there is another variety of Trojan—the “hardware Trojan”—that has
only started to gain attention, and it could prove much harder to thwart.
A hardware Trojan is exactly what it sounds like: a
small change to an integrated circuit that can disturb chip operation. With the
right design, a clever attacker can alter a chip so that it fails at a crucial time or generates false
signals. Or the attacker can add a backdoor that can sniff out encryption keys
or passwords or transmit internal chip data to the outside world.
There’s good reason to be concerned. In 2007, a
Syrian radar failed to warn of an incoming air strike; a backdoor built into
the system’s chips was rumored to be responsible. Other serious allegations of
added circuits have been made. And there has been an explosion in reports of counterfeit chips, raising questions
about just how much the global supply chain for integrated circuits can be
trusted.
If any such episode has led to calamity, the role
of the Trojan has been kept secret. Indeed, if any potentially threatening
hardware Trojans have been found, the news hasn’t yet been made public. But
clearly, in the right place a compromised chip could scuttle antimissile
defenses, open up our personal data to the world, or down a power plant or even
a large section of a power grid.
A lot of research is still being devoted to
understanding the scope of the problem. But solutions are already starting to
emerge. In 2011, the United States’ Intelligence Advanced Research Projects Activity
(IARPA) started a new program to explore ways to make trusted chips. As part of
that program, our team at Stanford University, along with other research
groups, is working on fundamental changes to the way integrated circuits are
designed and manufactured.
Today we try to protect against hardware Trojans by
keeping careful tabs on where chips are made, limiting the opportunity for
mischief by limiting who is authorized to make a chip. But if this research
succeeds, it could make it practical for anyone to design and build a chip
wherever they like and trust that it hasn’t been tampered with. More radically,
our research could open up ways to let you use a chip even if there is a Trojan
inside.
Get full gist at:
http://spectrum.ieee.org/semiconductors/design/stopping-hardware-trojans-in-their-tracks/
SPEED OF LIGHT NOT SO CONSTANT AFTER ALL
By Andrew Grant
Light doesn’t always travel at the speed of
light. A new experiment reveals that focusing or manipulating the structure of
light pulses reduces their speed, even in vacuum conditions.
A paper reporting the research, posted online at
arXiv.org and accepted for publication, describes hard experimental evidence
that the speed of light, one of the most important constants in physics, should
be thought of as a limit rather than an invariable rate for light zipping
through a vacuum.
“It’s very impressive work,” says Robert Boyd, an
optical physicist at the University of Rochester in New York. “It’s the sort of
thing that’s so obvious, you wonder why you didn’t think of it first.”
Researchers led by optical physicist Miles
Padgett at the University of Glasgow demonstrated the effect by racing photons
that were identical except for their structure. The structured light
consistently arrived a tad late. Though the effect is not recognizable in
everyday life and in most technological applications, the new research
highlights a fundamental and previously unappreciated subtlety in the behaviour
of light.
The speed of light in a vacuum, usually denoted
c, is a fundamental constant central to much of physics, particularly
Einstein’s theory of relativity. While measuring c was once considered an
important experimental problem, it is now simply specified to be 299,792,458
meters per second, as the meter itself is defined in terms of light’s vacuum
speed. Generally if light is not traveling at c it is because it is moving
through a material. For example, light slows down when passing through glass or
water.
Padgett and his team wondered if there were
fundamental factors that could change the speed of light in a vacuum. Previous
studies had hinted that the structure of light could play a role. Physics
textbooks idealize light as plane waves, in which the fronts of each wave move
in parallel, much like ocean waves approaching a straight shoreline. But while
light can usually be approximated as plane waves, its structure is actually
more complicated. For instance, light can converge upon a point after passing
through a lens. Lasers can shape light into concentrated or even bull’s-eye–shaped
beams.
The researchers produced pairs of photons and
sent them on different paths toward a detector. One photon zipped straight
through a fibre. The other photon went through a pair of devices that
manipulated the structure of the light and then switched it back. Had structure
not mattered, the two photons would have arrived at the same time. But that
didn’t happen. Measurements revealed that the structured light consistently
arrived several micrometres late per meter of distance travelled.
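To put that lag in perspective, a quick back-of-the-envelope conversion (assuming a lag of roughly 5 micrometres per metre, a value chosen here only for illustration since the exact figure isn’t quoted above) gives the fractional slowdown and the corresponding time delay:

```python
c = 299_792_458.0        # speed of light in vacuum, m/s
lag_per_metre = 5e-6     # assumed ~5 micrometres of lag per metre travelled

fractional_slowdown = lag_per_metre / 1.0    # ~5 parts per million
time_delay_per_metre = lag_per_metre / c     # seconds of delay per metre

print(f"fractional slowdown ~ {fractional_slowdown:.1e}")
print(f"delay per metre     ~ {time_delay_per_metre * 1e15:.1f} femtoseconds")
```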
“I’m not surprised the effect exists,” Boyd says.
“But it’s surprising that the effect is so large and robust.”
Greg Gbur, an optical physicist at the University
of North Carolina at Charlotte, says the findings won’t change the way
physicists look at the aura emanating from a lamp or flashlight. But he says
the speed corrections could be important for physicists studying extremely
short light pulses.
For details, visit: www.sciencenews.org/article/speed-light-not-so-constant-after-all
NEWLY IDENTIFIED BRAIN CIRCUIT HINTS AT HOW FEAR MEMORIES ARE MADE
WORKING WITH RATS, RESEARCHERS REVEAL THE SHIFTING NEURAL CIRCUITRY BEHIND THE RECALL OF UNPLEASANT EXPERIENCES
Scientists have
identified a previously unknown set of brain connections that play an important
role in how fear memories are stored and recalled. The discovery may lead to a
better understanding of post-traumatic stress disorder and other anxiety
problems.
Two teams of researchers independently found the
newly identified brain-cell circuit when studying rodents’ ability to recall a
fear memory. The circuit that initially recalled the memory differed from the
circuit that retrieved the memory days later, the researchers report in two
papers online January 19 in Nature. It is the first time scientists
have shown that a memory can be on temporary hold in one area of the brain and
later released to a completely separate spot.
For details, visit: www.sciencenews.org/article/newly-identified-brain-circuit-hints-how-fear-memories-are-made