Frequently Asked Questions About the TATP Benchmark

Telecommunication Application Transaction Processing (TATP) is a benchmark designed to measure the performance of databases used
within telco and other network infrastructure applications such as VoIP,
SoftSwitches, Class 5 switches, provisioning, element management, media gateways, Emergency-911 systems, etc. The
first incarnation of the benchmark existed as an undisclosed
implementation at Nokia Networks in the late nineties. The first public
disclosure came
in the form of an
M.S. thesis published in 2003. The thesis, "Open Source Database
Systems: Systems
Study, Performance and Scalability," by Toni Strandell, was published
by the
Department of Computer Science of the University of Helsinki in May 2003.
Based on
that specification, Solid Information Technology (at that time, a
private company based in Helsinki) decided to implement the benchmark.
Solid called its
implementation TM1 (Telecom One) and released
the code to open source, under GPL2, in the Autumn of 2004. Nokia eventually published the original
specification and a corresponding implementation in February 2006, under the name
of
Network Database Benchmark (NDBB).
In January 2008, IBM Corporation acquired Solid. In March 2009, the
benchmark was given the new name of TATP and was published under CPL
1.0. Both TATP and NDBB
are based on the same database schema, load structure and population
rules. The
implementations are, however, different. An advantage of TATP is that
it is equipped
with a result database allowing for automatic result extraction and flexible result data processing
over longer periods
of time and complex test spaces.

To simulate a real-world experience, the TATP database schema is based on the
structure of a Home Location Register (HLR). Seven pre-defined transactions mimic
actions typically taken against the HLR. These transactions perform various inserts,
updates, deletes, and queries against the data in the database. Each transaction has a
certain probability, representative of the actual frequency of the transaction in live
applications. These probabilities are based on the original input to the M.S. thesis and
Solid's experience from real customer engagements. As in the real world, approximately
80% of the activity is read-only and the remaining 20% makes changes to the database.

The results of a TATP benchmark show the overall throughput of the system, measured as
the Mean Qualified Throughput (MQTh) of the target database system, in transactions
per second, over the seven transaction types. In addition, the response time
distributions for each of the transactions are reported. When combined, these two
types of results provide an estimate of the application speed, and highlight any
anomalies in the way the database management system handles the individual
transactions.

Three groups of vendors are expected to use the benchmark to demonstrate the speed
of their products: hardware vendors, operating system vendors, and relational database
management system (RDBMS) vendors.
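As a rough illustration of the workload mechanics described above, the sketch below draws transactions from a weighted mix and computes MQTh as qualified transactions per second. The transaction names and percentages are taken from the public TATP specification's default mix, not from this FAQ, so treat them as illustrative assumptions.

```python
import random

# Default TATP transaction mix in percent (per the public TATP
# specification; illustrative here). Three read transactions add up
# to the ~80% read-only share mentioned in the text.
TRANSACTION_MIX = {
    "GET_SUBSCRIBER_DATA": 35,    # read
    "GET_NEW_DESTINATION": 10,    # read
    "GET_ACCESS_DATA": 35,        # read
    "UPDATE_SUBSCRIBER_DATA": 2,  # write
    "UPDATE_LOCATION": 14,        # write
    "INSERT_CALL_FORWARDING": 2,  # write
    "DELETE_CALL_FORWARDING": 2,  # write
}

def pick_transaction(rng: random.Random) -> str:
    """Select the next transaction type according to the weighted mix."""
    names = list(TRANSACTION_MIX)
    weights = list(TRANSACTION_MIX.values())
    return rng.choices(names, weights=weights, k=1)[0]

def mqth(qualified_count: int, elapsed_seconds: float) -> float:
    """Mean Qualified Throughput: successfully completed ("qualified")
    transactions per second over the measurement interval."""
    return qualified_count / elapsed_seconds

rng = random.Random(42)
sample = [pick_transaction(rng) for _ in range(10_000)]
read_fraction = sum(1 for t in sample if t.startswith("GET")) / len(sample)
print(f"read fraction: {read_fraction:.2f}")        # close to 0.80
print(f"MQTh: {mqth(120_000, 60.0):.1f} tps")       # 2000.0 tps
```

A real driver would of course execute the chosen transaction against the database and count only the successful ("qualified") executions toward MQTh; the selection and throughput arithmetic are the same.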
Unlike the well-known Transaction Processing Performance Council TPC
Benchmarks, which are based on enterprise and e-commerce applications, the TATP
benchmark specifically measures data activity against a specific Telco environment,
the Home Location Register (HLR). The HLR lies at the core of every mobile phone system,
providing information that identifies the user, and lists the preferences, services and
billing options for the account. Every mobile phone call requires at least one access
to the HLR.

From the TPC web site: "While TPC benchmarks certainly
involve the measurement and evaluation of computer functions and operations, the TPC
regards a transaction as it is commonly understood in the business world: a
commercial exchange of goods, services, or money. A typical transaction, as defined by the
TPC, would include the updating to a database system for such things as inventory
control (goods), airline reservations (services), or banking (money)." In other words, the TPC
does not treat telco as an application domain.

As a result of this lack of industry-specific benchmarks, companies are forced to
create one-off benchmarks each time they need to assess components for building telco
applications.

A number of hardware, OS, and database companies, among others, are using TATP and have
published results. For example, Advanced Micro Devices and
MySQL have published results using TATP. Solid
has placed a description of the benchmark, including source code, on
SourceForge for free download. To validate the findings, published
results must be created or audited by an independent third party, and
must include enough information that any database professional could
reproduce the test.

Companies use industry-standard benchmarks to demonstrate their performance
relative to other products on the market. There has been a long history of RDBMS, OS
and hardware companies publishing TPC benchmarks.

Any company that publishes TATP results must do two things that improve the level of
trust you can have in their results. Firstly, they must describe the system they use in
sufficient detail that another database professional could reproduce the results.
Secondly, any published results must either be created by a third party or audited by a
third party.