


Performance Notes

At first consideration it may appear that the overhead of a resource-oriented, highly dynamic system would be considerable. Counter-intuitively, however, the benefits of this self-consistent abstraction far outweigh the costs and in practice yield large performance gains. This is so for four reasons:

First, the cost of dynamically linking resources is very small compared with the time spent computing resource values; we estimate that NetKernel adds no more than 2% overhead over a statically linked system.

Second, computation takes place in the URI address space, where previously computed resources are cached. This means that in many cases 30-50% of normal computation is redundant and simply does not occur. In the limit, for frequently used resources that are 'pseudo-static' but dynamically computed on first use, the performance gain is extremely large.
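The caching effect can be illustrated with a minimal sketch. The class and URI below are hypothetical and not part of any NetKernel API; the sketch only shows the principle that a representation keyed by URI is computed once and subsequent requests are served from cache.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

// Hypothetical sketch: a resource cache keyed by URI, loosely analogous
// to caching previously computed resource representations in an address space.
public class ResourceCache {
    private final ConcurrentHashMap<String, Object> cache = new ConcurrentHashMap<>();
    public final AtomicInteger computations = new AtomicInteger();

    public Object resolve(String uri, Function<String, Object> compute) {
        // computeIfAbsent runs the (expensive) computation only on a cache miss
        return cache.computeIfAbsent(uri, u -> {
            computations.incrementAndGet();
            return compute.apply(u);
        });
    }

    public static void main(String[] args) {
        ResourceCache cache = new ResourceCache();
        Function<String, Object> expensive = uri -> "representation of " + uri;
        cache.resolve("res:/report/2024", expensive); // computed
        cache.resolve("res:/report/2024", expensive); // served from cache
        System.out.println(cache.computations.get()); // prints 1
    }
}
```

The second request for the same URI performs no computation at all, which is the source of the redundancy elimination described above.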

Third, since code is a resource that is passed to language runtimes for execution, we can use transreption to dynamically compile it. Most of our scripting languages are dynamically compiled to Java bytecode and, since all code is cached, the more the code is executed the more opportunity the JVM has to apply JIT optimization. In addition, all non-traditional languages use transreption to compile their code; for example, all variants of XSLT use dynamically compiled stylesheets. Even higher-order system tools such as mappers and RDBMS configurations are dynamically 'compiled' into efficient, reusable object models.
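The compiled-stylesheet case can be sketched with the standard JAXP API. This is not NetKernel's transreption machinery; it is an assumed, simplified analogue showing how a stylesheet parsed and compiled once into a Templates object can be cached and reused for every subsequent transform.

```java
import javax.xml.transform.*;
import javax.xml.transform.stream.*;
import java.io.StringReader;
import java.io.StringWriter;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: cache compiled XSLT stylesheets so each is parsed and compiled
// once, then reused -- analogous to transrepting source into an executable form.
public class StylesheetCache {
    private static final Map<String, Templates> CACHE = new ConcurrentHashMap<>();
    private static final TransformerFactory FACTORY = TransformerFactory.newInstance();

    public static Templates compile(String key, String xslt) {
        return CACHE.computeIfAbsent(key, k -> {
            try {
                // The expensive parse/compile happens only on first use
                return FACTORY.newTemplates(new StreamSource(new StringReader(xslt)));
            } catch (TransformerConfigurationException e) {
                throw new RuntimeException(e);
            }
        });
    }

    public static String transform(Templates t, String xml) throws TransformerException {
        StringWriter out = new StringWriter();
        t.newTransformer().transform(
            new StreamSource(new StringReader(xml)), new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        String xslt =
            "<xsl:stylesheet version='1.0' xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>" +
            "<xsl:output method='text'/>" +
            "<xsl:template match='/greeting'><xsl:value-of select='.'/></xsl:template>" +
            "</xsl:stylesheet>";
        Templates compiled = compile("greet.xsl", xslt); // compiled once
        System.out.println(transform(compiled, "<greeting>hello</greeting>"));
    }
}
```

Every transform after the first reuses the compiled object model rather than re-parsing the stylesheet text.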

Fourth, because NetKernel has an operating-system-like scheduler, it tightly manages the assignment of Java threads to computation tasks so as to maximize throughput. NetKernel runs with a very small number of threads that are continuously reused. Java architectures that create a thread per external request effectively delegate scheduling to the native operating system, which is unaware of the true workload and also incurs thread context-switching costs. In short, the more threads you use, the less efficiently you are likely to exploit the available CPU cycles. Furthermore, as we move to an era of multi-core processors, NetKernel's managed threading and stateless accessors scale extremely well with CPU core count; in essence, SMP multi-core processors are to NetKernel as a load-balanced server cluster is to a Web application.
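The thread-reuse pattern can be sketched with a standard Java fixed-size pool. This is not NetKernel's scheduler; it is an assumed, simplified analogue showing many logical requests being multiplexed over a handful of continuously reused worker threads, rather than one thread per request.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch: a small, fixed pool of reused worker threads servicing many
// requests, in the spirit of a scheduler that multiplexes work over few threads.
public class FixedPoolDemo {
    // Runs 'tasks' lightweight requests on a fixed pool and returns how many
    // distinct worker threads actually serviced them.
    public static int distinctThreads(int poolSize, int tasks) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        Set<String> names = ConcurrentHashMap.newKeySet();
        CountDownLatch done = new CountDownLatch(tasks);
        for (int i = 0; i < tasks; i++) {
            pool.submit(() -> {
                names.add(Thread.currentThread().getName());
                done.countDown();
            });
        }
        done.await();
        pool.shutdown();
        return names.size();
    }

    public static void main(String[] args) throws InterruptedException {
        // 100 logical requests are serviced by at most 4 reused threads
        System.out.println(distinctThreads(4, 100) <= 4); // prints true
    }
}
```

A thread-per-request design would have created 100 threads here; the pooled design keeps the working set at 4, avoiding the context-switching cost described above.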

© 2003-2007, 1060 Research Limited. 1060 is a registered trademark, and NetKernel is a trademark, of 1060 Research Limited.