Abstraction Layer

Aneka

Rajkumar Buyya, ... S. Thamarai Selvi, in Mastering Cloud Computing, 2013

5.2.1 From the ground up: the platform abstraction layer

The core infrastructure of the system is based on the .NET technology and allows the Aneka container to be portable over different platforms and operating systems. Any platform featuring an ECMA-334 [52] and ECMA-335 [53] compatible environment can host and run an instance of the Aneka container.

The Common Language Infrastructure (CLI), the specification introduced in the ECMA-335 standard, defines a common runtime environment and application model for executing programs, but does not provide any interface to access the hardware or to collect performance data from the hosting operating system. Moreover, each operating system has a different file system organization and stores that information differently. The Platform Abstraction Layer (PAL) addresses this heterogeneity and provides the container with a uniform interface for accessing the relevant hardware and operating system information, thus allowing the rest of the container to run unmodified on any supported platform.

The PAL is responsible for detecting the supported hosting environment and providing the corresponding implementation for interacting with it to support the activity of the container. The PAL provides the following features:

Uniform and platform-independent implementation interface for accessing the hosting platform

Uniform access to extended and additional properties of the hosting platform

Uniform and platform-independent access to remote nodes

Uniform and platform-independent management interfaces

The PAL is a small layer of software comprising a detection engine, which automatically configures the container at boot time with the platform-specific component needed to access the above information, and an implementation of the abstraction layer for the Windows, Linux, and Mac OS X operating systems.

The collectible data exposed by the PAL are the following:

Number of cores, frequency, and CPU usage

Memory size and usage

Aggregate available disk space

Network addresses and devices attached to the node

Moreover, additional custom information can be retrieved by querying the properties of the hardware. The PAL interface provides means for custom implementations to expose additional information through name-value pairs that can host any kind of information about the hosting platform. For instance, these properties can contain additional information about the processor, such as the model and family, or additional data about the process running the container.
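A minimal sketch can make the PAL contract concrete. The interface, class names, and placeholder values below are hypothetical illustrations rather than Aneka's actual API; they show how a single platform-independent interface, per-platform implementations selected by a detection step, and name-value properties could fit together.

    #include <cstdio>
    #include <map>
    #include <memory>
    #include <string>

    // Hypothetical PAL-style contract: one platform-independent interface.
    struct IPlatformAbstraction {
        virtual ~IPlatformAbstraction() = default;
        virtual int CoreCount() const = 0;
        virtual double MemoryUsage() const = 0;
        // Name-value pairs can host any extra platform information.
        virtual std::map<std::string, std::string> Properties() const = 0;
    };

    // One implementation per supported platform (Windows/Mac variants omitted).
    struct LinuxPal : IPlatformAbstraction {
        int CoreCount() const override { return 4; }          // placeholder value
        double MemoryUsage() const override { return 0.42; }  // placeholder value
        std::map<std::string, std::string> Properties() const override {
            return {{"cpu.model", "placeholder"}, {"cpu.family", "placeholder"}};
        }
    };

    // A detection engine would pick the right implementation at boot time.
    std::unique_ptr<IPlatformAbstraction> DetectPal() {
        return std::make_unique<LinuxPal>();
    }

    int main() {
        auto pal = DetectPal();
        std::printf("cores: %d, memory usage: %.0f%%\n",
                    pal->CoreCount(), pal->MemoryUsage() * 100.0);
        return 0;
    }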

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B978012411454800005X

Thrust

Nathan Bell, Jared Hoberock, in GPU Computing Gems Jade Edition, 2012

26.4.2 Robustness

Thrust's abstraction layer also enhances the robustness of CUDA applications. In the previous section we noted that by delegating the launch configuration details to Thrust we could automatically obtain maximum occupancy during execution. In addition to maximizing occupancy, the abstraction layer also ensures that algorithms "just work," even in uncommon or pathological use cases. For instance, Thrust automatically handles limits on grid dimensions (no more than 64K), works around limitations on the size of __global__ function arguments, and accommodates large user-defined types in most algorithms. To the degree possible, Thrust circumvents such factors and ensures correct program execution across the full spectrum of CUDA-capable GPUs.
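As a hedged illustration, the short program below reduces ten million elements with thrust::reduce; grid and block dimensions, and hence the 64K grid-dimension limit, are handled inside Thrust rather than by the caller. The element count is an arbitrary example value.

    #include <thrust/device_vector.h>
    #include <thrust/reduce.h>
    #include <cstdio>

    int main() {
        // Ten million ones; Thrust chooses the launch configuration itself,
        // so grid-dimension limits never surface in user code.
        thrust::device_vector<int> data(10000000, 1);
        int sum = thrust::reduce(data.begin(), data.end(), 0);
        std::printf("sum = %d\n", sum);  // expected: 10000000
        return 0;
    }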

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780123859631000265

Current Challenges in Abstracting Data Points

W. Burgstaller, ... P. Palensky, in Fieldbus Systems and Their Applications 2005, 2006

5.2 DPAL Services in BACnet

Since the DPAL model is very similar to the BACnet model, the mapping between the two is quite simple. The BACnet protocol defines various types of objects, which differ in the number and kind of their properties. We limit the mapping to the analog, multi-state and binary objects. These objects are either input, output or value objects. A BACnet object consists of properties, which describe the value, application and user characteristics. The mapping of the BACnet and DPAL properties is shown in Table 3.

Table 3. Mapping BACnet properties and DPAL properties

DPAL property     BACnet property identifier
Data type         Object_Type
Low limit         Min_Pres_Value
High limit        Max_Pres_Value
Unit              Units
Resolution        Resolution
Precision         configured
Timestamp         generated by the DPAL
Direction         configured
Status            Status_Flags
Poll cycle        Update_Interval
COV increment     COV_Increment
Description       Description
DPAL handle       generated by the DPAL
DPAL name         configured
Native name       Object_Name

Since standard DPAL objects have only one value property, communication objects with more than one value are not possible; the selected BACnet objects can therefore be mapped one-to-one to DPAL objects. To read or write the values of different objects at once, BACnet offers multiple-read and multiple-write services. They have the same functionality as the multiple-read and multiple-write services of the DPAL data access API.
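To make the mapping tangible, the sketch below encodes Table 3 as a lookup from DPAL property names to BACnet property identifiers. It is purely illustrative; the strings follow the table, not an actual DPAL or BACnet stack API.

    #include <cstdio>
    #include <map>
    #include <string>

    int main() {
        // Hypothetical rendering of Table 3 (subset shown).
        const std::map<std::string, std::string> dpalToBacnet = {
            {"Data type",     "Object_Type"},
            {"Low limit",     "Min_Pres_Value"},
            {"High limit",    "Max_Pres_Value"},
            {"Unit",          "Units"},
            {"Status",        "Status_Flags"},
            {"Poll cycle",    "Update_Interval"},
            {"COV increment", "COV_Increment"},
            {"Native name",   "Object_Name"},
        };
        for (const auto& [dpal, bacnet] : dpalToBacnet)
            std::printf("%-14s -> %s\n", dpal.c_str(), bacnet.c_str());
        return 0;
    }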

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780080453644500460

Thrust

With special contributions by ... Chris Rodrigues, in Programming Massively Parallel Processors (2nd Edition), 2013

Robustness

Thrust's abstraction layer also enhances the robustness of CUDA applications. In the previous section we noted that by delegating the launch configuration details to Thrust we could automatically obtain maximum occupancy during execution. In addition to maximizing occupancy, the abstraction layer also ensures that algorithms "just work," even in uncommon or pathological use cases. For instance, Thrust automatically handles limits on grid dimensions (no more than 64 K in current devices), works around limitations on the size of global function arguments, and accommodates large user-defined types in most algorithms. To the degree possible, Thrust circumvents such factors and ensures correct program execution across the full spectrum of CUDA-capable devices.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B978012415992100016X

22nd European Symposium on Computer Aided Process Engineering

Juliette Heintz, ... Jean-Pierre Belaud, in Computer Aided Chemical Engineering, 2012

2.2 Abstraction Layers

The concept of abstraction layers comes from the domain of Model Based System Engineering (MBSE). This is an engineering discipline used in the analysis and design of software and complex systems. It provides concepts, languages and tools to create and manage models. One purpose of MBSE is to transform a model used as a communication support into a model that can be understood and executed by computing tools. MBSE is based on three concepts: the model, the model transformation and the metamodel. Figure 2 gives a brief illustration for a molecule that is represented by a simple model, here with the UML2 language. This model is linked to a metamodel that sets interpretation rules for all the models that conform to it. Layer 0 corresponds to the real system, namely to the object as it truly exists. At layer 1, a model represents a simplification of the real system. This model must conform to an upper abstraction model that is defined by its metamodel at layer 2 (Bézivin, 2004; Favre, 2006).
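A compact sketch of the layers, under the simplifying assumption that a metamodel boils down to "named classes with typed attributes": the C++ types below play the role of the layer-2 metamodel, and the molecule description instantiated from them is a layer-1 model. All names are hypothetical.

    #include <cstdio>
    #include <string>
    #include <vector>

    // Layer 2 (metamodel): interpretation rules every model must conform to.
    struct MetaAttribute { std::string name; std::string type; };
    struct MetaClass     { std::string name; std::vector<MetaAttribute> attributes; };

    int main() {
        // Layer 1 (model): a simplified molecule expressed in metamodel terms.
        // Layer 0 would be the real molecule itself.
        MetaClass molecule{"Molecule",
                           {{"formula", "string"}, {"atomCount", "int"}}};
        std::printf("model '%s' conforms to a metamodel with %zu attribute rules\n",
                    molecule.name.c_str(), molecule.attributes.size());
        return 0;
    }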

Figure 2. Abstraction models of a molecule

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780444595195500770

An Introduction to Virtualization

In Virtualization for Security, 2009

Summary

Virtualization is an abstraction layer that breaks the standard paradigm of computer architecture, decoupling the operating system from the physical hardware platform and the applications that run on it. As a result, IT organizations can achieve greater IT resource utilization and flexibility. Virtualization allows multiple virtual machines, often with heterogeneous operating systems, to run in isolation, side by side, on the same physical machine. Each virtual machine has its own set of virtual hardware (CPU, memory, network interfaces, and disk storage) upon which an operating system and applications are loaded. The operating system sees its set of hardware and is unaware that it shares the physical hardware platform with other guest operating systems. Virtualization technology and its core components, such as the Virtual Machine Monitor, manage the interaction between the operating system's calls to the virtual hardware and the actual execution that takes place on the underlying physical hardware.

Virtualization was first introduced in the 1960s to allow partitioning of large mainframe hardware, a scarce and expensive resource. Over time, minicomputers and PCs provided a more efficient, affordable way to distribute processing power, and by the 1980s virtualization was no longer widely employed. However, in the 1990s, researchers began to see how virtualization could solve some of the problems associated with the proliferation of less expensive hardware, including underutilization, escalating management costs, and vulnerability.

Today, virtualization is growing as a core technology at the forefront of data center management. The technology is helping businesses, both large and small, solve their problems with scalability, security, and management of their global IT infrastructure while effectively containing, if not reducing, costs.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9781597493055000013

The Linux/ARM embedded platform

Jason D. Bakos, in Embedded Systems, 2016

1.9.2 Linux Perf_Event

Linux provides an abstraction layer for PMU management called perf_event. In addition to interfacing with the PMU, perf_event is also capable of keeping track of software events such as context switches and page faults. Unfortunately, as of this writing, perf_event is not fully implemented for the ARM11 processor of the Raspberry Pi. Appendix A describes how to patch the kernel to add support.

When more counters are requested than are physically available, perf_event uses a technique called multiplexing. In this case, the kernel enables a subset of the requested counters, enabling a different subset at regular intervals. This allows the hardware counters to statistically sample the event counts at various periods throughout the time the user requests the counters to be enabled. When the user requests the results, perf_event will also report the number of cycles since the user enabled the counter and the number of cycles the counter was actually enabled. These values are called time enabled and time running. The user can extrapolate the actual count by scaling the reported count by the ratio of these values.

In order to use perf_event, the user instantiates each counter using the system call named perf_event_open. Once open, the user can use the standard POSIX ioctl() function to enable, disable, and reset it, and the read() function to read its state.

When opening a counter, the user must fill in a struct perf_event_attr structure to configure the counter. The two most important fields of this structure are the .type field and the .config field. When counting hardware events, there are only two valid types: PERF_TYPE_HARDWARE and PERF_TYPE_RAW. PERF_TYPE_HARDWARE is a platform-independent mechanism for specifying a set of common events, while PERF_TYPE_RAW allows the user to specify a processor-specific event encoding to count.
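The sketch below wires these pieces together: it opens a hardware instruction counter with perf_event_open (invoked via syscall(2), since glibc provides no wrapper), controls it with ioctl(), reads it with read(), and extrapolates a multiplexed count from time enabled and time running. The workload loop is an arbitrary example.

    #include <linux/perf_event.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
                                int cpu, int group_fd, unsigned long flags) {
        return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
    }

    int main() {
        struct perf_event_attr attr;
        std::memset(&attr, 0, sizeof(attr));
        attr.type = PERF_TYPE_HARDWARE;            // platform-independent events
        attr.size = sizeof(attr);
        attr.config = PERF_COUNT_HW_INSTRUCTIONS;  // count retired instructions
        attr.disabled = 1;
        // Ask the kernel to report how long the counter was scheduled,
        // so a multiplexed count can be extrapolated.
        attr.read_format = PERF_FORMAT_TOTAL_TIME_ENABLED |
                           PERF_FORMAT_TOTAL_TIME_RUNNING;

        int fd = (int)perf_event_open(&attr, 0, -1, -1, 0);  // this process, any CPU
        if (fd < 0) { std::perror("perf_event_open"); return 1; }

        ioctl(fd, PERF_EVENT_IOC_RESET, 0);
        ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
        volatile long x = 0;
        for (long i = 0; i < 10000000; i++) x += i;          // measured workload
        ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

        struct { uint64_t value, time_enabled, time_running; } res;
        if (read(fd, &res, sizeof(res)) != sizeof(res)) { std::perror("read"); return 1; }

        // If the counter was multiplexed, scale by time_enabled / time_running.
        double scaled = res.time_running
                      ? (double)res.value * res.time_enabled / res.time_running
                      : 0.0;
        std::printf("instructions: %llu (extrapolated: %.0f)\n",
                    (unsigned long long)res.value, scaled);
        close(fd);
        return 0;
    }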

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128003428000018

Flash File Systems

Jalil Boukhobza, Pierre Olivier, in Flash Memory Integration, 2017

4.2.1 Linux Virtual File System VFS

VFS is an abstraction layer over all the file systems supported by Linux; it allows different types of file systems to coexist within the operating system during its execution. In fact, thanks to the VFS, the user can access in the same way files that are located on storage devices or partitions formatted with different file systems: all read operations are performed via the system call read, write operations via write, etc. As illustrated above, VFS receives the user's requests and transmits them to the corresponding file system. Besides this abstraction role, the VFS layer includes the following I/O optimization mechanisms (a user-space sketch follows the list):

1)

Page cache. It is a data cache in main memory. It buffers all the data read from and written to files in order to improve I/O performance. Any data read from a file by a process is read from the page cache, and whenever necessary, it is transferred there from the storage device beforehand. In the same fashion, any data written by a process to a file is first written in the page cache before being written to secondary storage. Note that the Linux page cache is a component linked to the memory management system; nevertheless, in the context of secondary storage management, these two systems are strongly connected;

2)

Read-ahead algorithm. When a process requests to read a certain amount of data from a file, the Linux kernel can decide to read a greater amount than required, in order to store it in the page cache in anticipation of future accesses. This predictive read method is called read-ahead;

3)

Write-back algorithm for the page cache. A data write to a file performed by a process is buffered in the page cache for a certain amount of time. Data are not written directly to the storage device; instead, they are written at a later moment, in an asynchronous way. The write-back (or delayed write) improves write performance by taking advantage of the temporal locality principle and by absorbing potential repeated updates within the page cache.
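The user-space sketch below exercises all three mechanisms under simple assumptions (the file name and sizes are arbitrary): write() returns once the data is in the page cache, fsync() forces the write-back, and posix_fadvise(POSIX_FADV_WILLNEED) hints that read-ahead should populate the cache.

    #include <fcntl.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        // Buffered write: the data lands in the page cache first (write-back).
        int fd = open("demo.dat", O_CREAT | O_WRONLY | O_TRUNC, 0644);
        if (fd < 0) { std::perror("open"); return 1; }
        const char msg[] = "hello, page cache\n";
        write(fd, msg, sizeof(msg) - 1);  // returns once cached, not once on disk
        fsync(fd);                        // force write-back to the device
        close(fd);

        // Hint that the file will be read soon: the kernel may read ahead,
        // populating the page cache before read() is called.
        fd = open("demo.dat", O_RDONLY);
        if (fd < 0) { std::perror("open"); return 1; }
        posix_fadvise(fd, 0, 0, POSIX_FADV_WILLNEED);
        char buf[64];
        ssize_t n = read(fd, buf, sizeof(buf));  // likely served from the cache
        if (n > 0) fwrite(buf, 1, (size_t)n, stdout);
        close(fd);
        return 0;
    }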

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9781785481246500062

21st European Symposium on Computer Aided Process Engineering

Ingo Thomas, in Computer Aided Chemical Engineering, 2011

4 Workflow and Strategic Benefits

The modeling environment provides a software-technical abstraction layer, which separates the thermodynamic or physical model from the software engineering details of the process simulation environment. The practical and strategic consequences are surveyed below.

Process unit model development is no longer overloaded with software engineering details; the unit model developer does not need to bother with compilers, linkers, memory management and so on.

The software engineering details to be taken care of in complex systems such as OPTISIM® or UNISIM® are not to be underestimated. Software engineering perils (e.g. memory management details) have been significant obstacles in a number of development projects.

Process unit models may be developed independently of the process simulation environment in which the model is to be used. By now, models may be used in OPTISIM® and UNISIM®; later on, a CAPE-OPEN ESO implementation may be discussed. This increases the safety of investment for the expensive development of detailed models.

Let's face it: model development practice is mostly debugging. A declarative modeling environment reduces debugging time in several ways:

If derivatives have to be coded explicitly, they are a major source of subtle errors, which deteriorate convergence speed and model reliability. Using automatic differentiation eliminates this source of errors (a minimal illustration follows this list).

Another common source of errors in discrete-continuous systems is bookkeeping errors regarding switching function states. This source of errors is eliminated as well.
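As a sketch of why automatic differentiation removes hand-coded derivative bugs, here is a minimal forward-mode dual-number type (an illustration, not the OPTISIM® or UNISIM® implementation): the derivative is propagated by the overloaded operators, so it can never drift out of sync with the residual code.

    #include <cstdio>

    // Forward-mode AD: carry the derivative alongside the value, so no
    // derivative is ever coded by hand.
    struct Dual {
        double v;  // value
        double d;  // derivative with respect to the chosen input
    };
    Dual operator*(Dual a, Dual b) { return {a.v * b.v, a.d * b.v + a.v * b.d}; }
    Dual operator+(Dual a, Dual b) { return {a.v + b.v, a.d + b.d}; }

    int main() {
        Dual x{3.0, 1.0};    // seed dx/dx = 1
        Dual f = x * x + x;  // f(x) = x^2 + x
        std::printf("f(3) = %g, f'(3) = %g\n", f.v, f.d);  // prints 12 and 7
        return 0;
    }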

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780444537119500079

Communicating pictures: delivery across networks

David R. Bull, Fan Zhang, in Intelligent Image and Video Compression (Second Edition), 2021

11.5.1 Network abstraction

Standards since H.264/AVC have adopted the principle of a NAL that provides a common syntax, applicable to a wide range of networks. A high-level parameter set is also used to decouple the information relevant to more than one slice from the media bitstream. H.264/AVC, for instance, describes both a sequence parameter set (SPS) and a picture parameter set (PPS). The SPS applies to a whole sequence and the PPS applies to a whole frame. These describe parameters such as frame size, coding modes, and slice structuring. Further details on the structure of standards such as H.264 are provided in Chapter 12.
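As a small concrete illustration of NAL syntax, the sketch below decodes the one-byte H.264 NAL unit header, whose layout is forbidden_zero_bit (1 bit), nal_ref_idc (2 bits), and nal_unit_type (5 bits); type 7 marks an SPS and type 8 a PPS. The example byte is arbitrary.

    #include <cstdint>
    #include <cstdio>

    int main() {
        // 0x67 = 0 11 00111: an SPS NAL unit sent with high priority.
        uint8_t header = 0x67;
        int forbidden = (header >> 7) & 0x1;  // must be 0 in a valid stream
        int ref_idc   = (header >> 5) & 0x3;  // importance for reference pictures
        int type      = header & 0x1F;        // 7 = SPS, 8 = PPS
        std::printf("forbidden=%d ref_idc=%d type=%d%s\n", forbidden, ref_idc, type,
                    type == 7 ? " (SPS)" : type == 8 ? " (PPS)" : "");
        return 0;
    }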

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128203538000207