Digital Twins (DTs) constitute a growing and promising trend recognised by academia and industry. They are virtual replicas of distinctive objects, processes, buildings, or humans. DTs are used to reason about their physical counterparts’ functionalities, interactions, and behaviours, and, overall, to plan optimal actions that the counterparts can perform or be subjected to. Given their intrinsic complexity, neither a standard definition nor a unified solution is yet available for designing and developing DTs. To shed light on such a complex topic, we analysed the literature and derived a list of twelve pivotal characteristics of DTs. These characteristics will be used as requirements for defining a Digital Twin Modelling Notation that will enable reasoning about the design of DT solutions.
The relevance of IoT-based solutions in everyday life is continuously increasing. The capability to sense the world, activate computation based on data gathered by sensors, and possibly produce reactions on the world itself continuously gives rise to novel IoT solutions and application scenarios. Nonetheless, the intrinsic nature of the IoT, with its high degree of variability in devices, data formats, resources, and communication protocols, complicates the design, development, reuse, and customisation of IoT-based software systems. In addition, customers require personalised solutions strongly tailored to their specific requirements. Reducing the complexity of building customised solutions and increasing the reusability of developed artefacts are among the topmost challenges for enterprises and IoT application developers. To address these challenges, we propose a model-driven approach that organises the modelling and development of IoT applications in distinct steps, handles the complexity of representing the variability of the IoT domain, and fosters the reusability of design decisions and artefacts to simplify the derivation of customised IoT applications. Our proposal, named FloWare, follows the typical path of an MDE solution, providing modelling support through feature models to fully represent and handle the possible variability of devices in a specific IoT application domain. Once a specific configuration has been selected, it is complemented with information about the deployment context to automatically derive fragments of the IoT application, which are subsequently combined by the developer within a low-code development environment. The approach is fully supported by a toolchain that has been released for public use.
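To give a flavour of the feature-modelling step described above, here is a minimal sketch of a feature model and a configuration-validity check. The feature names and constraint kinds are illustrative assumptions, not FloWare's actual model or API.

```scala
// Hedged sketch: a tiny feature model for an IoT domain and a validity check
// for a selected product configuration. Names are illustrative, not FloWare's.
sealed trait Feature
final case class Mandatory(name: String) extends Feature
final case class Optional(name: String) extends Feature
final case class Alternative(name: String, options: Set[String]) extends Feature // pick exactly one

val smartOfficeModel: List[Feature] = List(
  Mandatory("TemperatureSensor"),
  Optional("HumiditySensor"),
  Alternative("Protocol", Set("MQTT", "CoAP"))
)

// a configuration is valid if it keeps every mandatory feature and selects
// exactly one option for each alternative group
def isValid(model: List[Feature], selected: Set[String]): Boolean =
  model.forall {
    case Mandatory(n)         => selected.contains(n)
    case Optional(_)          => true
    case Alternative(_, opts) => (opts intersect selected).size == 1
  }

// example: a valid configuration for a specific customer
val ok = isValid(smartOfficeModel, Set("TemperatureSensor", "MQTT"))
```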
Several heterogeneous IoT platforms have been proposed and are regularly used by enterprises and academia to support and facilitate the development of IoT software applications. However, IoT applications strongly depend on the functionalities supported by the specific platform in use. This affects the development and portability of applications, which may require significant changes, or a complete re-design, to be migrated across platforms. This paper presents X-IoT, an MDE approach for developing cross-platform IoT applications. The approach implements a Domain-Specific Modelling Language (DSML) based on emerging IoT application requirements. A meta-model that incorporates the main IoT platform characteristics has been developed within the ADOxx platform, together with a graphical notation. Through the DSML, it is possible to build a platform-independent model of an IoT application that is then refined and deployed on specific IoT platforms.
The engineering of large-scale cyber-physical systems (CPS) increasingly relies on principles from self-organisation and collective computing, enabling these systems to cooperate and adapt in dynamic environments. CPS engineering also often leverages digital twins that provide synchronised logical counterparts of physical entities. In contrast, sensor networks rely on the different but related concept of virtual device, which provides an abstraction of a group of sensors. In this work, we study how such concepts can contribute to the engineering of self-organising CPSs. To that end, we analyse the concepts and devise modelling constructs, distinguishing between identity correspondence and execution relationships. Based on this analysis, we then contribute the novel concept of “collective digital twin” (CDT), which captures the logical counterpart of a collection of physical devices. A CDT can also be “augmented” with purely virtual devices, which may be exploited to steer the self-organisation process of the CDT and its physical counterpart. We underpin the novel concept with experiments in the context of the pulverisation framework of aggregate computing, showing how augmented CDTs provide a holistic, modular, and cyber-physically integrated system view that can foster the engineering of self-organising CPSs.
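The core notions above (identity correspondence with a physical collection, and augmentation with purely virtual devices) can be captured by a very small data model. The following is an illustrative sketch, not the paper's formalisation.

```scala
// Hedged sketch: a collective digital twin (CDT) as the logical counterpart of a
// collection of physical devices, augmentable with purely virtual devices.
sealed trait Device { def id: String }
final case class PhysicalDevice(id: String) extends Device
final case class VirtualDevice(id: String) extends Device // exists only digitally

final case class CollectiveDigitalTwin(members: Set[Device]) {
  // identity correspondence: the physical collection this CDT mirrors
  def physicalCounterpart: Set[PhysicalDevice] =
    members.collect { case p: PhysicalDevice => p }
  // augmentation: inject a virtual device, e.g. to steer self-organisation
  def augment(v: VirtualDevice): CollectiveDigitalTwin =
    copy(members = members + v)
}
```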
A key goal of edge computing is to achieve “distributed sensing” out of data continuously generated from a multitude of interconnected physical devices. The traditional approach is to gather information into sparse collector devices by relying on hop-by-hop accumulation, but issues of reactivity and fragility naturally arise in scenarios with high mobility. We propose novel algorithms for dynamic data summarisation across space, supporting high reactivity and resilience through specific techniques that maximise the speed at which information propagates towards collectors. Such algorithms support idempotent and arithmetic aggregation operators and, under reasonable network assumptions, are proved to achieve optimal reactivity. We provide evaluation via simulation: first in multiple scenarios showing improvement over the state of the art, and then through a case study in edge data mining, which conveys the practical impact on higher-level distributed sensing patterns.
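To make the hop-by-hop accumulation baseline concrete, here is a centralised sketch of classic single-parent (spanning-tree) collection of an arithmetic aggregate, the scheme whose reactivity and fragility the paper's algorithms improve upon. All names and the centralised formulation are illustrative; the paper's algorithms are fully distributed.

```scala
// Hedged sketch: values flow hop by hop towards a collector along a spanning
// tree induced by hop-count distances. Assumes a connected network.
object SingleParentCollection {
  type Id = Int
  final case class Net(neigh: Map[Id, Set[Id]], values: Map[Id, Double])

  // hop-count distances from the collector, via breadth-first search
  def distances(net: Net, collector: Id): Map[Id, Int] = {
    @annotation.tailrec
    def loop(frontier: Set[Id], d: Int, acc: Map[Id, Int]): Map[Id, Int] =
      if (frontier.isEmpty) acc
      else {
        val next = frontier.flatMap(net.neigh).filterNot(acc.contains)
        loop(next, d + 1, acc ++ next.map(_ -> (d + 1)))
      }
    loop(Set(collector), 0, Map(collector -> 0))
  }

  // every device forwards its partial sum to the neighbour nearest the collector;
  // processing devices from farthest to nearest makes partial sums flow inwards
  def collectSum(net: Net, collector: Id): Double = {
    val dist = distances(net, collector)
    val parent = (id: Id) => net.neigh(id).minBy(dist)
    val partial = scala.collection.mutable.Map(net.values.toSeq: _*)
    for (id <- net.values.keys.toSeq.sortBy(dist).reverse if id != collector)
      partial(parent(id)) += partial(id)
    partial(collector)
  }
}
```

On a line network 3–2–1–0 with collector 0, each value is added hop by hop until it reaches 0; under mobility this inward flow breaks easily, which is exactly the fragility the proposed algorithms address.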
Emerging application scenarios, such as cyber-physical systems (CPSs), the Internet of Things (IoT), and edge computing, call for coordination approaches addressing openness, self-adaptation, heterogeneity, and deployment agnosticism. Field-based coordination is one such approach, promoting the idea of programming system coordination declaratively from a global perspective, in terms of functional manipulation and evolution in “space and time” of distributed data structures called fields. More specifically regarding time, in field-based coordination it is assumed that local activities in each device are regulated by a fair and unsynchronised fixed clock working at the platform level. In this work, we challenge this assumption, and propose an alternative approach where scheduling is programmed in a natural way (along with usual field-based coordination) in terms of causality fields, each enacting a programmable distributed notion of a computational “cause” (why and when a field computation has to be locally computed) and how it should change across time and space. Starting from low-level platform triggers, such causality fields can be organised into multiple layers, up to defining high-level, collectively computed time abstractions to be used at the application level. This reinterpretation of the traditional view of time in terms of articulated causality relations allows us to express what we call “time-fluid” coordination, where scheduling can be finely tuned so as to select the triggers to react to, generally allowing us to adaptively balance the performance (system reactivity) and cost (resource usage) of computations. We formalise the proposed scheduling framework for field-based coordination in the context of the field calculus, discuss an implementation in the aggregate computing framework, and finally evaluate the approach via simulation on several case studies.
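The lowest layer of a causality field can be thought of as a local policy mapping platform triggers to scheduling decisions. The following sketch is purely illustrative (trigger types and the significance threshold are assumptions, not the paper's formalisation), but it shows the reactivity-versus-cost trade-off being programmed rather than fixed by a clock.

```scala
// Hedged sketch: a programmable "cause" deciding whether an incoming platform
// trigger should schedule a new computation round on this device.
sealed trait Trigger
case object TimerTick extends Trigger                           // low-level platform clock
final case class MessageReceived(changed: Boolean) extends Trigger
final case class SensorChanged(delta: Double) extends Trigger

// react only to informative events: fewer rounds (lower cost), while staying
// reactive to the changes that actually matter
def causesRound(t: Trigger): Boolean = t match {
  case TimerTick            => false                   // opt out of clock-driven rounds
  case MessageReceived(chg) => chg                     // only when neighbour state changed
  case SensorChanged(delta) => math.abs(delta) > 0.1   // hypothetical significance threshold
}
```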
Engineering large-scale Cyber-Physical Systems - like robot swarms, augmented crowds, and smart cities - is challenging, as many issues have to be addressed, including specifying their collective adaptive behaviour and managing the connection between the digital and physical parts. In particular, some approaches propose self-organising mechanisms to program global behaviour while fostering decentralised, asynchronous execution. However, most of these approaches couple behavioural specifications to specific network architectures (e.g., peer-to-peer), and therefore do not promote flexible exploitation of the underlying infrastructure. Conversely, pulverisation is a recent approach that enables self-organising behaviour to be defined independently of the available infrastructure while retaining functional correctness. However, there are currently no tools to formally specify and verify concrete architectures for pulverised applications. Therefore, we propose to combine pulverisation with multi-tier programming, a paradigm that supports the specification of the architecture of distributed systems in a single code base and enables static checks for the correctness of actual deployments. The approach can be implemented by combining the ScaFi aggregate computing toolchain with the ScalaLoci multi-tier programming language, paving the way to support the development of self-organising cyber-physical systems, addressing both functional (behaviour) and non-functional (deployment) concerns in a single code base and in a modular fashion.
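The kind of static deployment check mentioned above can be approximated even in plain Scala by encoding placement in types, which is what a multi-tier language such as ScalaLoci does natively with dedicated syntax (the sketch below deliberately avoids ScalaLoci's actual API; hosts and components are illustrative).

```scala
// Hedged sketch: component placement tracked in types, so an ill-formed
// deployment of pulverised components fails to compile.
sealed trait Host
sealed trait Wearable extends Host    // thin device: sensors and actuators only
sealed trait EdgeServer extends Host  // hosts behaviour and state

final case class On[H <: Host, A](value: A) // a value of type A placed on host H

// the behaviour component may only consume data already placed on the edge
// server; passing an On[Wearable, _] here is rejected at compile time
def behaviourStep(
    state: On[EdgeServer, Int],
    sensed: On[EdgeServer, Double]
): On[EdgeServer, Int] =
  On(state.value + sensed.value.toInt)
```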
Many interesting systems in several disciplines can be modeled as networks of nodes that can store and exchange data: pervasive systems, edge computing scenarios, and even biological and bio-inspired systems. These systems feature inherent complexity, and often simulation is the preferred (and sometimes the only) way of investigating their behavior; this is true both in the design phase and in the verification and testing phase. In this tutorial paper, we provide a guide to the simulation of such systems by leveraging Alchemist, an existing research tool used in several works in the literature. We introduce its meta-model and its extensible architecture; we discuss reference examples of increasing complexity; and we finally show how to configure the tool to automatically execute multiple repetitions of simulations with different controlled variables, achieving reliable and reproducible results.
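The batch-execution workflow described at the end of the abstract amounts to sweeping the cartesian product of controlled variables with repeated seeds. The sketch below shows the shape of such a sweep in generic Scala; `runSimulation` is a hypothetical stand-in for launching one run, not Alchemist's actual launcher API.

```scala
// Hedged sketch: repeated simulations over the cartesian product of controlled
// variables, as a batch-mode simulator automates for reproducible experiments.
final case class Result(seed: Int, density: Int, meanError: Double)

def runSimulation(seed: Int, density: Int): Result =
  Result(seed, density, meanError = 0.0) // placeholder for an actual simulation run

val results: Seq[Result] =
  for {
    seed    <- 0 until 10          // repetitions for statistical confidence
    density <- Seq(50, 100, 200)   // controlled variable under study
  } yield runSimulation(seed, density)
```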
Cyber–physical systems increasingly feature highly distributed and mobile deployments of devices spread over large physical environments: in these contexts, it is generally very difficult to engineer trustworthy critical services, mostly because formal methods generally hardly scale with the number of involved devices, especially when faults, continuous changes, and dynamic topologies are the norm. To start addressing this problem, in this paper we devise a formally correct and self-adaptive implementation of distributed monitors for spatial properties. We start from the Spatial Logic of Closure Spaces, and provide a compositional translation that takes a formula and yields a distributed program that provides runtime verification of its validity. Such programs are expressed in terms of the field calculus, a recently emerged computational model that focusses on global-level outcomes instead of single-device behaviour, and expresses distributed computations by pure functions and the functional composition mechanism. By reusing previous results and tools of the field calculus, we prove correctness of the translation, self-stabilisation of the derived monitors, and empirically evaluate adaptivity of such monitors in a realistic smart city scenario of safe crowd monitoring and control.
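To illustrate the flavour of such a compositional translation, here is one plausible field-based encoding of a spatial modality, written against ScaFi-like constructs (`rep`, `mux`, `nbr`, `minHoodPlus`, `nbrRange`, as available inside an aggregate program). It is an illustrative instance under those assumptions, not the paper's exact translation.

```scala
// Hedged sketch: "somewhere phi" holds where the self-stabilising gradient
// (adaptive Bellman-Ford) of the phi-region is finite.
def distanceTo(source: Boolean): Double =
  rep(Double.PositiveInfinity) { d =>
    mux(source) { 0.0 } { minHoodPlus(nbr(d) + nbrRange()) }
  }

def somewhere(phi: Boolean): Boolean =
  distanceTo(phi) < Double.PositiveInfinity
```

Because the gradient self-stabilises after topology changes and transient faults, a monitor built this way keeps adapting its verdict at runtime, which is the adaptivity property evaluated in the paper.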
Emerging cyber-physical systems, such as robot swarms, crowds of augmented people, and smart cities, require well-crafted self-organizing behavior to properly deal with dynamic environments and pervasive disturbances. However, the infrastructures providing networking and computing services to support these systems are becoming increasingly complex, layered and heterogeneous—consider the case of the edge–fog–cloud interplay. This typically hinders the application of self-organizing mechanisms and patterns, which are often designed to work on flat networks. To promote reuse of behavior and flexibility in infrastructure exploitation, we argue that self-organizing logic should be largely independent of the specific application deployment. We show that this separation of concerns can be achieved through a proposed “pulverization approach”: the global system behavior of application services gets broken into smaller computational pieces that are continuously executed across the available hosts. This model can then be instantiated in the aggregate computing framework, whereby self-organizing behavior is specified compositionally. We showcase how the proposed approach enables expressing the application logic of a self-organizing cyber-physical system in a deployment-independent fashion, and simulate its deployment on multiple heterogeneous infrastructures that include cloud, edge, and LoRaWAN network elements.
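The pulverisation model splits each logical device into five components (sensors, actuators, state, communication, behaviour) that can be mapped onto different hosts. The interfaces below are an illustrative rendering of that decomposition; the paper presents the model abstractly rather than as this API.

```scala
// Hedged sketch of the five pulverised components of a logical device,
// each independently deployable across cloud, edge, or device hosts.
trait Sensors[S]       { def sense(): S }
trait Actuators[P]     { def act(prescription: P): Unit }
trait State[K]         { def read(): K; def write(k: K): Unit }
trait Communication[M] { def inbound(): Set[M]; def send(m: M): Unit }

// the behaviour consumes the current state, sensor readings, and neighbour
// messages, producing the next state, an outbound message, and actuations
trait Behaviour[K, S, M, P] {
  def round(state: K, sensed: S, messages: Set[M]): (K, M, P)
}
```

Since the behaviour only depends on these interfaces, the same self-organising logic runs unchanged whether, say, `State` lives on a cloud server or on the device itself, which is the deployment independence the paper argues for.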
Edge computing promotes the execution of complex computational processes without the cloud, i.e., on top of the heterogeneous, articulated, and possibly mobile systems composed of IoT and edge devices. Such a pervasive smart fabric augments our environment with computing and networking capabilities. This leads to a complex and dynamic ecosystem of devices that should exhibit not only individual intelligence but also collective intelligence—the ability to take group decisions or process knowledge among autonomous units of a distributed environment. Self-adaptation and self-organisation mechanisms are also typically required to ensure continuous and inherent tolerance of changes of various kinds: in the distribution of devices, available energy, and computational load, as well as faults. To achieve this behaviour in a massively distributed setting, as edge computing demands, we seek to identify proper abstractions, and corresponding engineering tools, to smoothly capture collective behaviour, adaptivity, and the dynamic injection and execution of concurrent distributed activities. Accordingly, we elaborate on a notion of “aggregate process” as a concurrent collective computation whose execution and interactions are sustained by a dynamic team of devices, and whose spatial region can opportunistically vary over time. We ground this notion by extending the aggregate computing model and toolchain with new constructs to instantiate aggregate processes and regulate key aspects of their lifecycle. By virtue of an open-source implementation in the ScaFi framework, we show basic programming examples as well as case studies of edge computing, evaluated by simulation in realistic settings.
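The lifecycle regulation mentioned above can be pictured, on a single device, as a manager that each round evaluates every active process instance and keeps only those whose logic votes to remain in the process "bubble". This is a purely illustrative, device-local sketch: ScaFi's actual spawn construct additionally propagates process keys to neighbours so that bubbles can grow and shrink across the network.

```scala
// Hedged sketch: device-local lifecycle management of aggregate processes.
// Each process instance is identified by a key K; its logic returns a result R
// and whether this device stays in the instance's bubble.
final class ProcessManager[K, R](logic: K => (R, Boolean)) {
  private var active: Set[K] = Set.empty

  def round(spawnKeys: Set[K]): Map[K, R] = {
    val evaluated = (active ++ spawnKeys).toList.map(k => k -> logic(k))
    active = evaluated.collect { case (k, (_, true)) => k }.toSet   // survivors
    evaluated.collect { case (k, (r, true)) => k -> r }.toMap      // live results
  }
}
```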
The blockchain concept and technology are impacting many different research and application fields; hence, many are looking at the blockchain as a chance to solve long-standing problems or gain novel benefits. In the agent community, several authors have proposed their own combination of agent-oriented technology and blockchain to address both old and new challenges. In this paper we aim to clarify the opportunities, the dimensions to consider, and the alternative approaches available for integrating agents and blockchain, by proposing a roadmap and illustrating the issues yet to be addressed. Then, as both a validation of our roadmap and grounds for future development, we discuss the case of Tenderfone, a custom blockchain integrating concepts borrowed from agent-oriented programming.
The term IoT-aware business processes refers to the interplay of business processes and Internet of Things concepts. Several studies have been carried out on this topic, so a better awareness of the current state of knowledge can be beneficial. In particular, in a given application domain, this can help in choosing the most suitable modelling approach. This paper reports on the results of a systematic literature review aimed at developing a map of modelling notations for IoT-aware business processes. It includes 48 research works from the main computer science digital libraries. We first describe the systematic literature review protocol we applied, then we report a list of available notations, discussing their main characteristics. Particular attention has been devoted to modelling tools and application scenarios. Finally, we discuss the capability of the identified modelling notations to adequately represent the requirements of IoT-enriched scenarios.
by Danilo Pianini, Stefano Mariani, Mirko Viroli and Franco Zambonelli
published in Coordination Models and Languages - 22nd IFIP WG 6.1 International Conference, COORDINATION 2020, Held as Part of the 15th International Federated Conference on Distributed Computing Techniques, DisCoTec 2020, Valletta, Malta, June 15-19, 2020, Proceedings
Emerging application scenarios, such as cyber-physical systems (CPSs), the Internet of Things (IoT), and edge computing, call for coordination approaches addressing openness, self-adaptation, heterogeneity, and deployment agnosticism. Field-based coordination is one such approach, promoting the idea of programming system coordination declaratively from a global perspective, in terms of functional manipulation and evolution in “space and time” of distributed data structures, called fields. More specifically, regarding time, in field-based coordination it is assumed that local activities in each device, called computational rounds, are regulated by a fixed clock, typically a fair and unsynchronized distributed scheduler. In this work, we challenge this assumption, and propose an alternative approach where the round execution scheduling is naturally programmed along with the usual coordination specification, namely, in terms of a field of causal relations dictating the notion of causality (why and when a round has to be locally scheduled) and how it should change across time and space. This abstraction over the traditional view on global time allows us to express what we call “time-fluid” coordination, where causality can be finely tuned to select the event triggers to react to, so as to achieve an improved balance between performance (system reactivity) and cost (usage of computational resources). We propose an implementation in the aggregate computing framework, and evaluate it via simulation on a case study.
The complex and opportunistic environment in which edge computing systems operate poses fundamental challenges for online edge system orchestration, resource provisioning, and real-time responsiveness to user movement. Such challenges need to be addressed throughout the edge system lifecycle, starting from the software development methodologies. In this paper, we propose a novel development process for modeling opportunistic edge computing services, which relies on (i) the ETSI MEC reference architecture and Opportunistic Internet of Things Service modeling for the early stages of system analysis and design, i.e., the domain model and service metamodel; and on (ii) feature engineering for evaluating those opportunistic aspects with data analysis. To address the identified opportunistic properties, at the service design phase we construct (both automatically and through domain expertise) Opportunistic Feature Vectors for Edge, containing the numerical representations of those properties. Such vectors enable further data analysis and machine learning techniques in the development of distributed, effective, and efficient edge computing systems. Lastly, we exemplify the integrated process with a microservice-based user mobility management service, based on a real-world data set, for online analysis in MEC systems.
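An Opportunistic Feature Vector for Edge is, in essence, a numerical record ready for downstream data analysis. The sketch below is illustrative: the specific fields are plausible examples of opportunistic properties, not the paper's actual vector layout.

```scala
// Hedged sketch: a feature vector of opportunistic properties, convertible to a
// plain numerical array for ML pipelines. Fields are illustrative assumptions.
final case class OpportunisticFeatureVector(
  userSpeed: Double,           // m/s, from mobility traces
  handoverRate: Double,        // handovers per minute between MEC hosts
  expectedDwellTime: Double,   // seconds the user is expected to stay in the cell
  resourceAvailability: Double // normalised [0, 1] on the current edge host
) {
  def toArray: Array[Double] =
    Array(userSpeed, handoverRate, expectedDwellTime, resourceAvailability)
}
```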
In Smart Factories, automated guided vehicles (AGVs) accomplish heterogeneous tasks such as moving objects, restoring connectivity, or performing different manufacturing activities in production lines. These kinds of devices combine several capabilities, such as artificial intelligence (visual and speech recognition, automatic fault detection, proactive behavior) and mobility, into the so-called “mobile intelligence.” A typical scenario is represented by a workshop with a large number of mobile intelligent devices and associated agents, mutually interacting on their behalf. Here, to reach a given target while simultaneously satisfying basic requirements like effectiveness and efficiency, it is often necessary to organize ad hoc teams of free-moving vehicles, sensors, and smart devices. Therefore, a specific issue is the adequate representation of the reciprocal agent/device trustworthiness to support such team formation processes within a smart factory environment. To this end, in this article, first, we define a trust measure based on the reliability and reputation of AGVs, computed from the feedback released for the AGVs’ activities in the factory; second, we design a trust framework exploiting the defined measures to support the formation of virtual, temporary, trust-based teams of mobile intelligent devices; and third, we present a set of experimental results highlighting that the proposed trust framework can improve workshop performance in terms of effectiveness and efficiency.
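One plausible way to combine the two measures the abstract names (direct reliability and third-party reputation, both derived from feedback on AGV activities) is a weighted average. The averaging, the neutral prior, and the weight alpha below are illustrative assumptions, not the paper's exact formulas.

```scala
// Hedged sketch: a trust score from feedback-based reliability and reputation.
final case class Feedback(rating: Double) // rating in [0, 1]

def mean(fs: Seq[Feedback]): Double =
  if (fs.isEmpty) 0.5 else fs.map(_.rating).sum / fs.size // neutral prior if no data

// alpha balances own experience (reliability) against community opinion (reputation)
def trust(direct: Seq[Feedback], thirdParty: Seq[Feedback], alpha: Double = 0.6): Double =
  alpha * mean(direct) + (1 - alpha) * mean(thirdParty)
```

A team-formation procedure can then rank candidate AGVs by this score and admit only those above a threshold, which is the role the trust framework plays in the reported experiments.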
by Stefano Mariani, Roberto Casadei, Fabrizio Fornari, Giancarlo Fortino, Danilo Pianini, Barbara Re, Wilma Russo, Claudio Savaglio, Mirko Viroli, and Franco Zambonelli
published in Proceedings of the 1st Workshop on Artificial Intelligence and Internet of Things co-located with the 18th International Conference of the Italian Association for Artificial Intelligence (AI*IA 2019), Rende (CS), Italy, November 22, 2019
A number of scientific and technological advancements have enabled turning the Internet of Things vision into reality. However, there is still a bottleneck in designing and developing IoT applications and services: each device has to be programmed individually, and services are deployed to specific devices. The Fluidware approach advocates that, to truly scale and raise the level of abstraction, a novel perspective is needed, one focussing on device ensembles and the dynamic allocation of resources. In this paper, we motivate the need for such a paradigm shift through three case studies emphasising the mismatch between state-of-the-art solutions and the desired properties to achieve.