Frieda Josi1, Christian Wartena1 and Ulrich Heid2, 1University of Applied Sciences and Arts Hanover, Expo Plaza 12, 30559 Hannover, Germany, 2University of Hildesheim, Universitätsplatz 1, 31141 Hildesheim, Germany
Legal documents often have a complex layout with many different headings, headers and footers, side notes, etc. For further processing, it is important to extract these individual components correctly from a legally binding document, for example a signed PDF. A common approach is to classify each (text) region of a page using its geometric and textual features. This approach works well when the training and test data have a similar structure and when the documents of the collection to be analyzed have a rather uniform layout. We show that the use of global page properties can improve the accuracy of text element classification: we first classify each page into one of three layout types. After that, we can train a classifier for each of the three page types and thereby improve the accuracy on a manually annotated collection of 70 legal documents consisting of 20,938 text elements. When we split by page type, we achieve an improvement from 0.95 to 0.98 for single-column pages with left marginalia and from 0.95 to 0.96 for double-column pages. We developed our own feature-based method for page layout detection, which we benchmark against a standard implementation of a CNN image classifier. The approach presented here is based on a corpus of freely available German contracts and general terms and conditions. Both the corpus and all manual annotations are made freely available. The method is language-agnostic.
PDF Document Analysis, Legal Documents, Layout Detection, Feature and Text Extraction, Classification, Machine Learning, Deep Convolutional Networks, Image Recognition
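The two-stage idea described in the abstract above can be sketched compactly: classify the page layout first, then classify each text element with a layout-specific classifier. The page features (`n_columns`, `has_left_marginalia`), element features, thresholds, and label names below are all invented for illustration; the paper trains statistical classifiers on geometric and textual features rather than using hand-written rules.

```python
# Illustrative two-stage sketch: global page properties pick a layout type,
# and a per-layout element classifier is then applied. All feature names,
# thresholds, and labels here are hypothetical.

def classify_page(page):
    """Assign one of three layout types from global page properties."""
    if page["n_columns"] == 2:
        return "double_column"
    if page["has_left_marginalia"]:
        return "single_column_marginalia"
    return "single_column"

def classify_element(element, layout):
    """Per-layout element classifier (here: simple positional rules).
    Coordinates are normalized to [0, 1]."""
    if layout == "single_column_marginalia" and element["x0"] < 0.2:
        return "marginal_note"   # left margin region only exists on this layout
    if element["y0"] < 0.08:
        return "header"
    if element["y0"] > 0.92:
        return "footer"
    return "body_text"

def classify_document(pages):
    """Classify every element of every page, conditioning on page layout."""
    results = []
    for page in pages:
        layout = classify_page(page)
        for el in page["elements"]:
            results.append((classify_element(el, layout), layout))
    return results
```

Conditioning the element classifier on the page type is what lets, e.g., a narrow left-hand region be read as a marginal note on marginalia pages but as body text elsewhere.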
Cheng Huang, YongGang Li and Ying Wang, Department of Information and Communication Engineering, Chongqing University of Posts and Telecommunications, Chongqing, China
With the rapid development of modern military technology, the combat mode has been upgraded from traditional platform combat to system-level confrontation. In a traditional combat network, each node has a single function and there is no proper assignment of tasks. The equipment system network studied in this paper contains many nodes with different functions, which constitute a huge heterogeneous complex network. Most key node identification methods are derived from the network topology structure, such as degree, betweenness, K-shell, PageRank, etc. However, as the network topology changes, the identification results of these methods become biased. In this paper, we construct a node attack sequence and consider the change in the number of effective OODA chains in the equipment system network after the nodes in the sequence are attacked. Combined with an improved Grey Wolf optimization algorithm, this paper proposes a key node evaluation model of the equipment system network based on function chains, IABFI. Experimental results show that the proposed method is more effective, more accurate, and applicable to different network topologies compared with other key node identification methods.
equipment system network, node sequence attack, effective OODA chain, improved Grey Wolf optimization algorithm.
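The notion of counting effective OODA (Observe-Orient-Decide-Act) chains before and after an attack can be illustrated on a toy network. The node names, types, and topology below are invented, and this sketch does not reproduce the paper's IABFI model or the improved Grey Wolf search; it only shows why removing certain nodes destroys more chains than others.

```python
# Toy heterogeneous network: each node has one OODA role, and an "effective
# OODA chain" is a complete Observe -> Orient -> Decide -> Act path.
# Topology and names are hypothetical.

TYPES = {"s1": "O", "s2": "O", "f1": "Or", "d1": "D", "a1": "A", "a2": "A"}
EDGES = {"s1": ["f1"], "s2": ["f1"], "f1": ["d1"], "d1": ["a1", "a2"]}
ORDER = ["O", "Or", "D", "A"]

def count_ooda_chains(types, edges, removed=()):
    """Count complete OODA paths, skipping attacked (removed) nodes."""
    def walk(node, stage):
        if node in removed or types[node] != ORDER[stage]:
            return 0
        if stage == 3:                       # reached an Act node: one chain
            return 1
        return sum(walk(nxt, stage + 1) for nxt in edges.get(node, []))
    return sum(walk(n, 0) for n in types if types[n] == "O")
```

On this toy graph the sole decision node destroys every chain when attacked, while removing one sensor only halves the count — the kind of asymmetry a function-chain-based key node measure is meant to capture.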
Wei Liu, Fang Wei Li, Jun Zhou Xiong, and Ming Yue Wang, Department of Information and Communication Engineering, Chongqing University of Posts and Telecommunications, Chongqing, China
In order to solve the doubly near-far problem in wireless powered communication networks (WPCN) and the interference in the process of information transmission, a resource allocation method based on time reversal (TR) for WPCN is proposed. An optimization problem to maximize the minimum network throughput is constructed by jointly optimizing the transmission time of each phase of the network, the transmission power of the hybrid access point (HAP), and the transmission power of the relay. Since the constructed problem is non-convex, this paper converts it into an equivalent convex problem by introducing relaxation variables and auxiliary variables, and further divides the convex problem into two sub-problems to obtain the solution of the original problem. Finally, simulation results show that the proposed resource allocation scheme can effectively alleviate the doubly near-far problem and the interference, so as to obtain a higher total system throughput.
Wireless Powered Communication Network, Time Reversal, Joint Optimization.
Bo Li1, 2 and Hong Tang1, 2, 1School of Communication and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, China, 2Chongqing Key Laboratory of Mobile Communications Technology, Chongqing 400065, China
To address the limited system throughput caused by the double near-far effect in wireless powered communication networks, this paper proposes a retrodirective matrix method based on phase conjugation. In this method, energy base stations and information base stations are deployed separately, and the energy base station uses a large-scale MIMO system: node devices first send a beacon signal to the energy base station, which amplifies the signal's conjugate to form a directional beam, achieving multi-input multi-output energy gains and thus improving the throughput of information transmission of the node devices. Through the joint optimization of the beacon signal, energy transmission, time allocation of information transmission, and power control, a convex optimization problem is formulated and solved by the generalized Lagrange multiplier method and the golden section method. Simulation results show that the proposed method performs better than other schemes.
Wireless Powered Communication Network, Matrix Retrodirective Array, Energy Transmission, Information Transmission, System Throughput.
Dereje Regassa1, Heon Young Yeom1 and Yongseok Son2, 1Department of Computer Science and Engineering, Seoul National University, Seoul, Korea, 2Chung-Ang University, Seoul, Korea
The emerging byte-addressable persistent memory (PM) is bringing innovations that require the rethinking of various data structures. One of the challenges in redesigning an effective hashing scheme for PM is to reduce the overheads of dynamic hashing when designing hash tables. In this paper, we present an adaptive extendible hashing scheme that improves the memory efficiency and performance of hash tables on PM. Our scheme enables us to maximally utilize the available space in buckets to store more data in the hash table. Our indexing scheme effectively utilizes the hash table to delay expensive operations. Thus, it can increase memory utilization and reduce expensive operations (e.g., directory doubling) in the hash table. We implement and evaluate our scheme on a machine with Intel Optane® persistent memory (i.e., DCPMM). The experimental results show that our scheme improves insertion performance compared with the state-of-the-art scheme for uniform and skewed data distributions.
Persistent memory, Dynamic hashing, Directory doubling.
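For readers unfamiliar with the baseline, a textbook extendible hash table illustrates the directory/bucket mechanics and the directory-doubling step that the abstract's adaptive scheme aims to delay. This is a plain in-memory sketch with an invented bucket size and no overflow handling beyond splitting; it is not the paper's PM-aware design.

```python
# Minimal textbook extendible hashing. The directory has 2**global_depth
# slots; a full bucket splits, and splitting a bucket whose local depth
# equals the global depth forces the costly directory doubling.

BUCKET_SIZE = 2   # tiny on purpose, to trigger splits quickly

class Bucket:
    def __init__(self, depth):
        self.depth = depth            # local depth of this bucket
        self.items = {}

class ExtendibleHash:
    def __init__(self):
        self.global_depth = 1
        self.dir = [Bucket(1), Bucket(1)]

    def _index(self, key):
        return hash(key) & ((1 << self.global_depth) - 1)

    def get(self, key):
        return self.dir[self._index(key)].items.get(key)

    def put(self, key, value):
        b = self.dir[self._index(key)]
        if key in b.items or len(b.items) < BUCKET_SIZE:
            b.items[key] = value
            return
        self._split(b)
        self.put(key, value)          # retry after the split

    def _split(self, b):
        if b.depth == self.global_depth:
            self.dir += self.dir      # directory doubling: the expensive step
            self.global_depth += 1
        b.depth += 1
        new = Bucket(b.depth)
        high_bit = 1 << (b.depth - 1)
        # redistribute entries by the newly significant hash bit
        for k in list(b.items):
            if hash(k) & high_bit:
                new.items[k] = b.items.pop(k)
        # repoint directory slots whose high bit selects the new bucket
        for i in range(len(self.dir)):
            if self.dir[i] is b and i & high_bit:
                self.dir[i] = new
```

Schemes like the one in the abstract keep buckets fuller (e.g., by probing before splitting) precisely so that the `_split` path, and especially the doubling branch, runs as rarely as possible.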
Hangping Hu, Zhen Zhang, Weijian Qin, Yuan Wang, Xiaojian Li, School of Computer Science & Engineering, Guangxi Normal University, Guilin, China
Any unexpected service interruption or failure may cause customer dissatisfaction or economic losses. To distinguish the rights and interests and to resolve security disputes between cloud service providers and customers, this paper explores the essence and rules of cloud service events and their various connections, such as the normal connections of service scheduling and of service dependence, and the abnormal connections of resource competition, service delay, and service dependence, as well as the rules these connections follow in terms of time, resources, and scheduling, and the forms those rules take. The purpose is to provide these abnormal connections, together with their rules and presentation forms in terms of time, resources, and load, for the study of violation determination and failure tracing in the cloud service accountability mechanism.
Cloud service, Event connections, Correlation, Label adaptation.
Yew Kee Wong, School of Information Engineering, HuangHuai University, Henan, China
In the information era, enormous amounts of data have become available on hand to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle and extract value and knowledge from these datasets. Machine learning is a method of data analysis that automates analytical model building. It is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention. Such minimal human intervention can be provided using big data analytics, which is the application of advanced analytics techniques to big data. This paper aims to analyse some of the different machine learning algorithms and methods which can be applied to big data analysis, as well as the opportunities provided by the application of big data analytics in various decision-making domains.
Artificial Intelligence, Machine Learning, Big Data Analysis.
Adamkolo Mohammed Ibrahim, Department of Mass Communication, University of Maiduguri, Maiduguri, Borno State, Nigeria
In today’s cluttered media landscape, it is increasingly difficult for television viewers to choose what media content to watch. The theory of Constrained Rationality suggests that when people try to consider all their options, they end up with media overload. IoT-TV is a concept that leverages the Internet of Things (IoT) to recommend media content to users. This could help viewers make better judgments by delivering more tailored media content recommendations. Using three focus groups, this paper analysed the idea and discovered which features of IoT-TV viewers found most beneficial. Timing and emotions play a major part in media selection, and these are factors the IoT-TV concept considers. IoT-TV facilitates media content selection by reducing the time required and by matching content to emotions. If IoT-TV is linked to smart items that send time and emotion information, it could lessen information overload.
Arewa 24 On-Demand TV, IoT TV, Media Convergence, Over-the-Top TV Broadcast, Web-based TV Services.
Brigitte Endres-Niggemeyer, NOapps, Hanover, Germany
Thinkie 3 supports Thinking Aloud (TA) on iPhone and iPad with an iCloud connection. As a mobile iOS application, Thinkie stores its data in a private cloud container of the user. The data structure includes metadata, audio and video files, audio clips and transcripts. The server-based Apple Speech-To-Text API helps with transcription. Users are supported in data capture and initial data analysis. Files can be exported for further handling. Thinkie is confronted with the current technical and methodological state of Thinking Aloud. As an up-to-date report on TA is missing, the 2021 TA scene is described in some detail. Given the current state of competing approaches on iOS, a Thinkie 4 app with new/improved features might be an option. Thinkie can be downloaded from https://apps.apple.com/de/app/thinkie-3/id1552765360. User guides in English and German are available.
Thinking Aloud, verbal protocol, data capture, Speech-To-Text, mobile devices.
Nasreddine Aoumeur1 and Kamel Barkaoui2, 1Department of Computer Science, University of Leicester, LE1 7RH, UK, 2SYS: Equipe Systèmes Sûrs, Cedric/CNAM, France
To stay competitive in today’s high market volatility and globalization, cross-organizational business information systems and processes are deemed to be knowledge-intensive (e.g., rule-centric), highly adaptive, and context-aware, that is, explicitly responding to their surrounding environment, users’ preferences, and sensing devices. Towards achieving these objectives in developing such applications, we put forward in this paper a stepwise service-oriented approach that exhibits an explicit separation of concerns: we first conceptualize the mandatory functionalities and then separately and explicitly consider the added value of contextual concerns, which we then integrate at both the fine-grained activity level and the coarse-grained process level to reflect their intuitive business semantics. Secondly, the proposed approach is based on business-rule-centric architectural techniques, with emphasis on Event-Condition-Action (ECA)-driven, transient, tailored, and adaptive architectural connectors. As a third benefit, for formal underpinnings towards rapid prototyping and validation, we semantically interpret the approach in rewriting logic and its true-concurrent and reflective operational semantics, supported by the practical Maude language.
Context-awareness, ECA-Driven Rules, Architectural Connectors, Service-orientation, Adaptability, Maude Validation.
Gang Wang, Mark Nixon, University of Connecticut, Emerson Automation Solutions
Blockchain, as a potentially disruptive technology, advances many different applications, e.g., crypto-currencies, supply chains, and the Internet of Things. Under the hood, a blockchain needs to handle different kinds of digital assets and data. The next-generation blockchain ecosystem is expected to consist of numerous applications, and each application may have a distinct representation of digital assets. However, digital assets cannot be directly recorded on the blockchain, and a tokenization process is required to format these assets. Tokenization on blockchain will inevitably require a certain level of proper standards to enrich advanced functionalities and enhance interoperability for future applications. However, due to the specific features of digital assets, it is hard to obtain a standard token form to represent all kinds of assets. For example, when considering fungibility, some assets are divisible and identical, and are commonly referred to as fungible assets, while others that are not fungible are commonly referred to as non-fungible assets. When tokenizing these assets, we are required to follow different tokenization processes. How to effectively tokenize assets is thus an essential question that is expected to confront various unprecedented challenges. This paper provides a systematic and comprehensive review of the current progress of tokenization on blockchain. We explore both general principles and practical schemes to tokenize digital assets for blockchain, and classify digitalized tokens into three categories, namely, fungible tokens, non-fungible tokens, and semi-fungible tokens. We then focus on discussing the well-known Ethereum standards on non-fungible tokens. Finally, we discuss several critical challenges and some potential research directions to advance research on the tokenization process on the blockchain. To the best of our knowledge, this is the first systematic study of tokenization on blockchain.
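The fungibility distinction at the heart of this survey can be made concrete with a toy ledger: fungible balances are interchangeable amounts (the model behind standards such as ERC-20), while non-fungible tokens are unique identifiers with individual owners (the model behind ERC-721). The classes below are a conceptual sketch only, not an implementation of any Ethereum standard.

```python
# Toy ledgers contrasting fungible and non-fungible token accounting.
# Account names and token IDs are invented for illustration.

class FungibleToken:
    """Interchangeable, divisible units: only per-account amounts matter."""
    def __init__(self):
        self.balances = {}                   # account -> amount

    def mint(self, account, amount):
        self.balances[account] = self.balances.get(account, 0) + amount

    def transfer(self, src, dst, amount):
        if self.balances.get(src, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[src] -= amount         # any units are as good as any other
        self.balances[dst] = self.balances.get(dst, 0) + amount

class NonFungibleToken:
    """Unique, indivisible tokens: ownership is tracked per token ID."""
    def __init__(self):
        self.owner_of = {}                   # token_id -> account

    def mint(self, account, token_id):
        if token_id in self.owner_of:
            raise ValueError("token already exists")
        self.owner_of[token_id] = account    # each token is one-of-a-kind

    def transfer(self, src, dst, token_id):
        if self.owner_of.get(token_id) != src:
            raise ValueError("not the owner")
        self.owner_of[token_id] = dst
```

Semi-fungible tokens, the survey's third category, combine both views: interchangeable within a class of tokens but distinct across classes, which is why they need a different tokenization process again.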
Nur Nasuha Daud1*, Siti Hafizah Ab Hamid1, Chempaka Seri1, Muntadher Saadoon1, Nor Badrul Anuar2, 1Department of Software Engineering, Faculty of Computer Science and Information Technology, University of Malaya, 50603, Kuala Lumpur, Malaysia, 2Department of Computer System and Technology, Faculty of Computer Science and Information Technology, University of Malaya, 50603, Kuala Lumpur, Malaysia
Scalable link prediction in social networks allows dynamic social interaction gathering, potential friend suggestion, and community detection. Hadoop and Spark are among the distributed open-source frameworks that facilitate efficient link prediction, especially in large-scale social networks. These frameworks provide different kinds of tunable properties for users to manually configure the parameters of their applications. However, manual configuration opens the door to performance issues when applications start scaling tremendously, as it is hard to set up and prone to human error. This paper proposes a novel Self-Configured Framework (SCF) that provides an autonomous feature in Spark which predicts and sets the best configuration instantly before application execution, using an XGBoost classifier. SCF is evaluated on the Twitter social network using three link prediction applications: Graph Clustering (GC), Overlapping Community Detection (OCD), and Redundant Graph Clustering (RGD), to assess the impact of shifting data sizes on different applications in Twitter. The results demonstrate a 40% reduction in prediction time as well as a balanced resource consumption that makes full use of resources, especially for a limited number and size of clusters. The presented framework establishes its efficiency for link prediction in large-scale social networks by automatically setting the configuration that best suits a specific application given varying dataset size, workload, and cluster specification.
self-configured framework, link prediction, social network, large-scale.
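The core loop of SCF as described above, predicting a configuration from workload characteristics before execution, can be sketched as follows. The paper trains an XGBoost classifier on execution history; to keep this sketch dependency-free, a 1-nearest-neighbour lookup stands in for the learned model, and all feature names and configuration labels are invented.

```python
# Illustrative stand-in for SCF's pre-execution configuration prediction:
# map (data size, cluster spec) features to a known-good configuration label.
# The history records, features, and profile names are hypothetical.

# training history: (data_size_gb, n_executors) -> best observed config label
HISTORY = [
    ((1,   4), "small_mem_profile"),
    ((50,  4), "spill_tolerant_profile"),
    ((50, 16), "wide_shuffle_profile"),
]

def predict_config(data_size_gb, n_executors):
    """Pick the config whose recorded workload is closest to this one."""
    def dist(features):
        size, execs = features
        return (size - data_size_gb) ** 2 + (execs - n_executors) ** 2
    return min(HISTORY, key=lambda rec: dist(rec[0]))[1]
```

In the real framework the prediction happens once, before the Spark job is submitted, so no tuning iterations are spent on the cluster itself; a proper classifier simply replaces the nearest-neighbour lookup shown here.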
Mobolaji O. Olarinde, Ojonukpe S. Egwuche, Mutiu Ganiyu and Ademola A. Adeola, Department of Computer Science, Federal Polytechnic, Ile-Oluji, Ondo State, Nigeria
A memorandum (memo) is a short message or record that is used for internal communication in a business environment. The penetration of Information and Communication Technology (ICT) has made many organisations adopt electronic means of communication. The current trend of internal communication in organisations is the Electronic Memorandum (E-Memo). E-Memo is a system that automates the entire process of lettering and filing. The designed system is intended to eliminate the manual process of information exchange within an organisation. Implementing E-Memo makes it easy to send and receive official information outside the office environment. The system is designed to enhance effective, timely and reliable communication of information within an organisation.
Document Management System, Internet, Memo, Portal, Website.
Copyright © ADCOM 2022