Frieda Josi1, Christian Wartena1 and Ulrich Heid2, 1University of Applied Sciences and Arts Hanover, Expo Plaza 12, 30559 Hannover, Germany, 2University of Hildesheim, Universitätsplatz 1, 31141 Hildesheim, Germany
Legal documents often have a complex layout with many different headings, headers and footers, side notes, etc. For further processing, it is important to extract these individual components correctly from a legally binding document, for example a signed PDF. A common approach is to classify each (text) region of a page using its geometric and textual features. This approach works well when the training and test data have a similar structure and when the documents of a collection to be analyzed have a rather uniform layout. We show that the use of global page properties can improve the accuracy of text element classification: we first classify each page into one of three layout types. After that, we train a classifier for each of the three page types and thereby improve the accuracy on a manually annotated collection of 70 legal documents consisting of 20,938 text elements. When we split by page type, we achieve an improvement from 0.95 to 0.98 for single-column pages with left marginalia and from 0.95 to 0.96 for double-column pages. We developed our own feature-based method for page layout detection, which we benchmark against a standard implementation of a CNN image classifier. The approach presented here is based on a corpus of freely available German contracts and general terms and conditions. Both the corpus and all manual annotations are made freely available. The method is language agnostic.
PDF Document Analysis, Legal Documents, Layout Detection, Feature and Text Extraction, Classification, Machine Learning, Deep Convolutional Networks, Image Recognition
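The two-stage scheme described above can be sketched as follows; the feature names, thresholds and labels below are illustrative stand-ins for the trained classifiers, not the paper's actual models:

```python
# A minimal, illustrative sketch of the two-stage idea (hypothetical features
# and thresholds; the paper trains real classifiers on annotated data).

def classify_page_type(page):
    """Stage 1: assign one of three global layout types from page-level features."""
    if page["n_columns"] == 2:
        return "double-column"
    if page["left_margin_width"] > 100:  # a wide left margin suggests marginalia
        return "single-column-marginalia"
    return "single-column"

def classify_text_element(element, page_type):
    """Stage 2: a per-layout-type rule standing in for a trained classifier."""
    if page_type == "single-column-marginalia" and element["x0"] < 100:
        return "side-note"
    if element["font_size"] > 12 and element["bold"]:
        return "heading"
    return "body"

page = {"n_columns": 1, "left_margin_width": 120}
ptype = classify_page_type(page)
label = classify_text_element({"x0": 30, "font_size": 10, "bold": False}, ptype)
```

The point of the split is that stage 2 can use layout-type-specific decision boundaries, which is where the reported accuracy gain comes from.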
Cheng Huang, YongGang Li and Ying Wang, Department of Information and Communication Engineering, Chongqing University of Posts and Telecommunications, Chong Qing, China
With the rapid development of modern military technology, the combat mode has been upgraded from traditional platform combat to system-level confrontation. In a traditional combat network, each node has a single function and there is no proper assignment of tasks. The equipment system network studied in this paper contains many nodes with different functions, which constitute a huge heterogeneous complex network. Most key node identification methods are derived from the network topology, such as degree, betweenness, K-shell, PageRank, etc. However, as the network topology changes, the identification results of these methods become biased. In this paper, we construct a node attack sequence and consider the change in the number of effective OODA chains in the equipment system network after the nodes in the sequence are attacked. Combined with an improved Grey Wolf optimization algorithm, we propose a key node evaluation model of the equipment system network based on function chains, IABFI. Experimental results show that the proposed method is more effective, accurate, and applicable to different network topologies than other key node identification methods.
equipment system network, node sequence attack, effective OODA chain, improved Grey Wolf optimization algorithm.
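The OODA-chain criterion can be illustrated with a toy sketch: count sensor-to-decision-to-influence chains before and after removing a node, and take the drop as a measure of how key the node is. The node roles and the chain definition below are simplified assumptions, not the paper's exact model:

```python
# Toy sketch: an "effective OODA chain" is taken here to be a
# sensor -> decision -> influence path that survives a node attack.

def count_ooda_chains(edges, node_type, removed=frozenset()):
    chains = 0
    for s, d in edges:
        if s in removed or d in removed:
            continue
        if node_type.get(s) == "sensor" and node_type.get(d) == "decision":
            for d2, a in edges:
                if d2 == d and a not in removed and node_type.get(a) == "influence":
                    chains += 1
    return chains

node_type = {"s1": "sensor", "s2": "sensor", "d1": "decision", "a1": "influence"}
edges = [("s1", "d1"), ("s2", "d1"), ("d1", "a1")]
base = count_ooda_chains(edges, node_type)           # 2 chains intact
after = count_ooda_chains(edges, node_type, {"d1"})  # 0: d1 is a key node
```

Attacking the only decision node destroys every chain, so a functional-chain criterion ranks it above the topologically similar sensor nodes.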
Wei Liu, Fang Wei Li, Jun Zhou Xiong, and Ming Yue Wang, Department of Information and Communication Engineering, Chongqing University of Posts and Telecommunications, Chongqing, China
In order to solve the doubly near-far problem in wireless powered communication networks (WPCN) and the interference in the process of information transmission, a resource allocation method based on time reversal (TR) for WPCN is proposed. An optimization problem to maximize the minimum network throughput is constructed by jointly optimizing the transmission time of each phase of the network, the transmission power of the hybrid access point (HAP) and the transmission power of the relay. Since the constructed problem is non-convex, this paper converts it into an equivalent convex problem by introducing relaxation variables and auxiliary variables, and further divides the convex problem into two sub-problems to obtain the solution of the original problem. Finally, simulation results show that the proposed resource allocation scheme can alleviate the doubly near-far problem and the interference effectively, so as to obtain a higher total system throughput.
Wireless Powered Communication Network, Time Reversal, Joint Optimization.
Bo Li1, 2 and Hong Tang1, 2, 1School of Communication and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, China, 2Chongqing Key Laboratory of Mobile Communications Technology, Chongqing 400065, China
Aiming at the problem of limited system throughput caused by the double near-far effect in wireless powered communication networks, this paper proposes a retrodirective matrix method based on phase conjugation. In this method, energy base stations and information base stations are deployed separately, and the energy base station uses a large-scale MIMO system. When the system is running, each node device first sends a beacon signal to the energy base station; the energy base station amplifies the conjugate of this signal to form a directional beam, achieving multi-input multi-output energy gains and thus improving the throughput of information transmission of the node devices. Through the joint optimization of the beacon signal, energy transmission, time allocation of information transmission and power control, a convex optimization problem is formulated and solved by the generalized Lagrange multiplier method and the golden section method. Simulation results show that the proposed method performs better than other schemes.
Wireless Powered Communication Network, Matrix Retrodirective Array, Energy Transmission, Information Transmission, System Throughput.
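The golden section method mentioned above is a standard one-dimensional search; a generic sketch (with a toy unimodal objective standing in for the paper's throughput function) looks like this:

```python
import math

def golden_section_min(f, a, b, tol=1e-8):
    """Golden-section search for the minimiser of a unimodal f on [a, b]."""
    invphi = (math.sqrt(5) - 1) / 2  # 1/phi, about 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c                  # minimum lies in [a, d]
            c = b - invphi * (b - a)
        else:
            a, c = c, d                  # minimum lies in [c, b]
            d = a + invphi * (b - a)
    return (a + b) / 2

# maximising a concave throughput curve in a time-allocation variable tau
# is minimising its negative; t*(1-t) peaks at tau = 0.5:
tau_opt = golden_section_min(lambda t: -(t * (1 - t)), 0.0, 1.0)
```

Each iteration shrinks the bracket by the golden ratio, so convergence to tolerance is logarithmic in the interval length.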
Jin Meng Gao, Fang Cheng, Bing Guang Deng and Xiao Ya Wang, School of Communication and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing, China
Relaxed Polar codes reduce the complexity of coding and channel polarization by simplifying the polarization process of relaxed nodes during construction. In this paper, a new relaxation node selection scheme based on the beta-expansion algorithm and the coding rate is proposed, and a reliable channel selection method is given. Then, the construction principle of the relaxed polar code generation matrix is analyzed and explained. Simulation results show that the reliable channel indices selected by the proposed construction algorithm agree almost completely with those of the fully polarized Gaussian approximation method for sub-channels under different noise powers. At the same time, the comparison shows that the proposed construction algorithm not only has significantly lower computational complexity than the full Polar code, but also achieves almost the same performance.
Polar Codes, Channel Polarization, Relaxation Node Selection, Generation Matrix.
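The beta-expansion ranking referred to above can be sketched as follows. The sketch uses the commonly cited value beta = 2^(1/4) and plain reliability ordering; the paper's scheme additionally folds in the coding rate for relaxation node selection:

```python
def beta_expansion_reliability(i, n, beta=2 ** 0.25):
    """Polarization weight of sub-channel i (n = log2 N) under beta-expansion:
    the binary digits of i weighted by powers of beta."""
    return sum(((i >> j) & 1) * beta ** j for j in range(n))

def select_info_channels(N, K):
    """Pick the K most reliable sub-channel indices of an N = 2^n polar code."""
    n = N.bit_length() - 1
    order = sorted(range(N), key=lambda i: beta_expansion_reliability(i, n))
    return sorted(order[-K:])  # information set; the rest are frozen

info = select_info_channels(8, 4)  # -> [3, 5, 6, 7] for a (8, 4) code
```

For N = 8 this reproduces the familiar information set {3, 5, 6, 7}, matching what density-evolution style constructions select.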
Yang Lu, Zhao Yaru, Zou Liang and Hao Shengqiang, Chongqing University of Posts and Telecommunications, School of Communication and Information Engineering, Chongqing, China
In a 5G-based air-to-ground (ATG) communication system, the main method of increasing the coverage radius of a base station is to increase the length of the preamble sequence. However, due to the strong influence of Doppler frequency shift in the ATG scenario, the cascaded long preamble sequence generates multiple peaks during the direct correlation between the local sequence and the received sequence, resulting in inaccurate TA estimation. Aiming at the problem of the large Doppler frequency shift of the ATG channel, this paper proposes a multi-sequence differential correlation detection algorithm: the first two sequences of the local cascaded sequence are differentially conjugated, the received sequence is transformed in the same way by differential conjugation, and the two results are then correlated, so that the detection function yields a unique peak value and better overcomes the influence of the Doppler shift. Theoretical analysis and simulation results show that by using a single long preamble sequence and the proposed dual-sequence joint differential detection algorithm, the coverage radius of the base station of the 5G ATG communication system can be enlarged and the large Doppler shift can be better overcome.
5G, ATG, Doppler shift, Preamble detection.
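The differential-conjugation idea can be illustrated numerically: multiplying each cascaded sequence by the conjugate of its repetition cancels a frequency offset common to both halves, so a single clean correlation value survives. The sequence below is a constant-modulus Zadoff-Chu-style stand-in, not the paper's exact preamble:

```python
import numpy as np

L = 64
n = np.arange(L)
zc = np.exp(-1j * np.pi * 25 * n * (n + 1) / L)   # constant-modulus stand-in
tx = np.concatenate([zc, zc])                     # two cascaded preamble halves
cfo = np.exp(2j * np.pi * 0.02 * np.arange(2 * L))  # large Doppler/CFO rotation
rx = tx * cfo

# direct correlation is smeared by the offset ...
direct = np.abs(np.vdot(tx[:L], rx[:L]))

# ... but the differential conjugate of local and received sequences is not:
local_diff = tx[:L] * np.conj(tx[L:])
rx_diff = rx[:L] * np.conj(rx[L:])
metric = np.abs(np.vdot(local_diff, rx_diff))     # full peak of height L
```

In `rx_diff` the offset reduces to a constant phase `exp(-j*2*pi*0.02*L)`, so the differential metric recovers the full peak while the direct correlation collapses.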
Shuan Zhao, Ken Long, Yucai Pang and Jipeng Chen, Department of Computer, Chongqing University of Posts and Telecommunications, Chongqing, China
Given the large amount of high-dimensional time series data generated by sensors, and the insufficient utilization of time series information by network models in the prediction of the remaining useful life (RUL) of aircraft engines, a data-driven model for RUL prediction based on a Multi-head Attention mechanism and a long short-term memory neural network (LSTM) is proposed in this paper. The model selects the key features in the time-series data, inputs them into the LSTM layer to mine their internal connections, and finally obtains the RUL prediction through two fully connected layers. Using the CMAPSS dataset provided by NASA for verification and comparing with other algorithms, the accuracy of this method outperforms shallow methods such as support vector regression (SVR) and deep learning methods such as convolutional neural networks (CNN), multi-layered LSTM and multi-layered BiLSTM, which provides powerful support for the health management of aircraft engines and for operation and maintenance decisions.
aircraft-engine, remaining useful life (RUL), attention mechanism, long short-term memory (LSTM), time series.
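The attention step that weights the sensor time steps before the LSTM can be sketched in plain NumPy (the paper's model embeds this in a trained deep network with multiple heads and an LSTM stack; the window and feature sizes here are toy values):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core of an attention layer: softmax-weighted combination of time steps."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over time steps
    return weights @ V, weights

rng = np.random.default_rng(0)
window = rng.normal(size=(30, 14))   # 30 engine cycles x 14 sensor features
context, w = scaled_dot_product_attention(window, window, window)
```

`context` has the same shape as the input window but each time step is now a relevance-weighted mixture of all steps; feeding such a tensor to an LSTM is what lets the model emphasise the degradation-relevant cycles.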
Dereje Regassa1, Heon Young Yeom1 and Yongseok Son2, 1Department of Computer Science and Engineering, Seoul National University, Seoul, Korea, 2Chung-Ang University, Seoul, Korea
The emerging byte-addressable persistent memory (PM) is bringing innovations that require the rethinking of various data structures. One of the challenges in redesigning an effective hashing scheme for PM is to reduce the overheads of dynamic hashing when designing hash tables. In this paper, we present an adaptive extendible hashing scheme that improves the memory efficiency and performance of hash tables on PM. Our scheme enables us to maximally utilize the available space in buckets to store more data in the hash table. Our indexing scheme effectively utilizes the hash table to delay expensive operations. Thus, it can increase memory utilization and reduce expensive operations (e.g., directory doubling) in the hash table. We implement and evaluate our scheme on a machine with Intel Optane® persistent memory (i.e., DCPMM). The experimental results show that our scheme improves insertion performance compared with the state-of-the-art scheme for uniform and skewed data distributions.
Persistent memory, Dynamic hashing, Directory doubling.
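The directory-doubling operation the scheme tries to delay can be seen in a minimal in-memory sketch of extendible hashing (purely illustrative; the paper's implementation targets persistent memory, not Python dictionaries):

```python
class Bucket:
    def __init__(self, depth, size=4):
        self.depth, self.size, self.items = depth, size, {}

class ExtendibleHash:
    """Minimal in-memory sketch of extendible hashing with directory doubling."""

    def __init__(self):
        self.global_depth = 1
        self.dir = [Bucket(1), Bucket(1)]

    def _idx(self, key):
        # a bucket is addressed by the low `global_depth` bits of the hash
        return hash(key) & ((1 << self.global_depth) - 1)

    def get(self, key):
        return self.dir[self._idx(key)].items.get(key)

    def put(self, key, val):
        b = self.dir[self._idx(key)]
        if key in b.items or len(b.items) < b.size:
            b.items[key] = val
            return
        # bucket full: split it, doubling the directory if needed --
        # directory doubling is the expensive operation to be delayed
        if b.depth == self.global_depth:
            self.dir += self.dir
            self.global_depth += 1
        b.depth += 1
        new = Bucket(b.depth, b.size)
        high = 1 << (b.depth - 1)
        for i, bk in enumerate(self.dir):
            if bk is b and (i & high):
                self.dir[i] = new
        for k in [k for k in b.items if hash(k) & high]:
            new.items[k] = b.items.pop(k)
        self.put(key, val)  # retry into the freshly split buckets

h = ExtendibleHash()
for i in range(20):
    h.put(i, i * 10)
```

Packing more entries per bucket (a larger `size`) postpones splits, and therefore postpones the directory doubling, which is the intuition behind the paper's space-utilization argument.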
Hangping Hu, Zhen Zhang, Weijian Qin, Yuan Wang, Xiaojian Li, School of Computer Science & Engineering, Guangxi Normal University, Guilin, China
Any unexpected service interruption or failure may cause customer dissatisfaction or economic losses. To distinguish the rights and interests or security disputes between cloud service providers and customers, we explore the essence and rules of cloud service events and their various connections, such as the normal connections of service scheduling and of service dependence, and the abnormal connections of resource competition, service delay and service dependence, as well as their rules in terms of time, resources, scheduling and other aspects, and the forms these rules take. The purpose is to provide the above abnormal connections, together with their rules and presentation forms in terms of time, resources and load, for the study of violation determination and failure tracing in the cloud service accountability mechanism.
Cloud service, Event connections, Correlation, Label adaptation.
Yew Kee Wong, School of Information Engineering, HuangHuai University, Henan, China
In the information era, enormous amounts of data have become available on hand to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle and extract value and knowledge from these datasets. Machine learning is a method of data analysis that automates analytical model building. It is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention. Such minimal human intervention can be provided using big data analytics, which is the application of advanced analytics techniques on big data. This paper aims to analyse some of the different machine learning algorithms and methods which can be applied to big data analysis, as well as the opportunities provided by the application of big data analytics in various decision-making domains.
Artificial Intelligence, Machine Learning, Big Data Analysis.
Adamkolo Mohammed Ibrahim, Department of Mass Communication, University of Maiduguri, Maiduguri, Borno State, Nigeria
In today’s cluttered media landscape, it is more difficult for television viewers to choose what media content to watch. The theory of Constrained Rationality suggests that when people try to understand all their options, they end up with a media overload. The IoT-TV is a concept that leverages the Internet of Things (IoT) to recommend media content to users. This could help viewers make better judgments by delivering more tailored media content recommendations. Using three focus groups, this paper analysed the idea and discovered which features of IoT-TV viewers found most beneficial. Timing and emotions play a major part in media selection, and these are factors the IoT-TV concept considers. Media content selection will be facilitated by IoT-TV through a decrease in the time required and the matching of content to emotions. If IoT-TV is linked to smart items that send time and emotion information, it could lessen information overload.
Arewa 24 On-Demand TV, IoT TV, Media Convergence, Over-the-Top TV Broadcast, Web-based TV Services.
Brigitte Endres-Niggemeyer, NOapps, Hanover, Germany
Thinkie 3 supports Thinking Aloud (TA) on iPhone and iPad with an iCloud connection. As a mobile iOS application, Thinkie stores its data in a private cloud container of the user. The data structure includes metadata, audio and video files, audio clips and transcripts. The server-based Apple Speech-To-Text API helps with transcription. Users are supported in data capture and initial data analysis. Files can be exported for further handling. Thinkie is confronted with the current technical and methodological state of Thinking Aloud. As an up-to-date report on TA is missing, the 2021 TA scene is described in some detail. Given that competing approaches are still lacking on iOS, a Thinkie 4 app with new/improved features might be an option. Thinkie can be downloaded from https://apps.apple.com/de/app/thinkie-3/id1552765360. User guides in English and German are available.
Thinking Aloud, verbal protocol, data capture, Speech-To-Text, mobile devices.
Nasreddine Aoumeur1 and Kamel Barkaoui2, 1Department of Computer Science, University of Leicester, LE1 7RH, UK, 2SYS: Equipe Systèmes Sûrs, Cedric/CNAM, France
To stay competitive in today’s high market volatility and globalization, cross-organizational business information systems and processes are deemed to be knowledge-intensive (e.g., rule-centric), highly adaptive and context-aware, that is, explicitly responding to their surrounding environment, users’ preferences and sensing devices. Towards achieving these objectives, we put forward in this paper a stepwise service-oriented approach that exhibits an explicit separation of concerns: we first conceptualize the mandatory functionalities and then separately and explicitly consider the added value of contextual concerns, which we then integrate at both the fine-grained activity level and the coarse-grained process level to reflect their intuitive business semantics. Secondly, the proposed approach is based on business-rule-centric architectural techniques, with emphasis on Event-Condition-Action (ECA)-driven, transient, tailored and adaptive architectural connectors. As a third benefit, for formal underpinnings towards rapid prototyping and validation, we semantically interpret the approach in rewriting logic and its true-concurrent and reflective operational semantics as supported by the practical Maude language.
Context-awareness, ECA-Driven Rules, Architectural Connectors, Service-orientation, Adaptability, Maude Validation.
Gang Wang, Mark Nixon, University of Connecticut, Emerson Automation Solutions
Blockchain as a potentially disruptive technology advances many different applications, e.g., crypto-currencies, supply chains, and the Internet of Things. Under the hood, a blockchain must handle different kinds of digital assets and data. The next-generation blockchain ecosystem is expected to consist of numerous applications, and each application may have a distinct representation of digital assets. However, digital assets cannot be directly recorded on the blockchain; a tokenization process is required to format these assets. Tokenization on blockchain will inevitably require proper standards to enrich advanced functionalities and enhance interoperability for future applications. However, due to the specific features of digital assets, it is hard to obtain a standard token form to represent all kinds of assets. For example, when considering fungibility, some assets are divisible and identical, commonly referred to as fungible assets, while others that are not fungible are commonly referred to as non-fungible assets. When tokenizing these assets, we are required to follow different tokenization processes. How to effectively tokenize assets is thus essential and is expected to confront various unprecedented challenges. This paper provides a systematic and comprehensive review of the current progress of tokenization on blockchain. We explore both general principles and practical schemes to tokenize digital assets for blockchain, and classify digitalized tokens into three categories, namely, fungible tokens, non-fungible tokens, and semi-fungible tokens. We then focus on discussing the well-known Ethereum standards on non-fungible tokens. Finally, we discuss several critical challenges and some potential research directions to advance the research on the tokenization process on the blockchain. To the best of our knowledge, this is the first systematic study of tokenization on blockchain.
Nur Nasuha Daud1, Siti Hafizah Ab Hamid1, Chempaka Seri1, Muntadher Saadoon1 and Nor Badrul Anuar2, 1Department of Software Engineering, Faculty of Computer Science and Information Technology, University of Malaya, 50603, Kuala Lumpur, Malaysia, 2Department of Computer System and Technology, Faculty of Computer Science and Information Technology, University of Malaya, 50603, Kuala Lumpur, Malaysia
Link prediction analysis is vital for acquiring a deeper understanding of the events underlying social network interactions and connections, especially in current evolving and large-scale social networks. Traditional link prediction approaches underperform on most large-scale social networks in terms of scalability and efficiency. Spark is a distributed open-source framework that facilitates scalable and efficient link prediction in large-scale social networks. The framework provides numerous tunable properties for users to manually configure the parameters for their applications. However, manual configuration opens the door to performance issues when the applications start scaling tremendously, as it is hard to set up and exposed to human error. This paper introduces a novel Self-Configured Framework (SCF) that provides an autonomous feature in Spark which predicts and sets the best configuration instantly before application execution using an XGBoost classifier. SCF is evaluated on the Twitter social network using three link prediction applications: Graph Clustering (GC), Overlapping Community Detection (OCD), and Redundant Graph Clustering (RGD), to assess the impact of shifting data sizes on different applications in Twitter. The results demonstrate a 40% reduction in prediction time as well as balanced resource consumption that makes full use of resources, especially for a limited number and size of clusters.
Self-configured Framework, Link Prediction, Social Network, Large-scale.
Mobolaji O. Olarinde, Ojonukpe S. Egwuche, Mutiu Ganiyu and Ademola A. Adeola, Department of Computer Science, Federal Polytechnic, Ile-Oluji, Ondo State, Nigeria
A memorandum (memo) is a short message or record that is used for internal communication in a business environment. The penetration of Information and Communication Technology (ICT) has made many organisations adopt electronic means of communication. The current trend of internal communication in organisations is the Electronic Memorandum (E-Memo). E-Memo is a system that automates the entire process of lettering and filing. The designed system is intended to erase the manual process of information exchange within an organisation. Implementing E-Memo makes it easy to send and receive official information outside the office environment. The system is designed to enhance effective, timely and reliable communication of information within an organisation.
Document Management System, Internet, Memo, Portal, Website.
Tayeb Basta, College of Engineering and Computing, Al Ghurair University, Dubai, UAE
In 1981, Longuet-Higgins introduced the essential matrix to the computer vision community, and later it has been replaced by the fundamental matrix. The latter was heavily studied during and after the nineties. Researchers devoted a lot of effort to estimating the fundamental matrix. Although it is a landmark of computer vision, it is not wrong to revise its underpinning theory. In the current work, I revised three derivations of the essential and fundamental matrices. The first one is Longuet-Higgins derivation of the essential matrix. He started his derivation by drawing a mapping between the position vectors of a 3D point; however, the one-to-one feature of that mapping is lost when he changed it to a relation between the image points of the 3D point. In the two other derivations, the authors try to directly establish a mapping between the image points. Unfortunately, they obtained their objective through the misuse of mathematics. I demonstrated the mathematical flaws in such derivations.
Fundamental Matrix, Essential Matrix, Stereo Vision, 3D Reconstruction.
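The constraint whose derivations are revised above can be checked numerically: with the essential matrix E = [t]×R, corresponding normalized image points of any 3D point satisfy the epipolar constraint x2ᵀ E x1 = 0. A synthetic two-camera check:

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]_x such that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]])

# camera 2 pose relative to camera 1: rotation R about the y-axis, translation t
a = 0.1
R = np.array([[np.cos(a), 0, np.sin(a)],
              [0, 1, 0],
              [-np.sin(a), 0, np.cos(a)]])
t = np.array([1.0, 0.2, 0.0])
E = skew(t) @ R                      # essential matrix

X1 = np.array([0.3, -0.5, 4.0])      # a 3D point in camera-1 coordinates
X2 = R @ X1 + t                      # the same point in camera-2 coordinates
x1, x2 = X1 / X1[2], X2 / X2[2]      # normalized image points

residual = abs(x2 @ E @ x1)          # epipolar constraint, should vanish
```

Because the depths X1[2] and X2[2] only scale the bilinear form, the constraint holds for image points, which is the step whose one-to-one interpretation the paper scrutinises.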
Xinrui Que1, Yao Pan2, 1Crean Lutheran High School, 12500 Sand Canyon Ave, Irvine, CA 92618, 2Department of Computer Science, Vanderbilt University
Community-based websites such as social networks and online forums usually require users to register by providing profile information and avatars. It is important to ensure this user-uploaded information complies with the website policy. This includes the information being personal, relevant and clear, as well as not containing unhealthy/disturbing content. A review or censorship system is usually deployed to review new user registrations. Nowadays, many platforms still use manual review or rely on 3rd-party APIs. However, manual review is time-consuming and costly, while 3rd-party services are not tailored to the specific business needs and thus do not provide enough accuracy. In this paper, we developed an automatic new-user registration review system with deep learning. We apply state-of-the-art techniques such as CNN and BERT in an end-to-end evaluation system for multi-modal content. We tested our system on E-pal, a freelancing platform for gaming companionship, and conducted a qualitative evaluation of the approach. The results show that our system can evaluate the quality of avatars, voice descriptions, and text profiles with high accuracy. The system can significantly reduce the effort of manual review and also provides input for recommendation ranking.
Deep learning, Image classification, BERT, CNN.
Imane Elfaloussi1 and Muhammad Ilyas2, 1Institute of Graduate Studies, Department of Computer Engineering / Information Technologies, Altinbas University, Turkey, 2Department of Electrical and Electronics Engineering, Faculty of Engineering and Natural Sciences, Altinbas University, Turkey
Nowadays, Big Data has brought a huge revolution to the healthcare field; it has helped to generate a relationship between life science and healthcare, which has led to connecting doctors, pharmaceutical companies and patients together. This revolution will disrupt our health systems and will transform the methods of care, which will ultimately influence our public health policy. In particular, Big Data in healthcare represents the immense and complex data related to the healthcare system which are not easy to manage or analyse with traditional methods. The analysis of the data collected from patients permits integrating the healthcare field with many other scientific areas, such as bioinformatics, medical informatics and health informatics. It can also enable moving from a curative care system to a preventive care system by providing recurring systems, detecting the undesirable effects of drugs or their misuse, and serving as a clinical decision support system.
Big data, Clinical decision support systems, CDSS, Exploratory Data Analysis, Healthcare, Pandas, Python.
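A minimal illustration of the Pandas-based exploratory data analysis the keywords refer to; the columns, values and risk threshold below are invented for illustration, not the paper's dataset:

```python
import pandas as pd

# toy patient records: a data-quality check, a group comparison,
# and a simple rule of the kind a decision-support flag might use
df = pd.DataFrame({
    "age": [34, 61, 47, 52, None],
    "glucose": [90, 160, 130, 110, 145],
    "on_drug": [False, True, True, False, True],
})

missing = df.isna().sum()                           # per-column missing values
mean_by_group = df.groupby("on_drug")["glucose"].mean()
at_risk = df[df["glucose"] > 125]                   # hypothetical risk rule
```

Even this small pipeline shows the three EDA steps the abstract alludes to: assessing data quality, comparing patient groups, and surfacing candidates for clinical follow-up.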
Rayan Abri1, Sara Abri2 and Salih Çetin3, 1Department of Computer Engineering/Hacettepe University, Mavinci Informatics Inc., 2Department of Computer Engineering/Hacettepe University, Mavinci Informatics Inc., 3Mavinci Informatics Inc.
The ranking of search results is directly affected by user click preferences, and exploiting them is an effective way to improve the quality of search engine results. To tailor the ranking, it is necessary to use the information submitted by the user with the query, such as click history, user profile, previous query history, or query click entropy. There are ranking methods that explore this underlying information using traditional machine learning algorithms. Recently, LSTM (Long Short-Term Memory) based models have been investigated in this field. While traditional ML models require considering short-term and long-term preferences separately in prediction, LSTM-based models can improve prediction efficiency by considering both. This paper proposes a topic-based LSTM model to re-rank search results for a submitted input query using the previous query sequence and user click history. In this model, we feed the topic distribution of user documents to the LSTM model. We compare the model with topic-based ranking models on data from the AOL search engine and the Session TREC 2013 and 2014 tracks to show its performance. The results reveal a significant improvement of the topic-based LSTM model, using topics, of 13% in Mean Reciprocal Rank compared to the baseline topic-based models.
Re-ranking algorithms, LSTM model, Topic-based models.
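The Mean Reciprocal Rank metric behind the reported 13% improvement is computed as the average, over queries, of one over the rank of the first relevant result:

```python
def mean_reciprocal_rank(ranked_lists, relevant):
    """MRR over queries: average of 1/rank of the first relevant result."""
    total = 0.0
    for ranking, rel in zip(ranked_lists, relevant):
        for pos, doc in enumerate(ranking, start=1):
            if doc in rel:
                total += 1.0 / pos
                break
    return total / len(ranked_lists)

# two toy queries: first relevant hit at rank 2, then at rank 1
mrr = mean_reciprocal_rank(
    [["d3", "d1", "d2"], ["d2", "d5", "d4"]],
    [{"d1"}, {"d2"}],
)  # (1/2 + 1/1) / 2 = 0.75
```

Because only the first relevant hit counts, MRR rewards exactly what re-ranking aims at: pushing the document the user wants towards the top.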
Medha Trivedi and Anita Yadav, Harcourt Butler Technical University, Kanpur U.P., India
Wireless sensor networks (WSNs) have very wide applications in several fields, and localisation is an important aspect of them. It is needed for several applications, such as monitoring of objects placed in indoor and outdoor environments. The main aim of localisation is to find the location of each node, and node location technology is one of the key technologies of WSNs. The distance vector hop (DV-Hop) localisation algorithm is an extensively used algorithm in this field; it finds the coordinates of an unknown node with the help of beacon nodes. Since distances are estimated from hop counts, there always exist chances of error in the algorithm itself. This paper describes wireless sensor networks and the traditional DV-Hop algorithm, and then summarises its limitations. The experimental analysis is done with the MATLAB simulator.
Localization, Wireless Sensor Network, Distance Vector (DV) Hop, Hop Count, Minimum Hop.
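The classic DV-Hop pipeline the paper analyses can be sketched on a toy network: hop counts from each beacon, an average distance per hop derived from beacon-to-beacon distances, and finally lateration by linear least squares (the topology and coordinates below are invented for illustration):

```python
import numpy as np
from collections import deque

def hop_counts(adj, src, n):
    """BFS hop counts from src over adjacency lists."""
    hops = [None] * n
    hops[src] = 0
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if hops[v] is None:
                hops[v] = hops[u] + 1
                q.append(v)
    return hops

# beacons 0-2 know their positions; node 3 is the unknown node
pos = {0: (0.0, 0.0), 1: (10.0, 0.0), 2: (0.0, 10.0)}
adj = {0: [3], 1: [3], 2: [3], 3: [0, 1, 2]}
hops = {b: hop_counts(adj, b, 4) for b in pos}

# step 2: each beacon's average distance per hop, from the other beacons
avg_hop = {}
for b in pos:
    d = sum(np.hypot(pos[b][0] - pos[o][0], pos[b][1] - pos[o][1])
            for o in pos if o != b)
    avg_hop[b] = d / sum(hops[b][o] for o in pos if o != b)

# step 3: estimated beacon-to-unknown distances, then linear lateration
est = {b: avg_hop[b] * hops[b][3] for b in pos}
(bx0, by0), r0 = pos[0], est[0]
A, c = [], []
for b in (1, 2):
    (bx, by), r = pos[b], est[b]
    A.append([2 * (bx - bx0), 2 * (by - by0)])
    c.append(r0 ** 2 - r ** 2 + bx ** 2 - bx0 ** 2 + by ** 2 - by0 ** 2)
x_hat, *_ = np.linalg.lstsq(np.array(A), np.array(c), rcond=None)
```

The error source the paper discusses is visible here: step 3 multiplies an integer hop count by an averaged hop distance, so the estimated ranges (and thus `x_hat`) are approximate even on a noise-free toy graph.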
Jay Prakash Maurya1, Manish Manoria2 and Sunil Joshi1, 1Samrat Ashok Technological Institute, Vidisha, India, 2Truba Institute of Engineering and Information Technology, Bhopal, India
Heart diseases are a common and major cause of death nowadays. The electrocardiogram (ECG) best expresses the cardiac activity of a human being in the medical field. ECG signals can capture the heart's rhythmic irregularities, commonly known as arrhythmias. ECG signal data is very large and requires good experts and a large number of medical resources. Machine-learning-based systems have become an extensive field of research for finding the characteristics of ECG signals. Typical procedures, on the other hand, necessitate more effort in terms of feature extraction and developing a complex and optimal system. Deep learning techniques can be used for precise diagnoses of patients' acute and chronic heart conditions. Deep CNNs have proven useful in enhancing the accuracy of diagnosis algorithms in the fusion of medicine and modern machine learning technologies. The deep learning model can be evaluated on the available ECG signal dataset MIT-BIH. The one-dimensional ECG time series signals are transformed into 2-D spectrograms through the short-time Fourier transform. Accurate deep learning CNN configuration and practices in ECG signal classification will have a positive impact by saving medical resources and aiding clinical studies.
ECG, CNN, RNN, LSTM, GRU, DBN.
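The 1-D-to-2-D transformation mentioned above, a short-time Fourier transform into a spectrogram, can be sketched in NumPy. The window and hop sizes are illustrative, and a pure sinusoid stands in for an ECG trace:

```python
import numpy as np

def stft_spectrogram(x, win=64, hop=32):
    """Turn a 1-D signal into a 2-D magnitude spectrogram (freq x time)."""
    window = np.hanning(win)
    frames = [x[i:i + win] * window for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T

fs = 256.0
t = np.arange(1024) / fs
sig = np.sin(2 * np.pi * 8 * t)   # stand-in for one heartbeat-band component
spec = stft_spectrogram(sig)       # 33 frequency bins x 31 time frames
```

The resulting 2-D array is what a 2-D CNN consumes in place of the raw waveform; with a 64-sample window at 256 Hz, the 8 Hz component lands in frequency bin 2 of every frame.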
Aparna Padmakumar Ranjana1 and Harikrishnan M2, 1Student, Department of Computer Science and Engineering, Rajagiri School of Engineering and Technology, India, 2Assistant Professor, Department of Computer Science and Engineering, Rajagiri School of Engineering and Technology, India
Technology in music players is growing quickly, particularly on mobile phones. In spite of the fact that music retrieval methods have improved over the last ten years, the advancement of music recommendation systems is still at its beginning stage. This paper presents a comparative study of music recommendation systems based on various facial emotion techniques. One of the most significant components of human anatomy is the face. It is quite useful in determining a person's emotional state or mood. Certain traits visible on the face can be used to anticipate a person's mood to a certain degree of accuracy. With today's technology, facial expressions are captured using an inbuilt camera. Feature extraction is performed on face images to detect emotions such as happy, angry, etc. The user's current emotions are used to build an automatically generated music playlist. The FER2013 dataset is used to train and test the system.
Music recommendation systems, facial emotion, feature extraction, music playlist, FER2013.
Fanjun Meng, Bingguang Deng, Qihang Qin and Weihai Zhou, School of Communication and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing, China
With the continuous increase of bandwidth in the 5G system, the data carried by the signal also continuously increases. Applying the timing synchronization algorithm of LTE in the 5G system would greatly increase the computational complexity. Therefore, to address this, we propose a joint superposition and frequency-domain fast correlation detection algorithm. Firstly, the proposed algorithm adds the three local PSS (Primary Synchronization Signal) time-domain signals, using the characteristics of the M-sequence of the PSS signal, and performs frequency-domain conversion. Then, the coarse synchronization point is detected by executing the frequency-domain fast correlation operation. As a result, the fine synchronization point and the ID number within the cell group are obtained through calculation. The data analysis shows that the proposed algorithm significantly improves the detection probability in low signal-to-noise ratio (SNR) environments and reduces the computational complexity compared to the classic algorithm.
5G, Timing Synchronization, Frequency Domain Correlation, Superposition.
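The superposition-and-correlation idea can be illustrated with a toy example: the three local PSS replicas are summed once, and the coarse timing point is found as the peak of a circular cross-correlation computed in the frequency domain. The naive stdlib DFT below is for clarity only (a real receiver would use an FFT), and the ±1 sequences are placeholders, not 3GPP PSS sequences:

```python
import cmath

def dft(x):
    """Naive O(N^2) discrete Fourier transform (illustrative only)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Naive inverse DFT."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def coarse_sync_point(received, local_pss_set):
    """Detect the coarse timing offset of `received` against the
    superposition of the local PSS replicas (one correlation pass
    instead of three separate ones)."""
    N = len(received)
    superposed = [sum(s[n] for s in local_pss_set) for n in range(N)]
    # Circular cross-correlation via the frequency domain:
    # corr = IDFT( DFT(received) * conj(DFT(superposed)) )
    R = dft(received)
    S = dft(superposed)
    corr = idft([R[k] * S[k].conjugate() for k in range(N)])
    # The peak magnitude marks the coarse synchronization point.
    mags = [abs(c) for c in corr]
    return mags.index(max(mags))
```

With an FFT, the cost of the correlation drops from O(N^2) per candidate offset to O(N log N) overall, which is the complexity saving the abstract targets.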
Olayemi O. Olufunke1, Olasehinde O. Olayemi2, Alowolodu D. Olufunso3, Osho P. Olarewaju4, 1Department of Computer Science, Joseph Ayo Babalola University, Ikeji-Arakeji, Osun State, Nigeria, 2Department of Computer Science, Federal Polytechnic, Ile Oluji, Ondo State, Nigeria, 3Department of Cyber Security, Federal University of Technology, Akure, Ondo State, Nigeria, 4Department of Haematology & Immunology, University of Medical Sciences, Ondo, Ondo State, Nigeria
Knee Osteoarthritis is a degenerative disease that affects the human knee joints, leading to impaired quality of life, and it has no curative treatment. Early detection of Knee Osteoarthritis ensures its proper management, prevents cartilage damage, and slows its progression. To optimize early detection, two ensemble methods are proposed to improve the clinical diagnosis of Knee Osteoarthritis risk. The dataset used consists of patients' clinical information obtained from the Federal Medical Centre, Ido-Ekiti, Nigeria. The base diagnostic models recorded higher diagnostic accuracy than recent similar research reviewed. The improved diagnostic accuracy recorded by the ensemble methods affirms the ability of ensemble learning to improve on the diagnoses of two or more models. The diagnostic accuracy of 97.77% recorded by the Multi-Response Linear Regression ensemble model is slightly higher than the accuracy of 96.54% recorded by Majority Voting. Statistical tests validated the ranking of the machine learning and ensemble learning models.
Knee Osteoarthritis, Diagnosis, Ensemble learning, Machine learning, Computational Intelligence.
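The majority-voting combiner compared above can be sketched as follows; the class labels and model outputs are hypothetical, and the Multi-Response Linear Regression ensemble would instead learn regression weights over the base models' outputs rather than counting votes:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-model class predictions for one patient by majority vote.

    predictions: list of class labels, one per base diagnostic model
    (e.g. "at_risk" / "not_at_risk"; labels here are illustrative).
    Ties are broken by the order in which labels first appear.
    """
    return Counter(predictions).most_common(1)[0][0]

# Three hypothetical base diagnostic models disagree; the ensemble
# sides with the majority.
votes = ["at_risk", "not_at_risk", "at_risk"]
decision = majority_vote(votes)
```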
Leo Liao1, Ang Li2, 1Crean Lutheran High School, 12500 Sand Canyon Avenue, Irvine, CA 92618, 2California State University, Long Beach
Operators and sales employees in the logistics industry often have to submit the same inquiry repeatedly to different vendors and opt for the quotation that will generate the greatest profit for the company. This process can be very laborious and tedious. Meanwhile, for smaller companies that do not have a well-constructed database of quotation information, monitoring employees' work is difficult to achieve. To increase the efficiency of the sales workflow in this particular industry, this application provides a platform that automates the inquiry process, analyzes quotations from different vendors, retrieves the most profitable one, and documents all inquiries an employee has submitted. The results, after a series of intensive tests, prove to be promising and satisfying. The machine learning model can successfully fetch the most cost-effective price after analyzing a list of emails containing language commonly used in the industry. All of an employee's inquiry history can be correctly displayed on any front-end device. Overall, the obstacle described above is largely solved.
Automation, Quotation, Analysis.
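The quotation-selection step can be sketched in simplified form: extract a quoted price from each vendor email and pick the lowest quote. The regex and email texts below are assumptions for illustration; the described system uses a machine learning model rather than a fixed pattern:

```python
import re

# Matches dollar amounts such as "$2,450.00" or "$2,300" (an assumption
# about the email format, not the application's actual parser).
PRICE_RE = re.compile(r"\$\s*([\d,]+(?:\.\d+)?)")

def parse_quote(email_body):
    """Return the first dollar amount found in an email body, or None."""
    match = PRICE_RE.search(email_body)
    if not match:
        return None
    return float(match.group(1).replace(",", ""))

def best_quotation(emails):
    """Pick the (vendor, price) pair with the lowest quoted price,
    skipping emails that contain no parseable quote."""
    quotes = {vendor: parse_quote(body) for vendor, body in emails.items()}
    quotes = {v: p for v, p in quotes.items() if p is not None}
    vendor = min(quotes, key=quotes.get)
    return vendor, quotes[vendor]

emails = {
    "vendor_a": "We can ship the container for $2,450.00 all in.",
    "vendor_b": "Our best rate is $2,300 per container.",
    "vendor_c": "Happy to discuss rates on a call.",
}
winner = best_quotation(emails)  # ("vendor_b", 2300.0)
```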
David A. Noever, PeopleTec, 4901-D Corporate Drive, Huntsville, AL, USA, 35805
Change detection methods applied to monitoring key infrastructure like airport runways represent an important capability for disaster relief and urban planning. The present work identifies two generative adversarial networks (GAN) architectures that translate reversibly between plausible runway maps and satellite imagery. We illustrate the training capability using paired images (satellite-map) from the same point of view and using the Pix2Pix architecture or conditional GANs. In the absence of available pairs, we likewise show that CycleGAN architectures with four network heads (discriminator-generator pairs) can also provide effective style transfer from raw image pixels to outline or feature maps. To emphasize the runway and tarmac boundaries, we experimentally show that the traditional grey-tan map palette is not a required training input but can be augmented by higher contrast mapping palettes (red-black) for sharper runway boundaries. We preview a potentially novel use case (called “sketch2satellite”) where a human roughly draws the current runway boundaries and automates the machine output of plausible satellite images. Finally, we identify examples of faulty runway maps where the published satellite and mapped runways disagree but an automated update renders the correct map using GANs.
Generative Adversarial Networks, Satellite-to-Map, Pix2Pix, CycleGAN Architecture.
Arunima Sharma, India
According to language characteristics, every author has a unique style of writing that does not change significantly from one piece of writing to another. These characteristics are represented by attributes, viz. simple surface features, readability measures, obscurity-of-vocabulary features, part-of-speech (POS) and syntax features, rank features, and emotional tone features. In total, 229 such attributes have been created, categorized into 6 classes. In this paper we present a mathematical estimation of the different values used for author identification.
Attributes, Author Identification, Features, Emotions, Language Processing, Rank.
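A few of the simple surface features mentioned above can be computed directly from raw text; the sketch below covers only a small illustrative subset of the 229 attributes, with tokenization rules chosen here as assumptions:

```python
import re

def surface_features(text):
    """Compute a few simple surface features of the kind used for
    author identification: average word length, average sentence
    length (in words), and type-token ratio."""
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    n_words = len(words)
    n_sentences = len(sentences)
    return {
        "avg_word_length": sum(len(w) for w in words) / n_words,
        "avg_sentence_length": n_words / n_sentences,
        "type_token_ratio": len({w.lower() for w in words}) / n_words,
    }

feats = surface_features("The cat sat. The cat ran fast!")
```

Such low-level counts stay relatively stable across an author's texts, which is why they are useful attribution attributes.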
Poria Pirozmand1, Parisa Pirouzmand2*, 1School of Computer and Software, Dalian Neusoft University of Information, China, 2*Computer Science Department, Dalian University of Technology, China
The continuous development of imaging technology and the increase in the number of images and their uses have led to the need for new and different solutions. Among the various image processing tasks, medical image segmentation and classification hold a particular position. Considering that the brain is one of the principal organs of the human body, reviewing and processing brain images is very important. In this article, a solution for the segmentation and classification of MRI images is presented so that, through proper segmentation, the brain tumor type can be detected and classified. Accordingly, image quality is first improved using Gaussian and morphological techniques. Then, in the next step, the image is segmented using an optimized watershed method, and the hidden features are extracted using the GLCM algorithm. Based on the learning performed on these features, the tumor type is detected. In the final step, the SVM algorithm classifies the tumor as benign or malignant. Evaluation of the solution with respect to accuracy, precision, and recall indicates that the proposed method segments and classifies images well. As a result, it is more accurate and precise than other, more common algorithms.
Segmentation, Brain Tumor, Watershed Algorithm, SVM algorithm.
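The texture-feature step of the pipeline above can be sketched in isolation: a gray-level co-occurrence matrix (GLCM) for one pixel offset, plus the classic contrast feature derived from it. This is a minimal illustration, assuming a small quantized image; the paper's full pipeline also includes the watershed segmentation and SVM classification stages:

```python
def glcm(image, dx=1, dy=0, levels=4):
    """Compute a normalized gray-level co-occurrence matrix for one
    pixel offset (dx, dy). `image` is a 2-D list of integer gray
    levels in [0, levels)."""
    rows, cols = len(image), len(image[0])
    counts = [[0] * levels for _ in range(levels)]
    total = 0
    for y in range(rows):
        for x in range(cols):
            ny, nx = y + dy, x + dx
            if 0 <= ny < rows and 0 <= nx < cols:
                counts[image[y][x]][image[ny][nx]] += 1
                total += 1
    # Normalize counts to co-occurrence probabilities.
    return [[c / total for c in row] for row in counts]

def glcm_contrast(p):
    """Contrast feature: sum over i, j of (i - j)^2 * p[i][j]."""
    return sum((i - j) ** 2 * p[i][j]
               for i in range(len(p)) for j in range(len(p)))

img = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 2, 2, 2],
    [2, 2, 3, 3],
]
contrast = glcm_contrast(glcm(img))
```

Features such as contrast, energy, and homogeneity computed from several offsets would then form the feature vector fed to the SVM.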
Yang Xiaojie and Qiao Yulong, Department of Information and Communication Engineering, Harbin Engineering University, Harbin City, China
Infrared imaging technology has many advantages, such as strong anti-interference ability and all-weather observation, so it has a wide range of applications in military, civilian, and other fields. With the development of artificial intelligence technology, the performance of infrared image target detection has been greatly improved. However, the targets in actual training and applications often come from different scenes and different observation distances, and when the observation distance is large, infrared target detection deteriorates severely. To address these problems and improve the accuracy of long-distance small-target detection, we propose an improved YOLOv4 algorithm based on deep learning for infrared target detection at different observation distances. It improves the residual unit in the feature extraction network, replaces the activation function in the network, and adds a new multi-scale detection branch. The experimental results show that, without pre-processing the infrared images, the improved YOLOv4 algorithm improves the detection accuracy by 2.59% and achieves real-time detection.
Deep Learning, Infrared Image, Target Detection.
Copyright © NATP 2022