Convex Neural Networks Based Reinforcement Learning for Load Frequency Control under Denial of Service Attacks
Abstract
With the increasing complexity and informatization of power grids, new challenges, such as the integration of large numbers of distributed energy sources and cyber attacks on power grid control systems, are posed to load-frequency control. As load-frequency control methods, both aggregated distributed energy sources (ADES) and artificial intelligence techniques provide flexible solution strategies to mitigate the frequency deviation of power grids. This paper proposes an ADES-based reinforcement-learning load-frequency control strategy designed to reduce the impact of denial-of-service (DoS) attacks. Reinforcement learning is used to evaluate the pros and cons of the proposed frequency control strategy, and the entire evaluation process is realized through the approximation of convex neural networks. Convex neural networks convert the nonlinear long-term-performance optimization problems of reinforcement learning into corresponding convex optimization problems. Thus, local optima are avoided, the optimization of the strategy utility function is accelerated, and the response capability of the controllers is improved. The stability of power grids and the convergence of the convex neural networks under the proposed frequency control strategy are studied by constructing Lyapunov functions to obtain sufficient conditions for the steady states of ADES and the weight convergence of the actor–critic networks. The IEEE 14-, 57-, and 118-bus test systems are used to verify the proposed strategy. Our experimental results confirm that the proposed frequency control strategy can effectively reduce the frequency deviation of power grids under DoS attacks.
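To make the convexity idea concrete, the sketch below builds a tiny input-convex network in the spirit of convex neural networks: the weights applied to previous-layer outputs are kept non-negative and the activation is convex and non-decreasing, so the output is convex in the input. This is only an illustration with assumed layer sizes and a softplus activation; it is not the architecture or the actor–critic integration used in the paper.

```python
import numpy as np

def softplus(z):
    # Convex, non-decreasing activation; keeps the composition convex.
    return np.log1p(np.exp(-np.abs(z))) + np.maximum(z, 0.0)

def icnn_forward(x, params):
    """Forward pass of a small input-convex network.

    params is a list of (Wz, Wx, b) triples; Wz is forced element-wise
    non-negative so each layer is a convex, non-decreasing function of
    the previous (convex) layer output. Wz of the first layer is unused.
    """
    z = None
    for Wz, Wx, b in params:
        pre = Wx @ x + b
        if z is not None:
            pre = pre + np.abs(Wz) @ z   # enforce non-negativity of Wz
        z = softplus(pre)
    return z

# Tiny example: 3-dimensional state, two hidden layers, scalar output.
rng = np.random.default_rng(0)
dims = [3, 8, 8, 1]
params = [(rng.normal(size=(dims[i + 1], dims[i])),   # Wz (applied to z)
           rng.normal(size=(dims[i + 1], dims[0])),   # Wx (skip from input x)
           rng.normal(size=dims[i + 1]))
          for i in range(len(dims) - 1)]
print(icnn_forward(np.array([0.1, -0.2, 0.3]), params))
```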
Fancheng Zeng, Zhiqin Zhu, Guanqiu Qi, Gang Hu, Jian Sun, Matthew Haner
Unranking Small Combinations of a Large Set in Co-Lexicographic Order
Abstract
The presented research is devoted to the problem of developing new combinatorial generation algorithms for combinations. In this paper, we develop a modification of Ruskey’s algorithm for unranking m-combinations of an n-set in co-lexicographic order. The proposed modification is based on the use of approximations to make a preliminary search for the values of the internal parameter k of this algorithm. In contrast to the original algorithm, the obtained algorithm can be effectively applied when n is large and m is small, because the running time of this algorithm depends only on m. Furthermore, this algorithm can be effectively used when n and m are both large but n − m is small, since we can consider unranking (n − m)-combinations of an n-set. The conducted computational experiments confirm the effectiveness of the developed modification.
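For reference, the textbook co-lexicographic unranking routine based on the combinatorial number system can be written as follows. This is only the baseline that the paper modifies; the authors' contribution, an approximation-based preliminary search for the internal parameter, is not reproduced here.

```python
from math import comb

def unrank_colex(r, m):
    """Return the m-combination (as increasing elements) with rank r
    in co-lexicographic order, using the combinatorial number system:
    rank({c1 < ... < cm}) = sum(comb(ci, i) for i = 1..m)."""
    combo = []
    for i in range(m, 0, -1):
        # Find the largest c with comb(c, i) <= r (linear search here;
        # the paper's contribution is a faster search for this value).
        c = i - 1
        while comb(c + 1, i) <= r:
            c += 1
        r -= comb(c, i)
        combo.append(c)
    return combo[::-1]

# All 3-combinations of {0,...,4} in co-lexicographic order.
print([unrank_colex(r, 3) for r in range(comb(5, 3))])
```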
Vladimir Kruchinin, Yuriy Shablya, Dmitry Kruchinin, Victor Rulevskiy
Optimal Integration of Dispersed Generation in Medium-Voltage Distribution Networks for Voltage Stability Enhancement
Abstract
This study addresses the problem of the maximization of the voltage stability index (λ-coefficient) in medium-voltage distribution networks, considering the optimal placement and sizing of dispersed generators. The problem is formulated through a mixed-integer nonlinear programming (MINLP) model, which is solved using General Algebraic Modeling System (GAMS) software. A numerical example with a 7-bus radial distribution network is employed to introduce the usage of GAMS software to solve the proposed MINLP model. A new validation methodology to verify the numerical results provided for the λ-coefficient is proposed, using recursive power flow evaluations in MATLAB and DIgSILENT software. The recursive evaluations allow the determination of the λ-coefficient through the implementation of the successive approximation power flow method and the Newton–Raphson approach, respectively. This is achieved by fixing the sizes and locations of the dispersed sources at the optimal solution obtained with GAMS software. Numerical simulations in the IEEE 33- and 69-bus systems with different generation penetration levels and the possibility of installing one to three dispersed generators demonstrate that the GAMS and the recursive approaches determine the same loadability index. Moreover, the numerical results indicate that, depending on the number of dispersed generators allocated, it is possible to improve the λ-coefficient by between 20.96% and 37.43% for the IEEE 33-bus system, and between 18.41% and 41.98% for the IEEE 69-bus system.
Brayan Enrique Aguirre-Angulo, Lady Carolina Giraldo-Bello, Oscar Danilo Montoya, Francisco David Moya
An ADMM Based Parallel Approach for Fund of Fund Construction
Abstract
In this paper, we propose a parallel algorithm for a fund of fund (FOF) optimization model. Based on the structure of the objective function, we create an augmented Lagrangian function and separate the quadratic term from the nonlinear term by the alternating direction method of multipliers (ADMM), which creates two new subproblems that are much easier to compute. To accelerate the convergence of the proposed algorithm, we use an adaptive step size method that adjusts the step parameter according to the residual of the dual problem at every iteration. We show how the proposed algorithm can be parallelized and implement it on CUDA with block storage for the structured matrix, which is shown to be up to two orders of magnitude faster than the CPU implementation on large-scale problems.
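The splitting and the adaptive step size mentioned in the abstract can be illustrated on a generic quadratic-plus-l1 problem. The sketch below uses the standard residual-balancing rule for the penalty parameter; the FOF model, the block-stored structured matrix, and the CUDA implementation from the paper are not reproduced.

```python
import numpy as np

def soft_threshold(v, kappa):
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
    """ADMM for 0.5*||Ax-b||^2 + lam*||z||_1 s.t. x = z,
    with residual-balancing adaptation of the penalty rho."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(AtA + rho * np.eye(n), Atb + rho * (z - u))
        z_old = z
        z = soft_threshold(x + u, lam / rho)
        u = u + x - z
        r = np.linalg.norm(x - z)              # primal residual
        s = np.linalg.norm(rho * (z - z_old))  # dual residual
        if r > 10 * s:      # primal residual dominates: increase rho
            rho *= 2.0
            u /= 2.0        # rescale the scaled dual variable
        elif s > 10 * r:    # dual residual dominates: decrease rho
            rho /= 2.0
            u *= 2.0
    return z

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 20))
x_true = np.zeros(20)
x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true + 0.01 * rng.normal(size=50)
print(np.round(admm_lasso(A, b), 2))
```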
Yidong Chen, Chen Li, Zhonghua Lu
An Algorithm for Estimation of SF6 Leakage on Power Substation Assets
Abstract
This paper presents an algorithm that evaluates the current state of an asset fleet containing sulphur hexafluoride (SF6) and estimates its leakage in future electrical power substation projects. The algorithm uses simple models and easy tools to facilitate decision making for transmission and distribution system operator companies. The algorithm is evaluated using data provided by ENEL-CODENSA. The corresponding results are shown, and the estimated values obtained are compared with leakage records from existing assets, which helps to understand the advantages and limitations of the algorithm.
Ferley Castro Aranda, Andrés Felipe Cerón Piamba, Rodolfo García Sierra, Benjamin Mailhé, Luis Miguel León Gil
Swarm Robots Cooperative and Persistent Distribution Modeling and Optimization Based on the Smart Community Logistics Service Framework
Abstract
The high efficiency, flexibility, and low cost of robots provide huge opportunities for the application and development of intelligent logistics. Especially during the COVID-19 pandemic, the non-contact nature of robots effectively helped prevent the spread of the epidemic. Task allocation and path planning according to actual problems are among the most important challenges faced by robots in intelligent logistics. During distribution, the robots are subject to limited battery capacity, limited load capacity, and loads that affect transportation capacity. Therefore, a smart community logistics service framework is proposed, based on a control system, an automatic replenishment platform, a network communication method, and coordinated distribution optimization technology, and a Mixed Integer Linear Programming (MILP) model is developed for the collaborative and persistent delivery of a multiple-depot vehicle routing problem with time windows (MDVRPTW) of swarm robots. In order to solve this problem, a hybrid algorithm of genetically improved set-based particle swarm optimization (S-GAIPSO) is designed and tested with numerical cases. Experimental results show that, compared to CPLEX, S-GAIPSO achieves average gaps of 0.157%, 1.097%, and 2.077% when there are 5, 10, and 20 tasks, respectively. S-GAIPSO can obtain the optimal or near-optimal solution in less than 0.35 s, and the required CPU time increases slowly as the scale increases. Thus, it is suitable for real-time use, handling large-scale problems in a short time.
Meng Zhang, Bin Yang
Rendezvous on the Line with Different Speeds and Markers That Can Be Dropped at Chosen Time
Abstract
In this paper, we introduce a linear program (LP)-based formulation of a rendezvous game with markers on the infinite line and solve it. In this game, one player moves at unit speed while the second player moves at a speed bounded by v_max ≤ 1. We observe that in this setting, a slow-moving player may prefer to remain still instead of moving, which shows that in some conditions the wait-for-mummy strategy is optimal. We observe as well that the strategies are completely different depending on whether the player that holds the marker is the fast or the slow one. Interestingly, the marker is not useful when the player without the marker moves slowly, i.e., when the fast-moving player holds the marker.
Pierre Leone, Nathan Cohen
Applying Simheuristics to Minimize Overall Costs of an MRP Planned Production System
Abstract
A look at current enterprise resource planning systems shows that material requirements planning (MRP) is one of the main production planning approaches implemented there. The MRP planning parameters lot size, safety stock, and planned lead time have to be identified for each MRP-planned material. With increasing production system complexity, more planning parameters have to be defined. Simulation-based optimization is known as a valuable tool for optimizing these MRP planning parameters for the underlying production system. In this article, a fast and easy-to-apply simheuristic is developed with the objective of minimizing overall costs. The simheuristic sets the planning parameters lot size, safety stock, and planned lead time for the simulated stochastic production systems. The developed simheuristic applies aspects of simulated annealing (SA) for efficient metaheuristic-based solution parameter sampling. Additionally, an intelligent simulation budget management (SBM) concept is introduced, which skips replications of unpromising iterations. A comprehensive simulation study for a multi-item, multi-stage production system structure is conducted to evaluate its performance. Different simheuristic combinations and parameters are tested, with the result that the combination of SA and SBM leads to the lowest overall costs. The contributions of this article are an easily implementable simheuristic for MRP parameter optimization and a promising concept for intelligently managing the simulation budget.
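A compressed sketch of the two ingredients named in the abstract, simulated-annealing-style parameter sampling combined with skipping replications of unpromising candidates, is given below. The cost function, replication counts, and skip threshold are placeholders, not the article's production-system model.

```python
import math, random

def noisy_cost(params, rng):
    # Placeholder for one stochastic simulation replication of the production
    # system (overall costs for lot size / safety stock / planned lead time).
    lot, stock, lead = params
    return 100.0 + (lot - 40) ** 2 + (stock - 15) ** 2 + (lead - 3) ** 2 + rng.gauss(0, 20)

def evaluate(params, rng, reps=10, incumbent=float("inf"), pilot=3, slack=1.5):
    """Run a pilot set of replications; skip the rest if the candidate
    already looks much worse than the incumbent (budget management)."""
    costs = [noisy_cost(params, rng) for _ in range(pilot)]
    if sum(costs) / pilot > slack * incumbent:
        return None  # not promising: save the remaining simulation budget
    costs += [noisy_cost(params, rng) for _ in range(reps - pilot)]
    return sum(costs) / reps

def simheuristic(iters=500, temp=100.0, cooling=0.99, seed=1):
    rng = random.Random(seed)
    current = (rng.randint(1, 100), rng.randint(0, 50), rng.randint(1, 10))
    best = current
    current_cost = best_cost = evaluate(current, rng)
    for _ in range(iters):
        cand = tuple(max(0, v + rng.choice([-2, -1, 1, 2])) for v in current)
        cost = evaluate(cand, rng, incumbent=best_cost)
        if cost is not None:
            # Simulated-annealing acceptance of the fully evaluated candidate.
            if cost < current_cost or rng.random() < math.exp((current_cost - cost) / temp):
                current, current_cost = cand, cost
                if cost < best_cost:
                    best, best_cost = cand, cost
        temp *= cooling
    return best, best_cost

print(simheuristic())
```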
Wolfgang Seiringer, Klaus Altendorfer, Juliana Castaneda, Javier Panadero, Angel A. Juan
Recent Advances in Positive-Instance Driven Graph Searching
Abstract
Research on the similarity of a graph to being a tree—called the treewidth of the graph—has seen an enormous rise within the last decade, but a practically fast algorithm for this task has been discovered only recently by Tamaki (ESA 2017). It is based on dynamic programming and makes use of the fact that the number of positive subinstances is typically substantially smaller than the number of all subinstances. Algorithms producing only such subinstances are called positive-instance driven (PID). The parameter treedepth has a similar story. It was popularized through the graph sparsity project and is theoretically well understood—but the first practical algorithm was discovered only recently by Trimble (IPEC 2020) and is based on the same paradigm. We give an alternative and unifying view on such algorithms from the perspective of the corresponding configuration graphs in certain two-player games. This results in a single algorithm that can compute a wide range of important graph parameters such as treewidth, pathwidth, and treedepth. We complement this algorithm with a novel randomized data structure that accelerates the enumeration of subproblems in positive-instance driven algorithms.
Max Bannach, Sebastian Berndt
Clustering with Nature-Inspired Algorithm Based on Territorial Behavior of Predatory Animals
Abstract
Clustering constitutes a well-known problem of dividing an unlabelled dataset into disjoint groups of data elements. It can be tackled with standard statistical methods but also with metaheuristics, which offer more flexibility and decent performance. The paper studies the application of a clustering algorithm—inspired by the territorial behaviors of predatory animals—named the Predatory Animals Algorithm (or, in short, PAA). Besides the description of the PAA, the results of its experimental evaluation, in comparison with the classic k-means algorithm, are provided. It is concluded that the application of the newly created nature-inspired technique brings very promising outcomes. The discussion of the obtained results is followed by areas of possible improvement and plans for further research.
Maciej Trzciński, Piotr A. Kowalski, Szymon Łukasik
Calibration of an Adaptive Genetic Algorithm for Modeling Opinion Diffusion
Abstract
Genetic algorithms mimic the process of natural selection in order to solve optimization problems with minimal assumptions and perform well when the objective function has local optima on the search space. These algorithms treat potential solutions to the optimization problem as chromosomes, consisting of genes which undergo biologically-inspired operators to identify a better solution. Hyperparameters or control parameters determine the way these operators are implemented. We created a genetic algorithm in order to fit a DeGroot opinion diffusion model using limited data, making use of selection, blending, crossover, mutation, and survival operators. We adapted the algorithm from a genetic algorithm for design of mixture experiments, but the new algorithm required substantial changes due to model assumptions and the large parameter space relative to the design space. In addition to introducing new hyperparameters, these changes mean the hyperparameter values suggested for the original algorithm cannot be expected to result in optimal performance. To make the algorithm for modeling opinion diffusion more accessible to researchers, we conduct a simulation study investigating hyperparameter values. We find the algorithm is robust to the values selected for most hyperparameters and provide suggestions for initial, if not default, values and recommendations for adjustments based on algorithm output.
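For readers unfamiliar with the model being fitted, DeGroot opinion diffusion repeatedly averages opinions through a row-stochastic trust matrix; the genetic algorithm in the article searches for the weights that best reproduce observed opinions. The matrix below is an arbitrary illustration, not data from the study.

```python
import numpy as np

def degroot(W, x0, steps):
    """Iterate DeGroot updates x_{t+1} = W x_t for a row-stochastic W."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = W @ x
    return x

# Three agents; each row gives the weights an agent places on the others
# (rows sum to 1). Opinions converge toward a weighted consensus.
W = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])
print(degroot(W, [1.0, 0.0, 0.5], steps=50))
```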
Kara Layne Johnson, Nicole Bohme Carnegie
Test and Validation of the Surrogate-Based, Multi-Objective GOMORS Algorithm against the NSGA-II Algorithm in Structural Shape Optimization
Abstract
Nowadays, product development times are constantly decreasing, while the requirements for the products themselves have increased significantly in the last decade. Hence, manufacturers use Computer-Aided Design (CAD) and Finite-Element (FE) methods to develop better products in shorter times. Shape optimization offers great potential to improve many high-fidelity numerical problems, such as the crash performance of cars. Still, the proper selection of the optimization algorithm offers great potential to reduce the optimization time. This article reviews the optimization performance of two different algorithms and frameworks for the structural behavior of a b-pillar, the structural component between a car’s front and rear door, loaded under static and crash requirements. Furthermore, the validation of the algorithm includes a feasibility constraint. Recently, an optimization routine was implemented and validated for a Non-dominated Sorting Genetic Algorithm (NSGA-II) implementation. Different multi-objective optimization algorithms are reviewed and methodically ranked in a comparative study by given criteria. In this case, the Gap Optimized Multi-Objective Optimization using Response Surfaces (GOMORS) framework is chosen and implemented into the existing Institut für Konstruktionstechnik Optimizes Shapes (IKOS) framework. Specifically, the article compares NSGA-II and GOMORS directly for linear, non-linear, and feasibility optimization scenarios. The results show that GOMORS vastly outperforms NSGA-II regarding the number of function calls and Pareto-efficient results without the feasibility constraint. The problem is reformulated as an unconstrained, three-objective optimization problem to analyze the influence of the constraint. The constrained and unconstrained approaches show equal performance for the given scenarios. Accordingly, the authors provide a clear recommendation towards the surrogate-based GOMORS for costly, multi-objective evaluations. Furthermore, the algorithm can handle the feasibility constraint properly when formulated as an objective function and as a constraint.
Yannis Werner, Thomas Vietor, Tim van Hout, Vijey Subramani Raja Gopalan
A Novel MCDA-Based Methodology Dealing with Dynamics and Ambiguities Resulting from Citizen Participation in the Context of the Energy Transition
Abstract
In the context of the energy transition, sound decision making regarding the development of renewable energy systems faces various technical and societal challenges. In addition to climate-related uncertainties affecting technical issues of reliable grid planning, there are also subtle aspects and uncertainties related to the integration of energy technologies into built environments. Citizens’ opinions on grid development may be ambiguous or divergent in terms of broad acceptance of the energy transition in general, and they may have negative attitudes towards concrete planning in their local environment. First, this article identifies the issue of discrepancies between the preferences of a fixed stakeholder group with respect to the question of the integration of renewable energy technology, posed from different perspectives and at different points in time, and considers it a fundamental problem in the context of robust decision making in sustainable energy system planning. Second, to deal with that issue, a novel dynamic decision support methodology is presented that includes multiple surveys, statistical analysis of the discrepancies that may arise, and multicriteria decision analysis that specifically incorporates the opinions of citizens. Citizens are considered as stakeholders and participants in smart decision-making processes. A case study applying agent-based simulations underlines the relevance of the proposed methodology for decision making in the context of renewable energies.
Sadeeb Simon Ottenburger, Stella Möhrle, Tim Oliver Müller, Wolfgang Raskob
Two Taylor Algorithms for Computing the Action of the Matrix Exponential on a Vector
Abstract
The action of the matrix exponential on a vector, e^(At)v, with A ∈ C^(n×n) and v ∈ C^n, appears in problems that arise in mathematics, physics, and engineering, such as the solution of systems of linear ordinary differential equations with constant coefficients. Nowadays, several state-of-the-art approximations are available for estimating this type of action. In this work, two Taylor algorithms are proposed for computing e^(A)v, which make use of the scaling and recovering technique based on a backward or forward error analysis. A battery of highly heterogeneous test matrices has been used in the different experiments performed to compare the numerical and computational properties of these algorithms, implemented in the MATLAB language. In general, both of them improve on those already existing in the literature in terms of accuracy and response time. Moreover, a high-performance computing version that is able to take advantage of the computational power of a GPU platform has been developed, making it possible to tackle high-dimension problems with significantly reduced execution time.
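The underlying idea, though not the paper's error-analysis-driven choice of scaling and polynomial degree, is the standard scale-and-apply Taylor evaluation: split e^(A)v = (e^(A/2^s))^(2^s) v and apply a truncated Taylor polynomial directly to the vector at each stage. A rough sketch with fixed parameters follows.

```python
import numpy as np
from scipy.linalg import expm  # only used to check the result

def expm_action(A, v, m=20):
    """Approximate exp(A) @ v via scaling and a degree-m Taylor polynomial
    applied directly to the vector (fixed m and s; the paper instead chooses
    them from a backward/forward error analysis)."""
    norm = np.linalg.norm(A, 1)
    s = max(0, int(np.ceil(np.log2(norm)))) if norm > 1 else 0
    B = A / (2 ** s)
    for _ in range(2 ** s):          # apply e^B a total of 2^s times
        term = v.copy()
        acc = v.copy()
        for k in range(1, m + 1):
            term = B @ term / k      # next Taylor term B^k v / k!
            acc = acc + term
        v = acc
    return v

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6))
v = rng.normal(size=6)
print(np.allclose(expm_action(A, v), expm(A) @ v))
```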
Emilio Defez, Javier Ibáñez, José M. Alonso, Pedro Alonso Jordá, Jorge Sastre
Using Explainable Machine Learning to Explore the Impact of Synoptic Reporting on Prostate Cancer
Abstract
Machine learning (ML) models have proven to be an attractive alternative to traditional statistical methods in oncology. However, they are often regarded as black boxes, hindering their adoption for answering real-life clinical questions. In this paper, we show a practical application of explainable machine learning (XML). Specifically, we explored the effect that synoptic reporting (SR; i.e., reports where data elements are presented as discrete data items) in Pathology has on the survival of a population of 14,878 Dutch prostate cancer patients. We compared the performance of a Cox Proportional Hazards (CPH) model against that of an eXtreme Gradient Boosting (XGB) model in predicting ranked patient survival. We found that the XGB model (c-index = 0.67) performed significantly better than the CPH model (c-index = 0.58). Moreover, we used Shapley Additive Explanations (SHAP) values to generate a quantitative mathematical representation of how features—including usage of SR—contributed to the models’ output. The XGB model in combination with SHAP visualizations revealed interesting interaction effects between SR and the rest of the most important features. These results hint that SR has a moderate positive impact on predicted patient survival. Moreover, adding an explainability layer to predictive ML models can open their black box, making them more accessible and easier to understand by the user. This can make XML-based techniques appealing alternatives to the classical methods used in oncological research and in health care in general.
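The general workflow, a gradient-boosted model combined with SHAP attributions, looks roughly like the following. The example uses synthetic data and a binary label purely for illustration; the study itself models ranked survival of real patients and evaluates with the c-index.

```python
import shap
import xgboost as xgb
from sklearn.datasets import make_classification

# Synthetic stand-in for the clinical features (one column could play the
# role of the synoptic-reporting indicator).
X, y = make_classification(n_samples=500, n_features=8, random_state=0)

model = xgb.XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
model.fit(X, y)

# SHAP values quantify each feature's contribution to each prediction;
# interaction effects between features can then be visualized.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
print(shap_values.shape)  # (n_samples, n_features) for a binary XGBoost model
```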
Femke M. Janssen, Berdine L. Heesterman, Arturo Moncada Torres, Katja K. H. Aben, Quirinus J. M. Voorham, Paul A. Seegers
Adaptive Authentication Protocol Based on Zero-Knowledge Proof
Abstract
Authentication protocols are expanding their application scope in wireless information systems, including low-orbit satellite communication systems (LOSCS) for the OneWeb space Internet, automatic object identification systems using RFID, the Internet of Things, intelligent transportation systems (ITS), and Vehicular Ad Hoc Networks (VANETs). This is due to the fact that authentication protocols effectively resist a number of attacks on the wireless data transmission channels in these systems. The main disadvantage of most authentication protocols is the use of symmetric and asymmetric encryption systems to ensure high cryptographic strength. As a result, there is the problem of delivering keys to the prover and verifier sides, and compromised keys lead to a decrease in the level of protection of the transmitted data. Zero-knowledge authentication protocols (ZKAP) are able to eliminate this disadvantage. However, most of these protocols use multiple rounds to authenticate the prover. Therefore, a ZKAP with minimal time costs is developed in this article. A scheme for adapting the protocol parameters has been developed to increase its efficiency. Reducing the level of confidentiality reduces the time spent executing the authentication protocol, which increases the volume of information traffic; conversely, increasing the confidentiality of the protocol increases the time needed for authentication of the prover, which reduces the volume of information traffic. The FPGA Artix-7 xc7a12ticsg325-1L was used to estimate the time spent implementing the adaptive ZKAP. Testing was performed for 32- and 64-bit adaptive authentication protocols.
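The article's adaptive protocol is not reproduced here, but the flavor of zero-knowledge authentication can be seen in a single round of the classic Fiat–Shamir identification scheme (toy modulus and parameters; real deployments use large moduli and many, possibly adaptively chosen, rounds).

```python
import secrets

# Public modulus n = p*q (toy size). The prover's secret is s; the public
# key is v = s^2 mod n. Knowledge of s is proven without revealing it.
n = 3233          # 61 * 53, illustration only
s = 123           # prover's secret
v = pow(s, 2, n)  # public key

def fiat_shamir_round():
    r = secrets.randbelow(n - 2) + 1
    x = pow(r, 2, n)              # prover's commitment
    e = secrets.randbelow(2)      # verifier's challenge bit
    y = (r * pow(s, e, n)) % n    # prover's response
    # Verifier's check: y^2 == x * v^e (mod n)
    return pow(y, 2, n) == (x * pow(v, e, n)) % n

# A cheating prover passes one round with probability 1/2, so several
# rounds are run to reach the desired confidence level.
print(all(fiat_shamir_round() for _ in range(20)))
```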
Nikita Konstantinovich Chistousov, Igor Anatolyevich Kalmykov, Daniil Vyacheslavovich Dukhovnyj, Maksim Igorevich Kalmykov, Aleksandr Anatolyevich Olenev
No Cell Left behind: Automated, Stochastic, Physics-Based Tracking of Every Cell in a Dense, Growing Colony
Abstract
Precise tracking of individual cells—especially tracking the family lineage, for example in a developing embryo—has widespread applications in biology and medicine. Due to significant noise in microscope images, existing methods have difficulty precisely tracking cell activities. These difficulties often require human intervention to resolve. Humans are helpful because our brain naturally and automatically builds a simulation “model” of any scene that we observe. Because we understand simple truths about the world—for example, cells can move and divide, but they cannot instantaneously move vast distances—this model “in our heads” helps us to severely constrain the possible interpretations of what we see, allowing us to easily distinguish signal from noise, and track the motion of cells even in the presence of extreme levels of noise that would completely confound existing automated methods. Results: Here, we mimic the ability of the human brain by building an explicit computer simulation model of the scene. Our simulated cells are programmed to allow movement and cell division consistent with reality. At each video frame, we stochastically generate millions of nearby “Universes” and evolve them stochastically to the next frame. We then find and fit the best universes to reality by minimizing the residual between the real image frame and a synthetic image of the simulation. The rule-based simulation puts extremely stringent constraints on possible interpretations of the data, allowing our system to perform far better than existing methods even in the presence of extreme levels of image noise. We demonstrate the viability of this method by accurately tracking every cell in a colony that grows from 4 to over 300 individuals, doing about as well as a human can in the difficult task of tracking cell lineages.
Huy Pham, Emile R. Shehada, Shawna Stahlheber, Kushagra Pandey, Wayne B. Hayes
k-Center Clustering with Outliers in Sliding Windows
Abstract
Metric k-center clustering is a fundamental unsupervised learning primitive. Although widely used, this primitive is heavily affected by noise in the data, so a more sensible variant seeks the best solution that disregards a given number z of points of the dataset, which are called outliers. We provide efficient algorithms for this important variant in the streaming model under the sliding window setting, where, at each time step, the dataset to be clustered is the window W of the most recent data items. For general metric spaces, our algorithms achieve O(1) approximation and, remarkably, require a working memory linear in k + z and only logarithmic in |W|. For spaces of bounded doubling dimension, the approximation can be made arbitrarily close to 3. For these latter spaces, we show, as a by-product, how to estimate the effective diameter of the window W, which is a measure of the spread of the window points, disregarding a given fraction of noisy distances. We also provide experimental evidence of the practical viability of the improved clustering and diameter estimation algorithms.
Paolo Pellizzoni, Andrea Pietracaprina, Geppino Pucci
Converting of Boolean Expression to Linear Equations, Inequalities and QUBO Penalties for Cryptanalysis
Abstract
There exists a wide range of constraint programming (CP) problems defined on Boolean functions depending on binary variables. One of the approaches to solving CP problems is using specific appropriate solvers, e.g., SAT solvers. An alternative is using generic solvers for mixed integer linear programming (MILP) problems, but they require transforming expressions with Boolean functions into linear equations or inequalities. Here, we present two methods for such a transformation, which apply to any Boolean function defined by explicit rules giving the values of the Boolean function for all combinations of its Boolean variables. The first method represents the Boolean function as a linear equation in the original binary variables and, possibly, binary ancillaries, which become additional variables of the MILP problem being composed. The second method represents the Boolean function as a set of linear inequalities in the original binary variables and one additional continuous variable (representing the value of the function). The choice between the first and second methods is a trade-off between the number of binary variables and the number of linear constraints in the emerging MILP problem. The advantage of the proposed approach is that both methods reduce important cryptanalysis problems, such as the preimaging of hash functions or the breaking of symmetric ciphers, to MILP problems, which are solved by generic MILP solvers. Furthermore, the first method enables the reduction of the binary linear equations to quadratic unconstrained binary optimization (QUBO) problems, which can be solved by a quantum annealer, e.g., D-Wave.
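As a minimal illustration of the second method's flavor, the sketch below checks the standard linear-inequality encodings of two simple gates by enumerating all binary assignments. The paper's construction is more general: it handles arbitrary truth tables and produces the constraints systematically.

```python
from itertools import product

def and_inequalities(x1, x2, y):
    # y = x1 AND x2 as three linear inequalities over binary variables.
    return (y <= x1) and (y <= x2) and (y >= x1 + x2 - 1)

def xor_inequalities(x1, x2, y):
    # y = x1 XOR x2 as four linear inequalities over binary variables.
    return (y <= x1 + x2) and (y >= x1 - x2) and (y >= x2 - x1) and (y <= 2 - x1 - x2)

# Verify that, for every binary assignment, the inequalities hold exactly
# when y equals the Boolean function's value.
for x1, x2, y in product((0, 1), repeat=3):
    assert and_inequalities(x1, x2, y) == (y == (x1 & x2))
    assert xor_inequalities(x1, x2, y) == (y == (x1 ^ x2))
print("AND and XOR linearizations verified on all assignments")
```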
Aleksey I. Pakhomchik, Vladimir V. Voloshinov, Valerii M. Vinokur, Gordey B. Lesovik
An Evasion Attack against Stacked Capsule Autoencoder
Abstract
Capsule networks are a type of neural network that uses the spatial relationships between features to classify images. By capturing the poses and relative positions of features, these networks are better able to recognize affine transformations and surpass traditional convolutional neural networks (CNNs) when handling translation, rotation, and scaling. The stacked capsule autoencoder (SCAE) is a state-of-the-art capsule network that encodes an image in capsules, each of which contains the poses of features and their correlations. The encoded contents are then input into a downstream classifier to predict the image category. Existing research has mainly focused on the security of capsule networks with dynamic routing or expectation maximization (EM) routing, while little attention has been given to the security and robustness of SCAEs. In this paper, we propose an evasion attack against SCAEs. A perturbation is generated based on the output of the object capsules in the model and added to an image to reduce the contribution of the object capsules related to the original category of the image, so that the perturbed image is misclassified. We evaluate the attack using image classification experiments on the Modified National Institute of Standards and Technology (MNIST), Fashion-MNIST, and German Traffic Sign Recognition Benchmark (GTSRB) datasets, and the average attack success rate can reach 98.6%. The experimental results indicate that the attack can achieve high success rates and stealthiness. This finding confirms that the SCAE has a security vulnerability that allows for the generation of adversarial samples. Our work seeks to highlight the threat of this attack and to focus attention on SCAE’s security.
Jiazhu Dai, Siwei Xiong
Algorithms for Detecting and Refining the Area of Intangible Continuous Objects for Mobile Wireless Sensor Networks
Abstract
Detecting an intangible continuous object (ICO) is a significant task, especially when the ICO is harmful, such as a toxic gas. Many studies have used stationary sensors to sketch the contour and find the area of the ICO. Applying mobile sensors can further improve the precision of the detected ICO by efficiently adjusting the positions of a subset of the deployed sensors. This paper proposes two methods to determine the area of the ICO, named Delaunay triangulation with moving sensors (MDT) and convex hull with moving sensors (MCH). First, the proposed methods divide the sensors into ICO-covered and ICO-uncovered sensors. Next, the convex hull algorithm and the Delaunay triangulation geometric architecture are applied to determine the rough boundary of the ICO. Then, the area of the ICO is further refined by the proposed sensor-moving algorithm. Simulation results show that the area sizes determined by MDT and MCH are 135% and 102% of the actual ICO area, respectively. These results are better than those of the planarization algorithms Gabriel Graph (GG) and Delaunay triangulation without moving sensors, which amount to 137% and 145% of the actual ICO area. The simulation also evaluates the impact of the sensors’ moving step size to find a compromise between the accuracy of the area and the convergence time of the area refinement.
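The convex-hull step of MCH can be illustrated with off-the-shelf tools: take the positions of the ICO-covered sensors and report the area of their convex hull. The sensor coordinates below are made up, and the sensor-moving refinement from the paper is omitted.

```python
import numpy as np
from scipy.spatial import ConvexHull

# Positions of sensors that report detecting the intangible continuous
# object (e.g., gas concentration above a threshold).
covered = np.array([[1.0, 1.0], [4.0, 1.5], [3.5, 4.0],
                    [1.5, 3.5], [2.5, 2.5], [0.5, 2.0]])

hull = ConvexHull(covered)
# For 2-D inputs, ConvexHull.volume is the enclosed area and
# ConvexHull.area is the perimeter.
print("estimated ICO area:", hull.volume)
print("boundary sensors:", covered[hull.vertices])
```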
Shih-Chang Huang, Cong-Han Huang
Minimizing Travel Time and Latency in Multi-Capacity Ride-Sharing Problems
Abstract
Motivated by applications in ride-sharing and truck delivery, we study the problem of matching a number of requests and assigning them to cars. A number of cars are given, each of which consists of a location and a speed, and a number of requests are given, each of which consists of a pick-up location and a drop-off location. Serving a request means that a car must first visit the pick-up location of the request and then visit the drop-off location. Each car can serve at most c requests. Each assignment can yield multiple different serving routes and corresponding serving times, and our goal is to serve the maximum number of requests with the minimum travel time (called CSsum) and to serve the maximum number of requests with the minimum total latency (called CSlat). In addition, we study the special case where the pick-up and drop-off locations of a request coincide. Both problems, CSsum and CSlat, are APX-hard when c ≥ 2. We propose an algorithm, called the transportation algorithm (TA), which is a (2c − 1)-approximation (resp. c-approximation) algorithm for CSsum (resp. CSlat); these bounds are shown to be tight. We also consider the special case where each car serves exactly two requests, i.e., c = 2. In addition to the TA, we investigate another algorithm, called the match-and-assign algorithm (MA). Moreover, we call the algorithm that outputs the better of the two solutions found by the TA and MA the CA. We show that the CA is a 2-approximation (resp. 5/3-approximation) for CSsum (resp. CSlat), and these ratios are better than the ratios of the individual algorithms, the TA and MA.
Kelin Luo, Frits C. R. Spieksma
Reputation-Driven Dynamic Node Consensus and Reliability Sharding Model in IoT Blockchain
Abstract
The Internet of Things, which links the cyber and physical worlds, brings revolutionary changes to society; however, its security and efficiency problems have not been solved. The combination of consortium blockchain and IoT is considered to be an effective solution. The IoT blockchain network’s demand for transaction processing speed is gradually increasing, so the throughput problem of the blockchain, along with the transaction-processing security problem that comes with it, needs to be solved urgently. To solve the above problems, this paper proposes a reputation-driven dynamic node security sharding consensus model (RDSCM) for the blockchain, which consists of two parts: a reputation-driven PBFT with node elimination (RE-PBFT) and a reputation-driven node cross-reconfiguration sharding scheme (NCRS). RE-PBFT eliminates abnormal nodes in the consensus network and reduces the probability of abnormal nodes becoming master nodes. NCRS improves the blockchain throughput while ensuring sharding reliability. Finally, experiments prove that RE-PBFT can identify abnormal nodes and remove them in a short time, and that NCRS can effectively guarantee the reliability of sharding, with greatly improved transaction processing efficiency after sharding.
Nianqi Jiang, Fenhua Bai, Tao Shen, Lin Huang, Zhengyuan An
A Spacecraft Attitude Determination and Control Algorithm for Solar Arrays Pointing Leveraging Sun Angle and Angular Rates Measurements
Abstract
The capability to orient the solar arrays of a spacecraft toward the Sun is an ultimate asset for any attitude determination and control subsystem (ADCS). This ability should be maintained in any operative circumstance, either nominal or off-nominal, to avoid the loss of the entire space-borne system. The safe mode implementation should guarantee positive power generation from the solar arrays, regardless of the health status of the satellite platform. This paper presents a solar array pointing algorithm, to be executed on-board, with a minimal set of sensors and actuators. In fact, the sensors are limited to the solar arrays, exploiting the current/voltage sensing capacity of the electrical power subsystem to measure the Sun angle with respect to the arrays’ normal, and to the angular rate sensors. The actuators are required to provide a torque along only two axes and, thus, a reduced actuation capacity is still manageable by the proposed algorithm. The paper describes the algorithm, both in its Sun direction determination and in its Sun pointing control capacity. The achieved performance is outlined, considering either an ideal system or a realistic one, the latter being affected by sensor and actuator limitations. Actuation by means of momentum exchange devices or magnetic torquers is discussed, with the purpose of proving the wide applicability range of the presented algorithm, which is capable of guaranteeing solar array orientation with a minimal hardware set.
Andrea Colagrossi, Michèle Lavagna
Using Graph Embedding Techniques in Process-Oriented Case-Based Reasoning
Abstract
Similarity-based retrieval of semantic graphs is a core task of Process-Oriented Case-Based Reasoning (POCBR), with applications in real-world scenarios, e.g., in smart manufacturing. The involved similarity computation is usually complex and time-consuming, as it requires some kind of inexact graph matching. To tackle these problems, we present an approach to modeling similarity measures based on embedding semantic graphs via Graph Neural Networks (GNNs). Therefore, we first examine how arbitrary semantic graphs, including node and edge types and their knowledge-rich semantic annotations, can be encoded in a numeric format that is usable by GNNs. Given this, the architecture of two generic graph embedding models from the literature is adapted to enable their usage as a similarity measure for similarity-based retrieval. Thereby, one of the two models is optimized more towards fast similarity prediction, while the other model is optimized towards knowledge-intensive, more expressive predictions. The evaluation examines the quality and performance of these models in preselecting retrieval candidates and in approximating the ground-truth similarities of a graph-matching-based similarity measure for two semantic graph domains. The results show the great potential of the approach for use in a retrieval scenario, either as a preselection model or as an approximation of a graph similarity measure.
Maximilian Hoffmann, Ralph Bergmann
meta.shrinkage: An R Package for Meta-Analyses for Simultaneously Estimating Individual Means
Abstract
Meta-analysis is an indispensable tool for synthesizing statistical results obtained from individual studies. Recently, non-Bayesian estimators for individual means were proposed by applying three methods: the James–Stein (JS) shrinkage estimator, the isotonic regression estimator, and the pretest (PT) estimator. In order to make these methods available to users, we developed a new R package, meta.shrinkage. Our package can compute seven estimators (named JS, JS+, RML, RJS, RJS+, PT, and GPT). We introduce this R package along with the usage of the R functions and the “average-min-max” steps for the pool-adjacent-violators algorithm. We conduct Monte Carlo simulations to validate the proposed R package and ensure that the package works properly in a variety of scenarios. We also analyze a data example to show the ability of the R package.
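To give a sense of the first estimator family in the package, the positive-part James–Stein rule shrinks the vector of individual study means toward zero. The sketch below is written in Python for illustration (the package itself is in R), assumes a common known variance, and does not reproduce the package's JS/JS+ functions.

```python
import numpy as np

def james_stein_plus(x, sigma2=1.0):
    """Positive-part James-Stein shrinkage of a vector of study means,
    assuming each mean is observed with a common known variance sigma2.
    Requires at least 3 studies for the (k - 2) factor to shrink."""
    x = np.asarray(x, dtype=float)
    k = x.size
    shrink = 1.0 - (k - 2) * sigma2 / np.sum(x ** 2)
    return max(shrink, 0.0) * x   # JS+ truncates negative shrinkage at 0

# Individual study means pulled toward zero; shrinkage is strongest when
# the means are small relative to their sampling variance.
means = np.array([0.8, -0.3, 1.1, 0.2, -0.6])
print(james_stein_plus(means, sigma2=0.25))
```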
Nanami Taketomi, Takeshi Emura, Hirofumi Michimae, Yuan-Tsung Chang
Knowledge Distillation-Based Multilingual Code Retrieval
Abstract
Semantic code retrieval is the task of retrieving relevant code based on natural language queries. Although it is related to other information retrieval tasks, it needs to bridge the gap between the language used in code (which is usually syntax-specific and logic-specific) and natural language, which is more suitable for describing ambiguous concepts and ideas. Existing approaches study code retrieval in a natural language for a specific programming language; however, this is unwieldy and often requires a large corpus for each language when dealing with multilingual scenarios. Using knowledge distillation from six existing monolingual teacher models to train one student model, MPLCS (Multi-Programming Language Code Search), this paper proposes a method to support multi-programming-language code search tasks. MPLCS can incorporate multiple languages into one model with low corpus requirements, study the commonality between different programming languages, and improve the recall accuracy for programming languages with small datasets. For Ruby, as used in this paper, MPLCS improved the MRR score by 20 to 25%. In addition, MPLCS can compensate for the low recall accuracy of monolingual models when they perform retrieval on other programming languages, and in some cases MPLCS’s recall accuracy can even outperform that of monolingual models on their own language.
Wen Li, Junfei Xu, Qi Chen