text_with_holes | text_candidates | Selection 1 | Selection 2 | Selection 3 | Selection 4 | label
---|---|---|---|---|---|---|
<|MaskedSetence|> <|MaskedSetence|> Let $p'$ be the path $p$ with $v_i$ excluded i.e.~$p = p' \cdot v_i$ where $\cdot$ is the path concatenation operator. Therefore, $p'$ is a path from $v_s$ with length $k - 1$. Let $f_p(1) = l_{j'}$ and $f_i(l_{j'}) = l_j$ for some $l_{j'} \in L$. <|MaskedSetence|> By construction of $E'$, $(v^1_{i'j'}, v^0_{ij'}), (v^0_{ij'}, v^1_{ij}) \in E'$. So, $v^1_{ij}$ is reachable from $v^0_{sm}$ in $G'$.
. | **A**: By induction hypothesis, $v^1_{i'j'}$ is reachable from $v^0_{sm}$ in $G'$.
**B**: Let $p = v_s, \ldots, v_{i'}, v_i$ for some $v_{i'} \in V$.
**C**:
\emph{Inductive step: } Let $\len(p) = k$ and let the if part be true for all paths from $v_s$ in $G$ with length less than $k$.
| CBA | CBA | CBA | CBA | Selection 4 |
The main goal of this paper is to provide a solution to the aforementioned longstanding open question. <|MaskedSetence|> Our main method for handling this question is based on the techniques of construction formulas presented in \cite{Bra07,BBFK14}, together with our own new observations and ideas. Namely, we try to construct PCTL formulas which encode the modified {\em Post Correspondence Problem} from our ideas. <|MaskedSetence|> In addition, by the techniques presented in \cite{BBFK14, Bra07} alone, it seems impossible to answer this question, which means that it requires new angles of viewpoint (see e.g. <|MaskedSetence|> | **A**: Remark \ref{remark7} and Remark \ref{remark8}).
.
**B**: Our main purpose here is to tackle an open question in the field and thereby get a taste of this subject.
**C**: It should be pointed out that although we continue to employ some techniques presented in \cite{Bra07, BBFK14}, our contribution is not merely the solution of a mathematical question by already known techniques, because many new observations and ideas lie hidden behind the solution.
| BCA | CAB | BCA | BCA | Selection 3 |
{ We study three different approximations to the posterior distribution. Firstly, we consider using the mean of the Gaussian process emulator as a surrogate model, resulting in a deterministic approximation to the posterior distribution. <|MaskedSetence|> The uncertainty in the posterior distribution introduced in this way can be thought of as representing the uncertainty in the emulator due to the finite number of function evaluations used to construct it. In applications, this uncertainty can be large relative to (or comparable with) the uncertainty present in the observations, and a user may want to take this into account to "inflate" the variance of the posterior distribution. Finally, we construct an alternative deterministic approximation by using the full Gaussian process as a surrogate model, and taking the expected value (with respect to the distribution of the surrogate) of the likelihood. It can be shown that this approximation of the likelihood is optimal in the sense that it minimises the $L^2$-error \cite{sn16}. <|MaskedSetence|> <|MaskedSetence|> | **A**: Our second approximation is obtained by using the full Gaussian process as a surrogate model, leading to a random approximation in which case we study the second moment of the Hellinger distance between the true and the approximate posterior distribution.
**B**:
}
.
**C**: In contrast to the approximation based on only the mean of the emulator, this approximation also takes into account the uncertainty of the emulator, although only in an averaged sense.
| ACB | CAB | ACB | ACB | Selection 1 |
<|MaskedSetence|> Our analysis showed that mixed observations, compared with separate observations of individual random variables, can reduce the number of samples required to identify the anomalous random variables accurately. <|MaskedSetence|> Therefore, the compressed hypothesis testing problem considered in this paper is quite different from the conventional compressed sensing problem. <|MaskedSetence|> Numerical experiments demonstrate that mixed observations can play a significant role in reducing the required samples in hypothesis testing problems.
. | **A**: Compared with conventional compressed sensing problems, in our setting, each random variable may take dramatically different realizations in different observations.
**B**: Additionally, for large-scale hypothesis testing problems, we designed efficient algorithms based on the Least Absolute Shrinkage and Selection Operator (LASSO) and on Message Passing (MP).
**C**:
In this paper, we studied the compressed hypothesis testing problem, which is finding $k$ anomalous random variables following a different probability distribution among $n$ random variables by using mixed observations of these $n$ random variables.
| CAB | CAB | ACB | CAB | Selection 1 |
<|MaskedSetence|> <|MaskedSetence|> This cryptographic treatment of the wiretap channel induces fundamental differences in terms of design goals of the resulting cryptographic wiretap system. First, the provision of security can be decoupled from the design of codes for reliability. Second, the system departs from the one-way Shannon transmission system model and enables implementing different security protocols. <|MaskedSetence|> | **A**:
The framework was later generalised to meet cryptographic symmetry-breaking methods by~\cite{Maurer1993} and~\cite{Bellare2012}.
**B**: The cryptographic generalisation identifies the logical equivalence between some cryptographic and information-theoretic security metrics while introducing additional ones not used in Wyner's model.
**C**:
.
| ABC | ABC | ABC | BCA | Selection 1 |
\end{widetext}
In wireless quantum communication, there exist mesh backbone networks, which consist of route nodes and edge route nodes. <|MaskedSetence|> \ref{figu1}. <|MaskedSetence|> To achieve this, it scrutinizes its routing table to find whether there is any available route to J. <|MaskedSetence|> If none exists, source node A requests a quantum route discovery from the neighboring edge route B, and thus the quantum route finding process commences. Once a routing path that permits the co-existence of a quantum and a classical route, from the source node to the destination, is found and selected, the edge route node I sends a route reply to node A. At this moment, the process of establishing the quantum channel commences. | **A**: Node A wishes to send information to node J.
**B**: We delineate the quantum mesh network in Fig.
**C**: If there is an available route, it forwards the packet to the next-hop node.
| BAC | BAC | BAC | BAC | Selection 4 |
S.~Dumitrescu and S.~Zendehboodi, ``Globally Optimal Design of a Distributed Scalar Quantizer for Linear Classification'', in \emph{Proc. IEEE Int. Symp. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> van Emde Boas, ``Another NP-Complete Partition Problem and the Complexity of Computing Short Vectors in a Lattice,'' Report 81-04, Mathematical Institute, University of Amsterdam, Amsterdam, 1981.. | **A**: 2021.
\bibitem{Boas:1981} P.
**B**: Inf.
**C**: Theory}, Melbourne, Australia, Jul.
| BCA | BCA | CBA | BCA | Selection 2 |
<|MaskedSetence|> <|MaskedSetence|> Here, a region of finite time invariance is defined as a space surrounding the CoM where non-linear time-dependent dynamics can be considered invariant for a short time \cite{Tiseo2018,Tiseo2018bioinspired}. This approach is usually used in control applications with highly variable environmental conditions and, similarly to human behaviour, it does not guarantee that the system will always converge to the exact desired output, but it identifies the strategy to obtain a locally stable system. Moreover, we have also hypothesised a second BoS that describes the expected dynamic conditions in a given posture, called the Ballistic-BoS (BBoS). <|MaskedSetence|> | **A**:
We have recently proposed that the NS relies on two different models of the BoS to plan and supervise locomotion \cite{Tiseo2016, Tiseo2018,Tiseo2018bioinspired}.
**B**: Therefore, the BBoS describes a forward model implemented by the Cerebellum to predict the future global stability conditions required by the motor cortex to plan stable movements.
.
**C**: The Instantaneous-BoS (IBoS) has been theorised to be used as a region of finite time invariance by the Cerebellum to supervise the Central Pattern Generators (CPG) in the brain stem and the spinal cord.
| ACB | CBA | ACB | ACB | Selection 4 |
The Barabási–Albert model, $\textsc{BA}(m_0,m)$, uses a preferential attachment mechanism to generate a growing scale-free network. The model starts with a graph of $m_0$ vertices. Then, each new vertex connects to $m\leq m_0$ existing nodes with probability proportional to their instantaneous degrees. <|MaskedSetence|> a few vertices become hubs with extremely large degree \cite{barabasi1999emergence}. <|MaskedSetence|> In our experiments, we let the network grow until a desired network size $n$ is attained. <|MaskedSetence|> We keep the value of $m$ equal to $5$.
For each generation model, we generate graphs of size $|V| = 50, 100, 150, \ldots, 500$. On each graph instance, we assign integer edge weights $c(e)$ randomly and uniformly between 1 and 10 inclusive. We only consider connected graphs in our experiment. Computational challenges of solving an ILP limit the size of the graphs to a few hundred in practice. | **A**: This model is a network growth model.
**B**: We vary $m_0$ from $10$ to $100$ in our experiments.
**C**: The BA model generates networks with power-law degree distribution, i.e.
| CAB | CAB | ABC | CAB | Selection 4 |
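The preferential attachment mechanism and the experimental setup described in this excerpt (grow to $n$ vertices, $m = 5$, integer edge weights uniform in $[1, 10]$) can be sketched as follows; the function name and the roulette-wheel degree sampling are illustrative assumptions, not the authors' implementation.

```python
import random

def barabasi_albert(m0, m, n, seed=0):
    """Grow a BA(m0, m) graph to n vertices by preferential attachment.

    Starts from a complete graph on m0 vertices; each new vertex attaches
    to m distinct existing vertices chosen with probability proportional
    to their instantaneous degrees (roulette-wheel selection).
    """
    rng = random.Random(seed)
    edges = {(i, j) for i in range(m0) for j in range(i + 1, m0)}
    degree = {v: m0 - 1 for v in range(m0)}
    for new in range(m0, n):
        targets = set()
        while len(targets) < m:
            r = rng.uniform(0, sum(degree.values()))
            acc = 0.0
            for v, d in degree.items():
                acc += d
                if acc >= r:
                    targets.add(v)
                    break
        for t in targets:
            edges.add((t, new))
            degree[t] += 1
        degree[new] = m  # the new vertex starts with exactly m edges
    return edges, degree

# as in the experiment: m = 5, integer edge weights uniform in [1, 10]
edges, degree = barabasi_albert(m0=10, m=5, n=50)
rng = random.Random(1)
weights = {e: rng.randint(1, 10) for e in edges}
```

Every new vertex contributes exactly $m$ edges, so a run with $m_0 = 10$, $m = 5$, $n = 50$ yields $\binom{10}{2} + 40 \cdot 5 = 245$ edges.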
In this work, the interaction bipartite graph of Hamiltonians and the classical event-variable graph are both denoted by the bipartite graph $\GBipartiteGraph=([m],[n],E_B)$. <|MaskedSetence|> Usually, we will index the left vertices with ``$i$" and the right vertices with ``$j$". <|MaskedSetence|> In this paper, there will never be ambiguity in identifying which vertex is which from the context.
\subsubsection{Tight Region for QLLL} In this paper, we first prove the tightness of Shearer's bound for QLLL, which affirms the conjecture in \cite{pnas,Morampudi2018Many}. <|MaskedSetence|> | **A**: Precisely,.
**B**: We call the vertices in $[m]$ the left vertices and those in $[n]$ the right vertices.
**C**: In $\GBipartiteGraph$, there may be two vertices with the same index $k$: one is a left vertex and the other is a right vertex.
| BCA | BCA | BCA | BCA | Selection 2 |
<|MaskedSetence|> First and foremost, full support for multi-GPUs has been added, which makes it possible to analyze datasets with almost unlimited numbers of rows (available memory is the constraint). Secondly, the method has been integrated with Bioconductor, which enables the user to run the entire analysis from the R level. Thirdly, a different method for performing the analysis was added, which depends on the presence or absence of missing values within the data. Last, but not least, some bugs have been fixed and optimizations were made for more efficient memory management. <|MaskedSetence|> <|MaskedSetence|> | **A**:
In this paper we introduce the open source package built on top of the upgraded version of the method.
**B**:
.
**C**: All of the above combined make this open source software ready out-of-the-box for big data biclustering analysis.
| ACB | ACB | ABC | ACB | Selection 1 |
In this section we analyze the expected number of edges that are not eliminated by the criterion of Jonker and Volgenant. <|MaskedSetence|> <|MaskedSetence|>
Recall that the edge elimination procedure of Jonker and Volgenant eliminates the edge $pq$ if for another vertex $r$ the set $I_q^{pr}\cup I_p^{qr}$ does not contain any vertex other than $p,q$ and $r$. <|MaskedSetence|> This leads to the result that the expected number of remaining edges is quadratic.. | **A**: Our goal is to get a bound on the area of $I_q^{pr}\cup I_p^{qr}$ and to show that it is likely that they contain vertices other than $p,q$ and $r$.
**B**: We consider a fixed edge $pq$.
**C**: Moreover, we assume that $r$ is a vertex other than $p$ or $q$.
| BCA | BCA | BCA | BCA | Selection 3 |
\caption[]{(a) Each white box in the figure corresponds to a residual block~\cite{He_2016_CVPR}. Blue circles are the intermediate predictions whereas the yellow one is the final prediction. <|MaskedSetence|> The region in the dashed orange box represents the encoding procedure, where red rhombus is the context layer and the pink box is the branch for semantic encoding loss. (b) The Encoding Layer contains a codebook and smoothing factors, capturing encoded semantics. <|MaskedSetence|> <|MaskedSetence|> (Notation: FC fully connected layer, $\otimes$ channel-wise multiplication.)}
\label{fig:network}
. | **A**: The top branch predicts scaling factors selectively highlighting class-dependent featuremaps.
**B**: The bottom branch predicts the presence of the categories in the scene.
**C**: A loss function is applied to all these predictions through the same ground truth.
| CAB | CAB | ACB | CAB | Selection 1 |
<|MaskedSetence|> NMF became popular after Lee and Seung derived multiplicative factor updates that made the additive steps in the direction of the negative gradient obsolete \cite{Lee1999}. In \cite{Lee2001}, Lee and Seung give empirical evidence of convergence of the multiplicative updates to a stationary point, using (a) the squared Euclidean distance and (b) the generalized Kullback--Leibler divergence as the contrast function. The factorization's origins can be traced back to \cite{Paatero1994,Paatero1997}.
A convolutional variant of the factorization based on the Kullback--Leibler divergence is introduced in \cite{Smaragdis2004}. There, the idea is to model temporal relations in the neighborhood of a point in the time-frequency plane. <|MaskedSetence|> In \cite{Smaragdis2007}, to provide a remedy, multiple coefficient matrices are updated (one for each translation) and the final update is by taking the average over all coefficient matrices. The exact same principles are applied in \cite{Wang2009} to derive a convolutional NMF based on the squared Euclidean distance. There, the authors combine the updates from \cite{Lee2001} with the averaging from \cite{Smaragdis2007} in an efficient manner. Why these updates are inexact is explained in \cite{Villasana2018_arXiv}. A nonnegative matrix factor deconvolution in 2D based on (a) the squared Euclidean distance and (b) the Kullback--Leibler divergence is found in \cite{Schmidt2006}. It should be pointed out that the update rule for the coefficient matrix is different from those in \cite{Smaragdis2004, Smaragdis2007, Wang2009}. <|MaskedSetence|> | **A**: The corresponding factor updates are taken from \cite{Lee2001} and lead to a biased factorization.
**B**: Nonnegative matrix factorization (NMF) finds its application in the area of machine learning and in connection with inverse problems.
**C**: A convolutional NMF has been deployed with arguable success to extract sound objects \cite{Smaragdis2004}, to separate speakers \cite{Smaragdis2007}, to detect onsets \cite{Wang2009}, to automatically transcribe music \cite{Schmidt2006}, and more recently to enhance speech \cite{Sun2015} or to discover recurrent patterns in neural data \cite{Mackevicius2018}..
| BAC | BAC | BAC | BAC | Selection 3 |
<|MaskedSetence|> At stage $i$ of the cascade, each index is either medial or lateral. The medial indices are split into $\medsetMinusBrace{i}$ and $\medsetPlusBrace{i}$. <|MaskedSetence|> Important subsets of these sets are $\numedsetMinus{i}$ and $\numedsetPlus{i}$, respectively. <|MaskedSetence|> BST operations will be applied to exactly one of these subsets, called the ``active set.'' On all other indices, the pass-through operation will be applied.
. | **A**: These two subsets will also be defined as part of the recursion.
**B**:
What remains to define is which operation, BST or pass-through, to apply to which index, and to determine the various medial sets.
**C**: These sets will be defined as part of the recursion.
| BCA | BCA | BCA | CAB | Selection 1 |
HRI sessions to evaluate the efficiency of deployed systems have been attempted by many researchers. Meena et al. <|MaskedSetence|> Ramachandran et al. evaluated their approach using two methods: first, they asked the participants to take a pre-test and a post-test prepared on the contents of the session to assess the participants' learning within a single session; second, they kept track of the number of declined hints and auto hints and compared them across sessions to deduce whether the sessions were fruitful. In another similar approach, Ismail et al. <|MaskedSetence|> <|MaskedSetence|> They performed gaze detection manually for the same purpose.
. | **A**: evaluated their model by mapping expectation and experience of participants through questionnaire.
**B**: \cite{c10} argued that eye contact plays an equally important role in understanding the quality of communication.
**C**: That is why they proposed a method for detecting concentration level of child with ASD in its interaction with NAO.
| ABC | ABC | BCA | ABC | Selection 1 |
Recently, Rodriguez and Laio \cite{rodriguez2014clustering} proposed a remarkable strategy that achieves Clustering by finding Density Peaks (CDP). CDP addresses the limitations of DBSCAN by initially finding density peaks and using them to separate clusters. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> This task can be performed either automatically or manually by a user visually inspecting the density-distance plot. Due to the simplicity of this rule, CDP can be applied whenever density can be measured, which holds in a wide range of applications.\\
Once peaks are identified, each one is assigned to a different cluster with a unique label. Then, CDP processes each remaining point by inheriting the label of the closest point with higher density. Although this rule is applicable to any cluster shape, cases remain where such a local criterion is not optimal.
. | **A**: Therefore, CDP identifies density peaks by detecting the outliers of a density-distance plot.
**B**: Density peaks are considered as points surrounded by sufficiently many points with lower density.
**C**: These neighbors with lower density make density peaks distant.
| BCA | ABC | BCA | BCA | Selection 1 |
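The density-distance rule and the label-inheritance step described in this excerpt can be sketched as follows; the cutoff density estimate, the $\rho \cdot \delta$ peak score (a simple automatic stand-in for inspecting the density-distance plot), and the toy data are illustrative assumptions, not the authors' implementation.

```python
import math

def cdp_cluster(points, d_c, n_peaks):
    """Minimal sketch of Clustering by finding Density Peaks (CDP).

    rho[i]   = number of neighbours of point i within the cutoff d_c;
    delta[i] = distance to the nearest point of higher density
               (largest distance for the global density maximum).
    The n_peaks points with the largest rho * delta score are taken as
    peaks; every other point inherits the label of its nearest denser
    neighbour, processed in order of decreasing density.
    """
    n = len(points)
    dist = [[math.dist(points[i], points[j]) for j in range(n)] for i in range(n)]
    rho = [sum(1 for j in range(n) if j != i and dist[i][j] < d_c) for i in range(n)]
    delta, parent = [0.0] * n, [None] * n
    for i in range(n):
        # ties in rho are broken by index so exactly one global maximum exists
        denser = [j for j in range(n) if (rho[j], -j) > (rho[i], -i)]
        if denser:
            parent[i] = min(denser, key=lambda j: dist[i][j])
            delta[i] = dist[i][parent[i]]
        else:
            delta[i] = max(dist[i])
    peaks = sorted(range(n), key=lambda i: rho[i] * delta[i], reverse=True)[:n_peaks]
    labels = [None] * n
    for k, p in enumerate(peaks):
        labels[p] = k
    for i in sorted(range(n), key=lambda i: (-rho[i], i)):
        if labels[i] is None:
            labels[i] = labels[parent[i]]
    return labels

# two well-separated toy blobs
labels = cdp_cluster([(0, 0), (0.1, 0), (0, 0.1), (0.1, 0.1),
                      (5, 5), (5.1, 5), (5, 5.1)], d_c=0.5, n_peaks=2)
```

On the toy data the two peaks are dense points far from any denser point, so each blob ends up with one label, matching the local inheritance rule described above.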
<|MaskedSetence|> Thus, the samples produced by the second step are more likely to be the protectors of the nodes which are prone to be misinformation-influenced. Note that the pattern of the sample areas is determined by the nodes collected in the first step. <|MaskedSetence|> \ref{fig: patterns}, under the uniform reverse sampling the samples are uniformly distributed to the whole graph, while under the hybrid sampling the samples tend to be centered around the seed node of the misinformation. As shown later, the sample obtained by our hybrid sampling method can be used to directly estimate the prevention effect of the positive cascade. <|MaskedSetence|> In order to evaluate the proposed algorithm, we design experiments which compare the algorithms by evaluating their performance under the same time constraint. The effectiveness of our solution is supported by encouraging experimental results.
. | **A**: As shown in Fig.
**B**: In this sampling method, the frequency that a node can be collected in the first step is proportional to the probability that it will be affected by the misinformation.
**C**: Based on the hybrid sampling method, we propose a new randomized approximation algorithm for the MP problem.
| BAC | BAC | BAC | CAB | Selection 3 |
The performance of a TSDB is evaluated by a set of metrics. <|MaskedSetence|> Cost-time is used as the performance measurement; it means the elapsed time between sending a request or statement to the TSDB and receiving the full result from the TSDB successfully, which is also called latency or TTLB (Time to Last Byte). <|MaskedSetence|>
Second, we use \textbf{throughput} to evaluate the performance of the ingestion test, which is calculated from the cost-time and the number of concurrent clients. We add up the ingestion cost-times of each client as the accumulative cost-time for that client and take the maximum accumulative cost-time as the total cost-time of multiple concurrent ingestion clients. The throughput equals the total number of ingested data points divided by the total cost-time. <|MaskedSetence|>
**B**: .
**C**: Middle-average is the average cost-time that cuts off 5\% head and tail.
| ACB | ACB | ACB | ABC | Selection 2 |
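The cost-time statistics and the throughput formula described in this excerpt can be sketched as follows; the nearest-rank percentile definition and the example numbers are illustrative assumptions.

```python
def cost_time_stats(latencies):
    """Statistical cost-time metrics: min, max, average, middle-average
    (the mean after trimming the fastest and slowest 5%), and the
    percentiles listed above (simple nearest-rank definition)."""
    xs = sorted(latencies)
    n = len(xs)
    cut = int(n * 0.05)
    trimmed = xs[cut:n - cut] if n > 2 * cut else xs
    pct = lambda p: xs[min(n - 1, int(n * p / 100))]
    return {
        "min": xs[0], "max": xs[-1], "avg": sum(xs) / n,
        "mid_avg": sum(trimmed) / len(trimmed),
        **{f"p{p}": pct(p) for p in (1, 5, 50, 90, 95, 99)},
    }

def ingestion_throughput(points_per_client, cost_times_per_client):
    """Throughput = total ingested points / total cost-time, where the
    total cost-time is the maximum accumulative cost-time over clients."""
    total_points = sum(points_per_client)
    total_time = max(sum(ct) for ct in cost_times_per_client)
    return total_points / total_time

# hypothetical example: two concurrent clients, 1000 points each;
# client cost-times accumulate to 5.0 s and 4.0 s, so throughput = 2000 / 5.0
tp = ingestion_throughput([1000, 1000], [[2.0, 3.0], [1.5, 2.5]])
stats = cost_time_stats(list(range(1, 101)))
```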
<|MaskedSetence|> \ref{frames}. <|MaskedSetence|> The axis of the first revolute joint, $z_0$, is parallel to the quadrotor $x$-axis. The axis of the second joint, $z_1$, is normal to that of the first joint and hence it is parallel to the quadrotor $y$-axis at the extended configuration. Therefore, the pitching and rolling rotation of the end-effector is allowable independently from the horizontal motion of the quadrotor. Hence, with this proposed aerial manipulator, it is possible to manipulate objects with arbitrary location and orientation. <|MaskedSetence|> | **A**: The manipulator has two revolute joints.
**B**: \end{figure}
System geometrical frames, which are assumed to satisfy the Denavit-Hartenberg (DH) convention, are illustrated in Fig.
**C**: Consequently, the end-effector can move in 6-DOF with the minimum possible number of actuators/links, which is a critical factor in flight.
\begin{figure}[!t].
| BAC | BAC | CBA | BAC | Selection 2 |
\subsection{Construction of distributed representation of words and sentences}
Here we construct distributed representations of words and sentences and express them in their final form, which is used to train the models. <|MaskedSetence|> Word vectors for a total of 251,292 tokens were generated. Here are the 3 plots of the most similar words for the words ‘good’ and ‘bad’. <|MaskedSetence|> Figure \ref{fig:w2v_cbow} shows the plot for Word2Vec CBOW. <|MaskedSetence|>
**B**: We use a third-party library, gensim \cite{9}, for this purpose; we constructed vectors for all the words under each of the 3 types of construction.
**C**: Here you can see that all the negative words, or the words related to ‘bad’, are slightly toward the top and the words related to ‘good’, or positive words, are slightly toward the bottom, but they converge as we move to the left.
.
| BAC | BAC | BAC | BAC | Selection 2 |
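The PCA step used above to reduce the word vectors to two dimensions for plotting can be sketched as follows; the random 100-dimensional vectors stand in for gensim's learned embeddings, which is an assumption of this sketch.

```python
import numpy as np

def pca_2d(vectors):
    """Project high-dimensional word vectors to 2-D for plotting.

    A minimal PCA via SVD of the centred matrix: rows are word vectors,
    columns are embedding dimensions; the output columns are the
    coordinates along the top two principal components.
    """
    X = np.asarray(vectors, dtype=float)
    X = X - X.mean(axis=0)                 # centre each dimension
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:2].T                    # top-2 principal coordinates

# hypothetical 100-dimensional vectors for a handful of tokens
rng = np.random.default_rng(0)
coords = pca_2d(rng.normal(size=(8, 100)))
```

The first output column always carries at least as much variance as the second, since SVD returns singular values in decreasing order.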
<|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> While the range suggested by the permutation statistics as described in Section~\ref{sec:dimension} falls within the range suggested by experts, their range is too broad. Specifically, a dimension greater than 9 can be computationally cumbersome, and a dimension lower than 4 would not show significant differences for dynamic state changes. Therefore, we suggest the use of our narrower range of dimensions $n \in [5, 6]$ for maps, which agrees with our optimal PE parameter range.
. | **A**: For the permutation dimension we found a suggested dimension $n \in [4 , 7]$, in comparison to the expected suggested dimension ranging from 2 to 16.
**B**:
\underline{Maps}:
When selecting the delay parameter for permutations and Takens' embedding for maps, we found that all of the topological methods suggested accurate delay parameters, while the standard mutual information method selected overly large delay parameters when the maps are chaotic.
**C**: Therefore, we suggest the use of one of the topological methods when estimating the delay parameter for maps.
| BCA | BCA | ABC | BCA | Selection 4 |
<|MaskedSetence|> <|MaskedSetence|> Moreover, attackers are now leveraging automation and the cloud to scale their attacks faster and infiltrate systems in record-breaking time. <|MaskedSetence|> Knowing the potential strike points or actions of an attacker, the organization can take the necessary steps for mitigating cyber risks to the organization’s business.
. | **A**: In current world where organizations are highly digital, a single vulnerability can lead to penetrative attack negatively affecting business on a large scale.
**B**:
The existing cyber security tools focus on reactive methods and algorithms as a major part of their cyber security arsenal.
**C**: Therefore, it is advisable for an organization to stay one step ahead of attackers and be able to quickly foresee where and when they will strike.
| CAB | BAC | BAC | BAC | Selection 4 |
<|MaskedSetence|> <|MaskedSetence|> A possible extension to this paper is to develop update schemes that use this to provide more robust convergence guarantees for full information continuous games.
Different learning rates amongst agents also affects the region of attraction of
the game; hence, starting from the same initial condition, agents may converge to different equilibria. Agents may use this to their benefit, as shown in the last example. <|MaskedSetence|> We also show through numerical examples that, counterintuitively, if an agent decides to learn more slowly, a stable differential Nash equilibrium can become unstable, resulting in learning dynamics that do not converge to Nash. | **A**: By preconditioning the gradient dynamics by $\Gamma$, a diagonal matrix whose diagonal entries represent the agents' learning rates, we can begin to understand how a learning rate that changes relative to the others can change the properties of the fixed points of the dynamics.
**B**: Such insights into the learning behavior of agents will be useful for providing guarantees on the design of control or incentive policies to coordinate agents.
**C**: Moreover, players do not know how a change in others' strategies affects its own cost ($D_j f_i$ where $j\neq i$).
| ACB | ACB | ABC | ACB | Selection 1 |
The main contribution of this paper is a novel MFG-based framework for uplink power control in an ultra-dense millimeter wave network. <|MaskedSetence|> In particular, we consider directional beamforming by the base stations and the mobile users. <|MaskedSetence|> <|MaskedSetence|> Thus, we consider adaptive user association, in which each user, at each time instant, connects with the BS that provides the required quality-of-service requirement of the MU. We derive the expressions for the user association distributions for a finite size network, as opposed to the prior work that assumes infinite network size. Our results show that the proposed approach can improve energy efficiency by up to $24\%$, compared to a baseline in which the nodes transmit according to a path loss compensating power control policy.
. | **A**: We consider the time evolution of the users' orientations and the energy available in their batteries.
**B**: Further, we model the randomness of the deployment of BSs as well as MUs using stochastic geometry \cite{mmwavecoverage}.
**C**: We formulate the uplink power control as a mean-field game that takes into account the characteristics of millimeter wave networks.
| CAB | CAB | BCA | CAB | Selection 2 |
<|MaskedSetence|> Standard system-theoretic tools do not apply directly to OMAS, because of the evolution of their state space. For this reason, we had to propose several new definitions, including suitable definitions of state evolution and of stability. The proposed notion of stability has two features: (1) the distance from the origin is normalized by the number of agents; and (2) the definition disregards what happens within a certain distance from the origin (we refer to this distance as the stability radius). In order to study the evolution and the stability of OMAS, it is necessary to compare states that belong to different spaces. <|MaskedSetence|> In particular, we showed that multi-agent systems whose dynamics (up to arrivals and departures of agents) can be defined by contraction maps are stable according to our definition, and their stability radius depends upon the properties of the join and leave mechanisms in the network. Furthermore, we applied our results to an adaptation to OMAS of the proportional dynamic consensus protocol. <|MaskedSetence|> | **A**: \end{figure}
\section{Conclusions}\label{conclusions}
In this paper we proposed a theoretical framework for stability analysis of discrete-time open multi-agent systems.
**B**: Future work should pursue two complementary directions: building up a more general and comprehensive theory, while at the same time investigating other classes of open multi-agent systems and proposing novel ``open'' distributed coordination algorithms.
**C**: To this purpose, we defined the open distance function and used it to establish criteria for stability in the proposed open scenario.
| ACB | ACB | ACB | BCA | Selection 3 |
This paper introduces the RGAS and convergence analysis for the moving horizon estimator based on adaptive arrival cost proposed
in \cite{sanchez2017adaptive} in the practical case of nonlinear detectable systems subject to bounded disturbances. To establish robust stability properties for MHE, it is crucial that the prior weighting in the cost function be chosen properly. In various schemes the necessary assumptions on the prior weighting are difficult to verify (\cite{rao2003constrained}, \cite{rawlings2009model}), while in others they can be verified a priori \cite{muller2017nonlinear}. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> | **A**: Furthermore, the disturbance gains become uniform (i.e., they are valid independently of $N$), allowing the stability analysis to be extended to full information estimators with least-squares type cost functions.
**B**: .
**C**: In the MHE scheme analysed in this work, the assumption on the prior weighting can be verified a priori by design.
| CAB | CAB | CAB | CBA | Selection 3 |
<|MaskedSetence|> Gaurav Pandey} is a Technical Expert in the Controls and Automated Systems department of Ford Motor Company. He is currently leading the mapping and localization group at Ford and is working on developing localization algorithms for SAE level 3 and level 4 autonomous vehicles. Prior to Ford, Dr. Pandey was an Assistant Professor in the Electrical Engineering department of the Indian Institute of Technology (IIT) Kanpur in India. At IIT Kanpur he was part of two research groups: (i) Control and Automation, and (ii) Signal Processing and Communication. <|MaskedSetence|> He did his B.Tech at IIT Roorkee in 2006 and completed his Ph.D. <|MaskedSetence|> | **A**: from the University of Michigan, Ann Arbor in December 2013.
\end{IEEEbiography}
.
**B**: \begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{figs/Gaurav_Pandey2.jpg}}]{Dr.
**C**: His research focus is on visual perception for autonomous vehicles and mobile robots using tools from computer vision, machine learning, and information theory.
| BCA | BCA | BCA | ACB | Selection 2 |
<|MaskedSetence|> <|MaskedSetence|> The notion of generalized complement graphs that we introduced provides a much better tool than those in the literature to estimate the region of attraction and the ultimate level of phase cohesiveness when the network is weighted complete or incomplete. However, the disadvantage of this method is that the number of edges connecting each node has a noticeable lower bound. The simulations we have performed provide some insight into understanding the partial synchrony observed in the human brain. <|MaskedSetence|>
\ifCLASSOPTIONcaptionsoff. | **A**: We are interested in investigating other mechanisms that could render partial synchronization.
**B**: Sufficient conditions in the forms of algebraic connectivity and nodal degree have been obtained by using the incremental $2$-norm and $\infty$-norm, respectively.
**C**: We have studied partial phase cohesiveness, instead of complete synchronization, of Kuramoto oscillators coupled by two-level networks in this note.
| CBA | CBA | CBA | BAC | Selection 2 |
\subsection{Model Uncertainties}
Let $f_{0}$ denote the information available at $t_o$. <|MaskedSetence|> The forecast state $S$ can be regarded as a sufficient statistic which parameterizes the information on wind at time $t_{1}$. <|MaskedSetence|> <|MaskedSetence|> Define . | **A**: We call this an \emph{information state}.
**B**: Let $p(w|S)$ be the conditional probability of the wind given the intermediate forecast state $S$ at time $t_{1}$.
**C**: We parameterize $S \in [0, 1]$.
| ABC | BCA | BCA | BCA | Selection 4 |
In this paper, we propose an integrated power and thermal management (i-PTM) optimization framework for a power split HEV, accounting for cabin heating requirement. <|MaskedSetence|> We first develop and experimentally validate a control-oriented model, including both power and thermal loops.
To provide a benchmark performance target, the DP is adopted to minimize the fuel consumption by controlling the engine operating mode, power split, and heating power supplied to the cabin, while enforcing the system constraints and managing thermal responses. <|MaskedSetence|> %Some key findings are discussed in the context of the real-time i-PTM development for connected and automated vehicles (CAVs). <|MaskedSetence|> | **A**: .
**B**: The engine is assumed to be the only resource to provide the heating power to the cabin.
**C**: To demonstrate the thermal impact on behaviors of the power management and the corresponding fuel saving potential, a realistic winter congested driving scenario is considered in the simulation case study in which the proposed i-PTM with 2-dimensional and 3-dimensional DP formulations are compared with the baseline controllers.
| ACB | BCA | BCA | BCA | Selection 4 |
<|MaskedSetence|> In Section 3, we consider the case of (skew-)symmetric matrices; we present the (skew-)symmetry-preserving low-rank integrator and study its properties. In Section 4, we recapitulate the projector-splitting integrator for low-rank Tucker tensors. <|MaskedSetence|> <|MaskedSetence|>
. | **A**:
The outline of the paper is the following: in Section 2, we briefly restate the idea of dynamical low-rank approximation for matrices and we present the matrix projector-splitting integrator with some of its properties.
**B**: In Section 5, we present the integrator for (anti)-symmetric tensors of low multilinear rank and study its properties.
**C**: In the final section, we present numerical experiments that illustrate the approximation properties and the robustness to small singular values.
| ABC | ABC | ABC | ABC | Selection 3 |
<|MaskedSetence|> Moreover, systems of nonlinear PDEs have attracted much attention in the study of nonlinear time-dependent equations describing wave propagation. <|MaskedSetence|> Burgers' equations occur in a wide area of applied mathematics such as heat conduction, modeling of dynamics, acoustic waves, turbulent fluids and continuous stochastic processes \cite{3zsd,1zsd,2zsd,bj}. <|MaskedSetence|> In this paper, we shall analyze the following two-dimensional evolutionary viscous coupled Burgers' equations
. | **A**: Numerical analysis of Burgers' equations has attracted attention during the last decades, and it represents an active field of research to develop fast and efficient numerical schemes for the approximate solution of such equations.
**B**:
\section{Introduction and motivation}\label{sec1}
A broad range of nonlinear evolutionary partial differential equations (PDEs) arise in several fields of science, namely in physics, engineering, chemistry, biology, finance and are very important in the mathematical formulation of continuum models.
**C**: For instance, the two-dimensional unsteady nonlinear coupled Burgers' equations are such type of PDEs.
| BCA | BAC | BCA | BCA | Selection 4 |
This paper presents continuum traffic flow models for both pure AV traffic and mixed AV-HV traffic. <|MaskedSetence|> <|MaskedSetence|> To demonstrate the mixed traffic stability analysis, three groups of numerical experiments are performed. In particular, we characterize the stability regions over AV density and HV density as well as over total density and AV's penetration rate in the mixed traffic. <|MaskedSetence|> In future work, we plan to develop analytical stability analysis for mixed traffic and discuss the relation between more general AV controller designs and stability under different types of AV-HV interactions.
. | **A**: The mixed AV-HV traffic is modeled by a coupled MFG-ARZ system.
**B**: The pure AV traffic is modeled by a mean field game and the linear stability analysis shows the traffic is always stable.
**C**: We also quantify the impact of the AV controller parameter on traffic stability.
| BAC | BAC | ACB | BAC | Selection 4 |
Different simple techniques can be used in order to compensate for these issues. <|MaskedSetence|> This can lead to reducing the sign alternations of the errors. <|MaskedSetence|> <|MaskedSetence|> For each up-crossing of the foreknown levels (the maximum and the minimum) we either clip the estimated signal or substitute the best existing signal for the reconstructed signal. If the signal is too noisy, the Median Filter (MedFilt) can be used in order to smooth the NL signal. In this article, we use both MedFilt and Clipping.
. | **A**: One approach could be to use linear combinations (weighted averages) of the existing estimations as substitutes instead.
**B**: Another approach is applying the NL formula to transformed versions of the estimations in another specified domain.
**C**: Assuming that the signals are bounded, simple methods such as Clipping, Substitution and Smoothing can be used in order to compensate for the undesirable spikes caused by the NL method.
| ABC | BCA | ABC | ABC | Selection 4 |
<|MaskedSetence|> The available model of the system assumes bounded noise in both the dynamics and the observation equation, with the latter being possibly affected by an unknown but sparse attack signal. Contrary to the settings in some existing works, we did not impose here any restriction on the number of sensors which are subject to attacks, that is, any sensor can be compromised at any time. <|MaskedSetence|> <|MaskedSetence|> Our bound, although necessarily conservative, has the important advantage of being explicitly expressible as a function of the properties of the considered dynamic system. This makes it a valuable qualitative tool for assessing the impact of the estimator's design parameters and that of the system matrices on the quality of the estimation.
. | **A**:
The contribution of the current paper is the design of a (convex) optimization-based resilient estimator for LTI discrete-time systems.
**B**:
We show that the estimation error associated with the new estimator can be made, under certain conditions, insensitive to the amplitude of the attack signal.
**C**: Our main theoretical result concerns the resilience analysis of the proposed estimator.
| ACB | CBA | ACB | ACB | Selection 4 |
Recently, based on the definition of tensor Singular Value Decomposition (t-SVD) \cite{Kilmer2011Factorization,Martin2013An} that enjoys many similar properties as the matrix case, the tensor tubal rank (see Definition \ref{Tensor tubal rank}) is proposed by Kilmer et al. <|MaskedSetence|> Along those lines, Lu et al. <|MaskedSetence|> <|MaskedSetence|> Therefore, a convex tensor nuclear norm minimization (TNNM) model based on the assumption of low tubal rank for tensor recovery has been proposed in \cite{Lu2019Tensor}, which solves
\begin{equation}\label{tensor nuclear norm min}
\min \limits_{{\boldsymbol{\mathcal{X}}}\in\mathbb{R}^{n_{1} \times n_{2} \times n_{3}}}~\|\boldsymbol{\mathcal{X}}\|_{*},~~s.t.~~\|\boldsymbol{y}-\boldsymbol{\mathfrak{M}}(\boldsymbol{\mathcal{X}})\|_{2}\leq\epsilon,. | **A**: \cite{kilmer2013third}.
**B**: Furthermore, they pointed out that a tensor always has low average rank if it has low tubal rank.
**C**: \cite{Lu2019Tensor} gave a new and rigorous way to define the tensor average rank (see Definition \ref{Tensor average rank}) and the tensor nuclear norm (see Definition \ref{Tensor nuclear norm}), and proved that the convex envelope of the tensor average rank is the tensor nuclear norm within the unit ball of the tensor spectral norm.
| ACB | ACB | BCA | ACB | Selection 4 |
<|MaskedSetence|> First, we pointed out that there was a strong connection between the formulation of the Lippmann-Schwinger equation for the microscopic BVP by using the polarization technique and by using the Galerkin-based projection. Indeed, the same result can be arrived at by two different routes of derivation. Second, a surrogate model for computational homogenization of elasticity at finite strains is built based on a neural network architecture that mimics the high-dimensional model representation. <|MaskedSetence|> <|MaskedSetence|> The comparison of the numerical results with the full-field solution as well as the two-scale homogenized solution validates both the reliability and robustness of the proposed computational framework.
%% REFERENCES. | **A**: The database is constructed by solving numerous microscopic problems with the aid of the FFT-based solver to obtain the set of input-target data.
**B**: This contribution has addressed a surrogate model for two-scale computational homogenization.
**C**: Particularly, this black-box function is an approximator of the macroscopic energy density and is trained upon the space of uniformly distributed random data of macroscopic deformation gradients.
| BCA | ACB | BCA | BCA | Selection 3 |
\section{Introduction}
\lettrine{E}{xo-atmospheric} interception of ballistic targets is particularly challenging due to the hit to kill requirement and relatively small size of a ballistic re-entry vehicle (BRV), typically 45 to 60 centimeters in diameter. Successful interception requires both a small miss distance and a suitable impact angle, with miss distance requirements of 50 cm implied by the BRV and missile dimensions. Moreover, the missile must autonomously discriminate between threats and decoys. <|MaskedSetence|> Both spiral and bang-bang maneuvers could potentially be executed by a BRV without compromising the BRV's accuracy. <|MaskedSetence|> Another complication of exo-atmospheric interception is that the high altitude requires the use of divert thrusters rather than control surfaces, with current implementation using pulsed divert thrusters. <|MaskedSetence|> Fuel efficiency is also critical, as the missile loses all control authority when its fuel is depleted.. | **A**: These maneuvers could be executed either in response to the BRV's sensor input (if so equipped) or periodically executed during the portion of the trajectory where interception is likely.
**B**: As the missile burns fuel, its center of mass shifts, and the divert thrusts cause a tumbling motion that requires compensation from the attitude control thrusters.
**C**: The interception problem is significantly complicated by warheads with limited maneuvering capability.
| CAB | CAB | ABC | CAB | Selection 2 |
<|MaskedSetence|> Specifically, systems can achieve their objective when one node goes away or fails \cite{Chen_CDC,chen2019-games}. Furthermore,
systems can respond to other systems in a non-deterministic/stochastic way and increase the composability and modularity of the system design. <|MaskedSetence|> However, the structured randomness leads to emerging system behaviors that manifest desirable properties for the objective of the entire mission.
Systems that have such properties are easily composable and resilient-by-design. <|MaskedSetence|> | **A**: Mosaic distributed system design refers to engineering agents with flexible interoperability and the capability of \textit{self-adaptability}, \textit{self-healing}, and \textit{resiliency}.
**B**: For example, agents can randomly arrive and respond in a stochastic but structured way to other agents in an uncertain environment.
**C**: .
| ABC | ABC | CBA | ABC | Selection 2 |
Section~\ref{sec:results} describes how six attacks affect the water distribution process in WADI. <|MaskedSetence|> In addition to the six attacks mentioned in Section~\ref{sec:results}, several other attacks can be launched on WADI. For example organic and inorganic contaminants may be added to water and the chemical sensors compromised\,\cite{palleti2016sensor} so that the attack is not detected. <|MaskedSetence|> <|MaskedSetence|> | **A**: WADI also has a leakage simulator that can be used to launch leakage or water theft attacks.
**B**: Such attacks and their impact on WADI will be studied in the future.
.
**C**: In summary, an attack may lead to any one or more of the following undesirable consequences: (a)~tank overflow, (b)~pressure drop at the consumer end, (c)~no water at consumer end, and (d)~equipment damage.
| CAB | ABC | CAB | CAB | Selection 3 |
Hybrid systems feature both continuous evolution in time and discrete events, and are valuable for the modelling, analysis and control of many engineering applications, see \cite{goe_san_12,lei_wouw_08sh} and the references therein. <|MaskedSetence|> <|MaskedSetence|> they experience jumps at close, but not identical, jump times and during this time-mismatch interval, the state distance between both solutions will generally not be small. To analyse stability of a hybrid solution, set-stability techniques \cite{for_teel_13sh}, ignoring the state difference in this interval \cite{mor_bro_10}, or non-Euclidean distance-like functions \cite{bie_hee_16,bie_wouw_13sh} have been proposed. <|MaskedSetence|> | **A**: %The stability of such jumping solutions is essential for e.g.\ tracking control applications, stabilisation of limit cycles, and observer design problems for hybrid systems.\\
The stability of time-varying solutions to hybrid systems is challenging \cite{lei_wouw_08sh,mor_bro_10} as two nearby solutions typically show `peaking behaviour', i.e.
**B**: While the stability of stationary points and sets for hybrid systems is relatively well understood, far less is known about the stability of a given time-varying and jumping solution.
**C**: The approaches in \cite{for_teel_13sh,mor_bro_10} seem to be hard to generalize, while stability with respect to non-Euclidean distance-like functions is hard to interpret.
.
| BAC | BAC | BAC | CBA | Selection 1 |
The diagram of the 68-bus system is given in Fig.\ref{fig_system}. <|MaskedSetence|> Although the linear model is used in the analysis, the simulation model is much more detailed and realistic. <|MaskedSetence|> AC (nonlinear) power flows are utilized, including non-zero line resistances. The upper bound of $ P_j^l $ is the load demand value at each bus. Detailed simulation model including parameter values can be found in the data files of the toolbox. <|MaskedSetence|> | **A**:
\begin{figure}[t].
**B**: The generator includes a two-axis subtransient reactance model, IEEE type DC1 exciter model, and a classical power system stabilizer model.
**C**:
We run the simulation on Matlab using the Power System Toolbox \cite{PST}.
| CBA | CBA | CAB | CBA | Selection 4 |
<|MaskedSetence|> Defects caused by anomalous products may then be visible more than once at every sensor having contact with the product. <|MaskedSetence|> <|MaskedSetence|> As a result, a data set for every product passing through the machine is available.
To overcome differing lengths caused by varying machine speeds, a scaling was applied to the collected data sets. To be able to analyze data sets in case of machine stops, a timeout is implemented for early finishing of digital twins. Missing data in those data sets is filled using median value imputation. | **A**: There are two basic options to produce equal length data sets: Looking at the machine for a defined amount of time, usually one cycle respectively one turn of the camshaft.
**B**: But these impacts may be very minor.
**C**: To track errors on the product a digital twin was built collecting the sensor data virtually on the product.
| ABC | CAB | ABC | ABC | Selection 1 |
<|MaskedSetence|> <|MaskedSetence|> The triggering conditions of the event-triggered algorithms can be time-dependent~\citep{seyboth2013event}, state-dependent~\citep{nowzariZeno-free_2014, nowzari2016distributed, liu2018fixed}, or a combination of both~\citep{girard2015dynamic, sun2016new, yi2017distributed}. In general, the time-dependent thresholds are easy to design to exclude deadlocks (or Zeno behavior, meaning an infinite number of events triggered in a finite number of time period~\citep{johansson1999regularization}), but require global information to guarantee convergence to exactly a consensus state. While state-dependent thresholds are easier to design, these triggers might be risky to implement as Zeno behavior is harder to exclude. <|MaskedSetence|> | **A**:
The main idea behind distributed event-triggered algorithms is that the iterative communication between agents and their one-hop neighbors only happens when certain conditions/events are triggered.
**B**: Through skipping unnecessary communications, the communication efficiency is increased, and at the same time the desired properties of the system are maintained.
**C**: As the occurrence of Zeno behavior is impossible in a given physical implementation, the exclusion of it is therefore necessary and essential to guarantee the correctness of an event-triggered algorithm..
| ABC | ABC | ABC | BCA | Selection 1 |
The following is an important technical lemma which expresses trace relationships between norms on surface elements (flat or curved) and corresponding norms on bulk elements. An essential component of these estimates is that they allow for surfaces to cut through bulk elements in an arbitrary fashion. Such estimates were essential in the proof of the first a posteriori estimates for trace methods in \cite{DO12}. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|>
**B**: \cite{HH02, HH04, BHL15, Re15}.
**C**:
.
| ABC | ABC | BCA | ABC | Selection 4 |
An interesting link between flatness and hybrid systems can be found in the more system-theoretically oriented work of Paulo Tabuada and co-workers. In \cite{tabuada:2004}, the notion of flatness is related to transition systems in the context of bisimulation. It is shown that finite bisimulation systems can be constructed for differentially flat nonlinear discrete--time systems. In \cite{tab_pap_lim:2004}, a class of general control systems capturing both continuous--valued and discrete--event systems as well as hybrid systems with both continuous and discrete inputs is described. <|MaskedSetence|> This consideration of flatness in hybrid systems has relevance for the system-theoretical development of bisimulation in systems control.
On this background, the present paper addresses in a new way the inversion of hybrid dynamical systems, in order to establish deterministic dynamical behaviour and reachability as well as to explicitly determine control input trajectories. <|MaskedSetence|> For taming complexity, a strong emphasis is put on the aspect of designing the to-be-controlled system such that it is flat. <|MaskedSetence|> | **A**: Towards controlling such systems, model abstraction, bisimulation and composition of abstract control systems is developed.
**B**: Thereby, methods for system construction and trajectory planning are provided in view of technically relevant systems.
**C**: .
| BAC | ABC | ABC | ABC | Selection 4 |
This is a strikingly simple algorithm, but providing rigorous theoretical guarantees has proved challenging. <|MaskedSetence|> The issue is that posterior sampling based approaches are derived from a true Bayesian perspective in which one maintains beliefs over the underlying MDP. The approaches of \cite{osband2017deep, touati2018randomized, osband2018randomized, azizzadenesheli2018efficient, fortunato2018noisy, tziortziotis2019randomised,burda2019exploration} model only the value function, so Bayes rule is not even well defined.\footnote{The precise issue is that, even given a prior over value functions, there is no likelihood function. Given an MDP, there is a well specified likelihood of transitioning from state $s$ to another $s'$, but a value function does not specify a probabilistic data-generating model. } The work of \cite{osband2016generalization, osband2017deep} uses stochastic dominance arguments to relate the value function sampling distribution
of RLSVI to a correct posterior in a Bayesian model where the true MDP is randomly drawn. <|MaskedSetence|> It bounds regret on average over MDPs with transition kernels drawn from a particular Dirichlet prior, but one may worry that hard reinforcement learning instances are extremely unlikely under this particular prior. <|MaskedSetence|>
**B**: This gives substantial insight, but the resulting analysis is not entirely satisfying as a robustness guarantee.
**C**:
.
| ABC | CAB | ABC | ABC | Selection 4 |
In this paper, we discuss a forward market implementation of MAMD services via two economic issues. <|MaskedSetence|> The other is the competitive equilibrium, where each member participates rationally in its own interests. <|MaskedSetence|> Furthermore, we prove that the mechanism of this market is in itself capable of leading self-interested consumers to such optimal social welfare. <|MaskedSetence|> | **A**: Thus, we verify theoretically the economic feasibility of MAMD services.
.
**B**: We analyze the optimal social welfare obtained by a social planner who makes decisions on behalf of both the supplier and consumers.
**C**: One is the social welfare maximization problem, where all the market participants are altruistic and cooperative.
| ABC | CBA | CBA | CBA | Selection 2 |
\label{sec.key_challenges}
Although the autonomous driving technology has developed rapidly over the past decade, there are still many challenges. For example, the perception modules cannot perform well in poor weather and/or illumination conditions or in complex urban environments \cite{VanBrummelen2018}. <|MaskedSetence|> Furthermore, the use of current SLAM approaches still remains limited in large-scale experiments, due to its long-term instability \cite{bresson2017simultaneous}. Another important issue is how to fuse AC sensor data to create a more accurate semantic 3D world in a fast and cheap way. <|MaskedSetence|> <|MaskedSetence|>
**B**: Moreover, ``when can people truly accept autonomous driving and autonomous cars?'' is still a good topic for discussion and poses serious ethical issues.
**C**:
\section{Conclusion}.
| ABC | ABC | ABC | CBA | Selection 3 |
A key quantity in the monitoring of the quality of the transport service level is the demand-supply gap (DSG). <|MaskedSetence|> Due to the difficulty of collecting fine-grained ground-truth DSG estimates (i.e. how many trains commuters are forced to miss before boarding a train), we perform validation on the binary DSG detection problem (i.e. <|MaskedSetence|> <|MaskedSetence|> In Table~\ref{table:modelHierarchy} we present Precision, Recall, and Accuracy, for detecting DSG for a family of models, running at the station, line, or network level.. | **A**:
We used 100K ground-truth DSG event labels (positive and negative instances) collected over a period of $8$ months at about $60$ stations, where a DSG event is declared if any passenger is forcibly left behind due to lack of capacity.
**B**: existence during a time period of a DSG event or not).
**C**: The DSG measures the proportion of commuters intending to travel who are unable to board a train because it is full.
| ABC | CBA | CBA | CBA | Selection 3 |
To demonstrate the effectiveness of the proposed algorithms, we consider a network of wearable sensor units comprising at least an inertial sensing chip, a wireless communication module, and a microcontroller, which is attached to a human body during gait. Such measurement systems are used for real-time biofeedback and control of robotic systems and neuroprostheses \cite{neuroControl}. <|MaskedSetence|> <|MaskedSetence|> This challenge was recently addressed using heuristic approaches \cite{stop}. <|MaskedSetence|> | **A**: If the communication load between each sensor and the receiver can be reduced, higher base sampling rates or a larger number of sensors can be used.
**B**: We now apply the proposed ETL methods to this problem.
.
**C**: The rate at which the network can communicate reliably in real time is limited by the number of sensors.
| CAB | CAB | CAB | CAB | Selection 4 |
\bibitem{20} Shakhno, S. M., Gnatyshyn, O. <|MaskedSetence|> On an iterative algorithm of order 1.839... for solving the nonlinear least squares problems. Appl. <|MaskedSetence|> Comp. <|MaskedSetence|> (2005) https://doi.org/10.1016/j.amc.2003.12.025
\bibitem{21} Shakhno, S.: Some numerical methods for nonlinear least squares problems // In: Alefeld, G., Rohn, J., Rump, S.,~ Yamamoto, T. (eds.) Symbolic-algebraic Methods and Verification Methods, pp. 235--243. Springer, Vienna (2001) https://link.springer.com/chapter/10.1007/978-3-7091-6280-4\_22. | **A**: P.
**B**: Math.
**C**: 161, 253--264.
| ABC | BCA | ABC | ABC | Selection 1 |
\section{Conclusions}
\label{sec:5}
A new compact sixth-order accurate finite difference scheme for the two and three-dimensional Helmholtz equation is presented. <|MaskedSetence|> Thus the new scheme also works for problems with very large wave number $K$. It is also shown that the new scheme is uniquely solvable for sufficiently small $Kh$. <|MaskedSetence|> Required symbolic derivation is performed using MAPLE with the help of the mtaylor command for multi-dimensional Taylor series expansion. The resulting system of equations obtained from the difference scheme is solved using the BiCGstab(2) iterative method. The new scheme is tested on some model problems governed by the two and three-dimensional Helmholtz equations. Comparison of the new scheme is done with the standard sixth-order schemes \cite{nabavi2007new,sutmann2007compact}. From the results it is shown that the new scheme is highly accurate for very high wave numbers. <|MaskedSetence|> | **A**: The leading truncation error term of the new scheme does not explicitly depend on the wave number.
**B**: Theoretically, it is proved that the bound of the error norm is explicitly independent of the wave number $K$ for the new scheme.
**C**: This approach can be extended to derive high-order difference schemes for the two and three dimensional problems with variable wave numbers..
| ABC | ABC | ABC | CBA | Selection 3 |
<|MaskedSetence|> Likewise, the ROA of a power system is viewed as a set of operating states such as rotor angles and frequencies able to converge to the stable equilibrium, which corresponds to the solution of the power flow problem \cite{zz, Danwu,dhagash, Molzahn, aolaritei2017distributed, ali2017transversality}, after being subject to a disturbance. <|MaskedSetence|> The estimation of ROA is basically dependent on the construction of a Lyapunov function and its level set (see Fig. <|MaskedSetence|> In practice, constructing an analytic Lyapunov function for a nonlinear system is a challenging task.
. | **A**: The corresponding convergent trajectory is regarded as a stable state trajectory.
**B**: \ref{level}).
**C**:
\section{The ROA of a General Dynamical System} \label{sec:roa}
The ROA of a general dynamical system normally refers to a region where each state can converge to the stable equilibrium point as time goes to infinity.
| ABC | CAB | CAB | CAB | Selection 3 |
<|MaskedSetence|> In the centralized architecture, all the data should be centrally processed where there is a single point of failure for the entire power system, which potentially reduces the reliability of the operation of the microgrid, as well as increasing the computational complexities of the measured data processing \cite{Olivares2014, Brandao2017, Minchala-Avila2016}. In the decentralized system architecture, a set of local controllers are distributed over the DC microgrid, e.g., the load controllers, distributed generation (DG) controllers, and converter controllers. <|MaskedSetence|> However, there are disadvantages to this method, such as the voltage offset and inadequate response time to load variations, which can both lead to the voltage and frequency instabilities in the DC microgrid. <|MaskedSetence|> | **A**: One of the most famous decentralized control methods is the droop control, where a number of investigations are conducted to overcome the deficiencies of the distributed control methods \cite{Iravani2016, Abessi2016, Vandoorn2012}.
.
**B**:
The microgrid control strategies are categorized as centralized, decentralized, and distributed \cite{Parhizi2015, Samad2017, Zohaib2018}.
**C**: The control objectives are achieved without any direct communication between the controllers, which is specifically useful in cases when the direct communication is not feasible or costly to establish.
| ACB | BCA | BCA | BCA | Selection 2 |
<|MaskedSetence|> <|MaskedSetence|> The \emph{orthogonality} can be induced by applying the Gram-Schmidt method, at the cost of altering other properties of the input functions. Two diffusion basis functions without overlapping supports are always orthogonal; the orthogonality of a set of diffusion basis functions with overlapping supports can be achieved with the Gram-Schmidt method at the cost of a (generally) larger support of the orthonormal diffusion basis functions, as a consequence of the linear combination of functions with different support sizes during the orthonormalisation iterations. <|MaskedSetence|> Numerical instabilities are generally associated with the computation of the Laplacian and Hamiltonian eigenbases corresponding to multiple or numerically close eigenvalues.
\begin{figure}[t]
. | **A**:
The \emph{non-negativity} is satisfied by the harmonic and diffusion bases, as a result of the maximum principle for the Laplace and heat diffusion equations.
**B**: The \emph{numerical stability} is generally satisfied by all those basis functions that are computed through the solution of a linear system (e.g., harmonic, Hamiltonian, spectral basis functions, etc.).
**C**: By definition, the compressed manifold modes are local and orthonormal; however, we cannot center these basis functions at a given seed point and easily control their support, as we can do with the diffusion basis functions.
| CAB | ACB | ACB | ACB | Selection 3 |
\textcolor{black}{In this paper, the design and realistic application of a low-memory, real-time \textsc{\textbf{RobustEstimator}} is studied. Utilizing the GPS receiver on a Google Nexus 9, real GPS data are collected and post-processed by injecting time-synchronization attacks to spoof the clock bias and drift of the device. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> }
\vspace{-0.5cm}. | **A**: Future work will focus on developing robust estimators under spoofing attacks for non-stationary GPS receivers, which involve nonlinearities in the GPS measurement model.
**B**: Two types of attacks are introduced, and tested by the designed estimator.
**C**: The estimator successfully detects and estimates the spoofing attacks on each state, and mitigates the spoofing on both types of attack by furnishing the corrected clock states to the user.
| BCA | ACB | BCA | BCA | Selection 1 |
<|MaskedSetence|> Verification and validation of separate applications working together are challenging because it necessitates the specification of the overall system's behavior. <|MaskedSetence|> A better approach and, we dare say, one that might be more acceptable to the millions of programmers who might invest their efforts in the creation of such smart city apps, is to develop one \emph{integrated} application that can be verified and then, separately, decomposed and distributed to the nodes on which it will run. A representative pseudocode is shown in Figure~\ref{fig:code1}. <|MaskedSetence|> In this section, we will discuss challenges of adding timing concepts to the programming language as well as achieving synchronization for a scattered time-sensitive system.
. | **A**: While the pseudocode may seem simple, it highlights important opportunities and challenges that emerge from the very nature of programming a geographically-distributed aggregate of computing resources.
**B**:
The typical approach would be to write application-specific code for each participating node together with code to coordinate the actions of these nodes.
**C**: Testing is likewise challenging.
| CBA | BCA | BCA | BCA | Selection 2 |
<|MaskedSetence|> <|MaskedSetence|> Second, the method(s) described here, at least for networks, require only the computation and spectral analysis of a single matrix rather than the use of Lyapunov-type methods, Linear Matrix Inequalities, or Semi-Definite Programming methods. <|MaskedSetence|> Moreover, if a system (process) can be shown (designed) to be intrinsically stable, there is no need to formally include delays in the system (model). The reason is that delays do not change the qualitative dynamics of the system, and it will have the same asymptotic state whether or not its delays are included. As has been shown, this is not the case for general dynamical networks.
. | **A**: Hence, very large systems can be analyzed using this method under the condition that their Lipschitz matrix can be efficiently computed.
**B**:
\section{Conclusion}\label{sec:8}
The method described in this paper for determining whether a network is stable or can be destabilized by time-delays has a number of advantages over other methods.
**C**: First, one need not consider the system itself but rather the simpler undelayed, and therefore lower-dimensional, version of the system.
| BCA | BCA | BAC | BCA | Selection 2 |
<|MaskedSetence|> The data set contains $19,187$ measurements, gathered from different devices: rain gauges and weather radar. <|MaskedSetence|> The remaining points come from raw radar acquisitions, at first as reflectivity measurements with a range of $400$ km. The frequency of mountains over the whole Ligurian territory affects the quality of radar acquisitions and a pre-processing step is needed to remove ground clutter effects; processed data are then combined with observations gathered from rain gauges, which are more reliable measurements but do not cover the whole region. <|MaskedSetence|> Since the temporal interval is different for each acquisition device, rainfall measurements have been cumulated. In this study, a $30$ minutes cumulative step has been used (240 time samples).
This procedure of integrating gauge and radar data was performed to alleviate the well-known error and uncertainty that characterize radar estimates. Spurious signals may be caused, for example, by radar failure or by shielding of the radar beam by mountain ranges \cite{Harrison2000}. However, various outliers still perturb the data.. | **A**: Besides the $143$ rain gauges by Regione Liguria, another ~$25$ measuring stations by the Genoa municipality are considered.
**B**:
The second event occurred in January 2014.
**C**: The integration of radar data in the interpolation of the precipitation field makes it possible to extend rainfall fields also to areas surrounding Liguria, and therefore to have a clearer picture about the temporal evolution of precipitation events.
| BAC | BAC | BAC | BCA | Selection 3 |
received his Ph.D. degree from EPFL, Switzerland, in 2007. After a journey by bicycle from Switzerland to the Everest base camp in full autonomy, he joined an R\&D group hosted at Strathclyde University focusing on wind turbine control. <|MaskedSetence|> He joined the Department of Signals and Systems at Chalmers University of Technology, G\"{o}teborg in 2013, where he became associate Prof. <|MaskedSetence|> <|MaskedSetence|> at NTNU, Norway and guest Prof. at Chalmers. His main research interests include numerical methods, real-time optimal control, reinforcement learning, and the optimal control of energy-related applications.
\end{IEEEbiography}
. | **A**: in 2017.
**B**: He is now full Prof.
**C**: In 2011, he joined the university of KU Leuven, where his main research focus was on optimal control and fast NMPC for complex mechanical systems.
| CAB | CAB | CBA | CAB | Selection 4 |
This paper is organized as follows. <|MaskedSetence|> \eqref{eq} at integer time is proved to be a time-homogeneous Markov chain as well as exponentially ergodic with a unique invariant measure. In Section 3, we apply the BE method to Eq. <|MaskedSetence|> The time-independent weak error
of the solutions, together with the error between invariant measures, is given in Section 4. <|MaskedSetence|> | **A**: \eqref{eq} and prove that the BE approximation at integer time preserves the exponential ergodicity with a unique numerical invariant measure.
**B**: In Section 2, some notations are introduced and the solution of Eq.
**C**: In Section 5, numerical experiments are presented to verify the theoretical results..
| BAC | BAC | BAC | ABC | Selection 3 |
<|MaskedSetence|> The primary characteristic of event-based controllers is that they can provide performance very similar to classical control approaches while reducing the transmission of information between plant and controller. This feature is important in a growing number of applications in which limiting transmission rates is a concern. <|MaskedSetence|> In a (classical) time-triggering fashion, data transmission between system elements (such as actuator, sensor and plant) occurs periodically, regardless of whether or not changes in the measured output and/or commands require computation of a new control output. In an event-based scenario, the system decides when to update the control output based on a so called \emph{real time triggering condition} on the measured signals. <|MaskedSetence|> | **A**:
.
**B**: {E}vent-based control systems have been an active area of research over the last decade.
**C**: Examples include
battery-operated systems with wireless transmission between plant and controller, which often have limited energy and/or memory supplies, or network control systems with shared wired or wireless communication channels, \cite{hespanha_survey}.
| BCA | BCA | BCA | BCA | Selection 3 |
<|MaskedSetence|> In Section 3 we formulate the estimation and control problem, and in Section 4 we analyze its closed-loop stability. <|MaskedSetence|> <|MaskedSetence|> The second example compares the performance obtained by the simultaneous and independent approaches applied to the regulation of the state of a van der Pol oscillator for two operational conditions. Finally, conclusion and future work is discussed in Section 6.
\section{Preliminaries and setup}
. | **A**: Section 5 discusses two examples to illustrate the concepts presented in this work.
**B**: The first example uses a simple nonlinear model to analyse the consequences of simultaneously solving the estimation and control problems.
**C**:
The rest of the paper is organized as follows: Section 2 introduces the notation, definitions and properties that will be used through the paper.
| CAB | CAB | ACB | CAB | Selection 1 |
<|MaskedSetence|> <|MaskedSetence|> In order to achieve this objective, we defined the set of sparse functional Tucker tensors and used an existing parallel implementation of Tucker decomposition to construct an approximation in this set. <|MaskedSetence|> The entire compression scheme was tested on datasets obtained from high fidelity combustion modeling simulations. For a small loss of accuracy, the proposed strategy results in compression ratios of up to $10^3$--$10^{5}$ for a third-order and a fourth-order dataset, respectively.. | **A**: The key idea is to find a sufficiently accurate representation of data in the set of functional Tucker tensors with complexity smaller by orders of magnitude as compared to the size of the dataset.
**B**: \section{Conclusion}
\label{conclusion}
We presented a novel technique to compress large volumes of data using functional sparse Tucker decomposition.
**C**: The singular vectors are approximated as functions represented on suitable basis using least squares with sparse regularization.
| BAC | BCA | BAC | BAC | Selection 3 |
We investigate randomized threshold algorithms that accept an item as long as its size exceeds the threshold. We derive two optimal threshold distributions: the first is 0.4324-competitive relative to the optimal offline integral packing, and the second is 0.4285-competitive relative to the optimal offline fractional packing. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> We also show that any randomized algorithm for this problem cannot be more than 0.4605-competitive. This is the first upper bound strictly less than 0.5, which reflects the intrinsic challenge of the knapsack constraint.
. | **A**: We also consider the generalization to multiple knapsacks, where an arriving item has a different size in each knapsack and must be placed in at most one knapsack.
**B**: We derive a 0.2142-competitive algorithm for this problem.
**C**: Both results require optimizing the cumulative distribution function of the random threshold, which are challenging infinite-dimensional optimization problems.
| CAB | CAB | ACB | CAB | Selection 4 |
To confirm the importance of this paper, it is necessary to show that our method performs well when the computationally simple accuracy method does not. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> For the \verb+Power+ dataset, the accuracy method fails 44.4\% of the time, but our estimated stability criterion succeeds in a quarter of these cases. While this behavior is not universal, it indicates that our method can be a crucial help when standard tools fail.
. | **A**: As mentioned when detailing the construction of these preconditioners, the accuracy criterion would never choose the `use no preconditioner' option over one of the geometric preconditioner.
**B**: Of these times that the accuracy method fails, the estimated stability criterion succeeds exactly half of the time.
**C**: If we were just looking at the purely block-diagonal geometric preconditioner versus the `use no preconditioner' option, the accuracy criterion would result in a poor preconditioner (a higher number of iterations than necessary) exactly a third of the time with the \verb+Concrete+ dataset.
| ACB | CAB | ACB | ACB | Selection 3 |
\subsection{Card ID De-identification}
No details were publicly given about how the cardId field was generated, but it is not the Myki Card Number, nor does it appear to be a random number. Our analysis of the cardId and its distribution indicated some form of mapping was applied. <|MaskedSetence|> Such a low maximum value is in itself unusual, given that there are over 15 million cards in the dataset, and the maximum Myki Card Number far exceeds the maximum value in the dataset. <|MaskedSetence|> <|MaskedSetence|> It could be that the cardId of 1 is an error, but that would not explain why the first fifteen thousand possible IDs are not used. This unexpectedly large gap between cardIds re-occurs, as shown in Table~\ref{tab:cardgaps}.
. | **A**: For example, there are no cardIds between 2 and 15746.
**B**: The cardId is 8 digits, but the range of values in the dataset is only from 1 to 24451922.
**C**: Furthermore, the cardIds are not uniformly distributed throughout the available space, particularly towards the lower end of the range.
| BCA | BCA | CBA | BCA | Selection 1 |
There are two possible outcomes for the agent: winning or losing. If the game was won, the return was defined as discussed above. <|MaskedSetence|> <|MaskedSetence|> We preferred to teach the agent a peaceful course of the game. Losing in space colonization was punished less than being destroyed because it was interpreted as the player being sufficiently strong to withstand the opponents' attacks and only lacking the time to reach the remote star. <|MaskedSetence|> For the state $s$ the return was calculated as $G_s=-(N*2-t)$ or $G_s=-(1000-N+t)$ respectively.
The action space for the trained agent was composed of all the Knowledge Items of the multi-expert knowledge base, so that every KI was considered as an action $a\in A$. In every situation when the inference engine encountered a conflict, the learned policy was applied to the conflict set and one selected KI was executed. After every episode, the state-action values were calculated, and the policy was improved based on the new values for the next episode. . | **A**: For the lost games, we also distinguished between the way they were lost: either the opponent reached Alpha Centauri first, or the player's nation was destroyed in war.
**B**: If the game was lost, however, the return had to indicate that the result of the episode was not desired.
**C**: Therefore, for the first case, the episode's return was given as $G=-N*2$, while for the latter, the return was set as $G=-(1000-N)$ (the earlier the player was destroyed, the lower was the return).
| BAC | BAC | ACB | BAC | Selection 1 |
Investigating the level of dependence between variables is still a thorny issue. Such an issue is usually tackled by investigating the properties of mutual information or multiinformation. We believe that considering multiinformation density instead of multiinformation (i.e., the random variable instead of its expectation) could contribute to a better characterization of dependence. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> Its generalization to more than two variables could yield a new tool to investigate mutual independence.
\par
. | **A**: For instance, the variance of $i_d$, given in Equation~(\ref{eq:idep:var}), could be considered, in addition to or instead of multiinformation, to quantify the presence or absence of dependence.
**B**: In the case of two variables, an estimator of this quantity, expressed in Equation~(\ref{eq:2:var}), is readily available \gcite{Jupp-1980}.
**C**: An important point to advance in this direction would be to provide estimators for the quantities obtained here.
| ACB | CAB | ACB | ACB | Selection 3 |
To test our algorithm we first use the standard \textit{Scikit} function \textit{make\_blobs} to generate gaussian clustered data. We test the algorithm on 4 datasets of 100 5-dimensional feature vectors with different levels of noise and clustering, represented numerically by increasing standard deviations in datapoints from the cluster centroids. We also ran a classical K-means algorithm on the same data for the sake of comparison. Figure 1 shows the clustering of both algorithms on one dataset, while Figure 2 shows accuracy vs. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> | **A**: It is evident that the quantum algorithm performs very similarly to the classical analogue, only suffering a slight performance dropoff for highly noisy data.
**B**: All simulations were run with \textit{Qiskit Aer} on a MacBook Pro.
.
**C**: standard deviation values for each algorithm.
| CAB | BAC | CAB | CAB | Selection 3 |
For each model (regardless of which Dropout probability it was trained with), it is clearly visible that a higher $p$ during generation results in a larger difference between outputs. <|MaskedSetence|> The generation series was repeated 3 times. With a higher dropout rate, the images differ more between each generation. <|MaskedSetence|> <|MaskedSetence|> | **A**: Also conspicuous is that with a higher dropout rate, the images tend to have sharper edges.
**B**: This is also visible in Figure \ref{model8} where one noise vector was
used to generate images with different Dropout rates starting at 0 on the left and going up to 0.8 on the right.
**C**: This leads to the conclusion that details are getting lost.
.
| BAC | ACB | BAC | BAC | Selection 4 |
<|MaskedSetence|> <|MaskedSetence|> A standard decision tree classifier was used and the parameters were tuned. However, the only parameter that showed a noticeable difference while tuning was the \emph{min\_samples\_split} parameter. <|MaskedSetence|> As state-of-the-art models reached accuracies greater than 0.6, it was decided to try out a neural network instead.
. | **A**:
\subsubsection{Decision Tree}
A decision tree was chosen at first due to a combination of it requiring little to no effort in data preparation as well as its reputation for working well in almost all scenarios.
**B**: The first attempt was to implement a decision tree using Sci-Kit Learn.
**C**: With the parameter set to 40, the overall accuracy was only a mere 0.309.
| ACB | ABC | ABC | ABC | Selection 2 |
We conduct our experiments in two steps. In the first step, we train the Gumbel feature selection matrix for a fixed number of epochs, which gives us the feature selection matrix; in the second step, we use the trained Gumbel feature selection matrix, which retains the prominent and dominating features, to classify the nodes and verify the accuracy of the model.
Table \ref{tab:Benchmark performance for node classification} shows the benchmark performance for node classification on three frequently used data sets. <|MaskedSetence|> The listed methods use the same train/valid/test data split. <|MaskedSetence|> <|MaskedSetence|> | **A**: .
**B**: Cora, Citeseer, and Pubmed are evaluated by classification accuracy.
**C**: Table \ref{tab:feature selection accuracy} gives information of the accuracy for the feature selection.
| BCA | ACB | BCA | BCA | Selection 1 |
<|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> The ReoptWOP and ReoptWP policies operate online. Because operational limitations typically require decisions to be made in a matter of minutes, this precludes the possibility of allocating more computing time to solving MILPs on the fly. In contrast, the Survey Agent may be trained offline for large periods of time without negative service consequences. Then, when it is deployed, actions are selected in a matter of seconds. Thus, as problem size grows, DRL yields better performance in typical operating conditions.
. | **A**: Significant increases in computing time may lead to better policy performance, but this is not practical.
**B**: Because a larger number of expected requests leads to appreciably larger MILPs, the ReoptWOP and ReoptWP policies struggle to select good actions within the allotted computing time of 3.6 minutes per MILP.
**C**:
Increasing the problem size highlights an important difference between the Survey Agent and the benchmark policies.
| CBA | CBA | CBA | CBA | Selection 4 |
Let $\delta$ be known as the perturbation variable. <|MaskedSetence|> The streaming site can use methods like ``targeted advertising'' to bias the user choices (i.e. <|MaskedSetence|> <|MaskedSetence|> The following rules regarding $\delta$ are followed:
\begin{itemize}
. | **A**: This variable is used to model the effect of recommendation systems.
**B**: Therefore $\delta$ is used to perturb the probability of transition from a particular state.
**C**: Suggesting they pick another movie, say a ``Documentary'', which increases the chance they pick that movie).
| BCA | ACB | ACB | ACB | Selection 2 |
<|MaskedSetence|> The quality of energy transfer suffers from different issues. <|MaskedSetence|> Thus the energy modeling should take these challenges into account and keep some resources allocated. On the other hand, sensor nodes' lifetime depends on the battery or the self energy source. We could model the recharging option using solar energy, but it is very likely that there will be times when solar energy is not available. Then the smart grid cannot be monitored due to inactive sensors. Thus the best option would be a mix of solar and RF energy transfer.
Though hybrid energy transfer is a feasible option, the implementation is complex due to the hardware requirement and cost. <|MaskedSetence|> The energy used by a sensor node consists of energy consumed by receiving, transmitting, listening for messages on the radio channel, sampling data and sleeping \cite{polastre2004versatile}.
**B**: For example, interference and noise can make the energy less transmission from the actual.
**C**: Thus we are considering only RF energy transmission throughout our article.
| ABC | ABC | CAB | ABC | Selection 1 |
<|MaskedSetence|> <|MaskedSetence|> They focus instead on how private signal precisions and the evolution of the state, which changes over time, affect learning. Indeed, they argue that network structure matters much less than the state and information structures in their setting. By contrast, we show that in a standard fixed-state environment learning can be quite efficient on some networks and highly confounded on others. <|MaskedSetence|> | **A**: Variations in the network structure can trace out a wide range of learning efficiencies, including nearly total information loss, which highlights the power of the confounding..
**B**: Related obstructions are also present in \cite*{dasaratha2020learning}, which studies learning failures in network structures similar to our generations networks but has no formal results about how learning differs across networks.
**C**: They note that relaxing this restriction to allow confounds would
lead to ``distributional complications''; our framework and results
resolve these complications and study the implications of the confounds.
| CBA | CBA | BAC | CBA | Selection 1 |
<|MaskedSetence|> Some researchers focus on applying geometry-based approaches to display visual information for head-mounted display (HMD). For example, Lauber and Butz \cite{ref19} propose a layout management technique for HMD in cars. <|MaskedSetence|> Orlosky et al. \cite{ref52} evaluate user tendencies through a see-through HMD to improve view management algorithms. <|MaskedSetence|> | **A**: A user tendency is found to arrange the label locations near the center of the viewing field.
.
**B**: The annotations are rearranged based on the driver’s head rotations to avoid the label occlusions in the driver’s view.
**C**:
Most prior works under the geometry-based approach formulate the view management problem as an optimization problem and propose different algorithms to obtain annotation positions in each frame.
| CBA | BAC | CBA | CBA | Selection 3 |
\subsection{Proposed Framework}
The output of the U-net model with pretrained ResNet weights is fast and accurate. <|MaskedSetence|> <|MaskedSetence|> The encoder narrows the information of the image down to a latent space. Then the decoder, taking the latent space as input, regenerates the image. Here the model will be generating the mask. The mask tells us where the Pneumothorax is, if it exists. <|MaskedSetence|> Real-world images are not of the same size, and some are corrupted. In preprocessing we have to resize the image to 256 x 256. Then the image is put into the correct color range. The image will be normalized. For training we filtered out the corrupted images and applied random crops to remove unwanted boundary, as X-ray images usually have a lot of boundary noise which is not required for our model. The output mask is a black-and-white image in which the white area indicates the position of the problem. We then map this mask image to our original image to get the exact position on the X-ray image. The encoder part downscales the images and learns a latent space with the help of which we generate the masks. This amounts to mapping an image to a mask and training our model to learn to generalize the mapping. When the model successfully learns to generalize it, we can feed it any X-ray image and get the mask for it. The mask will indicate the position of the part which is potentially causing Pneumothorax. If the mask is blank, it means there is no problem and the patient is safe. Prediction should be very fast, because if it takes too long it cannot help the doctor in the needed time. U-net is a pretty small network and also very effective. We use ResNet as the encoder and take its weights from a pretrained network. Using weights from a pretrained network helps to boost the result and saves us time. Skip connections of U-net help to retain useful features learned in the encoder layers while upsampling in the decoder. ResNet's skip connections also help fight the vanishing gradient problem.
That’s why we decided to try the pretrained ResNet model as the backbone. It improves the result.
. | **A**: U-net is consist of an encoder and decoder.
**B**: Image preprocessing is an important part here.
**C**: This helps doctors to start the treatment at the earliest.
| CAB | CAB | ACB | CAB | Selection 2 |
\section{Introduction}
Quantum information theory is an advanced approach and is basically an amalgam of computer science, physics, and mathematics. It uses the principles of quantum mechanics. In quantum mechanics, the state of the system is expressed by a wave function. The fundamental laws of quantum mechanics are widely adopted in computation as well as in communication. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> The novel concepts of process algebra in the light of formal methods have been explained further with a brief overview of previous researches.. | **A**: Many of the principles of quantum mechanics, such as quantum no-cloning theorem, uncertainty principle, and entanglement provide quantum communication protocols the provable security.
**B**: We first introduce the key aspects of quantum mechanics in computer science such as quantum computation and quantum communication.
**C**: As a comparatively modern computational model, the quantum computing\cite{42}\cite{43} brings the dawn of solving the so-called NP problem because of the strong parallel computation power of quantum computing.
| CAB | CAB | BAC | CAB | Selection 4 |
In order to ease the presentation, we restrict the discussion to finite graphs in this section, stressing that all the intuitions built in this easier setting will carry over in the next sections. In this model for which a program is abstracted as a finite graph, computation is represented as the formation of paths in the graph. <|MaskedSetence|> following a path to travel through the graph -- a sequence of instructions. The dynamic process of computation itself is therefore represented as the operation of \emph{execution} $\mathrm{Ex}(G)$ of a graph $G$, which is the set of maximal paths in $G$. <|MaskedSetence|> From the point of view of logic, this operation of execution computes the normal form of a proof, i.e. <|MaskedSetence|> | **A**: In the case of Turing machines the graph represents the transition function, and the process of computation corresponds to the iteration of this transition function, i.e.
**B**: accounts for the cut-elimination procedure.
.
**C**: This alone describes some kind of abstract, untyped, model of computation, which one can structure by defining types depending on how graphs behave.
| ACB | ACB | ABC | ACB | Selection 2 |
\section{Limitations of Balance and Primal-dual for Stochastic Usage Durations} \label{sec:challenges}
Recall that in case of non-reusable resources, the Balance algorithm combined with primal-dual analysis leads to the best possible $(1-1/e)$ guarantee in a variety of settings. <|MaskedSetence|> These examples also illustrate the ability of our new algorithm and analysis approach to address uncertainty in reusability. <|MaskedSetence|> %We start with the case of two point usage distribution. <|MaskedSetence|> | **A**:
.
**B**: Through simple examples we now demonstrate some of the challenges with applying these ideas to the more general case of reusable resources.
**C**: % and the LP free analysis approach.
| BCA | ABC | BCA | BCA | Selection 4 |
In some cases adversarial examples are crafted with the intention of deceiving a classifier, potentially posing threats to security and privacy. <|MaskedSetence|> 2017). In some cases, adversarial examples are so similar to the original examples that they are imperceptible to the human eye (e.g., Deng et al. 2009). <|MaskedSetence|> <|MaskedSetence|> This suggests that the human visual system may in some way learn the noise more effectively, or be less vulnerable to adversarial perturbations.
. | **A**: 2018).
**B**: This is often the case when a small amount of bounded noise is applied over the entire input (Tomsett et al.
**C**: However, naturally occurring variation such as sunlight damage or graffiti modification can fool traffic signs classifiers (Evtimov et al.
| CBA | CBA | ABC | CBA | Selection 1 |
\\
This paper proposes two new distances between arbitrary crystal lattices that are not restricted to the same crystal system or a Bravais type. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> Section 3 discusses past approaches to similarities of crystal structures.
Section 4 introduces three distances based on the Voronoi cell of an arbitrary lattice. Section 5 shows experimental results on the T2 dataset of simulated and real crystal structures that consists of nano-porous crystals structures that are all based on a single T2 molecule.\cite{biblio:linjiangandy}. | **A**: Section 2 defines necessary concepts and states the equivalence and distance problems for crystals and lattices.
**B**: Hence a similarity distance should be well-defined on the whole space of lattices.
**C**: All lattices can be continuously deformed into each other.
| CBA | CBA | ACB | CBA | Selection 1 |
To conclude, we considered the problem of collaborative binary hypothesis testing with no prior knowledge of the joint distributions available to the observers. We presented different algorithms to solve the problem, with emphasis on information exchange between the observers. The algorithms were analyzed and compared from different perspectives.
The binary hypothesis testing problem with two observers and asymmetric information can also be studied as a co-operative game with two observers.
**B**: We compared the performance of the algorithms by comparing the rate of decay of the probability of error and proved it is a function of the information exchanged.
**C**: We proved that the probability space constructed at the observers were a function of information patterns at the observers.
| BAC | CBA | CBA | CBA | Selection 2 |
OCR is computationally expensive, and its accuracy and processing time highly depend upon the quality of the image. Our corpus has varied-quality PDFs, with some being very old, handwritten and misaligned. It took approximately four days of continuous running on a single Google Cloud K80 Tesla GPU. The average time taken for OCR and further preprocessing (described below) was around 20 seconds/page. In general, there are many mathematical formulas in the research papers and theses. <|MaskedSetence|> <|MaskedSetence|> For removing them, we designed regex patterns for common styles of in-text citations used in the literature. <|MaskedSetence|> | **A**: We devised several regular expression patterns to remove them from the corpus.
**B**: Further, there were a lot of references, both in-text and at the references at the end of the main text of the papers.
**C**: We noticed that last 2-3 pages generally contain references, and hence we removed last two pages from each corpus as well.
.
| BCA | ABC | ABC | ABC | Selection 3 |
We will here only evaluate the architecture using max pooling, which is structurally similar to the popular multi-scale OverFeat detector \cite{SerEigZha-arXiv2013}. <|MaskedSetence|> <|MaskedSetence|> When processing regions in the scale channels corresponding to only a single region in the input image, new structures can appear (or disappear) in this region for a rescaled version of the original image. With a linear approach this might be expected to not cause problems.
For a deep neural network, however, there is no guarantee that there cannot be strong erroneous responses for e.g. a partial view of a zoomed in object. <|MaskedSetence|>
. | **A**: We are, here, interested in studying the effects this has on generalisation in the deep learning context.
**B**: For this scale channel network to support invariance, it is not enough that boundary effects resulting from using a finite number of scale channels are mitigated.
**C**: This network will be referred to as the SWMax network.
| CBA | CBA | ACB | CBA | Selection 1 |
\section{Pointer Softmax}
In NMT many of the words in the corpus are generalised by the '<unk>' (unknown) representation so as to reduce the vocabulary size to scale and discard very less frequent words from the vocabulary. However in many cases this depletes the performance of the model. In \cite{pointer_softmax}, the architecture used is plain encoder-decoder\cite{seq2seq} with attention\cite{bahadanau} and there are two softmax layers to predict the next word in the conditional language model; one to predict the location of the word in the model and the second to predict the word in the shortlist vocabulary. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> | **A**: The decision of using which softmax is made by an Multi-Layered Perceptron or Feed Forward Neural Net.
**B**: The intuition behind this mechanism is the human tendency to point at objects when not knowing their name.
**C**:
.
| ABC | ABC | ABC | ABC | Selection 4 |
We can expect unlabeling techniques to grow as the semi-supervised and unsupervised methods get better, since any of those can be used once a sample has had its label removed. One could envision utilizing algorithms such as MixMatch \cite{MixMatch} or Unsupervised Data Augmentation \cite{UDA} on unlabeled samples.
Similarly, the label fixing strategies could benefit from unsupervised representation learning to learn prototypes that makes it easier to discriminate hard samples and incorrect samples. <|MaskedSetence|> It would be expected however that those approaches become less accurate as the number of classes grows or the classes get more ambiguous. <|MaskedSetence|> <|MaskedSetence|> | **A**: Deep self-learning \cite{SelfLearning} is shown to scale on Clothing1M and Food-101N.
**B**: Some prior knowledge or assumptions about the classes could be used explicitly by the model.
**C**: Iterative Noise Filtering \cite{IterativeNoiseFiltering} in its entropy loss assumes that all the classes are balanced in the dataset and in each batch..
| BCA | ABC | ABC | ABC | Selection 2 |
<|MaskedSetence|> In addition, we vary the number of filters and neurons in the classification and localization networks, to keep the number of learnable parameters approximately constant. The network architectures and the number of learnable parameters are presented in \Cref{parametertable}. <|MaskedSetence|> Note that the first layers in STN-SL1's localization and classification networks shares parameters, so STN-SL1 uses somewhat fewer parameters than STN-DL1. <|MaskedSetence|> | **A**:
We use the same classification network for (R), (T), and (S), but vary the localization network.
**B**:
.
**C**: C($N$) denotes a convolutional layer with $N$ filters, and F($N$) denotes a fully connected layer with $N$ neurons.
| ACB | ABC | ACB | ACB | Selection 4 |
<|MaskedSetence|> %more general results such as \cite{CohGeiWei-NIPS2019} which characterises all equivariant (covariant) maps between homogeneous spaces using the theory of fibers and fields.
Our contribution is to present an alternative proof based on elementary analysis for the special case of purely spatial transformations of CNN feature maps (as opposed to more general transformations that might mix information between the different feature channels). <|MaskedSetence|>
We also provide an analysis of the general multi-layer case, without relying on any covariance assumptions about the individual layers. <|MaskedSetence|> | **A**: We do not claim much mathematical novelty of these facts, which are in some sense intuitive, and, in the single-layer case, have some parallels with the work in \cite{cohen2016group} and \cite{CohGeiWei-NIPS2019}.
**B**: .
**C**: Since we only consider spatial transformations, we can give a more direct proof.
| ACB | ACB | ACB | ACB | Selection 3 |
<|MaskedSetence|> There is a great need to condense many of those explanations into comprehensive frameworks for machine learning practitioners. Because of that, numerous technical solutions were born that aim to unify the programming language for model analysis \citep{biecek-dalex, alber-innvestigate, greenwell-vip, arya-aix360}. They calculate various instance and model explanations, which help understand the model's predictions next to its overall complex behavior. <|MaskedSetence|> <|MaskedSetence|> | **A**: It is common practice to produce visualizations of these explanations as it might be more straightforward to interpret plots than raw numbers.
**B**: Despite the unquestionable usefulness of the conventional XIML frameworks, they have a high entry threshold that requires programming proficiency and technical knowledge \citep{bhatt-xml-stakeholders}..
**C**:
\paragraph{Practice.} Focusing on overcoming the opacity in black-box machine learning has led to the development of various model-agnostic explanations \citep{friedman-gbm-pdp, ribeiro-lime, lundberg-shap, lei-loco, fisher-vi, apley-ale}.
| CAB | CAB | CAB | CAB | Selection 4 |
\section{Discussion}
\subsection{Integrity}
Previous methods often lack integrity. As mentioned in Sec.\ref{section:pipeline}, DFL consists of three main phases, extraction, training, and conversion. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> It is noteworthy that DFL is not a simple combination of current state-of-the-art methods. Instead, most efficient tools are developed by ourselves according to users' requirements.. | **A**: Thanks to the long development progress, DFL has become the most mature face-swapping system in the world.
**B**: For example, we provide several kinds of face segmentation methods.
**C**: Each phase plays a different role and has various kinds of alternative techniques.
| CAB | CAB | CAB | CAB | Selection 4 |
The fastest way to buy on the market is through a \emph{buy market order}. <|MaskedSetence|> <|MaskedSetence|> A more careful investor would use
\emph{limit buy orders}, orders to buy a security at no more than a specific price. Buy market orders are not frequent in normal transactions and are typically used by investors that need a fast execution. <|MaskedSetence|> Our idea is to use this pattern, along with other information about volume and price, to detect when a pump and dump scheme starts.. | **A**: Just like the members of pump and dump groups in action.
**B**: A buy market order looks up the order book and fills all the pending asks until the requested amount of currency is reached.
**C**: Although a market order is completed almost instantly, the price difference between the first and the last ask needed
to fill the order can be very high, especially in markets with low liquidity, and so the price can rise considerably.
| BCA | BCA | ACB | BCA | Selection 2 |
<|MaskedSetence|> <|MaskedSetence|> M. Salem, "Gate-variants of Gated Recurrent Unit (GRU) neural networks," 2017 IEEE 60th International Midwest Symposium on Circuits and Systems (MWSCAS), Boston, MA, 2017, pp. <|MaskedSetence|> and Hinton, G. (2017). ImageNet classification with deep convolutional neural networks. Communications of the ACM, 60(6), pp.84-90.
\bibitem{b4} Brendan McMahan and Daniel Ramage, “Federated Learning: Collaborative Machine Learning without Centralized Training Data”. | **A**: \bibitem{b2} R.
**B**: 1597-1600.
\bibitem{b3} Krizhevsky, A., Sutskever, I.
**C**: Dey and F.
| ACB | BAC | ACB | ACB | Selection 3 |
Finally, as suggested by Wu et al., we can cascade additional difference image denoising networks to further improve the output. <|MaskedSetence|> <|MaskedSetence|> This re-introduces the high frequency texture inherent to CT images that is removed throughout the denoising process. This result is shown in Figure \ref{blend}. Figure \ref{blend}B shows the result of adding an additional cascade to A. Analysing this result shows that our two-level cascade architecture has already reached near optimal denoising because the 3-level cascade result shows negligible improvement. This is preferred in contrast to the architecture used in \cite{Wu2017ACC} which requires many additional cascades for comparable performance. <|MaskedSetence|>
**B**: generated blended images by calculating a weighted average using the input LDCT image and the predicted image using 0.3 and 0.7 weights respectively.
**C**: In addition, individual networks in our two-network cascade are less complex than their counter part in \cite{Wu2017ACC} (CNN10).
.
| ABC | ABC | ABC | BCA | Selection 1 |
To describe the forms of discrimination, we refer to self-identity, industry practice, and history. In self-identity, moral and social importance plays a crucial role in how people feel about themselves, and it plays a strong role in determining people’s intentions, attitudes and behaviors [7]. Consequently, a poorly developed self-concept could bring about low self-esteem, level of confidence, purpose and reason to live, and level of motivation. <|MaskedSetence|> <|MaskedSetence|> In industry practice, it is common for companies to hire the person that they think is best for the position. With neurotechnology providing its users with augmentations, a complaint regarding this is that this provides those people with an unfair advantage that will allow them to secure jobs more easily, and that those that are augmented may view those that are not augmented as inferior. This unfair advantage can lead to discrimination within industries through employment.
\\ \\
The unfair advantage from augmentations can have serious implications in the workplace especially once employers, administrators and decision-makers realize that those with augmentation perform better than those without them [10]. The realization will likely lead to them having a preference for those with augmentations and a bias against those that do not have any augmentations, leading to things such as them inquiring beforehand (or on the job for current employees) whether an applicant has undergone an augmentation or utilizes neurotechnology, and making decisions simply based on this information along with selected other pieces of information from the applicant (or employee). This can lead to prejudice, resulting in non-augmented people being discriminated against, creating justice and equity issues. Note that this type of discrimination is not limited to employment settings, and extends as far as in areas such as families and academic institutions. Between children, parents will give preference to the child that performs best academically and professionally. Similarly, with academic institutions, the institutions will prefer and reward those that perform best academically, and treat those that perform best academically better than those that perform worse regardless of whether they have undergone any sort of augmentation or not. <|MaskedSetence|> As a result, in the long-term and possibly even in the short-term, the implications from the unfair advantage from augmentations can lead to a collapse in the social process of education, care, interactions, relationships and more as impartiality, open-mindedness, nondiscrimination, acceptance, and unbiasedness would be minimized or lost completely in some groups and societies. . 
| **A**: We know this as in many academic institutions, if students do not perform well enough academically, they are either removed from their program, suspended or both, and those that perform well academically are rewarded awards, research and other opportunities, and scholarships.
**B**: Within industries, the discrimination can take place as early as the job application process or as late as while the person is employed with the company.
**C**: With groups and communities determining collectively how one is treated by others, poorly formed self-concept could make one subject to discrimination.
| CBA | CBA | CBA | CAB | Selection 2 |
\bibitem{BBCGPSTV20}
Bardet M., Bros M., Cabarcas D., Gaborit P., Perlner R., Smith-Tone D., Tillich J.P, Verbel J.: Improvements of algebraic attacks for solving the rank decoding and MinRank problems. In International Conference on the Theory and Application of Cryptology and Information Security, 2020.
\bibitem{Bet09} Bettale, L., Faug\`{e}re, J. <|MaskedSetence|> and Perret, L.: hybrid approach for solving multivariate systems over finite fields. <|MaskedSetence|> <|MaskedSetence|> Crypt., vol. 3, pp. 177--197 (2009).. | **A**: Math.
**B**: J.
**C**: C.
| CBA | CBA | ACB | CBA | Selection 2 |