arxiv_id (string, len 10) | published (string, len 20) | titles (string, 9–243 chars) | authors (sequence, 1–389 items) | abstract (string, 96–3.09k chars) | categories (sequence, 1–10 items) | selected (bool, 2 classes)
---|---|---|---|---|---|---|
2305.04658 | 2023-05-08T12:21:24Z | CSGCL: Community-Strength-Enhanced Graph Contrastive Learning | [
"Han Chen",
"Ziwen Zhao",
"Yuhua Li",
"Yixiong Zou",
"Ruixuan Li",
"Rui Zhang"
] | Graph Contrastive Learning (GCL) is an effective way to learn generalized
graph representations in a self-supervised manner, and has grown rapidly in
recent years. However, the underlying community semantics has not been well
explored by most previous GCL methods. Research that attempts to leverage
communities in GCL regards them as having the same influence on the graph,
leading to extra representation errors. To tackle this issue, we define
"community strength" to measure the difference in influence among
communities. Under this premise, we propose a Community-Strength-enhanced Graph
Contrastive Learning (CSGCL) framework to preserve community strength
throughout the learning process. Firstly, we present two novel graph
augmentation methods, Communal Attribute Voting (CAV) and Communal Edge
Dropping (CED), where the perturbations of node attributes and edges are guided
by community strength. Secondly, we propose a dynamic "Team-up" contrastive
learning scheme, where community strength is used to progressively fine-tune
the contrastive objective. We report extensive experimental results on three
downstream tasks: node classification, node clustering, and link prediction.
CSGCL achieves state-of-the-art performance compared with other GCL methods,
validating that community strength brings effectiveness and generality to graph
representations. Our code is available at
https://github.com/HanChen-HUST/CSGCL. | [
"cs.SI",
"cs.AI",
"cs.LG"
] | false |
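
As an illustration of the community-guided augmentation idea above, here is a minimal sketch of community-strength-guided edge dropping in the spirit of CED, assuming drop probabilities scale inversely with community strength; the function name, weighting rule, and data layout are assumptions, not the authors' released code:

```python
import numpy as np

def communal_edge_dropping(edge_index, edge_strength, base_drop=0.3, rng=None):
    """Drop each edge with a probability scaled down by its community strength.

    edge_index: (2, E) array of node pairs; edge_strength: (E,) values in [0, 1],
    where higher values mean the edge belongs to a stronger community.
    """
    rng = rng or np.random.default_rng()
    # Edges in stronger communities are perturbed less often.
    drop_prob = base_drop * (1.0 - edge_strength)
    keep = rng.random(edge_index.shape[1]) >= drop_prob
    return edge_index[:, keep]

# Example: the last two edges sit in a strong community and survive more often.
edges = np.array([[0, 1, 2, 3], [1, 2, 3, 0]])
strength = np.array([0.1, 0.2, 0.9, 0.95])
print(communal_edge_dropping(edges, strength))
```
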
2305.04692 | 2023-05-08T13:22:16Z | Anticipatory Planning: Improving Long-Lived Planning by Estimating
Expected Cost of Future Tasks | [
"Roshan Dhakal",
"Md Ridwan Hossain Talukder",
"Gregory J. Stein"
] | We consider a service robot in a household environment given a sequence of
high-level tasks one at a time. Most existing task planners, lacking knowledge
of what they may be asked to do next, solve each task in isolation and so may
unwittingly introduce side effects that make subsequent tasks more costly. In
order to reduce the overall cost of completing all tasks, we consider that the
robot must anticipate the impact its actions could have on future tasks. Thus,
we propose anticipatory planning: an approach in which estimates of the
expected future cost, from a graph neural network, augment model-based task
planning. Our approach guides the robot towards behaviors that encourage
preparation and organization, reducing overall costs in long-lived planning
scenarios. We evaluate our method in blockworld environments and show that our
approach reduces overall planning costs by 5% compared to planning without
anticipation. Additionally, if given an opportunity to prepare the environment
in advance (a special case of anticipatory planning), our planner reduces
overall costs by 11%. | [
"cs.RO",
"cs.AI",
"cs.LG"
] | false |
2305.04750 | 2023-05-08T14:49:02Z | Sense, Imagine, Act: Multimodal Perception Improves Model-Based
Reinforcement Learning for Head-to-Head Autonomous Racing | [
"Elena Shrestha",
"Chetan Reddy",
"Hanxi Wan",
"Yulun Zhuang",
"Ram Vasudevan"
] | Model-based reinforcement learning (MBRL) techniques have recently yielded
promising results for real-world autonomous racing using high-dimensional
observations. MBRL agents, such as Dreamer, solve long-horizon tasks by
building a world model and planning actions by latent imagination. This
approach involves explicitly learning a model of the system dynamics and using
it to learn the optimal policy for continuous control over multiple timesteps.
As a result, MBRL agents may converge to sub-optimal policies if the world
model is inaccurate. To improve state estimation for autonomous racing, this
paper proposes a self-supervised sensor fusion technique that combines
egocentric LiDAR and RGB camera observations collected from the F1TENTH Gym.
The zero-shot performance of MBRL agents is empirically evaluated on unseen
tracks and against a dynamic obstacle. This paper illustrates that multimodal
perception improves the robustness of the world model without requiring additional
training data. The resulting multimodal Dreamer agent safely avoided collisions
and won the most races compared to other tested baselines in zero-shot
head-to-head autonomous racing. | [
"cs.RO",
"cs.AI",
"cs.LG"
] | false |
2305.04819 | 2023-05-08T16:20:03Z | Local Optimization Achieves Global Optimality in Multi-Agent
Reinforcement Learning | [
"Yulai Zhao",
"Zhuoran Yang",
"Zhaoran Wang",
"Jason D. Lee"
] | Policy optimization methods with function approximation are widely used in
multi-agent reinforcement learning. However, it remains elusive how to design
such algorithms with statistical guarantees. Leveraging a multi-agent
performance difference lemma that characterizes the landscape of multi-agent
policy optimization, we find that the localized action value function serves as
an ideal descent direction for each local policy. Motivated by this observation,
we present a multi-agent PPO algorithm in which the local policy of each agent
is updated similarly to vanilla PPO. We prove that with standard regularity
conditions on the Markov game and problem-dependent quantities, our algorithm
converges to the globally optimal policy at a sublinear rate. We extend our
algorithm to the off-policy setting and introduce pessimism to policy
evaluation, which aligns with experiments. To our knowledge, this is the first
provably convergent multi-agent PPO algorithm in cooperative Markov games. | [
"cs.LG",
"cs.GT",
"cs.MA",
"stat.ML"
] | false |
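
The abstract above states that each agent's local policy is updated similarly to vanilla PPO. Below is a minimal sketch of the standard PPO clipped surrogate loss such a local update would minimize; the paper's localized action value estimation is not shown, and all tensors are toy placeholders:

```python
import torch

def ppo_clip_loss(log_probs_new, log_probs_old, advantages, clip_eps=0.2):
    """Standard PPO clipped surrogate objective (returned as a loss to minimize)."""
    ratio = torch.exp(log_probs_new - log_probs_old)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()

# Toy per-agent update on a batch of 5 transitions.
new_lp = torch.tensor([-1.0, -0.9, -1.2, -0.8, -1.1], requires_grad=True)
old_lp = torch.tensor([-1.0, -1.0, -1.0, -1.0, -1.0])
adv = torch.tensor([0.5, -0.2, 1.0, 0.1, -0.4])
loss = ppo_clip_loss(new_lp, old_lp, adv)
loss.backward()  # gradients would drive one local policy step
```
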
2305.04979 | 2023-05-08T18:21:41Z | FedHB: Hierarchical Bayesian Federated Learning | [
"Minyoung Kim",
"Timothy Hospedales"
] | We propose a novel hierarchical Bayesian approach to Federated Learning (FL),
in which our model describes the generative process of clients' local data via
hierarchical Bayesian modeling: the random variables of clients' local models
are governed by a higher-level global variate.
Interestingly, the variational inference in our Bayesian model leads to an
optimisation problem whose block-coordinate descent solution becomes a
distributed algorithm that is separable over clients and allows them not to
reveal their own private data at all, thus remaining fully compatible with FL. We also
highlight that our block-coordinate algorithm has particular forms that subsume
the well-known FL algorithms including Fed-Avg and Fed-Prox as special cases.
Beyond introducing novel modeling and derivations, we also offer convergence
analysis showing that our block-coordinate FL algorithm converges to a (local)
optimum of the objective at the rate of $O(1/\sqrt{t})$, the same rate as
regular (centralised) SGD, as well as a generalisation error analysis in which
we prove that the test error of our model on unseen data is guaranteed to
vanish as the training data size increases; the model is thus asymptotically optimal. | [
"cs.LG",
"cs.DC",
"stat.ML"
] | false |
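
Since the abstract notes that the block-coordinate algorithm subsumes Fed-Avg as a special case, here is a minimal sketch of that special case, plain FedAvg aggregation; names and shapes are illustrative, not from the paper:

```python
import numpy as np

def fedavg_aggregate(client_params, client_sizes):
    """FedAvg: average client parameter vectors, weighted by local dataset size."""
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()
    stacked = np.stack(client_params)          # (num_clients, num_params)
    return (weights[:, None] * stacked).sum(axis=0)

# Three clients with different amounts of local data.
params = [np.array([1.0, 2.0]), np.array([2.0, 0.0]), np.array([0.0, 4.0])]
print(fedavg_aggregate(params, client_sizes=[10, 30, 60]))  # size-weighted mean
```
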
2305.05078 | 2023-05-08T22:37:16Z | SECRETS: Subject-Efficient Clinical Randomized Controlled Trials using
Synthetic Intervention | [
"Sayeri Lala",
"Niraj K. Jha"
] | The randomized controlled trial (RCT) is the gold standard for estimating the
average treatment effect (ATE) of a medical intervention but requires
100s-1000s of subjects, making it expensive and difficult to implement. While a
cross-over trial can reduce sample size requirements by measuring the treatment
effect per individual, it is only applicable to chronic conditions and
interventions whose effects dissipate rapidly. Another approach is to replace
or augment data collected from an RCT with external data from prospective
studies or prior RCTs, but it is vulnerable to confounders in the external or
augmented data. We propose to simulate the cross-over trial to overcome its
practical limitations while exploiting its strengths. We propose a novel
framework, SECRETS, which, for the first time, estimates the individual
treatment effect (ITE) per patient in the RCT study without using any external
data by leveraging a state-of-the-art counterfactual estimation algorithm,
called synthetic intervention. It also uses a new hypothesis testing strategy
to determine whether the treatment has a clinically significant ATE based on
the estimated ITEs. We show that SECRETS can improve the power of an RCT while
maintaining comparable significance levels; in particular, on three real-world
clinical RCTs (Phase-3 trials), SECRETS increases power over the baseline
method by $\boldsymbol{6}$-$\boldsymbol{54\%}$ (average: 21.5%, standard
deviation: 15.8%). | [
"eess.SP",
"cs.LG",
"stat.AP",
"stat.ME"
] | false |
2305.05090 | 2023-05-08T23:29:24Z | Performative Federated Learning: A Solution to Model-Dependent and
Heterogeneous Distribution Shifts | [
"Kun Jin",
"Tongxin Yin",
"Zhongzhu Chen",
"Zeyu Sun",
"Xueru Zhang",
"Yang Liu",
"Mingyan Liu"
] | We consider a federated learning (FL) system consisting of multiple clients
and a server, where the clients aim to collaboratively learn a common decision
model from their distributed data. Unlike the conventional FL framework that
assumes the client's data is static, we consider scenarios where the clients'
data distributions may be reshaped by the deployed decision model. In this
work, we leverage the idea of distribution shift mappings in performative
prediction to formalize this model-dependent data distribution shift and
propose a performative federated learning framework. We first introduce
necessary and sufficient conditions for the existence of a unique performative
stable solution and characterize its distance to the performative optimal
solution. Then we propose the performative FedAvg algorithm and show that it
converges to the performative stable solution at a rate of O(1/T) under both
full and partial participation schemes. In particular, we use novel proof
techniques and show how the clients' heterogeneity influences the convergence.
Numerical results validate our analysis and provide valuable insights into
real-world applications. | [
"cs.LG",
"cs.DC",
"math.OC"
] | false |
2305.05525 | 2023-05-08T08:30:05Z | Exploring a Gradient-based Explainable AI Technique for Time-Series
Data: A Case Study of Assessing Stroke Rehabilitation Exercises | [
"Min Hun Lee",
"Yi Jing Choy"
] | Explainable artificial intelligence (AI) techniques are increasingly being
explored to provide insights into why AI and machine learning (ML) models
provide a certain outcome in various applications. However, there has been
limited exploration of explainable AI techniques on time-series data,
especially in the healthcare context. In this paper, we describe a
threshold-based method that utilizes a weakly supervised model and a
gradient-based explainable AI technique (i.e. saliency map) and explore its
feasibility to identify salient frames of time-series data. Using the dataset
from 15 post-stroke survivors performing three upper-limb exercises and labels
on whether a compensatory motion is observed or not, we implemented a
feed-forward neural network model and utilized gradients of each input on model
outcomes to identify salient frames that involve compensatory motions.
According to the evaluation using frame-level annotations, our approach
achieved a recall of 0.96 and an F2-score of 0.91. Our results demonstrated the
potential of a gradient-based explainable AI technique (e.g. saliency map) for
time-series data, such as highlighting the frames of a video that therapists
should focus on reviewing, and reducing the effort of frame-level labeling for
model training. | [
"cs.LG",
"cs.AI",
"cs.HC"
] | false |
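
A minimal sketch of the gradient-based saliency idea described above, assuming a feed-forward model over per-frame features and a simple mean threshold for selecting salient frames; the architecture, feature dimension, and threshold are placeholders, not the paper's exact setup:

```python
import torch
import torch.nn as nn

# Small feed-forward model over per-frame features (shapes are illustrative).
model = nn.Sequential(nn.Linear(12, 32), nn.ReLU(), nn.Linear(32, 1))

def frame_saliency(model, frames):
    """Gradient of the model output w.r.t. each input frame (saliency map)."""
    frames = frames.clone().requires_grad_(True)   # (num_frames, feat_dim)
    score = model(frames).sum()
    score.backward()
    # Per-frame gradient magnitude; larger means the frame is more salient.
    return frames.grad.abs().sum(dim=1)

saliency = frame_saliency(model, torch.randn(100, 12))
salient_frames = (saliency > saliency.mean()).nonzero().squeeze(1)  # threshold-based
```
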
2305.05531 | 2023-05-08T17:30:24Z | Modelling Concurrency Bugs Using Machine Learning | [
"Teodor Rares Begu"
] | Artificial Intelligence has gained a lot of traction in recent years,
with machine learning notably starting to see more applications across a varied
range of fields. One specific machine learning application that is of interest
to us is that of software safety and security, especially in the context of
parallel programs. The issue of being able to detect concurrency bugs
automatically has intrigued programmers for a long time, as the added layer of
complexity makes concurrent programs more prone to failure. The development of
such automatic detection tools provides considerable benefits to programmers in
terms of saving time while debugging, as well as reducing the number of
unexpected bugs. We believe machine learning may help achieve this goal by
providing additional advantages over current approaches, in terms of both
overall tool accuracy as well as programming language flexibility. However, due
to the presence of numerous challenges specific to the machine learning
approach (correctly labelling a sufficiently large dataset, finding the best
model types/architectures and so forth), we have to approach each issue of
developing such a tool separately. Therefore, the focus of this project is on
comparing both common and recent machine learning approaches. We abstract away
the complexity of procuring a labelled dataset of concurrent programs by using
a synthetic dataset that we define and generate with the aim of simulating
real-life (concurrent) programs. We formulate hypotheses about fundamental
limits of various machine learning model types, which we then validate by
running extensive tests on our synthetic dataset. We hope that our findings
provide more insight into the advantages and disadvantages of various model
types when modelling programs using machine learning, as well as in related
fields (e.g. NLP). | [
"cs.SE",
"cs.LG",
"cs.PL"
] | false |
2305.09673 | 2023-05-08T22:12:34Z | Vulnerability Detection Using Two-Stage Deep Learning Models | [
"Mohamed Mjd Alhafi",
"Mohammad Hammade",
"Khloud Al Jallad"
] | Application security is an essential part of developing modern software, as
lots of attacks depend on vulnerabilities in software. The number of attacks is
increasing globally due to technological advancements. Companies must include
security in every stage of developing, testing, and deploying their software in
order to prevent data breaches. There are several non-AI-based methods to
detect software vulnerabilities, such as Static Application Security Testing
(SAST) and Dynamic Application Security Testing (DAST). However, these
approaches have substantial false-positive and false-negative rates. On the other hand,
researchers have been interested in developing an AI-based vulnerability
detection system employing deep learning models like BERT, BLSTM, etc. In this
paper, we propose a two-stage solution with two deep learning models for
vulnerability detection in C/C++ source code: the first stage is a CNN that
detects whether the source code contains any vulnerability (binary
classification model), and the second stage is a CNN-LSTM that classifies this
vulnerability into one of 50 different types of vulnerabilities (multiclass
classification model). Experiments were done on the SySeVR dataset. Results
show an accuracy of 99% for the first stage and 98% for the second. | [
"cs.CR",
"cs.AI",
"cs.LG"
] | false |
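
A minimal sketch of how the two-stage inference described above could be wired together, assuming pre-trained stage models and pre-encoded code tensors; all names are placeholders, and this is not the paper's implementation:

```python
import torch

def two_stage_predict(binary_cnn, multiclass_cnn_lstm, code_tensor, threshold=0.5):
    """Stage 1 flags vulnerable code; stage 2 assigns one of 50 vulnerability types.

    `code_tensor` is a batch of encoded code snippets; both models are
    placeholders for pre-trained networks.
    """
    with torch.no_grad():
        p_vulnerable = torch.sigmoid(binary_cnn(code_tensor)).squeeze(-1)
        labels = torch.full(p_vulnerable.shape, -1, dtype=torch.long)  # -1 = clean
        flagged = p_vulnerable > threshold
        if flagged.any():
            logits = multiclass_cnn_lstm(code_tensor[flagged])         # (k, 50)
            labels[flagged] = logits.argmax(dim=1)                     # type per snippet
    return labels
```
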
2305.04630 | 2023-05-08T11:12:22Z | Federated Learning in Wireless Networks via Over-the-Air Computations | [
"Halil Yigit Oksuz",
"Fabio Molinari",
"Henning Sprekeler",
"Jörg Raisch"
] | In a multi-agent system, agents can cooperatively learn a model from data by
exchanging their estimated model parameters, without the need to exchange the
locally available data used by the agents. This strategy, often called
federated learning, is mainly employed for two reasons: (i) improving
resource-efficiency by avoiding the exchange of potentially large datasets and (ii)
guaranteeing the privacy of local agents' data. Efficiency can be further increased
by adopting a beyond-5G communication strategy that goes under the name of
Over-the-Air Computation. This strategy exploits the interference property of
the wireless channel. Standard communication schemes prevent interference by
enabling transmissions of signals from different agents at distinct time or
frequency slots, which is not required with Over-the-Air Computation, thus
saving resources. In this case, the received signal is a weighted sum of
transmitted signals, with unknown weights (fading channel coefficients). State
of the art papers in the field aim at reconstructing those unknown
coefficients. In contrast, the approach presented here does not require
reconstructing channel coefficients by complex encoding-decoding schemes. This
improves both efficiency and privacy. | [
"cs.LG",
"cs.CR",
"cs.IT",
"cs.MA",
"math.IT"
] | false |
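
A minimal numerical sketch of the superposition property that Over-the-Air Computation exploits: simultaneous transmissions arrive as a fading-weighted sum whose coefficients are unknown to the receiver. The agent count, parameter dimension, and coefficient distribution below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Each of 5 agents transmits its local model update simultaneously.
updates = rng.normal(size=(5, 3))           # 5 agents, 3 parameters each
fading = rng.uniform(0.5, 1.5, size=5)      # unknown channel coefficients

# The server receives a single superimposed signal: a fading-weighted sum.
received = (fading[:, None] * updates).sum(axis=0)
print(received)   # the individual weights are unknown to the receiver by design
```
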
2305.06884 | 2023-05-08T17:34:06Z | Risk-limiting Financial Audits via Weighted Sampling without Replacement | [
"Shubhanshu Shekhar",
"Ziyu Xu",
"Zachary C. Lipton",
"Pierre J. Liang",
"Aaditya Ramdas"
] | We introduce the notion of a risk-limiting financial audit (RLFA): given
$N$ transactions, the goal is to estimate the total misstated monetary
fraction ($m^*$) to a given accuracy $\epsilon$, with confidence $1-\delta$. We
do this by constructing new confidence sequences (CSs) for the weighted average
of $N$ unknown values, based on samples drawn without replacement according to
a (randomized) weighted sampling scheme. Using the idea of importance weighting
to construct test martingales, we first develop a framework to construct CSs
for arbitrary sampling strategies. Next, we develop methods to improve the
quality of CSs by incorporating side information about the unknown values
associated with each item. We show that when the side information is
sufficiently predictive, it can directly drive the sampling. Addressing the
case where the accuracy is unknown a priori, we introduce a method that
incorporates side information via control variates. Crucially, our construction
is adaptive: if the side information is highly predictive of the unknown
misstated amounts, then the benefits of incorporating it are significant; but
if the side information is uncorrelated, our methods learn to ignore it. Our
methods recover state-of-the-art bounds for the special case when the weights
are equal, which has already found applications in election auditing. The
harder weighted case solves our more challenging problem of AI-assisted
financial auditing. | [
"stat.ME",
"cs.AI",
"cs.LG",
"math.ST",
"stat.AP",
"stat.ML",
"stat.TH"
] | false |
2305.05146 | 2023-05-09T03:18:35Z | A Mountain-Shaped Single-Stage Network for Accurate Image Restoration | [
"Hu Gao",
"Jing Yang",
"Ying Zhang",
"Ning Wang",
"Jingfan Yang",
"Depeng Dang"
] | Image restoration aims to obtain a high-quality image from a corrupted
input image, as in deblurring and deraining. In image restoration, it
is typically necessary to maintain a complex balance between spatial details
and contextual information. Although a multi-stage network can optimally
balance these competing goals and achieve significant performance, this also
increases the system's complexity. In this paper, we propose a mountain-shaped
single-stage design based on a simple U-Net architecture, which removes or
replaces unnecessary nonlinear activation functions to achieve the above
balance with low system complexity. Specifically, we propose a feature fusion
middleware (FFM) mechanism as an information exchange component between the
encoder-decoder architectural levels. It seamlessly integrates upper-layer
information into the adjacent lower layer, sequentially down to the lowest
layer. Finally, all information is fused into the original image resolution
manipulation level. This preserves spatial details and integrates contextual
information, ensuring high-quality image restoration. In addition, we propose a
multi-head attention middle block (MHAMB) as a bridge between the encoder and
decoder to capture more global information and surpass the limitations of the
receptive field of CNNs. Extensive experiments demonstrate that our approach,
named M3SNet, outperforms previous state-of-the-art models while using less
than half the computational costs, for several image restoration tasks, such as
image deraining and deblurring. | [
"cs.CV"
] | false |
2305.05154 | 2023-05-09T03:33:43Z | Multi-Granularity Denoising and Bidirectional Alignment for Weakly
Supervised Semantic Segmentation | [
"Tao Chen",
"Yazhou Yao",
"Jinhui Tang"
] | Weakly supervised semantic segmentation (WSSS) models relying on class
activation maps (CAMs) have achieved desirable performance compared to their
non-CAM-based counterparts. However, to make the WSSS task feasible, we need
to generate pseudo labels by expanding the seeds from CAMs, which is complex and
time-consuming, thus hindering the design of efficient end-to-end
(single-stage) WSSS approaches. To tackle the above dilemma, we resort to the
off-the-shelf and readily accessible saliency maps for directly obtaining
pseudo labels given the image-level class labels. Nevertheless, the salient
regions may contain noisy labels and cannot seamlessly fit the target objects,
and saliency maps can only be approximated as pseudo labels for simple images
containing single-class objects. As such, a segmentation model trained on
these simple images cannot generalize well to complex images containing
multi-class objects. To this end, we propose an end-to-end multi-granularity
denoising and bidirectional alignment (MDBA) model, to alleviate the noisy
label and multi-class generalization issues. Specifically, we propose the
online noise filtering and progressive noise detection modules to tackle
image-level and pixel-level noise, respectively. Moreover, a bidirectional
alignment mechanism is proposed to reduce the data distribution gap at both
input and output space with simple-to-complex image synthesis and
complex-to-simple adversarial learning. MDBA can reach the mIoU of 69.5\% and
70.2\% on validation and test sets for the PASCAL VOC 2012 dataset. The source
codes and models have been made available at
\url{https://github.com/NUST-Machine-Intelligence-Laboratory/MDBA}. | [
"cs.CV"
] | false |
2305.05161 | 2023-05-09T04:08:14Z | Child Palm-ID: Contactless Palmprint Recognition for Children | [
"Akash Godbole",
"Steven A. Grosz",
"Anil K. Jain"
] | Effective distribution of nutritional and healthcare aid for children,
particularly infants and toddlers, in some of the least developed and most
impoverished countries of the world, is a major problem due to the lack of
reliable identification documents. Biometric authentication technology has been
investigated to address child recognition in the absence of reliable ID
documents. We present a mobile-based contactless palmprint recognition system,
called Child Palm-ID, which meets the requirements of usability, hygiene, cost,
and accuracy for child recognition. Using a contactless child palmprint
database, Child-PalmDB1, consisting of 19,158 images from 1,020 unique palms
(in the age range of 6 mos. to 48 mos.), we report a TAR=94.11% @ FAR=0.1%. The
proposed Child Palm-ID system is also able to recognize adults, achieving a
TAR=99.4% on the CASIA contactless palmprint database and a TAR=100% on the
COEP contactless adult palmprint database, both @ FAR=0.1%. These accuracies
are competitive with the SOTA provided by COTS systems. Despite these high
accuracies, we show that the TAR for time-separated child-palmprints is only
78.1% @ FAR=0.1%. | [
"cs.CV"
] | false |
2305.05175 | 2023-05-09T05:04:35Z | SRIL: Selective Regularization for Class-Incremental Learning | [
"Jisu Han",
"Jaemin Na",
"Wonjun Hwang"
] | Human intelligence gradually accepts new information and accumulates
knowledge throughout the lifespan. However, deep learning models suffer from a
catastrophic forgetting phenomenon, where they forget previous knowledge when
acquiring new information. Class-Incremental Learning aims to create an
integrated model that balances plasticity and stability to overcome this
challenge. In this paper, we propose a selective regularization method that
accepts new knowledge while maintaining previous knowledge. We first introduce
an asymmetric feature distillation method for old and new classes inspired by
cognitive science, using the gradient of classification and knowledge
distillation losses to determine whether to perform pattern completion or
pattern separation. We also propose a method to selectively interpolate the
weight of the previous model for a balance between stability and plasticity,
and we use model confidence to decide whether to transfer, ensuring the
performance of the previous classes and enabling exploratory learning. We validate
the effectiveness of the proposed method, which surpasses the performance of
existing methods through extensive experimental protocols using CIFAR-100,
ImageNet-Subset, and ImageNet-Full. | [
"cs.CV"
] | false |
2305.05177 | 2023-05-09T05:19:16Z | Hybrid Transformer and CNN Attention Network for Stereo Image
Super-resolution | [
"Ming Cheng",
"Haoyu Ma",
"Qiufang Ma",
"Xiaopeng Sun",
"Weiqi Li",
"Zhenyu Zhang",
"Xuhan Sheng",
"Shijie Zhao",
"Junlin Li",
"Li Zhang"
] | Multi-stage strategies are frequently employed in image restoration tasks.
While transformer-based methods have exhibited high efficiency in single-image
super-resolution tasks, they have not yet shown significant advantages over
CNN-based methods in stereo super-resolution tasks. This can be attributed to
two key factors: first, current single-image super-resolution transformers are
unable to leverage the complementary stereo information during the process;
second, the performance of transformers is typically reliant on sufficient
data, which is absent in common stereo-image super-resolution algorithms. To
address these issues, we propose a Hybrid Transformer and CNN Attention Network
(HTCAN), which utilizes a transformer-based network for single-image
enhancement and a CNN-based network for stereo information fusion. Furthermore,
we employ a multi-patch training strategy and larger window sizes to activate
more input pixels for super-resolution. We also revisit other advanced
techniques, such as data augmentation, data ensemble, and model ensemble to
reduce overfitting and data bias. Finally, our approach achieved a score of
23.90dB and emerged as the winner in Track 1 of the NTIRE 2023 Stereo Image
Super-Resolution Challenge. | [
"cs.CV"
] | false |
2305.05233 | 2023-05-09T07:49:21Z | DynamicKD: An Effective Knowledge Distillation via Dynamic Entropy
Correction-Based Distillation for Gap Optimizing | [
"Songling Zhu",
"Ronghua Shang",
"Bo Yuan",
"Weitong Zhang",
"Yangyang Li",
"Licheng Jiao"
] | Knowledge distillation uses a high-performance teacher network to guide
the student network. However, the performance gap between the teacher and
student networks can affect the student's training. This paper proposes a novel
knowledge distillation algorithm based on dynamic entropy correction to reduce
the gap by adjusting the student instead of the teacher. Firstly, the effect of
changing the student's output entropy (short for output information entropy)
on the distillation loss is analyzed theoretically. This paper shows that
correcting the output entropy can reduce the gap. Then, a knowledge
distillation algorithm based on dynamic entropy correction is created, which
can correct the output entropy in real-time with an entropy controller updated
dynamically by the distillation loss. The proposed algorithm is validated on
the CIFAR100 and ImageNet. The comparison with various state-of-the-art
distillation algorithms shows impressive results, especially in the experiment
on the CIFAR100 regarding teacher-student pair resnet32x4-resnet8x4. The
proposed algorithm gains 2.64 points in classification accuracy over the
traditional distillation algorithm and 0.87 points over the state-of-the-art
algorithm CRD, demonstrating its effectiveness and efficiency. | [
"cs.CV"
] | false |
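
For context, a minimal sketch of the classical distillation loss that gap-reduction methods like the one above build on; the dynamic entropy controller itself is not specified in the abstract, so only the standard Hinton-style objective is shown, with illustrative hyperparameters:

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Hinton-style distillation: soft teacher targets blended with hard labels."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale gradients to be temperature-independent
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```
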
2305.05256 | 2023-05-09T08:25:49Z | Patch-DrosoNet: Classifying Image Partitions With Fly-Inspired Models
For Lightweight Visual Place Recognition | [
"Bruno Arcanjo",
"Bruno Ferrarini",
"Michael Milford",
"Klaus D. McDonald-Maier",
"Shoaib Ehsan"
] | Visual place recognition (VPR) enables autonomous systems to localize
themselves within an environment using image information. While Convolutional
Neural Networks (CNNs) currently dominate state-of-the-art VPR performance,
their high computational requirements make them unsuitable for platforms with
budget or size constraints. This has spurred the development of lightweight
algorithms, such as DrosoNet, which employs a voting system based on multiple
bio-inspired units. In this paper, we present a novel training approach for
DrosoNet, wherein separate models are trained on distinct regions of a
reference image, allowing them to specialize in the visual features of that
specific section. Additionally, we introduce a convolutional-like prediction
method, in which each DrosoNet unit generates a set of place predictions for
each portion of the query image. These predictions are then combined using the
previously introduced voting system. Our approach significantly improves upon
the VPR performance of previous work while maintaining an extremely compact and
lightweight algorithm, making it suitable for resource-constrained platforms. | [
"cs.CV"
] | false |
2305.05260 | 2023-05-09T08:32:06Z | Guided Focal Stack Refinement Network for Light Field Salient Object
Detection | [
"Bo Yuan",
"Yao Jiang",
"Keren Fu",
"Qijun Zhao"
] | Light field salient object detection (SOD) is an emerging research direction
attributed to the richness of light field data. However, most existing methods
do not handle focal stacks effectively, so the latter introduce a lot of
interfering information and degrade the performance of SOD. To
address this limitation, we propose to utilize multi-modal features to refine
focal stacks in a guided manner, resulting in a novel guided focal stack
refinement network called GFRNet. To this end, we propose a guided refinement
and fusion module (GRFM) to refine focal stacks and aggregate multi-modal
features. In GRFM, all-in-focus (AiF) and depth modalities are utilized to
refine focal stacks separately, leading to two novel sub-modules for different
modalities, namely AiF-based refinement module (ARM) and depth-based refinement
module (DRM). Such refinement modules enhance structural and positional
information of salient objects in focal stacks, and are able to improve SOD
accuracy. Experimental results on four benchmark datasets demonstrate the
superiority of our GFRNet model against 12 state-of-the-art models. | [
"cs.CV"
] | false |
2305.05322 | 2023-05-09T10:16:43Z | TPS++: Attention-Enhanced Thin-Plate Spline for Scene Text Recognition | [
"Tianlun Zheng",
"Zhineng Chen",
"Jinfeng Bai",
"Hongtao Xie",
"Yu-Gang Jiang"
] | Text irregularities pose significant challenges to scene text recognizers.
Thin-Plate Spline (TPS)-based rectification is widely regarded as an effective
means to deal with them. Currently, the calculation of TPS transformation
parameters purely depends on the quality of regressed text borders. It ignores
the text content and often leads to unsatisfactory rectified results for
severely distorted text. In this work, we introduce TPS++, an
attention-enhanced TPS transformation that incorporates the attention mechanism
to text rectification for the first time. TPS++ formulates the parameter
calculation as a joint process of foreground control point regression and
content-based attention score estimation, which is computed by a dedicatedly
designed gated-attention block. TPS++ builds a more flexible content-aware
rectifier, generating a natural text correction that is easier to read by the
subsequent recognizer. Moreover, TPS++ shares the feature backbone with the
recognizer in part and implements the rectification at feature-level rather
than image-level, incurring only a small overhead in terms of parameters and
inference time. Experiments on public benchmarks show that TPS++ consistently
improves the recognition and achieves state-of-the-art accuracy. Meanwhile, it
generalizes well on different backbones and recognizers. Code is at
https://github.com/simplify23/TPS_PP. | [
"cs.CV"
] | false |
2305.05490 | 2023-05-09T14:43:38Z | Real-time instance segmentation with polygons using an
Intersection-over-Union loss | [
"Katia Jodogne-Del Litto",
"Guillaume-Alexandre Bilodeau"
] | Predicting a binary mask for an object is more accurate but also more
computationally expensive than a bounding box. Polygonal masks as developed in
CenterPoly can be a good compromise. In this paper, we improve over CenterPoly
by enhancing the classical regression L1 loss with a novel region-based loss
and a novel order loss, as well as with a new training process for the vertices
prediction head. Moreover, previous methods that predict polygonal masks
use different coordinate systems, but it is not clear whether one is better than
another once we abstract away the architecture requirements. We therefore investigate
their impact on the prediction. We also use a new evaluation protocol with
oracle predictions for the detection head, to further isolate the segmentation
process and better compare the polygonal masks with binary masks. Our instance
segmentation method is trained and tested with challenging datasets containing
urban scenes, with a high density of road users. Experiments show, in
particular, that using a combination of a regression loss and a region-based
loss allows significant improvements on the Cityscapes and IDD test set
compared to CenterPoly. Moreover, the inference stage remains fast enough to
reach real-time performance with an average of 0.045 s per frame for
2048$\times$1024 images on a single RTX 2070 GPU. The code is available
$\href{https://github.com/KatiaJDL/CenterPoly-v2}{\text{here}}$. | [
"cs.CV"
] | false |
2305.05523 | 2023-05-09T15:22:18Z | RMES: Real-Time Micro-Expression Spotting Using Phase From Riesz Pyramid | [
"Yini Fang",
"Didan Deng",
"Liang Wu",
"Frederic Jumelle",
"Bertram Shi"
] | Micro-expressions (MEs) are involuntary and subtle facial expressions that
are thought to reveal feelings people are trying to hide. ME spotting detects
the temporal intervals containing MEs in videos. Detecting such quick and
subtle motions from long videos is difficult. Recent works leverage detailed
facial motion representations, such as the optical flow, and deep learning
models, leading to high computational complexity. To reduce computational
complexity and achieve real-time operation, we propose RMES, a real-time ME
spotting framework. We represent motion using phase computed by Riesz Pyramid,
and feed this motion representation into a three-stream shallow CNN, which
predicts the likelihood of each frame belonging to an ME. In comparison to
optical flow, phase provides more localized motion estimates, which are
essential for ME spotting, resulting in higher performance. Using phase also
reduces the required computation of the ME spotting pipeline by 77.8%. Despite
its relative simplicity and low computational complexity, our framework
achieves state-of-the-art performance on two public datasets: CAS(ME)2 and SAMM
Long Videos. | [
"cs.CV"
] | false |
2305.05526 | 2023-05-09T15:25:45Z | EFE: End-to-end Frame-to-Gaze Estimation | [
"Haldun Balim",
"Seonwook Park",
"Xi Wang",
"Xucong Zhang",
"Otmar Hilliges"
] | Despite the recent development of learning-based gaze estimation methods,
most methods require one or more eye or face region crops as inputs and produce
a gaze direction vector as output. Cropping yields a higher resolution in
the eye regions, and having fewer confounding factors (such as clothing and
hair) is believed to benefit the final model performance. However, this
eye/face patch cropping process is expensive, erroneous, and
implementation-specific for different methods. In this paper, we propose a
frame-to-gaze network that directly predicts both 3D gaze origin and 3D gaze
direction from the raw frame out of the camera without any face or eye
cropping. Our method demonstrates that direct gaze regression from the raw
downscaled frame, from FHD/HD to VGA/HVGA resolution, is possible despite the
challenges of having very few pixels in the eye region. The proposed method
achieves comparable results to state-of-the-art methods in Point-of-Gaze (PoG)
estimation on three public gaze datasets: GazeCapture, MPIIFaceGaze, and EVE,
and generalizes well to extreme camera view changes. | [
"cs.CV"
] | false |
2305.05583 | 2023-05-09T16:18:18Z | Group Activity Recognition via Dynamic Composition and Interaction | [
"Youliang Zhang",
"Zhuo Zhou",
"Wenxuan Liu",
"Danni Xu",
"Zheng Wang"
] | Previous group activity recognition approaches were limited to reasoning
using human relations or finding important subgroups and tended to ignore
indispensable group composition and human-object interactions. This omission
results in a partial interpretation of the scene and increases the interference of
irrelevant actions in the results. Therefore, we propose DynamicFormer with
Dynamic composition Module (DcM) and Dynamic interaction Module (DiM) to model
relations and locations of persons and discriminate the contribution of
participants, respectively. Our findings on group composition and human-object
interaction inspire our core idea. Group composition tells us the location of
people and their relations inside the group, while interaction reflects the
relation between humans and objects outside the group. We utilize spatial and
temporal encoders in DcM to model our dynamic composition and build DiM to
explore interaction with a novel GCN, which has a transformer inside to
consider the temporal neighbors of human/object. Also, a Multi-level Dynamic
Integration is employed to integrate features from different levels. We conduct
extensive experiments on two public datasets and show that our method achieves
state-of-the-art. | [
"cs.CV"
] | false |
2305.05598 | 2023-05-09T16:46:33Z | Region-based Contrastive Pretraining for Medical Image Retrieval with
Anatomic Query | [
"Ho Hin Lee",
"Alberto Santamaria-Pang",
"Jameson Merkow",
"Ozan Oktay",
"Fernando Pérez-García",
"Javier Alvarez-Valle",
"Ivan Tarapov"
] | We introduce a novel Region-based contrastive pretraining for Medical Image
Retrieval (RegionMIR) that demonstrates the feasibility of medical image
retrieval with similar anatomical regions. RegionMIR addresses two major
challenges for medical image retrieval: i) standardization of clinically
relevant searching criteria (e.g., anatomical, pathology-based), and ii)
localization of anatomical areas of interest that are semantically meaningful.
In this work, we propose an ROI image retrieval network that retrieves
images with similar anatomy by extracting anatomical features (via bounding
boxes) and evaluating the similarity between pairwise anatomy-categorized features
of the query and the database of images using contrastive learning. ROI
queries are encoded using a contrastive-pretrained encoder that was fine-tuned
for anatomy classification, which generates an anatomical-specific latent space
for region-correlated image retrieval. During retrieval, we compare the
anatomically encoded query to find similar features within a feature database
generated from training samples, and retrieve images with similar regions from
training samples. We evaluate our approach on both anatomy classification and
image retrieval tasks using the Chest ImaGenome Dataset. Our proposed strategy
yields an improvement over state-of-the-art pretraining and co-training
strategies, raising anatomy classification accuracy from 92.24 to 94.12 (2.03%).
We qualitatively evaluate the image retrieval performance demonstrating
generalizability across multiple anatomies with different morphology. | [
"cs.CV"
] | false |
2305.05651 | 2023-05-09T17:49:27Z | SwinIA: Self-Supervised Blind-Spot Image Denoising with Zero
Convolutions | [
"Mikhail Papkov",
"Pavel Chizhov"
] | The essence of self-supervised image denoising is to restore the signal from
the noisy image alone. State-of-the-art solutions for this task rely on the
idea of masking pixels and training a fully-convolutional neural network to
impute them. This most often requires multiple forward passes, information
about the noise model, and intricate regularization functions. In this paper,
we propose a Swin Transformer-based Image Autoencoder (SwinIA), the first
convolution-free architecture for self-supervised denoising. It can be trained
end-to-end with a simple mean squared error loss without masking and does not
require any prior knowledge about clean data or noise distribution. Despite its
simplicity, SwinIA establishes state-of-the-art on several common benchmarks. | [
"cs.CV"
] | false |
2305.05705 | 2023-05-09T18:24:33Z | An Evaluation and Ranking of Different Voting Schemes for Improved
Visual Place Recognition | [
"Maria Waheed",
"Michael Milford",
"Xiaojun Zhai",
"Klaus McDonald-Maier",
"Shoaib Ehsan"
] | Visual Place Recognition has recently seen a surge of endeavours utilizing
different ensemble approaches to improve VPR performance. Ideas like
multi-process fusion or switching involve combining different VPR techniques
together, utilizing different strategies. One major aspect often common to many
of these strategies is voting. Voting is widely used in many ensemble methods,
so it is potentially a relevant subject to explore in terms of its application
and significance for improving VPR performance. This paper looks in detail at
and analyzes a variety of voting schemes to evaluate which voting
technique is optimal for an ensemble VPR setup. We take inspiration from a
variety of voting schemes that exist and are widely employed in other research
fields such as politics and sociology. The idea is inspired by an observation
that different voting methods result in different outcomes for the same type of
data and each voting scheme is utilized for specific cases in different
academic fields. Some of these voting schemes include Condorcet voting, Borda
Count and Plurality voting. Wherever voting is employed, a fair
system must be established that outputs the best and most favourable results, which
in our case means improved VPR performance. We evaluate some of these
voting techniques in standardized testing of different VPR techniques, using
a variety of VPR data sets. We aim to determine whether a single optimal voting
scheme exists or, much like in other fields of research, the selection of a
voting technique is relative to its application and environment. We also aim to
propose a ranking of these different voting methods from best to worst
according to our results as this will allow for better selection of voting
schemes. | [
"cs.CV"
] | false |
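
Two of the voting schemes named above are easy to state precisely. A minimal sketch, assuming each ensemble member ranks candidate places best-to-worst (the ballot layout is hypothetical); note how the two schemes can disagree on the same ballots, which mirrors the paper's motivating observation:

```python
from collections import Counter

def plurality(rankings):
    """Winner = place receiving the most first-choice votes."""
    return Counter(r[0] for r in rankings).most_common(1)[0][0]

def borda(rankings):
    """Winner = place with the highest total Borda score across voters."""
    scores = Counter()
    for r in rankings:
        n = len(r)
        for pos, place in enumerate(r):
            scores[place] += n - 1 - pos   # top rank earns n-1 points
    return scores.most_common(1)[0][0]

# Seven VPR "voters" each rank three candidate places best-to-worst.
rankings = [["A", "B", "C"], ["A", "B", "C"], ["A", "C", "B"],
            ["B", "C", "A"], ["C", "B", "A"], ["B", "C", "A"], ["C", "B", "A"]]
print(plurality(rankings), borda(rankings))  # 'A' vs 'B': the schemes disagree
```
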
2305.05776 | 2023-05-09T21:34:38Z | Visual Place Recognition with Low-Resolution Images | [
"Mihnea-Alexandru Tomita",
"Bruno Ferrarini",
"Michael Milford",
"Klaus McDonald-Maier",
"Shoaib Ehsan"
] | Images incorporate a wealth of information from a robot's surroundings. With
the widespread availability of compact cameras, visual information has become
increasingly popular for addressing the localisation problem, which is then
termed Visual Place Recognition (VPR). While many applications use
high-resolution cameras and high-end systems to achieve optimal place-matching
performance, low-end commercial systems face limitations due to resource
constraints and relatively low-resolution and low-quality cameras. In this
paper, we analyse the effects of image resolution on the accuracy and
robustness of well-established handcrafted VPR pipelines. Handcrafted designs
have low computational demands and can adapt to flexible image resolutions,
making them a suitable approach to scale to any image source and to operate
under resource limitations. This paper aims to help academic researchers and
companies in the hardware and software industry co-design VPR solutions and
expand the use of VPR algorithms in commercial products. | [
"cs.CV"
] | false |
2305.05785 | 2023-05-09T22:13:04Z | Regular Splitting Graph Network for 3D Human Pose Estimation | [
"Tanvir Hassan",
"A. Ben Hamza"
] | In human pose estimation methods based on graph convolutional architectures,
the human skeleton is usually modeled as an undirected graph whose nodes are
body joints and edges are connections between neighboring joints. However, most
of these methods tend to focus on learning relationships between body joints of
the skeleton using first-order neighbors, ignoring higher-order neighbors and
hence limiting their ability to exploit relationships between distant joints.
In this paper, we introduce a higher-order regular splitting graph network
(RS-Net) for 2D-to-3D human pose estimation using matrix splitting in
conjunction with weight and adjacency modulation. The core idea is to capture
long-range dependencies between body joints using multi-hop neighborhoods and
also to learn different modulation vectors for different body joints as well as
a modulation matrix added to the adjacency matrix associated with the skeleton.
This learnable modulation matrix helps adjust the graph structure by adding
extra graph edges in an effort to learn additional connections between body
joints. Instead of using a shared weight matrix for all neighboring body
joints, the proposed RS-Net model applies weight unsharing before aggregating
the feature vectors associated with the joints in order to capture the different
relations between them. Experiments and ablation studies performed on two
benchmark datasets demonstrate the effectiveness of our model, achieving
superior performance over recent state-of-the-art methods for 3D human pose
estimation. | [
"cs.CV"
] | false |
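
A minimal sketch of the adjacency-modulation idea described above: a learnable matrix is added to the fixed skeleton adjacency inside a graph convolution. Weight unsharing and multi-hop neighborhoods are omitted, and all dimensions are illustrative, so this is not the RS-Net implementation:

```python
import torch
import torch.nn as nn

class ModulatedGraphConv(nn.Module):
    """Graph convolution with a learnable matrix added to the fixed adjacency."""
    def __init__(self, in_dim, out_dim, num_joints):
        super().__init__()
        self.weight = nn.Linear(in_dim, out_dim, bias=False)
        # Learnable modulation matrix Q, added to the skeleton adjacency A;
        # it can create extra edges between distant joints.
        self.q = nn.Parameter(torch.zeros(num_joints, num_joints))

    def forward(self, x, adj):
        # x: (batch, num_joints, in_dim); adj: (num_joints, num_joints)
        return torch.matmul(adj + self.q, self.weight(x))

layer = ModulatedGraphConv(2, 64, num_joints=17)
out = layer(torch.randn(8, 17, 2), torch.eye(17))   # -> (8, 17, 64)
```
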
2305.05136 | 2023-05-09T02:46:13Z | Localisation of Mammographic masses by Greedy Backtracking of
Activations in the Stacked Auto-Encoders | [
"Shamna Pootheri",
"Govindan V K"
] | Mammographic image analysis requires accurate localisation of salient
mammographic masses. In mammographic computer-aided diagnosis, mass or Region
of Interest (ROI) is often marked by physicians and features are extracted from
the marked ROI. In this paper, we present a novel mammographic mass
localisation framework, based on the maximal class activations of the stacked
auto-encoders. We hypothesize that the image regions activating abnormal
classes in mammographic images will be the breast masses which cause the
anomaly. The experiment is conducted using 200 randomly selected mammographic
images (100 normal and 100 abnormal) from the IRMA mammographic dataset. Abnormal
mass regions marked by an expert radiologist are used as the ground truth. The
proposed method outperforms existing Deep Convolutional Neural Network (DCNN)
based techniques in terms of salient region detection accuracy. The proposed
greedy backtracking method is more efficient and does not require a vast number
of labelled training images as DCNN-based methods do. Such automatic
localisation method will assist physicians to make accurate decisions on biopsy
recommendations and treatment evaluations. | [
"cs.CV",
"cs.LG"
] | false |
2305.05200 | 2023-05-09T06:25:59Z | LSAS: Lightweight Sub-attention Strategy for Alleviating Attention Bias
Problem | [
"Shanshan Zhong",
"Wushao Wen",
"Jinghui Qin",
"Qiangpu Chen",
"Zhongzhan Huang"
] | In computer vision, the performance of deep neural networks (DNNs) is highly
related to the feature extraction ability, i.e., the ability to recognize and
focus on key pixel regions in an image. However, in this paper, we
quantitatively and statistically illustrate that DNNs have a serious attention
bias problem on many samples from some popular datasets: (1) Position bias:
DNNs fully focus on label-independent regions; (2) Range bias: The regions
focused on by DNNs are not completely contained in the ideal region. Moreover, we
find that the existing self-attention modules can alleviate these biases to a
certain extent, but the biases are still non-negligible. To further mitigate
them, we propose a lightweight sub-attention strategy (LSAS), which utilizes
high-order sub-attention modules to improve the original self-attention
modules. The effectiveness of LSAS is demonstrated by extensive experiments on
widely-used benchmark datasets and popular attention networks. We release our
code to help other researchers to reproduce the results of
LSAS~\footnote{https://github.com/Qrange-group/LSAS}. | [
"cs.CV",
"cs.AI"
] | false |
2305.05268 | 2023-05-09T08:46:05Z | Rotation Synchronization via Deep Matrix Factorization | [
"Gk Tejus",
"Giacomo Zara",
"Paolo Rota",
"Andrea Fusiello",
"Elisa Ricci",
"Federica Arrigoni"
] | In this paper we address the rotation synchronization problem, where the
objective is to recover absolute rotations starting from pairwise ones; the
unknowns and the measurements are represented as nodes and edges of a graph,
respectively. This problem is an essential task for structure from motion and
simultaneous localization and mapping. We focus on the formulation of
synchronization via neural networks, which has only recently begun to be
explored in the literature. Inspired by deep matrix completion, we express
rotation synchronization in terms of matrix factorization with a deep neural
network. Our formulation exhibits implicit regularization properties and, more
importantly, is unsupervised, whereas previous deep approaches are supervised.
Our experiments show that we achieve comparable accuracy to the closest
competitors in most scenes, while working under weaker assumptions. | [
"cs.CV",
"cs.AI"
] | false |
2305.05301 | 2023-05-09T09:43:27Z | Eiffel Tower: A Deep-Sea Underwater Dataset for Long-Term Visual
Localization | [
"Clémentin Boittiaux",
"Claire Dune",
"Maxime Ferrera",
"Aurélien Arnaubec",
"Ricard Marxer",
"Marjolaine Matabos",
"Loïc Van Audenhaege",
"Vincent Hugel"
] | Visual localization plays an important role in the positioning and navigation
of robotics systems within previously visited environments. When visits occur
over long periods of time, changes in the environment related to seasons or
day-night cycles present a major challenge. Under water, the sources of
variability are due to other factors such as water conditions or growth of
marine organisms. Yet it remains a major obstacle and a much less studied one,
partly due to the lack of data. This paper presents a new deep-sea dataset to
benchmark underwater long-term visual localization. The dataset is composed of
images from four visits to the same hydrothermal vent edifice over the course
of five years. Camera poses and a common geometry of the scene were estimated
using navigation data and Structure-from-Motion. This serves as a reference
when evaluating visual localization techniques. An analysis of the data
provides insights about the major changes observed throughout the years.
Furthermore, several well-established visual localization methods are evaluated
on the dataset, showing there is still room for improvement in underwater
long-term visual localization. The data is made publicly available at
https://www.seanoe.org/data/00810/92226/. | [
"cs.CV",
"cs.RO"
] | false |
2305.05321 | 2023-05-09T10:16:02Z | Application of Artificial Intelligence in the Classification of
Microscopical Starch Images for Drug Formulation | [
"Marvellous Ajala",
"Blessing Oko",
"David Oba-Fidelis",
"Joycelyn Iyasele",
"Joy I. Odimegwu"
] | Starches are important energy sources found in plants with many uses in the
pharmaceutical industry, such as binders, disintegrants, and bulking agents in
drugs, and thus require very careful physicochemical analysis for proper
identification and verification, which includes microscopy. In this work, we
applied artificial intelligence techniques (transfer learning with deep
convolutional neural networks, CNNs) to microscopical images obtained from 9 starch
samples of different botanical sources. Our approach obtained an accuracy of
61% when the machine learning model was pretrained on microscopic images from
the MicroNet dataset. However, the accuracy jumped to 81% for the model pretrained
on everyday images from the ImageNet dataset. The model pretrained
on the ImageNet dataset also showed better precision, recall and F1 score
than that pretrained on the MicroNet dataset. | [
"cs.CV",
"cs.AI"
] | false |
2305.05349 | 2023-05-09T11:20:11Z | Towards the Characterization of Representations Learned via
Capsule-based Network Architectures | [
"Saja AL-Tawalbeh",
"José Oramas"
] | Capsule Networks (CapsNets) have been re-introduced as a more compact and
interpretable alternative to standard deep neural networks. While recent
efforts have proved their compression capabilities, to date, their
interpretability properties have not been fully assessed. Here, we conduct a
systematic and principled study towards assessing the interpretability of these
types of networks. Moreover, we pay special attention towards analyzing the
level to which part-whole relationships are indeed encoded within the learned
representation. Our analysis in the MNIST, SVHN, PASCAL-part and CelebA
datasets suggest that the representations encoded in CapsNets might not be as
disentangled nor strictly related to parts-whole relationships as is commonly
stated in the literature. | [
"cs.LG",
"cs.CV",
"ACM-class"
] | false |
2305.05422 | 2023-05-09T13:14:40Z | Egocentric Hierarchical Visual Semantics | [
"Luca Erculiani",
"Andrea Bontempelli",
"Andrea Passerini",
"Fausto Giunchiglia"
] | We are interested in aligning how people think about objects and what
machines perceive, meaning by this the fact that object recognition, as
performed by a machine, should follow a process which resembles that followed
by humans when thinking of an object associated with a certain concept. The
ultimate goal is to build systems which can meaningfully interact with their
users, describing what they perceive in the users' own terms. As known from the field
of Lexical Semantics, humans organize the meaning of words in hierarchies where
the meaning of, e.g., a noun, is defined in terms of the meaning of a more
general noun, its genus, and of one or more differentiating properties, its
differentia. The main tenet of this paper is that object recognition should
implement a hierarchical process which follows the hierarchical semantic
structure used to define the meaning of words. We achieve this goal by
implementing an algorithm which, for any object, recursively recognizes its
visual genus and its visual differentia. In other words, the recognition of an
object is decomposed in a sequence of steps where the locally relevant visual
features are recognized. This paper presents the algorithm and a first
evaluation. | [
"cs.AI",
"cs.CV"
] | false |
2305.05423 | 2023-05-09T13:15:19Z | High-throughput Cotton Phenotyping Big Data Pipeline Lambda Architecture
Computer Vision Deep Neural Networks | [
"Amanda Issac",
"Alireza Ebrahimi",
"Javad Mohammadpour Velni",
"Glen Rains"
] | In this study, we propose a big data pipeline for cotton bloom detection
using a Lambda architecture, which enables real-time and batch processing of
data. Our proposed approach leverages Azure resources such as Data Factory,
Event Grids, Rest APIs, and Databricks. This work is the first to develop and
demonstrate the implementation of such a pipeline for plant phenotyping through
Azure's cloud computing service. The proposed pipeline consists of data
preprocessing, object detection using a YOLOv5 neural network model trained
through Azure AutoML, and visualization of object detection bounding boxes on
output images. The trained model achieves a mean Average Precision (mAP) score
of 0.96, demonstrating its high performance for cotton bloom classification. We
evaluate our Lambda architecture pipeline using 9000 images yielding an
optimized runtime of 34 minutes. The results illustrate the scalability of the
proposed pipeline as a solution for deep learning object detection, with the
potential for further expansion through additional Azure processing cores. This
work advances the scientific research field by providing a new method for
cotton bloom detection on a large dataset and demonstrates the potential of
utilizing cloud computing resources, specifically Azure, for efficient and
accurate big data processing in precision agriculture. | [
"cs.CV",
"cs.LG"
] | false |
2305.05430 | 2023-05-09T13:18:35Z | Bone Marrow Cytomorphology Cell Detection using InceptionResNetV2 | [
"Raisa Fairooz Meem",
"Khandaker Tabin Hasan"
] | Critical clinical decision points in haematology are influenced by the
requirement of bone marrow cytology for a haematological diagnosis. Bone marrow
cytology, however, is restricted to reference facilities with expertise, is
linked to inter-observer variability, and takes a long time to process, which
could result in a delayed or inaccurate diagnosis, leaving an unmet need for
cutting-edge supporting technologies. This paper presents a novel transfer
learning model for Bone Marrow Cell Detection to provide a solution to all the
difficulties faced in the task, along with considerable accuracy. The proposed
model achieved 96.19% accuracy and can be used in the future for analysis of
other medical images in this domain. | [
"eess.IV",
"cs.CV"
] | false |
2305.05432 | 2023-05-09T13:20:59Z | WikiWeb2M: A Page-Level Multimodal Wikipedia Dataset | [
"Andrea Burns",
"Krishna Srinivasan",
"Joshua Ainslie",
"Geoff Brown",
"Bryan A. Plummer",
"Kate Saenko",
"Jianmo Ni",
"Mandy Guo"
] | Webpages have been a rich resource for language and vision-language tasks.
Yet only pieces of webpages are kept: image-caption pairs, long text articles,
or raw HTML, never all in one place. As a result, webpage tasks have received
little attention and structured image-text data has gone underused. To study multimodal
webpage understanding, we introduce the Wikipedia Webpage 2M (WikiWeb2M) suite;
the first to retain the full set of images, text, and structure data available
in a page. WikiWeb2M can be used for tasks like page description generation,
section summarization, and contextual image captioning. | [
"cs.CL",
"cs.CV"
] | true |
2305.05464 | 2023-05-09T14:03:27Z | Style-A-Video: Agile Diffusion for Arbitrary Text-based Video Style
Transfer | [
"Nisha Huang",
"Yuxin Zhang",
"Weiming Dong"
] | Large-scale text-to-video diffusion models have demonstrated an exceptional
ability to synthesize diverse videos. However, due to the lack of extensive
text-to-video datasets and the necessary computational resources for training,
directly applying these models for video stylization remains difficult. Also,
given that the noise addition process on the input content is random and
destructive, fulfilling the style transfer task's content preservation criteria
is challenging. This paper proposes a zero-shot video stylization method named
Style-A-Video, which utilizes a generative pre-trained transformer with an
image latent diffusion model to achieve a concise text-controlled video
stylization. We improve the guidance condition in the denoising process,
establishing a balance between artistic expression and structure preservation.
Furthermore, to decrease inter-frame flicker and avoid the formation of
additional artifacts, we employ a sampling optimization and a temporal
consistency module. Extensive experiments show that we can attain superior
content preservation and stylistic performance while incurring less consumption
than previous solutions. Code will be available at
https://github.com/haha-lisa/Style-A-Video. | [
"cs.CV",
"cs.MM"
] | false |
2305.05534 | 2023-05-09T15:28:24Z | Integrating Holistic and Local Information to Estimate Emotional
Reaction Intensity | [
"Yini Fang",
"Liang Wu",
"Frederic Jumelle",
"Bertram Shi"
] | Video-based Emotional Reaction Intensity (ERI) estimation measures the
intensity of subjects' reactions to stimuli along several emotional dimensions
from videos of the subject as they view the stimuli. We propose a multi-modal
architecture for video-based ERI combining video and audio information. Video
input is encoded spatially first, frame-by-frame, combining features encoding
holistic aspects of the subjects' facial expressions and features encoding
spatially localized aspects of their expressions. Input is then combined across
time: from frame-to-frame using gated recurrent units (GRUs), then globally by
a transformer. We handle variable video length with a regression token that
accumulates information from all frames into a fixed-dimensional vector
independent of video length. Audio information is handled similarly: spectral
information extracted within each frame is integrated across time by a cascade
of GRUs and a transformer with regression token. The video and audio regression
tokens' outputs are merged by concatenation, then input to a final fully
connected layer producing intensity estimates. Our architecture achieved
excellent performance on the Hume-Reaction dataset in the ERI Estimation
Challenge of the Fifth Competition on Affective Behavior Analysis in-the-Wild
(ABAW5). The Pearson Correlation Coefficients between estimated and subject
self-reported scores, averaged across all emotions, were 0.455 on the
validation dataset and 0.4547 on the test dataset, well above the baselines.
The transformer's self-attention mechanism enables our architecture to focus on
the most critical video frames regardless of length. Ablation experiments
establish the advantages of combining holistic/local features and of
multi-modal integration. Code available at https://github.com/HKUST-NISL/ABAW5. | [
"cs.CV",
"cs.LG"
] | false |
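The regression-token aggregation described in the ERI abstract above is concrete enough to sketch. The PyTorch snippet below shows only that mechanism: a learnable token prepended to a variable-length frame sequence and read out after a transformer encoder. Dimensions, layer counts, and the GRU stage are assumptions or omitted.

```python
import torch
import torch.nn as nn

class RegressionTokenPool(nn.Module):
    """Learnable 'regression token' that accumulates information from a
    variable number of per-frame features into one fixed-size vector."""
    def __init__(self, dim=256, heads=4, layers=2, n_outputs=7):
        super().__init__()
        self.token = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)
        self.head = nn.Linear(dim, n_outputs)    # per-emotion intensities

    def forward(self, frames):                   # frames: (batch, T, dim)
        tok = self.token.expand(frames.size(0), -1, -1)
        x = self.encoder(torch.cat([tok, frames], dim=1))
        return self.head(x[:, 0])                # read out the token position

model = RegressionTokenPool()
print(model(torch.randn(2, 37, 256)).shape)      # torch.Size([2, 7]); any T works
```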
2305.05580 | 2023-05-09T16:14:57Z | Fashion CUT: Unsupervised domain adaptation for visual pattern
classification in clothes using synthetic data and pseudo-labels | [
"Enric Moreu",
"Alex Martinelli",
"Martina Naughton",
"Philip Kelly",
"Noel E. O'Connor"
] | Accurate product information is critical for e-commerce stores to allow
customers to browse, filter, and search for products. Product data quality is
affected by missing or incorrect information resulting in poor customer
experience. While machine learning can be used to correct inaccurate or missing
information, achieving high performance on fashion image classification tasks
requires large amounts of annotated data, but it is expensive to generate due
to labeling costs. One solution can be to generate synthetic data which
requires no manual labeling. However, training a model with a dataset of solely
synthetic images can lead to poor generalization when performing inference on
real-world data because of the domain shift. We introduce a new unsupervised
domain adaptation technique that converts images from the synthetic domain into
the real-world domain. Our approach combines a generative neural network and a
classifier that are jointly trained to produce realistic images while
preserving the synthetic label information. We found that using real-world
pseudo-labels during training helps the classifier to generalize in the
real-world domain, reducing the synthetic bias. We successfully train a visual
pattern classification model in the fashion domain without real-world
annotations. Experiments show that our method outperforms other unsupervised
domain adaptation algorithms. | [
"cs.CV",
"cs.LG"
] | false |
2305.05766 | 2023-05-09T20:59:14Z | Instant-NeRF: Instant On-Device Neural Radiance Field Training via
Algorithm-Accelerator Co-Designed Near-Memory Processing | [
"Yang Zhao",
"Shang Wu",
"Jingqun Zhang",
"Sixu Li",
"Chaojian Li",
"Yingyan Lin"
] | Instant on-device Neural Radiance Fields (NeRFs) are in growing demand for
unleashing the promise of immersive AR/VR experiences, but are still limited by
their prohibitive training time. Our profiling analysis reveals a memory-bound
inefficiency in NeRF training. To tackle this inefficiency, near-memory
processing (NMP) promises to be an effective solution, but also faces
challenges due to the unique workloads of NeRFs, including the random hash
table lookup, random point processing sequence, and heterogeneous bottleneck
steps. Therefore, we propose the first NMP framework, Instant-NeRF, dedicated
to enabling instant on-device NeRF training. Experiments on eight datasets
consistently validate the effectiveness of Instant-NeRF. | [
"cs.CV",
"cs.AR"
] | false |
2305.05768 | 2023-05-09T21:03:13Z | DifFIQA: Face Image Quality Assessment Using Denoising Diffusion
Probabilistic Models | [
"Žiga Babnik",
"Peter Peer",
"Vitomir Štruc"
] | Modern face recognition (FR) models excel in constrained scenarios, but often
suffer from decreased performance when deployed in unconstrained (real-world)
environments due to uncertainties surrounding the quality of the captured
facial data. Face image quality assessment (FIQA) techniques aim to mitigate
these performance degradations by providing FR models with sample-quality
predictions that can be used to reject low-quality samples and reduce false
match errors. However, despite steady improvements, ensuring reliable quality
estimates across facial images with diverse characteristics remains
challenging. In this paper, we present a powerful new FIQA approach, named
DifFIQA, which relies on denoising diffusion probabilistic models (DDPM) and
ensures highly competitive results. The main idea behind the approach is to
utilize the forward and backward processes of DDPMs to perturb facial images
and quantify the impact of these perturbations on the corresponding image
embeddings for quality prediction. Because the diffusion-based perturbations
are computationally expensive, we also distill the knowledge encoded in DifFIQA
into a regression-based quality predictor, called DifFIQA(R), that balances
performance and execution time. We evaluate both models in comprehensive
experiments on 7 datasets, with 4 target FR models and against 10
state-of-the-art FIQA techniques with highly encouraging results. The source
code will be made publicly available. | [
"cs.CV",
"cs.LG"
] | false |
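The core of DifFIQA, per the abstract above, is scoring quality by how stable a face embedding is under diffusion-based perturbations. Below is a minimal sketch of that scoring step only, assuming the perturbed embeddings have already been computed (running an actual DDPM is outside this snippet).

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def diffiqa_style_quality(emb_original, embs_perturbed):
    """Quality score in the spirit of DifFIQA: the more stable the face
    embedding is under perturbation, the higher the predicted quality."""
    return float(np.mean([cosine(emb_original, e) for e in embs_perturbed]))

e0 = np.random.randn(512)                        # toy face embedding
perturbed = [e0 + 0.1 * np.random.randn(512) for _ in range(4)]
print(round(diffiqa_style_quality(e0, perturbed), 3))  # near 1.0 -> high quality
```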
2305.05784 | 2023-05-09T22:09:35Z | Comprehensive Dataset of Synthetic and Manipulated Overhead Imagery for
Development and Evaluation of Forensic Tools | [
"Brandon B. May",
"Kirill Trapeznikov",
"Shengbang Fang",
"Matthew C. Stamm"
] | We present a first-of-its-kind dataset of overhead imagery for development
and evaluation of forensic tools. Our dataset consists of real, fully synthetic
and partially manipulated overhead imagery generated from a custom diffusion
model trained on two sets of different zoom levels and on two sources of
pristine data. We developed our model to support controllable generation of
multiple manipulation categories including fully synthetic imagery conditioned
on real and generated base maps, and location. We also support partial
in-painted imagery with same conditioning options and with several types of
manipulated content. The data consist of raw images and ground truth
annotations describing the manipulation parameters. We also report benchmark
performance on several tasks supported by our dataset including detection of
fully and partially manipulated imagery, manipulation localization and
classification. | [
"cs.CV",
"cs.CR"
] | false |
2305.05100 | 2023-05-09T00:11:00Z | Adaptive Domain Generalization for Digital Pathology Images | [
"Andrew Walker"
] | In AI-based histopathology, domain shifts are common and well-studied.
However, this research focuses on stain and scanner variations, which do not
show the full picture -- shifts may be combinations of other shifts, or
"invisible" shifts that are not obvious but still damage performance of machine
learning models. Furthermore, it is important for models to generalize to these
shifts without expensive or scarce annotations, especially in the
histopathology space and if wanting to deploy models on a larger scale. Thus,
there is a need for "reactive" domain generalization techniques: ones that
adapt to domain shifts at test-time rather than requiring predictions of or
examples of the shifts at training time. We conduct a literature review and
introduce techniques that react to domain shifts rather than requiring a
prediction of them in advance. We investigate test time training, a technique
for domain generalization that adapts model parameters at test-time through
optimization of a secondary self-supervised task. | [
"eess.IV",
"cs.CV",
"cs.LG"
] | false |
2305.05101 | 2023-05-09T00:11:35Z | Towards unraveling calibration biases in medical image analysis | [
"María Agustina Ricci Lara",
"Candelaria Mosquera",
"Enzo Ferrante",
"Rodrigo Echeveste"
] | In recent years the development of artificial intelligence (AI) systems for
automated medical image analysis has gained enormous momentum. At the same
time, a large body of work has shown that AI systems can systematically and
unfairly discriminate against certain populations in various application
scenarios. These two facts have motivated the emergence of algorithmic fairness
studies in this field. Most research on healthcare algorithmic fairness to date
has focused on the assessment of biases in terms of classical discrimination
metrics such as AUC and accuracy. Potential biases in terms of model
calibration, however, have only recently begun to be evaluated. This is
especially important when working with clinical decision support systems, as
predictive uncertainty is key for health professionals to optimally evaluate
and combine multiple sources of information. In this work we study
discrimination and calibration biases in models trained for automatic detection
of malignant dermatological conditions from skin lesions images. Importantly,
we show how several typically employed calibration metrics are systematically
biased with respect to sample sizes, and how this can lead to erroneous
fairness analysis if not taken into consideration. This is of particular
relevance to fairness studies, where data imbalance results in drastic sample
size differences between demographic sub-groups, which, if not taken into
account, can act as confounders. | [
"eess.IV",
"cs.CV",
"cs.LG"
] | false |
2305.05153 | 2023-05-09T03:33:14Z | DeepTree: Modeling Trees with Situated Latents | [
"Xiaochen Zhou",
"Bosheng Li",
"Bedrich Benes",
"Songlin Fei",
"Sören Pirk"
] | In this paper, we propose DeepTree, a novel method for modeling trees based
on learning developmental rules for branching structures instead of manually
defining them. We call our deep neural model situated latent because its
behavior is determined by the intrinsic state -- encoded as a latent space of a
deep neural model -- and by the extrinsic (environmental) data that is situated
as the location in the 3D space and on the tree structure. We use a neural
network pipeline to train a situated latent space that allows us to locally
predict branch growth only based on a single node in the branch graph of a tree
model. We use this representation to progressively develop new branch nodes,
thereby mimicking the growth process of trees. Starting from a root node, a
tree is generated by iteratively querying the neural network on the newly added
nodes resulting in the branching structure of the whole tree. Our method
enables generating a wide variety of tree shapes without the need to define
intricate parameters that control their growth and behavior. Furthermore, we
show that the situated latents can also be used to encode the environmental
response of tree models, e.g., when trees grow next to obstacles. We validate
the effectiveness of our method by measuring the similarity between our tree models
and procedurally generated ones, based on a number of established metrics for
tree form. | [
"cs.LG",
"cs.CV",
"cs.GR"
] | false |
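DeepTree's generation loop, as described above, iteratively queries a growth model on newly added branch nodes. A toy sketch of that loop follows, with a hypothetical `grow_step` standing in for the trained situated-latent network.

```python
import random

def grow_step(node):
    """Stand-in for the situated-latent network: given one branch-graph node
    (position + depth), predict its child branches. Toy rule, not the
    paper's trained model."""
    x, y, depth = node
    if depth >= 3 or random.random() < 0.2:      # toy termination criterion
        return []
    return [(x + random.uniform(-1, 1), y + 1.0, depth + 1)
            for _ in range(random.choice([1, 2]))]

def generate_tree(root=(0.0, 0.0, 0)):
    """Iteratively query the growth model on newly added nodes, DeepTree-style."""
    nodes, frontier = [root], [root]
    while frontier:
        children = grow_step(frontier.pop())
        nodes.extend(children)
        frontier.extend(children)
    return nodes

print(len(generate_tree()), "nodes generated")
```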
2305.05445 | 2023-05-09T13:38:13Z | StyleSync: High-Fidelity Generalized and Personalized Lip Sync in
Style-based Generator | [
"Jiazhi Guan",
"Zhanwang Zhang",
"Hang Zhou",
"Tianshu Hu",
"Kaisiyuan Wang",
"Dongliang He",
"Haocheng Feng",
"Jingtuo Liu",
"Errui Ding",
"Ziwei Liu",
"Jingdong Wang"
] | Despite recent advances in syncing lip movements with any audio waves,
current methods still struggle to balance generation quality and the model's
generalization ability. Previous studies either require long-term data for
training or produce a similar movement pattern on all subjects with low
quality. In this paper, we propose StyleSync, an effective framework that
enables high-fidelity lip synchronization. We identify that a style-based
generator would sufficiently enable such a charming property in both one-shot
and few-shot scenarios. Specifically, we design a mask-guided spatial
information encoding module that preserves the details of the given face. The
mouth shapes are accurately modified by audio through modulated convolutions.
Moreover, our design also enables personalized lip-sync by introducing style
space and generator refinement on only limited frames. Thus the identity and
talking style of a target person could be accurately preserved. Extensive
experiments demonstrate the effectiveness of our method in producing
high-fidelity results on a variety of scenes. Resources can be found at
https://hangz-nju-cuhk.github.io/projects/StyleSync. | [
"cs.CV",
"cs.GR",
"cs.MM"
] | false |
2305.05505 | 2023-05-09T14:54:41Z | Recursions Are All You Need: Towards Efficient Deep Unfolding Networks | [
"Rawwad Alhejaili",
"Motaz Alfarraj",
"Hamzah Luqman",
"Ali Al-Shaikhi"
] | The use of deep unfolding networks in compressive sensing (CS) has seen wide
success as they provide both simplicity and interpretability. However, since
most deep unfolding networks are iterative, this incurs significant
redundancies in the network. In this work, we propose a novel recursion-based
framework to enhance the efficiency of deep unfolding models. First, recursions
are used to effectively eliminate the redundancies in deep unfolding networks.
Secondly, we randomize the number of recursions during training to decrease the
overall training time. Finally, to effectively utilize the power of recursions,
we introduce a learnable unit to modulate the features of the model based on
both the total number of iterations and the current iteration index. To
evaluate the proposed framework, we apply it to both ISTA-Net+ and COAST.
Extensive testing shows that our proposed framework allows the network to cut
down as much as 75% of its learnable parameters while mostly maintaining its
performance, and at the same time, it cuts around 21% and 42% from the training
time for ISTA-Net+ and COAST respectively. Moreover, when presented with a
limited training dataset, the recursive models match or even outperform their
respective non-recursive baseline. Codes and pretrained models are available at
https://github.com/Rawwad-Alhejaili/Recursions-Are-All-You-Need . | [
"cs.CV",
"cs.LG",
"eess.IV"
] | false |
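The recursion ideas in the abstract above (one shared block applied repeatedly, a randomized recursion count during training, and a learnable unit conditioned on the iteration index) can be sketched as follows. The architecture is an assumption for illustration, not the paper's ISTA-Net+/COAST integration.

```python
import torch
import torch.nn as nn

class RecursiveUnfolding(nn.Module):
    """One shared phase block applied R times, with a learnable unit that
    modulates features based on the current and total iteration counts."""
    def __init__(self, dim=32, max_recursions=9):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, padding=1))
        self.modulate = nn.Linear(2, dim)        # (i/R, 1/R) -> channel scales
        self.max_recursions = max_recursions

    def forward(self, x, recursions=None):
        if recursions is None:                   # randomize R during training
            recursions = (int(torch.randint(1, self.max_recursions + 1, (1,)))
                          if self.training else self.max_recursions)
        for i in range(recursions):
            idx = torch.tensor([[(i + 1) / recursions, 1.0 / recursions]])
            scale = torch.sigmoid(self.modulate(idx)).view(1, -1, 1, 1)
            x = x + scale * self.block(x)        # shared weights, no redundancy
        return x

net = RecursiveUnfolding()
print(net(torch.randn(1, 32, 16, 16)).shape)
```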
2305.05591 | 2023-05-09T16:28:07Z | AudioSlots: A slot-centric generative model for audio separation | [
"Pradyumna Reddy",
"Scott Wisdom",
"Klaus Greff",
"John R. Hershey",
"Thomas Kipf"
] | In a range of recent works, object-centric architectures have been shown to
be suitable for unsupervised scene decomposition in the vision domain. Inspired
by these methods we present AudioSlots, a slot-centric generative model for
blind source separation in the audio domain. AudioSlots is built using
permutation-equivariant encoder and decoder networks. The encoder network based
on the Transformer architecture learns to map a mixed audio spectrogram to an
unordered set of independent source embeddings. The spatial broadcast decoder
network learns to generate the source spectrograms from the source embeddings.
We train the model in an end-to-end manner using a permutation invariant loss
function. Our results on Libri2Mix speech separation constitute a proof of
concept that this approach shows promise. We discuss the results and
limitations of our approach in detail, and further outline potential ways to
overcome the limitations and directions for future work. | [
"cs.SD",
"cs.CV",
"eess.AS"
] | true |
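AudioSlots is trained with a permutation-invariant loss over an unordered set of predicted sources. A minimal NumPy sketch of such a loss using Hungarian matching follows; the paper's exact loss may differ in detail.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def permutation_invariant_mse(pred_specs, true_specs):
    """Match predicted source spectrograms to references with the Hungarian
    algorithm, then average the matched MSEs, so the loss does not depend
    on the ordering of the predicted set."""
    cost = np.array([[np.mean((p - t) ** 2) for t in true_specs]
                     for p in pred_specs])
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].mean()

a, b = np.random.randn(2, 64, 100)
# Loss is identical no matter how the predictions are ordered:
print(np.isclose(permutation_invariant_mse([a, b], [b, a]),
                 permutation_invariant_mse([b, a], [b, a])))
```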
2305.05594 | 2023-05-09T16:35:39Z | PET-NeuS: Positional Encoding Tri-Planes for Neural Surfaces | [
"Yiqun Wang",
"Ivan Skorokhodov",
"Peter Wonka"
] | A signed distance function (SDF) parametrized by an MLP is a common
ingredient of neural surface reconstruction. We build on the successful recent
method NeuS to extend it by three new components. The first component is to
borrow the tri-plane representation from EG3D and represent signed distance
fields as a mixture of tri-planes and MLPs instead of representing it with MLPs
only. Using tri-planes leads to a more expressive data structure but will also
introduce noise in the reconstructed surface. The second component is to use a
new type of positional encoding with learnable weights to combat noise in the
reconstruction process. We divide the features in the tri-plane into multiple
frequency scales and modulate them with sin and cos functions of different
frequencies. The third component is to use learnable convolution operations on
the tri-plane features using self-attention convolution to produce features
with different frequency bands. The experiments show that PET-NeuS achieves
high-fidelity surface reconstruction on standard datasets. Following previous
work and using the Chamfer metric as the most important way to measure surface
reconstruction quality, we are able to improve upon the NeuS baseline by 57% on
Nerf-synthetic (0.84 compared to 1.97) and by 15.5% on DTU (0.71 compared to
0.84). The qualitative evaluation reveals how our method can better control the
interference of high-frequency noise. Code available at
\url{https://github.com/yiqun-wang/PET-NeuS}. | [
"cs.CV",
"cs.AI",
"cs.GR"
] | false |
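The second PET-NeuS component above, splitting tri-plane features into frequency scales and modulating them with sin and cos of different frequencies, can be illustrated roughly as follows. The channel split and the coordinate being modulated are assumptions.

```python
import math
import torch

def modulate_triplane_features(feats, num_bands=4):
    """Split channels into frequency bands and modulate band k with
    sin/cos at frequency 2^k (sketch of the PET-NeuS idea)."""
    B, C, H, W = feats.shape
    assert C % num_bands == 0
    u = torch.linspace(0, 1, W).view(1, 1, 1, W)   # per-pixel coordinate
    out = []
    for k, band in enumerate(feats.chunk(num_bands, dim=1)):
        freq = (2.0 ** k) * math.pi
        out.append(band * torch.sin(freq * u) + band * torch.cos(freq * u))
    return torch.cat(out, dim=1)

print(modulate_triplane_features(torch.randn(1, 16, 8, 8)).shape)
```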
2305.05648 | 2023-05-09T17:46:43Z | Predicting Cardiovascular Disease Risk using Photoplethysmography and
Deep Learning | [
"Wei-Hung Weng",
"Sebastien Baur",
"Mayank Daswani",
"Christina Chen",
"Lauren Harrell",
"Sujay Kakarmath",
"Mariam Jabara",
"Babak Behsaz",
"Cory Y. McLean",
"Yossi Matias",
"Greg S. Corrado",
"Shravya Shetty",
"Shruthi Prabhakara",
"Yun Liu",
"Goodarz Danaei",
"Diego Ardila"
] | Cardiovascular diseases (CVDs) are responsible for a large proportion of
premature deaths in low- and middle-income countries. Early CVD detection and
intervention is critical in these populations, yet many existing CVD risk
scores require a physical examination or lab measurements, which can be
challenging in such health systems due to limited accessibility. Here we
investigated the potential to use photoplethysmography (PPG), a sensing
technology available on most smartphones that can potentially enable
large-scale screening at low cost, for CVD risk prediction. We developed a deep
learning PPG-based CVD risk score (DLS) to predict the probability of having
major adverse cardiovascular events (MACE: non-fatal myocardial infarction,
stroke, and cardiovascular death) within ten years, given only age, sex,
smoking status and PPG as predictors. We compared the DLS with the office-based
refit-WHO score, which adopts the shared predictors from WHO and Globorisk
scores (age, sex, smoking status, height, weight and systolic blood pressure)
but refitted on the UK Biobank (UKB) cohort. In UKB cohort, DLS's C-statistic
(71.1%, 95% CI 69.9-72.4) was non-inferior to office-based refit-WHO score
(70.9%, 95% CI 69.7-72.2; non-inferiority margin of 2.5%, p<0.01). The
calibration of the DLS was satisfactory, with a 1.8% mean absolute calibration
error. Adding DLS features to the office-based score increased the C-statistic
by 1.0% (95% CI 0.6-1.4). DLS predicts ten-year MACE risk comparable with the
office-based refit-WHO score. It provides a proof-of-concept and suggests the
potential of PPG-based strategies for community-based primary
prevention in resource-limited regions. | [
"cs.CV",
"cs.AI",
"cs.LG"
] | false |
2305.05706 | 2023-05-09T18:30:58Z | DexArt: Benchmarking Generalizable Dexterous Manipulation with
Articulated Objects | [
"Chen Bao",
"Helin Xu",
"Yuzhe Qin",
"Xiaolong Wang"
] | To enable general-purpose robots, we will require the robot to operate everyday
articulated objects as humans do. Current robot manipulation has heavily relied
on using a parallel gripper, which restricts the robot to a limited set of
objects. On the other hand, operating with a multi-finger robot hand will allow
better approximation to human behavior and enable the robot to operate on
diverse articulated objects. To this end, we propose a new benchmark called
DexArt, which involves Dexterous manipulation with Articulated objects in a
physical simulator. In our benchmark, we define multiple complex manipulation
tasks, and the robot hand will need to manipulate diverse articulated objects
within each task. Our main focus is to evaluate the generalizability of the
learned policy on unseen articulated objects. This is very challenging given
the high degrees of freedom of both hands and objects. We use Reinforcement
Learning with 3D representation learning to achieve generalization. Through
extensive studies, we provide new insights into how 3D representation learning
affects decision making in RL with 3D point cloud inputs. More details can be
found at https://www.chenbao.tech/dexart/. | [
"cs.RO",
"cs.CV",
"cs.LG"
] | true |
2305.05732 | 2023-05-09T19:24:09Z | Duke Spleen Data Set: A Publicly Available Spleen MRI and CT dataset for
Training Segmentation | [
"Yuqi Wang",
"Jacob A. Macdonald",
"Katelyn R. Morgan",
"Danielle Hom",
"Sarah Cubberley",
"Kassi Sollace",
"Nicole Casasanto",
"Islam H. Zaki",
"Kyle J. Lafata",
"Mustafa R. Bashir"
] | Spleen volumetry is primarily associated with patients suffering from chronic
liver disease and portal hypertension, as they often have spleens with abnormal
shapes and sizes. However, manually segmenting the spleen to obtain its volume
is a time-consuming process. Deep learning algorithms have proven to be
effective in automating spleen segmentation, but a suitable dataset is
necessary for training such algorithms. To our knowledge, the few publicly
available datasets for spleen segmentation lack confounding features such as
ascites and abdominal varices. To address this issue, the Duke Spleen Data Set
(DSDS) has been developed, which includes 109 CT and MRI volumes from patients
with chronic liver disease and portal hypertension. The dataset includes a
diverse range of image types, vendors, planes, and contrasts, as well as
varying spleen shapes and sizes due to underlying disease states. The DSDS aims
to facilitate the creation of robust spleen segmentation models that can take
into account these variations and confounding factors. | [
"eess.IV",
"cs.CV",
"cs.LG"
] | false |
2305.05813 | 2023-05-09T23:52:37Z | Change Detection Methods for Remote Sensing in the Last Decade: A
Comprehensive Review | [
"Guangliang Cheng",
"Yunmeng Huang",
"Xiangtai Li",
"Shuchang Lyu",
"Zhaoyang Xu",
"Qi Zhao",
"Shiming Xiang"
] | Change detection is an essential and widely utilized task in remote sensing
that aims to detect and analyze changes occurring in the same geographical area
over time, which has broad applications in urban development, agricultural
surveys, and land cover monitoring. Detecting changes in remote sensing images
is a complex challenge due to various factors, including variations in image
quality, noise, registration errors, illumination changes, complex landscapes,
and spatial heterogeneity. In recent years, deep learning has emerged as a
powerful tool for feature extraction and addressing these challenges. Its
versatility has resulted in its widespread adoption for numerous
image-processing tasks. This paper presents a comprehensive survey of
significant advancements in change detection for remote sensing images over the
past decade. We first introduce some preliminary knowledge for the change
detection task, such as problem definition, datasets, evaluation metrics, and
transformer basics, as well as provide a detailed taxonomy of existing
algorithms from three different perspectives: algorithm granularity,
supervision modes, and learning frameworks in the methodology section. This
survey enables readers to gain systematic knowledge of change detection tasks
from various angles. We then summarize the state-of-the-art performance on
several dominant change detection datasets, providing insights into the
strengths and limitations of existing algorithms. Based on our survey, some
future research directions for change detection in remote sensing are well
identified. This survey paper will shed some light on the community and inspire
further research efforts in the change detection task. | [
"cs.CV",
"cs.LG",
"eess.IV"
] | false |
2305.05661 | 2023-05-09T17:55:48Z | ShapeCoder: Discovering Abstractions for Visual Programs from
Unstructured Primitives | [
"R. Kenny Jones",
"Paul Guerrero",
"Niloy J. Mitra",
"Daniel Ritchie"
] | Programs are an increasingly popular representation for visual data, exposing
compact, interpretable structure that supports manipulation. Visual programs
are usually written in domain-specific languages (DSLs). Finding "good"
programs, that only expose meaningful degrees of freedom, requires access to a
DSL with a "good" library of functions, both of which are typically authored by
domain experts. We present ShapeCoder, the first system capable of taking a
dataset of shapes, represented with unstructured primitives, and jointly
discovering (i) useful abstraction functions and (ii) programs that use these
abstractions to explain the input shapes. The discovered abstractions capture
common patterns (both structural and parametric) across the dataset, so that
programs rewritten with these abstractions are more compact, and expose fewer
degrees of freedom. ShapeCoder improves upon previous abstraction discovery
methods, finding better abstractions, for more complex inputs, under less
stringent input assumptions. This is principally made possible by two
methodological advancements: (a) a shape to program recognition network that
learns to solve sub-problems and (b) the use of e-graphs, augmented with a
conditional rewrite scheme, to determine when abstractions with complex
parametric expressions can be applied, in a tractable manner. We evaluate
ShapeCoder on multiple datasets of 3D shapes, where primitive decompositions
are either parsed from manual annotations or produced by an unsupervised cuboid
abstraction method. In all domains, ShapeCoder discovers a library of
abstractions that capture high-level relationships, remove extraneous degrees
of freedom, and achieve better dataset compression compared with alternative
approaches. Finally, we investigate how programs rewritten to use discovered
abstractions prove useful for downstream tasks. | [
"cs.GR",
"cs.AI",
"cs.CV",
"cs.LG",
"cs.PL"
] | false |
2305.14359 | 2023-05-09T02:37:29Z | Zero-shot personalized lip-to-speech synthesis with face image based
voice control | [
"Zheng-Yan Sheng",
"Yang Ai",
"Zhen-Hua Ling"
] | Lip-to-Speech (Lip2Speech) synthesis, which predicts corresponding speech
from talking face images, has witnessed significant progress with various
models and training strategies in a series of independent studies. However,
existing studies cannot achieve voice control under the zero-shot condition,
because extra speaker embeddings need to be extracted from natural reference
speech and are unavailable when only the silent video of an unseen speaker is
given. In this paper, we propose a zero-shot personalized Lip2Speech synthesis
method, in which face images control speaker identities. A variational
autoencoder is adopted to disentangle the speaker identity and linguistic
content representations, which enables speaker embeddings to control the voice
characteristics of synthetic speech for unseen speakers. Furthermore, we
propose associated cross-modal representation learning to promote the ability
of face-based speaker embeddings (FSE) on voice control. Extensive experiments
verify the effectiveness of the proposed method, whose synthetic utterances are
more natural and better matched to the personality of the input video than those
of the compared methods. To the best of our knowledge, this paper makes the first attempt at zero-shot
personalized Lip2Speech synthesis with a face image rather than reference audio
to control voice characteristics. | [
"cs.MM",
"cs.AI",
"cs.CV",
"cs.SD",
"eess.AS"
] | false |
2305.05138 | 2023-05-09T02:49:09Z | Read, Diagnose and Chat: Towards Explainable and Interactive
LLMs-Augmented Depression Detection in Social Media | [
"Wei Qin",
"Zetong Chen",
"Lei Wang",
"Yunshi Lan",
"Weijieying Ren",
"Richang Hong"
] | This paper proposes a new depression detection system based on LLMs that is
both interpretable and interactive. It not only provides a diagnosis, but also
diagnostic evidence and personalized recommendations based on natural language
dialogue with the user. We address challenges such as the processing of large
amounts of text and integrate professional diagnostic criteria. Our system
outperforms traditional methods across various settings and is demonstrated
through case studies. | [
"cs.CL"
] | false |
2305.05171 | 2023-05-09T04:45:24Z | Summarization with Precise Length Control | [
"Lesly Miculicich",
"Yujia Xie",
"Song Wang",
"Pengcheng He"
] | Many applications of text generation such as summarization benefit from
accurately controlling the text length. Existing approaches on
length-controlled summarization either result in degraded performance or can
only control the length approximately. In this work, we present a framework to
generate summaries with precisely the specified number of tokens or sentences,
while maintaining or even improving the text quality. In addition, we jointly
train the models to predict the lengths, so our model can generate summaries
with optimal length. We evaluate the proposed framework on the CNNDM dataset
and show improved performance compared to existing methods. | [
"cs.CL"
] | false |
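For context on hard length control as discussed above, the snippet below shows the generic mechanism of banning EOS until a token budget is reached and then forcing it. This illustrates exact-length decoding in general, not the paper's joint length-prediction method; `step_logits_fn` is a hypothetical model interface.

```python
def generate_exact_length(step_logits_fn, bos_id, eos_id, target_len):
    """Greedy decoding of exactly `target_len` tokens: EOS is suppressed
    before the budget is reached and appended once it is."""
    prefix = [bos_id]
    while len(prefix) - 1 < target_len:
        logits = dict(step_logits_fn(prefix))
        logits.pop(eos_id, None)          # EOS is not allowed yet
        prefix.append(max(logits, key=logits.get))
    prefix.append(eos_id)                 # budget reached: stop exactly here
    return prefix[1:-1]

# Toy "model": always prefers EOS (id 0), so without the ban it would stop early.
toy = lambda prefix: {0: 5.0, 7: 1.0, 9: 0.5}
print(generate_exact_length(toy, bos_id=1, eos_id=0, target_len=5))  # 5 tokens
```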
2305.05253 | 2023-05-09T08:21:11Z | Attack Named Entity Recognition by Entity Boundary Interference | [
"Yifei Yang",
"Hongqiu Wu",
"Hai Zhao"
] | Named Entity Recognition (NER) is a cornerstone NLP task while its robustness
has been given little attention. This paper rethinks the principles of NER
attacks derived from sentence classification, as they can easily violate the
label consistency between the original and adversarial NER examples. This is
due to the fine-grained nature of NER, as even minor word changes in the
sentence can result in the emergence or mutation of any entities, resulting in
invalid adversarial examples. To this end, we propose a novel one-word
modification NER attack based on a key insight: NER models are always
vulnerable to the boundary position of an entity to make their decision. We
thus strategically insert a new boundary into the sentence and trigger the
Entity Boundary Interference that the victim model makes the wrong prediction
either on this boundary word or on other words in the sentence. We call this
attack Virtual Boundary Attack (ViBA), which is shown to be remarkably
effective when attacking both English and Chinese models with a 70%-90% attack
success rate on state-of-the-art language models (e.g. RoBERTa, DeBERTa) and
also significantly faster than previous methods. | [
"cs.CL"
] | false |
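The Virtual Boundary Attack above reduces to inserting a single word that perturbs an apparent entity boundary. Below is a brute-force sketch of that search, with an assumed `victim_predict` interface and a toy victim model; the paper selects insertion positions and words far more cleverly than this scan.

```python
def viba_style_attack(tokens, victim_predict, candidate_words):
    """Try inserting each candidate word at each position; keep the first
    insertion that changes the victim NER model's predictions on the
    original words (one-word-modification attack sketch)."""
    original = victim_predict(tokens)
    for i in range(len(tokens) + 1):
        for w in candidate_words:
            adversarial = tokens[:i] + [w] + tokens[i:]
            tags = victim_predict(adversarial)
            if tags[:i] + tags[i + 1:] != original:   # labels elsewhere flipped
                return adversarial
    return None                                       # no one-word attack found

def toy_victim(toks):
    """Toy NER model: 'Paris' is a location only when preceded by 'in'."""
    return ["B-LOC" if t == "Paris" and i > 0 and toks[i - 1] == "in" else "O"
            for i, t in enumerate(toks)]

print(viba_style_attack(["He", "lives", "in", "Paris"], toy_victim, ["the"]))
# ['He', 'lives', 'in', 'the', 'Paris'] -- the inserted boundary fools the model
```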
2305.05302 | 2023-05-09T09:45:44Z | The Perfect Victim: Computational Analysis of Judicial Attitudes towards
Victims of Sexual Violence | [
"Eliya Habba",
"Renana Keydar",
"Dan Bareket",
"Gabriel Stanovsky"
] | We develop computational models to analyze court statements in order to
assess judicial attitudes toward victims of sexual violence in the Israeli
court system. The study examines the resonance of "rape myths" in the criminal
justice system's response to sex crimes, in particular in judicial assessment
of victim's credibility. We begin by formulating an ontology for evaluating
judicial attitudes toward victim's credibility, with eight ordinal labels and
binary categorizations. Second, we curate a manually annotated dataset for
judicial assessments of victim's credibility in the Hebrew language, as well as
a model that can extract credibility labels from court cases. The dataset
consists of 855 verdict decision documents in sexual assault cases from
1990-2021, annotated with the help of legal experts and trained law students.
The model uses a combined approach of syntactic and latent structures to find
sentences that convey the judge's attitude towards the victim and classify them
according to the credibility label set. Our ontology, data, and models will be
made available upon request, in the hope that they spur future progress in this
important judicial task. | [
"cs.CL"
] | false |
2305.05334 | 2023-05-09T10:49:45Z | ArgU: A Controllable Factual Argument Generator | [
"Sougata Saha",
"Rohini Srihari"
] | Effective argumentation is essential for a purposeful conversation with a
satisfactory outcome. For example, persuading someone to reconsider smoking
might involve empathetic, well founded arguments based on facts and expert
opinions about its ill-effects and the consequences on one's family. However,
the automatic generation of high-quality factual arguments can be challenging.
Addressing existing controllability issues can make the recent advances in
computational models for argument generation a potential solution. In this
paper, we introduce ArgU: a neural argument generator capable of producing
factual arguments from input facts and real-world concepts that can be
explicitly controlled for stance and argument structure using Walton's argument
scheme-based control codes. Unfortunately, computational argument generation is
a relatively new field and lacks datasets conducive to training. Hence, we have
compiled and released an annotated corpus of 69,428 arguments spanning six
topics and six argument schemes, making it the largest publicly available
corpus for identifying argument schemes; the paper details our annotation and
dataset creation framework. We further experiment with an argument generation
strategy that establishes an inference strategy by generating an ``argument
template'' before actual argument generation. Our results demonstrate that it
is possible to automatically generate diverse arguments exhibiting different
inference patterns for the same set of facts by using control codes based on
argument schemes and stance. | [
"cs.CL"
] | false |
2305.05335 | 2023-05-09T10:54:34Z | Rudolf Christoph Eucken at SemEval-2023 Task 4: An Ensemble Approach for
Identifying Human Values from Arguments | [
"Sougata Saha",
"Rohini Srihari"
] | The subtle human values we acquire through life experiences govern our
thoughts and get reflected in our speech. They play an integral part in
capturing the essence of our individuality, making it imperative to identify
such values in computational systems that mimic human actions. Computational
argumentation is a field that deals with the argumentation capabilities of
humans and can benefit from identifying such values. Motivated by that, we
present an ensemble approach for detecting human values from argument text. Our
ensemble comprises three models: (i) An entailment-based model for determining
the human values based on their descriptions, (ii) a RoBERTa-based classifier
that predicts the set of human values from an argument, and (iii) a RoBERTa-based
classifier to predict a reduced set of human values from an argument. We
experiment with different ways of combining the models and report our results.
Furthermore, our best combination achieves an overall F1 score of 0.48 on the
main test set. | [
"cs.CL"
] | false |
2305.05378 | 2023-05-09T12:19:10Z | PLM-GNN: A Webpage Classification Method based on Joint Pre-trained
Language Model and Graph Neural Network | [
"Qiwei Lang",
"Jingbo Zhou",
"Haoyi Wang",
"Shiqi Lyu",
"Rui Zhang"
] | The number of web pages is growing at an exponential rate, accumulating
massive amounts of data on the web. Classifying webpages is one of the key
processes in web information mining. Some classical methods are based on
manually building features of web pages and training classifiers based on
machine learning or deep learning. However, building features manually requires
specific domain knowledge and usually takes a long time to validate the
validity of features. Considering webpages generated by the combination of text
and HTML Document Object Model (DOM) trees, we propose a representation and
classification method based on a pre-trained language model and graph neural
network, named PLM-GNN. It is based on the joint encoding of text and HTML DOM
trees in the web pages. It performs well on the KI-04 and SWDE datasets and on
the practical dataset AHS for a project on crawling scholars' homepages. | [
"cs.CL"
] | false |
2305.05390 | 2023-05-09T12:36:58Z | COKE: A Cognitive Knowledge Graph for Machine Theory of Mind | [
"Jincenzi Wu",
"Zhuang Chen",
"Jiawen Deng",
"Sahand Sabour",
"Minlie Huang"
] | Theory of mind (ToM) refers to humans' ability to understand and infer the
desires, beliefs, and intentions of others. The acquisition of ToM plays a key
role in humans' social cognition and interpersonal relations. Though
indispensable for social intelligence, ToM is still lacking for modern AI and
NLP systems since they cannot access the human mental state and cognitive
process beneath the training corpus. To empower AI systems with the ToM ability
and narrow the gap between them and humans, in this paper, we propose COKE: the
first cognitive knowledge graph for machine theory of mind. Specifically, COKE
formalizes ToM as a collection of 45k+ manually verified cognitive chains that
characterize human mental activities and subsequent behavioral/affective
responses when facing specific social circumstances. Beyond that, we further
generalize COKE using pre-trained language models and build a powerful
cognitive generation model COKE+. Experimental results in both automatic and
human evaluation demonstrate the high quality of COKE and the superior ToM
ability of COKE+. | [
"cs.CL"
] | false |
2305.05420 | 2023-05-09T13:13:26Z | Estimating related words computationally using language model from the
Mahabharata -- an Indian epic | [
"Vrunda Gadesha",
"Keyur D Joshi",
"Shefali Naik"
] | The 'Mahabharata' is among the most popular pieces of Indian literature,
referred to in many domains for completely different purposes. The text has
various dimensions and aspects that are useful to human beings in both their
personal and professional lives. This Indian epic was originally written in the
Sanskrit language. Now, in the era of Natural Language Processing, Artificial
Intelligence, Machine Learning, and Human-Computer Interaction, the text can be
processed according to domain requirements, and it is interesting to process it
and extract useful insights from the Mahabharata. One limitation of humans
analyzing the Mahabharata is that they always bring a sentiment towards the
story narrated by the author. Apart from that, humans cannot memorize
statistical or computational details, such as: which two words frequently occur
together in one sentence? What is the average sentence length across the whole
text? Which word is the most frequent across the text, and what are the lemmas
of the words used in the sentences? Thus, in this paper, we propose an NLP
pipeline that yields such statistical and computational insights, along with a
method for searching the most relevant words, from the great epic
'Mahabharata'. We stack different text-processing approaches to produce the
best results, which can be further used in the various domains where the
Mahabharata needs to be referred to. | [
"cs.CL"
] | false |
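The statistics the pipeline above targets (average sentence length, most frequent word, within-sentence word pairs) are straightforward to compute; a plain-Python sketch follows. Real lemmatization, also mentioned in the abstract, would need an NLP library such as spaCy or NLTK.

```python
import re
from collections import Counter
from itertools import combinations

def corpus_stats(text):
    """Compute simple corpus statistics of the kind discussed above."""
    sentences = [s for s in re.split(r"[.?!]", text.lower()) if s.strip()]
    tokens_per_sent = [re.findall(r"[a-z]+", s) for s in sentences]
    words = [w for sent in tokens_per_sent for w in sent]
    # word pairs co-occurring within the same sentence
    pairs = Counter(p for sent in tokens_per_sent
                    for p in combinations(sorted(set(sent)), 2))
    return {
        "avg_sentence_len": sum(map(len, tokens_per_sent)) / len(sentences),
        "most_common_word": Counter(words).most_common(1),
        "top_word_pairs": pairs.most_common(3),
    }

print(corpus_stats("Arjuna fought bravely. Krishna guided Arjuna. "
                   "Arjuna listened."))
```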
2305.05461 | 2023-05-09T14:00:15Z | What is the best recipe for character-level encoder-only modelling? | [
"Kris Cao"
] | This paper aims to benchmark recent progress in language understanding models
that output contextualised representations at the character level. Many such
modelling architectures and methods to train those architectures have been
proposed, but it is currently unclear what the relative contributions of the
architecture vs. the pretraining objective are to final model performance. We
explore the design space of such models, comparing architectural innovations
and a variety of different pretraining objectives on a suite of evaluation
tasks with a fixed training procedure in order to find the currently optimal
way to build and train character-level BERT-like models. We find that our best
performing character-level model exceeds the performance of a token-based model
trained with the same settings on the same data, suggesting that
character-level models are ready for more widespread adoption. Unfortunately,
the best method to train character-level models still relies on a subword-level
tokeniser during pretraining, and final model performance is highly dependent
on tokeniser quality. We believe our results demonstrate the readiness of
character-level models for multilingual language representation, and encourage
NLP practitioners to try them as drop-in replacements for token-based models. | [
"cs.CL"
] | false |
2305.05486 | 2023-05-09T14:36:04Z | MAUPQA: Massive Automatically-created Polish Question Answering Dataset | [
"Piotr Rybak"
] | Recently, open-domain question answering systems have begun to rely heavily
on annotated datasets to train neural passage retrievers. However, manually
annotating such datasets is both difficult and time-consuming, which limits
their availability for less popular languages. In this work, we experiment with
several methods for automatically collecting weakly labeled datasets and show
how they affect the performance of the neural passage retrieval models. As a
result of our work, we publish the MAUPQA dataset, consisting of nearly 400,000
question-passage pairs for Polish, as well as the HerBERT-QA neural retriever. | [
"cs.CL"
] | false |
2305.05627 | 2023-05-09T17:13:53Z | An Exploration of Encoder-Decoder Approaches to Multi-Label
Classification for Legal and Biomedical Text | [
"Yova Kementchedjhieva",
"Ilias Chalkidis"
] | Standard methods for multi-label text classification largely rely on
encoder-only pre-trained language models, whereas encoder-decoder models have
proven more effective in other classification tasks. In this study, we compare
four methods for multi-label classification, two based on an encoder only, and
two based on an encoder-decoder. We carry out experiments on four datasets --
two in the legal domain and two in the biomedical domain, each with two levels
of label granularity -- and always depart from the same pre-trained model, T5.
Our results show that encoder-decoder methods outperform encoder-only methods,
with a growing advantage on more complex datasets and labeling schemes of finer
granularity. Using encoder-decoder models in a non-autoregressive fashion, in
particular, yields the best performance overall, so we further study this
approach through ablations to better understand its strengths. | [
"cs.CL"
] | false |
2305.05672 | 2023-05-09T05:33:32Z | $2 * n$ is better than $n^2$: Decomposing Event Coreference Resolution
into Two Tractable Problems | [
"Shafiuddin Rehan Ahmed",
"Abhijnan Nath",
"James H. Martin",
"Nikhil Krishnaswamy"
] | Event Coreference Resolution (ECR) is the task of linking mentions of the
same event either within or across documents. Most mention pairs are not
coreferent, yet many that are coreferent can be identified through simple
techniques such as lemma matching of the event triggers or the sentences in
which they appear. Existing methods for training coreference systems sample
from a largely skewed distribution, making it difficult for the algorithm to
learn coreference beyond surface matching. Additionally, these methods are
intractable because of the quadratic operations needed. To address these
challenges, we break the problem of ECR into two parts: a) a heuristic to
efficiently filter out a large number of non-coreferent pairs, and b) a
training approach on a balanced set of coreferent and non-coreferent mention
pairs. By following this approach, we show that we get comparable results to
the state of the art on two popular ECR datasets while significantly reducing
compute requirements. We also analyze the mention pairs that are "hard" to
accurately classify as coreferent or non-coreferent. Code at
https://github.com/ahmeshaf/lemma_ce_coref | [
"cs.CL"
] | false |
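Part (a) of the decomposition above, a cheap heuristic that discards most non-coreferent pairs, can be sketched as lemma blocking: grouping mentions by trigger lemma so pairs are enumerated only within buckets rather than across all n^2 combinations. The mention schema (a dict with a 'lemma' key) is an assumption.

```python
from collections import defaultdict
from itertools import combinations

def lemma_blocking(mentions):
    """Keep only mention pairs whose event-trigger lemmas match, without
    ever enumerating the full quadratic set of pairs."""
    buckets = defaultdict(list)
    for m in mentions:
        buckets[m["lemma"]].append(m)
    return [pair for ms in buckets.values() for pair in combinations(ms, 2)]

mentions = [{"id": 1, "lemma": "attack"}, {"id": 2, "lemma": "attack"},
            {"id": 3, "lemma": "meet"}]
print(lemma_blocking(mentions))   # only the two 'attack' mentions are paired
```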
2305.05748 | 2023-05-09T20:02:18Z | Multilevel Sentence Embeddings for Personality Prediction | [
"Paolo Tirotta",
"Akira Yuasa",
"Masashi Morita"
] | Representing text into a multidimensional space can be done with sentence
embedding models such as Sentence-BERT (SBERT). However, training these models
when the data has a complex multilevel structure requires individually trained
class-specific models, which increases time and computing costs. We propose a
two-step approach that enables us to map sentences according to their
hierarchical memberships and polarity. At first we teach the upper level
sentence space through an AdaCos loss function and then finetune with a novel
loss function mainly based on the cosine similarity of intra-level pairs. We
apply this method to three different datasets: two weakly supervised Big Five
personality datasets obtained from English and Japanese Twitter data, and the
benchmark MNLI dataset. We show that our single-model approach performs better
than multiple class-specific classification models. | [
"cs.CL"
] | false |
2305.06154 | 2023-05-09T11:00:02Z | Alleviating Over-smoothing for Unsupervised Sentence Representation | [
"Nuo Chen",
"Linjun Shou",
"Ming Gong",
"Jian Pei",
"Bowen Cao",
"Jianhui Chang",
"Daxin Jiang",
"Jia Li"
] | Currently, learning better unsupervised sentence representations is the
pursuit of many natural language processing communities. Lots of approaches
based on pre-trained language models (PLMs) and contrastive learning have
achieved promising results on this task. Experimentally, we observe that the
over-smoothing problem reduces the capacity of these powerful PLMs, leading to
sub-optimal sentence representations. In this paper, we present a Simple method
named Self-Contrastive Learning (SSCL) to alleviate this issue, which samples
negatives from PLMs intermediate layers, improving the quality of the sentence
representation. Our proposed method is quite simple and can be easily extended
to various state-of-the-art models for performance boosting, which can be seen
as a plug-and-play contrastive framework for learning unsupervised sentence
representation. Extensive results show that SSCL brings superior
performance improvements to different strong baselines (e.g., BERT and SimCSE)
on Semantic Textual Similarity and Transfer datasets. Our codes are available
at https://github.com/nuochenpku/SSCL. | [
"cs.CL"
] | false |
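The SSCL idea above, using a PLM's intermediate-layer representations of the same sentence as contrastive negatives, can be sketched as a loss function. This is a simplification: the actual method builds on SimCSE-style training with in-batch negatives as well.

```python
import torch
import torch.nn.functional as F

def sscl_style_loss(final_anchor, final_positive, intermediate, tau=0.05):
    """Contrastive loss where the positive is another view's final-layer
    embedding and the negatives are the same sentence's intermediate-layer
    embeddings. Shapes: (B, d) tensors; `intermediate` is a list of them."""
    a = F.normalize(final_anchor, dim=-1)
    pos = (a * F.normalize(final_positive, dim=-1)).sum(-1, keepdim=True) / tau
    negs = torch.stack([(a * F.normalize(h, dim=-1)).sum(-1)
                        for h in intermediate], dim=-1) / tau
    logits = torch.cat([pos, negs], dim=-1)    # the positive sits in column 0
    return F.cross_entropy(logits, torch.zeros(a.size(0), dtype=torch.long))

B, d = 8, 768
loss = sscl_style_loss(torch.randn(B, d), torch.randn(B, d),
                       [torch.randn(B, d) for _ in range(3)])
print(float(loss))
```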
2305.05133 | 2023-05-09T02:38:05Z | Generating Phishing Attacks using ChatGPT | [
"Sayak Saha Roy",
"Krishna Vamsi Naragam",
"Shirin Nilizadeh"
] | The ability of ChatGPT to generate human-like responses and understand
context has made it a popular tool for conversational agents, content creation,
data analysis, and research and innovation. However, its effectiveness and ease
of accessibility make it a prime target for generating malicious content, such
as phishing attacks, that can put users at risk. In this work, we identify
several malicious prompts that can be provided to ChatGPT to generate
functional phishing websites. Through an iterative approach, we find that these
phishing websites can be made to imitate popular brands and emulate several
evasive tactics that have been known to avoid detection by anti-phishing
entities. These attacks can be generated using vanilla ChatGPT without the need
of any prior adversarial exploits (jailbreaking). | [
"cs.CR",
"cs.CL"
] | false |
2305.05162 | 2023-05-09T04:14:20Z | Effective Medical Code Prediction via Label Internal Alignment | [
"Guodong Liu"
] | Clinical notes are usually typed into the system by physicians. They are
typically required to be marked by standard medical codes, and each code
represents a diagnosis or medical treatment procedure. Annotating these notes
is time-consuming and prone to error. In this paper, we propose a multi-view
attention-based neural network to predict medical codes from clinical texts.
Our method incorporates three aspects of information, the semantic context of
the clinical text, the relationship among the label (medical codes) space, and
the alignment between each pair of a clinical text and medical code. Our method
is verified to be effective on an open-source dataset. The experimental results
show that our method achieves better performance than the prior
state-of-the-art on multiple metrics. | [
"cs.LG",
"cs.CL"
] | false |
2305.05183 | 2023-05-09T05:33:31Z | CSED: A Chinese Semantic Error Diagnosis Corpus | [
"Bo Sun",
"Baoxin Wang",
"Yixuan Wang",
"Wanxiang Che",
"Dayong Wu",
"Shijin Wang",
"Ting Liu"
] | Recently, much Chinese text error correction work has focused on Chinese
Spelling Check (CSC) and Chinese Grammatical Error Diagnosis (CGED). In
contrast, little attention has been paid to the complicated problem of Chinese
Semantic Error Diagnosis (CSED), which lacks relevant datasets. The study of
semantic errors is important because they are very common and may lead to
syntactic irregularities or even problems of comprehension. To investigate
this, we build the CSED corpus, which includes two datasets. One is for the
CSED-Recognition (CSED-R) task. The other is for the CSED-Correction (CSED-C)
task. Our annotation guarantees high-quality data through quality assurance
mechanisms. Our experiments show that powerful pre-trained models perform
poorly on this corpus. We also find that the CSED task is challenging, as
evidenced by the fact that even humans receive a low score. This paper proposes
syntax-aware models to specifically adapt to the CSED task. The experimental
results show that the introduction of the syntax-aware approach is meaningful. | [
"cs.CL",
"cs.AI"
] | false |
2305.05191 | 2023-05-09T05:56:58Z | COLA: Contextualized Commonsense Causal Reasoning from the Causal
Inference Perspective | [
"Zhaowei Wang",
"Quyet V. Do",
"Hongming Zhang",
"Jiayao Zhang",
"Weiqi Wang",
"Tianqing Fang",
"Yangqiu Song",
"Ginny Y. Wong",
"Simon See"
] | Detecting commonsense causal relations (causation) between events has long
been an essential yet challenging task. Given that events are complicated, an
event may have different causes under various contexts. Thus, exploiting
context plays an essential role in detecting causal relations. Meanwhile,
previous works about commonsense causation only consider two events and ignore
their context, simplifying the task formulation. This paper proposes a new task
to detect commonsense causation between two events in an event sequence (i.e.,
context), called contextualized commonsense causal reasoning. We also design a
zero-shot framework: COLA (Contextualized Commonsense Causality Reasoner) to
solve the task from the causal inference perspective. This framework obtains
rich incidental supervision from temporality and balances covariates from
multiple timestamps to remove confounding effects. Our extensive experiments
show that COLA can detect commonsense causality more accurately than baselines. | [
"cs.CL",
"cs.AI"
] | false |
2305.05290 | 2023-05-09T09:28:23Z | Dialogue Planning via Brownian Bridge Stochastic Process for
Goal-directed Proactive Dialogue | [
"Jian Wang",
"Dongding Lin",
"Wenjie Li"
] | Goal-directed dialogue systems aim to proactively reach a pre-determined
target through multi-turn conversations. The key to achieving this task lies in
planning dialogue paths that smoothly and coherently direct conversations
towards the target. However, this is a challenging and under-explored task. In
this work, we propose a coherent dialogue planning approach that uses a
stochastic process to model the temporal dynamics of dialogue paths. We define
a latent space that captures the coherence of goal-directed behavior using a
Brownian bridge process, which allows us to incorporate user feedback flexibly
in dialogue planning. Based on the derived latent trajectories, we generate
dialogue paths explicitly using pre-trained language models. We finally employ
these paths as natural language prompts to guide dialogue generation. Our
experiments show that our approach generates more coherent utterances and
achieves the goal with a higher success rate. | [
"cs.CL",
"cs.LG"
] | false |
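The Brownian bridge underlying the planner above has a closed form: intermediate latents are pinned between the start and goal states, with variance sigma^2 * t(T-t)/T. A minimal NumPy sketch follows; the latent dimension and noise scale are assumptions.

```python
import numpy as np

def brownian_bridge(z_start, z_goal, T, sigma=0.1,
                    rng=np.random.default_rng(0)):
    """Sample a latent path z_0..z_T where
    z_t ~ N((1 - t/T) z_0 + (t/T) z_T, sigma^2 * t(T - t)/T),
    i.e. a Brownian bridge pinned at the start and goal states."""
    z0, zT = np.asarray(z_start, float), np.asarray(z_goal, float)
    path = [z0]
    for t in range(1, T):
        mean = (1 - t / T) * z0 + (t / T) * zT
        var = sigma ** 2 * t * (T - t) / T
        path.append(mean + np.sqrt(var) * rng.normal(size=z0.shape))
    path.append(zT)
    return path

for z in brownian_bridge([0.0, 0.0], [1.0, 1.0], T=4):
    print(np.round(z, 2))   # drifts smoothly from start to goal
```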
2305.05311 | 2023-05-09T10:03:34Z | Structured Sentiment Analysis as Transition-based Dependency Parsing | [
"Daniel Fernández-González"
] | Structured sentiment analysis (SSA) aims to automatically extract people's
opinions from a text in natural language and adequately represent that
information in a graph structure. One of the most accurate methods for
performing SSA was recently proposed and consists of approaching it as a
dependency parsing task. Although we can find in the literature how
transition-based algorithms excel in dependency parsing in terms of accuracy
and efficiency, all proposed attempts to tackle SSA following that approach
were based on graph-based models. In this article, we present the first
transition-based method to address SSA as dependency parsing. Specifically, we
design a transition system that processes the input text in a left-to-right
pass, incrementally generating the graph structure containing all identified
opinions. To effectively implement our final transition-based model, we resort
to a Pointer Network architecture as a backbone. From an extensive evaluation,
we demonstrate that our model offers the best performance to date in
practically all cases among prior dependency-based methods, and surpasses recent
task-specific techniques on the most challenging datasets. We additionally
include an in-depth analysis and empirically prove that the overall
time-complexity cost of our approach is quadratic in the sentence length, making it more efficient than top-performing graph-based parsers. | [
"cs.CL",
"cs.AI",
"68T50",
"I.2.7"
] | false |
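As an illustration of the transition-based framing in 2305.05311, below is a minimal, generic left-to-right transition loop; the paper's SSA-specific action set and its Pointer Network scorer will differ, so treat the action names as placeholders.

```python
# Generic arc-standard-style transition loop (illustrative only): a scorer
# chooses actions that incrementally build a graph in one left-to-right pass.
def transition_parse(tokens, choose_action):
    stack, buffer, arcs = [], list(range(len(tokens))), []
    while buffer or len(stack) > 1:
        action = choose_action(stack, buffer, arcs)     # e.g., a Pointer Network
        if action == "SHIFT" and buffer:
            stack.append(buffer.pop(0))
        elif action == "LEFT-ARC" and len(stack) >= 2:
            arcs.append((stack[-1], stack.pop(-2)))     # (head, dependent)
        elif action == "RIGHT-ARC" and len(stack) >= 2:
            arcs.append((stack[-2], stack.pop(-1)))
        else:
            break                                       # invalid action ends this toy parse
    return arcs
```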
2305.05325 | 2023-05-09T10:21:14Z | Detection of depression on social networks using transformers and
ensembles | [
"Ilija Tavchioski",
"Marko Robnik-Šikonja",
"Senja Pollak"
] | As the impact of technology on our lives increases, we witness growing use of social media, which has become an essential tool not only for communication but also for sharing thoughts and feelings with the community. This also holds for people with mental health disorders such as depression, who use social media to express their thoughts and ask for help. This opens the possibility of automatically processing social media posts to detect signs of depression. We build several classifiers for depression detection from social media posts based on large pre-trained language models. Besides fine-tuning BERT, RoBERTa, BERTweet, and mentalBERT, we also construct two types of ensembles. We analyze the performance of our models on two data sets of posts from the social platforms Reddit and Twitter, and also investigate the performance of transfer learning across the two data sets.
The results show that transformer ensembles improve over the single
transformer-based classifiers. | [
"cs.CL",
"cs.AI"
] | false |
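The ensemble types in 2305.05325 are not detailed here; below is a minimal soft-voting sketch over transformer classifiers. The checkpoints are illustrative, and in practice each model would first be fine-tuned on the depression data (the classification heads below are randomly initialized).

```python
# Illustrative soft-voting ensemble: average class probabilities of several
# transformer classifiers. Checkpoints are assumptions, not the paper's exact models.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

CHECKPOINTS = ["bert-base-uncased", "roberta-base", "vinai/bertweet-base"]

def ensemble_predict(text: str) -> torch.Tensor:
    probs = []
    for ckpt in CHECKPOINTS:
        tok = AutoTokenizer.from_pretrained(ckpt)
        model = AutoModelForSequenceClassification.from_pretrained(ckpt, num_labels=2)
        model.eval()
        inputs = tok(text, return_tensors="pt", truncation=True)
        with torch.no_grad():
            probs.append(model(**inputs).logits.softmax(dim=-1))
    return torch.stack(probs).mean(dim=0)  # (1, 2): mean P(not depressed), P(depressed)
```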
2305.05393 | 2023-05-09T12:40:19Z | CaseEncoder: A Knowledge-enhanced Pre-trained Model for Legal Case
Encoding | [
"Yixiao Ma",
"Yueyue Wu",
"Weihang Su",
"Qingyao Ai",
"Yiqun Liu"
] | Legal case retrieval is a critical process for modern legal information
systems. While recent studies have utilized pre-trained language models (PLMs)
based on the general domain self-supervised pre-training paradigm to build
models for legal case retrieval, there are limitations in using general domain
PLMs as backbones. Specifically, these models may not fully capture the
underlying legal features in legal case documents. To address this issue, we
propose CaseEncoder, a legal document encoder that leverages fine-grained legal
knowledge in both the data sampling and pre-training phases. In the data
sampling phase, we enhance the quality of the training data by utilizing
fine-grained law article information to guide the selection of positive and
negative examples. In the pre-training phase, we design legal-specific
pre-training tasks that align with the judging criteria of relevant legal
cases. Based on these tasks, we introduce an innovative loss function called
Biased Circle Loss to enhance the model's ability to recognize case relevance at a fine-grained level. Experimental results on multiple benchmarks demonstrate that
CaseEncoder significantly outperforms both existing general pre-training models
and legal-specific pre-training models in zero-shot legal case retrieval. | [
"cs.IR",
"cs.CL"
] | false |
2305.05474 | 2023-05-09T14:21:29Z | Going beyond research datasets: Novel intent discovery in the industry
setting | [
"Aleksandra Chrabrowa",
"Tsimur Hadeliya",
"Dariusz Kajtoch",
"Robert Mroczkowski",
"Piotr Rybak"
] | Novel intent discovery automates the process of grouping similar messages
(questions) to identify previously unknown intents. However, current research
focuses on publicly available datasets which have only the question field and
significantly differ from real-life datasets. This paper proposes methods to
improve the intent discovery pipeline deployed in a large e-commerce platform.
We show the benefit of pre-training language models on in-domain data: both
self-supervised and with weak supervision. We also devise the best method to
utilize the conversational structure (i.e., question and answer) of real-life
datasets during fine-tuning for clustering tasks, which we call Conv. Combined, all our methods for fully utilizing real-life datasets give up to a 33pp performance boost over the state-of-the-art question-only Constrained Deep Adaptive Clustering (CDAC) model. By comparison, the question-only CDAC model gives up to a 13pp performance boost over the naive baseline. | [
"cs.CL",
"cs.LG"
] | false |
2305.05576 | 2023-05-09T16:05:36Z | Large Language Models Humanize Technology | [
"Pratyush Kumar"
] | Large Language Models (LLMs) have made rapid progress in recent months and
weeks, garnering significant public attention. This has sparked concerns about
aligning these models with human values, their impact on labor markets, and the
potential need for regulation in further research and development. However, the
discourse often lacks a focus on the imperative to widely diffuse the societal
benefits of LLMs. To qualify this societal benefit, we assert that LLMs exhibit
emergent abilities to humanize technology more effectively than previous
technologies, and for people across language, occupation, and accessibility
divides. We argue that they do so by addressing three mechanizing bottlenecks
in today's computing technologies: creating diverse and accessible content,
learning complex digital tools, and personalizing machine learning algorithms.
We adopt a case-based approach and illustrate each bottleneck with two examples
where current technology imposes bottlenecks that LLMs demonstrate the ability
to address. Given this opportunity to humanize technology widely, we advocate
for more widespread understanding of LLMs, tools and methods to simplify use of
LLMs, and cross-cutting institutional capacity. | [
"cs.CY",
"cs.CL"
] | false |
2305.05609 | 2023-05-09T16:58:32Z | The Case Records of ChatGPT: Language Models and Complex Clinical
Questions | [
"Timothy Poterucha",
"Pierre Elias",
"Christopher M. Haggerty"
] | Background: Artificial intelligence language models have shown promise in
various applications, including assisting with clinical decision-making as
demonstrated by strong performance of large language models on medical
licensure exams. However, their ability to solve complex, open-ended cases,
which may be representative of clinical practice, remains unexplored. Methods:
In this study, the accuracy of large language AI models GPT4 and GPT3.5 in
diagnosing complex clinical cases was investigated using published Case Records
of the Massachusetts General Hospital. A total of 50 cases requiring a
diagnosis and diagnostic test published from January 1, 2022 to April 16, 2022
were identified. For each case, models were given a prompt requesting the top
three specific diagnoses and associated diagnostic tests, followed by case
text, labs, and figure legends. Model outputs were assessed in comparison to
the final clinical diagnosis and whether the model-predicted test would result
in a correct diagnosis. Results: GPT4 and GPT3.5 accurately provided the
correct diagnosis in 26% and 22% of cases in one attempt, and 46% and 42%
within three attempts, respectively. GPT4 and GPT3.5 provided a correct
essential diagnostic test in 28% and 24% of cases in one attempt, and 44% and
50% within three attempts, respectively. No significant differences were found
between the two models, and multiple trials with identical prompts using the
GPT3.5 model provided similar results. Conclusions: In summary, these models
demonstrate potential usefulness in generating differential diagnoses but
remain limited in their ability to provide a single unifying diagnosis in
complex, open-ended cases. Future research should focus on evaluating model
performance in larger datasets of open-ended clinical challenges and exploring
potential human-AI collaboration strategies to enhance clinical
decision-making. | [
"cs.CL",
"stat.AP"
] | false |
2305.06159 | 2023-05-09T17:55:29Z | A Review of Vision-Language Models and their Performance on the Hateful
Memes Challenge | [
"Bryan Zhao",
"Andrew Zhang",
"Blake Watson",
"Gillian Kearney",
"Isaac Dale"
] | Moderation of social media content is currently a highly manual task, yet
there is too much content posted daily to do so effectively. With the advent of
a number of multimodal models, there is the potential to reduce the amount of
manual labor for this task. In this work, we aim to explore different models
and determine what is most effective for the Hateful Memes Challenge, a
challenge by Meta designed to further machine learning research in content
moderation. Specifically, we explore the differences between early fusion and
late fusion models in classifying multimodal memes containing text and images.
We first implement a baseline using unimodal models for text and images
separately using BERT and ResNet-152, respectively. The outputs from these
unimodal models were then concatenated together to create a late fusion model.
In terms of early fusion models, we implement ConcatBERT, VisualBERT, ViLT,
CLIP, and BridgeTower. It was found that late fusion performed significantly
worse than early fusion models, with the best-performing model being CLIP, which
achieved an AUROC of 70.06. The code for this work is available at
https://github.com/bzhao18/CS-7643-Project. | [
"cs.CL",
"cs.AI"
] | false |
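A minimal sketch of the late-fusion baseline described in 2305.06159: pooled BERT text features concatenated with ResNet-152 image features and fed to a small classification head; the head architecture is an assumption.

```python
# Illustrative late-fusion meme classifier: unimodal BERT and ResNet-152
# encoders, pooled features concatenated, then a small classification head.
import torch, torch.nn as nn
from torchvision.models import resnet152
from transformers import AutoModel

class LateFusionMeme(nn.Module):
    def __init__(self):
        super().__init__()
        self.text = AutoModel.from_pretrained("bert-base-uncased")   # 768-d pooled output
        cnn = resnet152(weights="IMAGENET1K_V1")
        self.image = nn.Sequential(*list(cnn.children())[:-1])       # 2048-d features
        self.head = nn.Sequential(nn.Linear(768 + 2048, 256), nn.ReLU(), nn.Linear(256, 2))

    def forward(self, input_ids, attention_mask, pixel_values):
        t = self.text(input_ids=input_ids, attention_mask=attention_mask).pooler_output
        v = self.image(pixel_values).flatten(1)
        return self.head(torch.cat([t, v], dim=-1))   # hateful vs. not-hateful logits
```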
2305.05098 | 2023-05-09T00:01:32Z | Who Needs Decoders? Efficient Estimation of Sequence-level Attributes | [
"Yassir Fathullah",
"Puria Radmard",
"Adian Liusie",
"Mark J. F. Gales"
] | State-of-the-art sequence-to-sequence models often require autoregressive
decoding, which can be highly expensive. However, for some downstream tasks
such as out-of-distribution (OOD) detection and resource allocation, the actual
decoding output is not needed, just a scalar attribute of this sequence. In these scenarios, where, for example, knowing the quality of a system's output in order to predict poor performance matters more than knowing the output itself, is it possible to bypass the autoregressive decoding? We propose Non-Autoregressive
Proxy (NAP) models that can efficiently predict general scalar-valued
sequence-level attributes. Importantly, NAPs predict these metrics directly
from the encodings, avoiding the expensive autoregressive decoding stage. We
consider two sequence-to-sequence tasks: Machine Translation (MT) and Automatic
Speech Recognition (ASR). In OOD for MT, NAPs outperform a deep ensemble while
being significantly faster. NAPs are also shown to be able to predict
performance metrics such as BERTScore (MT) or word error rate (ASR). For
downstream tasks, such as data filtering and resource optimization, NAPs
generate performance predictions that outperform predictive uncertainty while
being highly inference efficient. | [
"cs.LG",
"cs.AI",
"cs.CL"
] | false |
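A minimal sketch of the NAP idea in 2305.05098: pool the encoder states and regress a scalar sequence-level attribute directly, skipping the decoder entirely; the attention pooling and MLP head are assumptions.

```python
# Illustrative non-autoregressive proxy head: attention-pool encoder states,
# then regress one scalar attribute (e.g., a quality score) per sequence.
import torch, torch.nn as nn

class NAPHead(nn.Module):
    def __init__(self, d_model):
        super().__init__()
        self.attn = nn.Linear(d_model, 1)   # learned attention pooling
        self.mlp = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(), nn.Linear(d_model, 1))

    def forward(self, enc_states, mask):
        # enc_states: (B, T, d_model); mask: (B, T) bool, True on valid frames
        scores = self.attn(enc_states).squeeze(-1).masked_fill(~mask, -1e9)
        w = scores.softmax(dim=-1).unsqueeze(-1)
        pooled = (w * enc_states).sum(dim=1)   # (B, d_model)
        return self.mlp(pooled).squeeze(-1)    # scalar attribute per sequence
```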
2305.05176 | 2023-05-09T05:11:02Z | FrugalGPT: How to Use Large Language Models While Reducing Cost and
Improving Performance | [
"Lingjiao Chen",
"Matei Zaharia",
"James Zou"
] | There is a rapidly growing number of large language models (LLMs) that users
can query for a fee. We review the cost associated with querying popular LLM
APIs, e.g. GPT-4, ChatGPT, J1-Jumbo, and find that these models have
heterogeneous pricing structures, with fees that can differ by two orders of
magnitude. In particular, using LLMs on large collections of queries and text
can be expensive. Motivated by this, we outline and discuss three types of
strategies that users can exploit to reduce the inference cost associated with
using LLMs: 1) prompt adaptation, 2) LLM approximation, and 3) LLM cascade. As
an example, we propose FrugalGPT, a simple yet flexible instantiation of LLM
cascade which learns which combinations of LLMs to use for different queries in
order to reduce cost and improve accuracy. Our experiments show that FrugalGPT
can match the performance of the best individual LLM (e.g. GPT-4) with up to
98% cost reduction or improve the accuracy over GPT-4 by 4% with the same cost.
The ideas and findings presented here lay a foundation for using LLMs
sustainably and efficiently. | [
"cs.LG",
"cs.AI",
"cs.CL",
"cs.SE"
] | true |
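A minimal sketch of the LLM-cascade strategy from 2305.05176: query cheaper models first and escalate only when a learned reliability scorer is not confident. Both helper functions below are stubs, not a real API; FrugalGPT additionally learns which model combinations to use per query.

```python
# Illustrative LLM cascade: try cheap models first, escalate when unsure.
def call_model(name: str, query: str) -> str:
    return f"[{name}] answer to: {query}"       # stand-in for a provider SDK call

def score(query: str, answer: str) -> float:
    return 0.5                                  # stand-in for a trained reliability scorer

def cascade_answer(query, models, threshold=0.9):
    """models: list of (name, cost_per_call) ordered cheapest to priciest."""
    for name, cost in models:
        answer = call_model(name, query)
        if score(query, answer) >= threshold:   # confident enough: stop early, save cost
            return answer, name, cost
    return answer, name, cost                   # otherwise keep the strongest model's answer
```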
2305.05201 | 2023-05-09T06:28:10Z | Exploration of Language Dependency for Japanese Self-Supervised Speech
Representation Models | [
"Takanori Ashihara",
"Takafumi Moriya",
"Kohei Matsuura",
"Tomohiro Tanaka"
] | Self-supervised learning (SSL) has been dramatically successful not only in
monolingual but also in cross-lingual settings. However, since the two settings
have been studied individually in general, there has been little research
focusing on how effective a cross-lingual model is in comparison with a
monolingual model. In this paper, we investigate this fundamental question
empirically with Japanese automatic speech recognition (ASR) tasks. First, we
begin by comparing the ASR performance of cross-lingual and monolingual models
for two different language tasks while keeping the acoustic domain as identical
as possible. Then, we examine how much unlabeled data collected in Japanese is
needed to achieve performance comparable to a cross-lingual model pre-trained
with tens of thousands of hours of English and/or multilingual data. Finally,
we extensively investigate the effectiveness of SSL in Japanese and demonstrate
state-of-the-art performance on multiple ASR tasks. Since there is no
comprehensive SSL study for Japanese, we hope this study will guide Japanese
SSL research. | [
"cs.CL",
"cs.SD",
"eess.AS"
] | false |
2305.05271 | 2023-05-09T08:51:44Z | Robust Acoustic and Semantic Contextual Biasing in Neural Transducers
for Speech Recognition | [
"Xuandi Fu",
"Kanthashree Mysore Sathyendra",
"Ankur Gandhe",
"Jing Liu",
"Grant P. Strimel",
"Ross McGowan",
"Athanasios Mouchtaris"
] | Attention-based contextual biasing approaches have shown significant
improvements in the recognition of generic and/or personal rare-words in
End-to-End Automatic Speech Recognition (E2E ASR) systems like neural
transducers. These approaches employ cross-attention to bias the model towards
specific contextual entities injected as bias-phrases to the model. Prior
approaches typically relied on subword encoders for encoding the bias phrases.
However, subword tokenizations are coarse and fail to capture granular
pronunciation information which is crucial for biasing based on acoustic
similarity. In this work, we propose to use lightweight character
representations to encode fine-grained pronunciation features to improve
contextual biasing guided by acoustic similarity between the audio and the
contextual entities (termed acoustic biasing). We further integrate pretrained
neural language model (NLM) based encoders to encode the utterance's semantic
context along with contextual entities to perform biasing informed by the
utterance's semantic context (termed semantic biasing). Experiments using a
Conformer Transducer model on the Librispeech dataset show a 4.62% - 9.26%
relative WER improvement on different biasing list sizes over the baseline
contextual model when incorporating our proposed acoustic and semantic biasing
approach. On a large-scale in-house dataset, we observe 7.91% relative WER
improvement compared to our baseline model. On tail utterances, the
improvements are even more pronounced with 36.80% and 23.40% relative WER
improvements on Librispeech rare words and an in-house testset respectively. | [
"cs.CL",
"cs.LG",
"cs.SD",
"eess.AS"
] | false |
2305.05364 | 2023-05-09T11:55:36Z | Large Language Model Programs | [
"Imanol Schlag",
"Sainbayar Sukhbaatar",
"Asli Celikyilmaz",
"Wen-tau Yih",
"Jason Weston",
"Jürgen Schmidhuber",
"Xian Li"
] | In recent years, large pre-trained language models (LLMs) have demonstrated
the ability to follow instructions and perform novel tasks from a few examples.
The possibility to parameterise an LLM through such in-context examples widens its capabilities at a much lower cost than finetuning. We extend this line of
reasoning and present a method which further expands the capabilities of an LLM
by embedding it within an algorithm or program. To demonstrate the benefits of
this approach, we present an illustrative example of evidence-supported
question-answering. We obtain a 6.4% improvement over the chain-of-thought baseline through a more algorithmic approach without any finetuning.
Furthermore, we highlight recent work from this perspective and discuss the
advantages and disadvantages in comparison to the standard approaches. | [
"cs.LG",
"cs.AI",
"cs.CL"
] | true |
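A minimal sketch of the "LLM program" idea from 2305.05364, loosely following the evidence-supported QA example: one LLM call per paragraph filters evidence, and a final call answers from the kept evidence. The `llm` function is a stub for any instruction-following model.

```python
# Illustrative LLM program: the control flow lives in ordinary code,
# with the LLM called as a subroutine at fixed points.
def llm(prompt: str) -> str:
    return "..."   # stand-in for any instruction-following model

def answer_with_evidence(question, paragraphs, k=3):
    kept = []
    for p in paragraphs:
        verdict = llm(f"Does this paragraph help answer '{question}'? Reply yes/no.\n\n{p}")
        if verdict.strip().lower().startswith("yes"):
            kept.append(p)                      # evidence-filtering step
    context = "\n\n".join(kept[:k])
    return llm(f"Answer using only this evidence:\n{context}\n\nQ: {question}\nA:")
```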
2305.05754 | 2023-05-09T20:23:17Z | When and What to Ask Through World States and Text Instructions: IGLU
NLP Challenge Solution | [
"Zhengxiang Shi",
"Jerome Ramos",
"To Eun Kim",
"Xi Wang",
"Hossein A. Rahmani",
"Aldo Lipani"
] | In collaborative tasks, effective communication is crucial for achieving
joint goals. One such task is collaborative building where builders must
communicate with each other to construct desired structures in a simulated
environment such as Minecraft. We aim to develop an intelligent builder agent
to build structures based on user input through dialogue. However, in
collaborative building, builders may encounter situations that are difficult to
interpret based on the available information and instructions, leading to
ambiguity. In the NeurIPS 2022 Competition NLP Task, we address two key
research questions, with the goal of filling this gap: when should the agent
ask for clarification, and what clarification questions should it ask? We move
towards this target with two sub-tasks, a classification task and a ranking
task. For the classification task, the goal is to determine whether the agent
should ask for clarification based on the current world state and dialogue
history. For the ranking task, the goal is to rank the relevant clarification
questions from a pool of candidates. In this report, we briefly introduce our methods for the classification and ranking tasks. For the classification task, our model achieves an F1 score of 0.757, which placed 3rd on the leaderboard. For the ranking task, our model achieves a Mean Reciprocal Rank of about 0.38 by extending the traditional ranking model. Lastly, we discuss various neural approaches for the ranking task and future directions. | [
"cs.CL",
"cs.AI",
"cs.LG"
] | false |
2305.05759 | 2023-05-09T20:37:16Z | Ranking & Reweighting Improves Group Distributional Robustness | [
"Yachuan Liu",
"Bohan Zhang",
"Qiaozhu Mei",
"Paramveer Dhillon"
] | Recent work has shown that standard training via empirical risk minimization
(ERM) can produce models that achieve high accuracy on average but low accuracy
on underrepresented groups due to the prevalence of spurious features. A
predominant approach to tackle this group robustness problem minimizes the
worst group error (akin to a minimax strategy) on the training data, hoping it
will generalize well on the testing data. However, this is often suboptimal,
especially when the out-of-distribution (OOD) test data contains previously
unseen groups. Inspired by ideas from the information retrieval and
learning-to-rank literature, this paper first proposes to use Discounted
Cumulative Gain (DCG) as a metric of model quality for facilitating better
hyperparameter tuning and model selection. Being a ranking-based metric, DCG
weights multiple poorly-performing groups (instead of considering just the
group with the worst performance). As a natural next step, we build on our
results to propose a ranking-based training method called Discounted Rank
Upweighting (DRU), which differentially reweights a ranked list of
poorly-performing groups in the training data to learn models that exhibit
strong OOD performance on the test data. Results on several synthetic and
real-world datasets highlight the superior generalization ability of our
group-ranking-based (akin to soft-minimax) approach in selecting and learning
models that are robust to group distributional shifts. | [
"cs.LG",
"cs.AI",
"cs.CL",
"stat.ML"
] | false |
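To ground the two ideas in 2305.05759, here is a minimal sketch of (a) scoring a model by a rank-discounted sum over per-group accuracies, and (b) deriving soft-minimax training weights from discounted ranks; the exact discounting used in the paper is an assumption.

```python
# Illustrative DCG-style model-quality metric and discounted rank upweighting.
import numpy as np

def dcg_model_quality(group_accs):
    order = np.argsort(group_accs)                         # worst-performing groups first
    discounts = 1.0 / np.log2(np.arange(2, len(group_accs) + 2))
    return float(np.sum(discounts * np.asarray(group_accs)[order]))

def dru_weights(group_losses):
    order = np.argsort(group_losses)[::-1]                 # highest-loss groups first
    w = np.empty(len(group_losses))
    w[order] = 1.0 / np.log2(np.arange(2, len(group_losses) + 2))
    return w / w.sum()                                     # normalized per-group training weights
```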
2305.07445 | 2023-05-09T07:21:46Z | QVoice: Arabic Speech Pronunciation Learning Application | [
"Yassine El Kheir",
"Fouad Khnaisser",
"Shammur Absar Chowdhury",
"Hamdy Mubarak",
"Shazia Afzal",
"Ahmed Ali"
] | This paper introduces a novel Arabic pronunciation learning application
QVoice, powered with end-to-end mispronunciation detection and feedback
generator module. The application is designed to support non-native Arabic
speakers in enhancing their pronunciation skills, while also helping native
speakers mitigate any potential influence from regional dialects on their
Modern Standard Arabic (MSA) pronunciation. QVoice employs various learning
cues to aid learners in comprehending meaning, drawing connections with their
existing knowledge of English language, and offers detailed feedback for
pronunciation correction, along with contextual examples showcasing word usage.
The learning cues featured in QVoice encompass a wide range of meaningful
information, such as visualizations of phrases/words and their translations, as
well as phonetic transcriptions and transliterations. QVoice provides
pronunciation feedback at the character level and assesses performance at the
word level. | [
"eess.AS",
"cs.CL",
"cs.SD"
] | false |
2305.05116 | 2023-05-09T01:29:46Z | Communication-Robust Multi-Agent Learning by Adaptable Auxiliary
Multi-Agent Adversary Generation | [
"Lei Yuan",
"Feng Chen",
"Zhongzhang Zhang",
"Yang Yu"
] | Communication can promote coordination in cooperative Multi-Agent
Reinforcement Learning (MARL). Nowadays, existing works mainly focus on
improving the communication efficiency of agents, neglecting that real-world
communication is much more challenging as there may exist noise or potential
attackers. Thus the robustness of the communication-based policies becomes an
emergent and severe issue that needs more exploration. In this paper, we posit
that the ego system trained with auxiliary adversaries may handle this
limitation and propose an adaptable method of Multi-Agent Auxiliary Adversaries
Generation for robust Communication, dubbed MA3C, to obtain a robust
communication-based policy. Specifically, we introduce a novel message-attacking
approach that models the learning of the auxiliary attacker as a cooperative
problem under a shared goal to minimize the coordination ability of the ego
system, with which every information channel may suffer from distinct message
attacks. Furthermore, as naive adversarial training may impede the
generalization ability of the ego system, we design an attacker population
generation approach based on evolutionary learning. Finally, the ego system is
paired with an attacker population and then alternatively trained against the
continuously evolving attackers to improve its robustness, meaning that both
the ego system and the attackers are adaptable. Extensive experiments on
multiple benchmarks indicate that our proposed MA3C provides comparable or
better robustness and generalization ability than other baselines. | [
"cs.LG"
] | false |
2305.05495 | 2023-05-09T14:47:16Z | Self-Supervised Anomaly Detection of Rogue Soil Moisture Sensors | [
"Boje Deforce",
"Bart Baesens",
"Jan Diels",
"Estefanía Serral Asensio"
] | IoT data is a central element in the successful digital transformation of
agriculture. However, IoT data comes with its own set of challenges, e.g., the risk of data contamination due to rogue sensors. A sensor is considered rogue
when it provides incorrect measurements over time. To ensure correct analytical
results, an essential preprocessing step when working with IoT data is the
detection of such rogue sensors. Existing methods assume that well-behaving
sensors are known or that a large majority of the sensors is well-behaving.
However, real-world data is often completely unlabeled and voluminous, calling
for self-supervised methods that can detect rogue sensors without prior
information. We present a self-supervised anomalous sensor detector based on a
neural network with a contrastive loss, followed by DBSCAN. A core contribution
of our paper is the use of Dynamic Time Warping in the negative sampling for
the triplet loss. This novelty makes the use of triplet networks feasible for
anomalous sensor detection. Our method shows promising results on a challenging
dataset of soil moisture sensors deployed in multiple pear orchards. | [
"cs.LG"
] | false |
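A minimal sketch of the DTW-based negative sampling in 2305.05495: for an anchor series, the candidate farthest under Dynamic Time Warping serves as the triplet's negative. The naive O(nm) DTW below assumes 1-D series; faster library implementations exist.

```python
# Illustrative DTW distance and a farthest-candidate negative sampler.
import numpy as np

def dtw(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def sample_negative(anchor, candidates):
    # The series farthest from the anchor under DTW becomes the triplet negative.
    return max(candidates, key=lambda c: dtw(anchor, c))
```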
2305.05518 | 2023-05-09T15:16:50Z | Minimal Learning Machine for Multi-Label Learning | [
"Joonas Hämäläinen",
"Amauri Souza",
"César L. C. Mattos",
"João P. P. Gomes",
"Tommi Kärkkäinen"
] | A distance-based supervised method, the minimal learning machine, constructs a
predictive model from data by learning a mapping between input and output
distance matrices. In this paper, we propose methods and evaluate how this
technique and its core component, the distance mapping, can be adapted to
multi-label learning. The proposed approach is based on combining the distance
mapping with inverse distance weighting. Although the proposal is one of the
simplest methods in the multi-label learning literature, it achieves
state-of-the-art performance for small to moderate-sized multi-label learning
problems. Besides its simplicity, the proposed method is fully deterministic
and its hyper-parameter can be selected via a ranking loss-based statistic which
has a closed form, thus avoiding conventional cross-validation-based
hyper-parameter tuning. In addition, due to its simple linear distance
mapping-based construction, we demonstrate that the proposed method can assess
predictions' uncertainty for multi-label classification, which is a valuable
capability for data-centric machine learning pipelines. | [
"cs.LG"
] | false |
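A minimal sketch of the combination proposed in 2305.05518: an MLM-style linear map from input-distance vectors to output-distance vectors, followed by inverse distance weighting over reference label vectors; regularization and the closed-form hyper-parameter selection are omitted.

```python
# Illustrative minimal learning machine for multi-label prediction.
import numpy as np
from scipy.spatial.distance import cdist

def fit_mlm(X_ref, Y_ref, X_train, Y_train):
    Dx = cdist(X_train, X_ref)                      # input distance matrix (N, R)
    Dy = cdist(Y_train, Y_ref)                      # output (label-space) distances (N, R)
    B, *_ = np.linalg.lstsq(Dx, Dy, rcond=None)     # linear distance mapping (R, R)
    return B

def predict_multilabel(x, X_ref, Y_ref, B, eps=1e-8):
    d_hat = (cdist(x[None], X_ref) @ B).ravel()     # estimated output distances (R,)
    w = 1.0 / (d_hat + eps)                         # inverse distance weights
    return (w[:, None] * Y_ref).sum(0) / w.sum()    # soft per-label scores; threshold at 0.5
```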
2305.05110 | 2023-05-09T00:46:12Z | Semi-Supervised Federated Learning for Keyword Spotting | [
"Enmao Diao",
"Eric W. Tramel",
"Jie Ding",
"Tao Zhang"
] | Keyword Spotting (KWS) is a critical aspect of audio-based applications on
mobile devices and virtual assistants. Recent developments in Federated
Learning (FL) have significantly expanded the ability to train machine learning
models by utilizing the computational and private data resources of numerous
distributed devices. However, existing FL methods typically require that
devices possess accurate ground-truth labels, which can be both expensive and
impractical when dealing with local audio data. In this study, we first
demonstrate the effectiveness of Semi-Supervised Learning (SSL) and
FL for KWS. We then extend our investigation to Semi-Supervised Federated
Learning (SSFL) for KWS, where devices possess completely unlabeled data, while
the server has access to a small amount of labeled data. We perform numerical
analyses using state-of-the-art SSL, FL, and SSFL techniques to demonstrate
that the performance of KWS models can be significantly improved by leveraging
the abundant unlabeled heterogeneous data available on devices. | [
"cs.LG",
"eess.AS"
] | false |
2305.05111 | 2023-05-09T00:55:09Z | When a CBR in Hand is Better than Twins in the Bush | [
"Mobyen Uddin Ahmed",
"Shaibal Barua",
"Shahina Begum",
"Mir Riyanul Islam",
"Rosina O Weber"
] | AI methods referred to as interpretable are often discredited as inaccurate
by supporters of the existence of a trade-off between interpretability and
accuracy. In many problem contexts, however, this trade-off does not hold. This
paper discusses a regression problem context to predict flight take-off delays
where the most accurate data regression model was trained via the XGBoost
implementation of gradient boosted decision trees. While building an XGB-CBR
Twin and converting the XGBoost feature importance into global weights in the
CBR model, the resultant CBR model alone provides the most accurate local
prediction, maintains the global importance to provide a global explanation of
the model, and offers the most interpretable representation for local
explanations. This resultant CBR model becomes a benchmark of accuracy and
interpretability for this problem context, and hence it is used to evaluate the
two additive feature attribute methods SHAP and LIME to explain the XGBoost
regression model. The results with respect to local accuracy and feature
attribution lead to potentially valuable future work. | [
"cs.LG",
"cs.AI"
] | false |
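A minimal sketch of the weighted-retrieval idea behind the XGB-CBR Twin in 2305.05111: XGBoost feature importances serve as global feature weights in a nearest-neighbour CBR predictor; the exact similarity and reuse steps in the paper are assumptions here.

```python
# Illustrative CBR retrieval/reuse with feature-importance-weighted distances.
import numpy as np

def cbr_predict(x_query, X_cases, y_cases, importances, k=5, eps=1e-8):
    w = importances / importances.sum()                       # global feature weights
    d = np.sqrt(((X_cases - x_query) ** 2 * w).sum(axis=1))   # weighted Euclidean distance
    nearest = np.argsort(d)[:k]                               # retrieve the k closest cases
    sim = 1.0 / (d[nearest] + eps)
    return float((sim * y_cases[nearest]).sum() / sim.sum())  # similarity-weighted reuse
```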
2305.05128 | 2023-05-09T02:16:48Z | A Kriging-Random Forest Hybrid Model for Real-time Ground Property
Prediction during Earth Pressure Balance Shield Tunneling | [
"Ziheng Geng",
"Chao Zhang",
"Yuhao Ren",
"Minxiang Zhu",
"Renpeng Chen",
"Hongzhan Cheng"
] | A kriging-random forest hybrid model is developed for real-time ground
property prediction ahead of the earth pressure balance (EPB) shield by integrating Kriging extrapolation and random forest, which can guide shield operating parameter selection and thereby mitigate construction risks. The proposed KRF
algorithm synergizes two types of information: prior information and real-time
information. The previously predicted ground properties with EPB operating
parameters are extrapolated via the Kriging algorithm to provide prior
information for the prediction of currently being excavated ground properties.
The real-time information refers to the real-time operating parameters of the
EPB shield, which are input into random forest to provide a real-time
prediction of ground properties. The integration of these two predictions is
achieved by assigning weights to each prediction according to their
uncertainties, ensuring the prediction of KRF with minimum uncertainty. The
performance of the KRF algorithm is assessed via a case study of the Changsha
Metro Line 4 project. It reveals that the proposed KRF algorithm can predict
ground properties with an accuracy of 93%, outperforming the existing
algorithms of LightGBM, AdaBoost-CART, and DNN by 29%, 8%, and 12%,
respectively. Another dataset from Shenzhen Metro Line 13 project is utilized
to further evaluate the model generalization performance, revealing that the
model can transfer its learned knowledge from one region to another with an
accuracy of 89%. | [
"cs.LG",
"cs.AI"
] | false |
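A minimal sketch of the fusion step in the KRF model of 2305.05128: the Kriging extrapolation and the random-forest prediction are combined with inverse-variance weights so the estimate with lower uncertainty dominates; how each variance is obtained is assumed given (e.g., the Kriging variance and the spread of per-tree RF predictions).

```python
# Illustrative inverse-variance fusion of the two KRF component predictions.
def krf_fuse(mu_krig, var_krig, mu_rf, var_rf):
    w_k = 1.0 / var_krig
    w_r = 1.0 / var_rf
    mu = (w_k * mu_krig + w_r * mu_rf) / (w_k + w_r)   # minimum-variance combination
    var = 1.0 / (w_k + w_r)                            # fused uncertainty
    return mu, var
```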