Wasserstein Index Generation Model: Automatic Generation of Time-series Index with Application to Economic Policy Uncertainty ; I propose a novel method, the Wasserstein Index Generation model (WIG), to generate a public sentiment index automatically. To test the model's effectiveness, an application to generating an Economic Policy Uncertainty (EPU) index is showcased.
Potential Flow Generator with L2 Optimal Transport Regularity for Generative Models ; We propose a potential flow generator with L2 optimal transport regularity, which can be easily integrated into a wide range of generative models, including different versions of GANs and flow-based models. We show the correctness and robustness of the potential flow generator in several 2D problems, and illustrate the concept of proximity due to the L2 optimal transport regularity. Subsequently, we demonstrate the effectiveness of the potential flow generator in image translation tasks with unpaired training data from the MNIST and CelebA datasets.
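To make the idea above concrete, here is a minimal sketch (my illustration, not the authors' code; the architecture, step count, and penalty form are assumptions) of a generator whose velocity field is the gradient of a learned scalar potential, with a squared-displacement penalty standing in for the L2 transport cost:

```python
import torch
import torch.nn as nn

class PotentialFlowGenerator(nn.Module):
    """Integrate dx/dt = grad(potential)(x) for a few forward-Euler steps."""
    def __init__(self, dim=2, hidden=128, steps=10, dt=0.1):
        super().__init__()
        self.potential = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1))
        self.steps, self.dt = steps, dt

    def forward(self, z):
        x = z.detach().requires_grad_(True)
        for _ in range(self.steps):
            phi = self.potential(x).sum()
            v = torch.autograd.grad(phi, x, create_graph=True)[0]  # velocity = grad of potential
            x = x + self.dt * v  # forward-Euler step of the flow
        return x

def transport_penalty(z, x_gen):
    # Squared displacement of z -> G(z), a stand-in for the L2 transport cost.
    return ((x_gen - z) ** 2).sum(dim=1).mean()
```

In a GAN setting, the total generator loss would be the adversarial loss plus a weighted `transport_penalty`, nudging the learned map toward the optimal-transport one.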
Creative GANs for generating poems, lyrics, and metaphors ; Generative models for text have substantially contributed to tasks like machine translation and language modeling, using maximum likelihood optimization (MLE). However, for creative text generation, where multiple outputs are possible and originality and uniqueness are encouraged, MLE falls short. Methods optimized for MLE lead to outputs that can be generic, repetitive, and incoherent. In this work, we use a Generative Adversarial Network framework to alleviate this problem. We evaluate our framework on poetry, lyrics, and metaphor datasets, each with widely different characteristics, and report better performance of our objective function over other generative models.
Generative Flows with Matrix Exponential ; Generative flow models enjoy the properties of tractable exact likelihood and efficient sampling, and are composed of a sequence of invertible functions. In this paper, we incorporate the matrix exponential into generative flows. The matrix exponential is a map from matrices to invertible matrices, a property that is well suited to generative flows. Based on the matrix exponential, we propose matrix exponential coupling layers, which are a general case of affine coupling layers, and matrix exponential invertible 1 x 1 convolutions that do not collapse during training. We also modify the network architecture to make training stable and to significantly speed up the training process. Our experiments show that our model achieves great performance on density estimation among generative flow models.
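A minimal sketch of the matrix-exponential idea (mine, not the paper's code): parameterizing an invertible 1 x 1 convolution as W = expm(A) guarantees invertibility, and since det(expm(A)) = exp(tr(A)), the flow's log-determinant reduces to a trace:

```python
import torch
import torch.nn as nn

class MatrixExpConv1x1(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.A = nn.Parameter(torch.zeros(channels, channels))  # expm(0) = I at init

    def forward(self, x):                       # x: (batch, channels, H, W)
        W = torch.matrix_exp(self.A)            # always invertible
        z = torch.einsum('ij,bjhw->bihw', W, x)
        # log|det W| = trace(A), applied once per spatial location
        logdet = torch.diagonal(self.A).sum() * x.shape[2] * x.shape[3]
        return z, logdet

    def inverse(self, z):
        W_inv = torch.matrix_exp(-self.A)       # expm(-A) = expm(A)^{-1}
        return torch.einsum('ij,bjhw->bihw', W_inv, z)
```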
Noncommutative Yang model and its generalizations ; Long ago, C.N. Yang proposed a model of noncommutative spacetime that generalized the Snyder model to a curved background. In this paper we review his proposal and the generalizations that have been suggested over the years. In particular, we discuss the most general algebras that contain as subalgebras both the de Sitter and Snyder algebras, preserving Lorentz invariance, and are generated by a two-parameter deformation of the canonical Heisenberg algebra. We also define their realizations on quantum phase space, giving explicit examples, both exact and in terms of a perturbative expansion in the deformation parameters.
Causally Disentangled Generative Variational AutoEncoder ; We propose a new supervised learning method for the Variational AutoEncoder (VAE) that has a causally disentangled representation and achieves causally disentangled generation (CDG) simultaneously. In this paper, CDG is defined as a generative model able to decode an output precisely according to the causally disentangled representation. We found that supervised regularization of the encoder is not enough to obtain a generative model with CDG. Consequently, we explore sufficient and necessary conditions for the decoder and the causal effect to achieve CDG. Moreover, we propose a generalized metric measuring how causally disentangled a generative model is. Numerical results with image and tabular datasets corroborate our arguments.
GRM: Generative Relevance Modeling Using Relevance-Aware Sample Estimation for Document Retrieval ; Recent studies show that Generative Relevance Feedback (GRF), using text generated by Large Language Models (LLMs), can enhance the effectiveness of query expansion. However, LLMs can generate irrelevant information that harms retrieval effectiveness. To address this, we propose Generative Relevance Modeling (GRM), which uses Relevance-Aware Sample Estimation (RASE) for more accurate weighting of expansion terms. Specifically, we identify similar real documents for each generated document and use a neural re-ranker to estimate their relevance. Experiments on three standard document ranking benchmarks show that GRM improves MAP by 6-9% and R@1k by 2-4%, surpassing previous methods.
DaST: Data-free Substitute Training for Adversarial Attacks ; Machine learning models are vulnerable to adversarial examples. In the black-box setting, current substitute attacks need pretrained models to generate adversarial examples. However, pretrained models are hard to obtain in real-world tasks. In this paper, we propose a data-free substitute training method (DaST) to obtain substitute models for adversarial black-box attacks without the requirement of any real data. To achieve this, DaST utilizes specially designed generative adversarial networks (GANs) to train the substitute models. In particular, we design a multi-branch architecture and label-control loss for the generative model to deal with the uneven distribution of synthetic samples. The substitute model is then trained on the synthetic samples generated by the generative model, which are subsequently labeled by the attacked model. The experiments demonstrate that the substitute models produced by DaST achieve competitive performance compared with the baseline models, which are trained on the same training set as the attacked models. Additionally, to evaluate the practicability of the proposed method on a real-world task, we attack an online machine learning model on the Microsoft Azure platform. The remote model misclassifies 98.35% of the adversarial examples crafted by our method. To the best of our knowledge, we are the first to train a substitute model for adversarial attacks without any real data.
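A compressed sketch (training schedule and losses simplified relative to the paper) of the data-free loop described above: the generator synthesizes inputs, the black-box attacked model labels them, and the substitute learns to imitate those labels:

```python
import torch
import torch.nn.functional as F

def dast_step(generator, substitute, victim, opt_sub, opt_gen, z_dim=100, batch=64):
    z = torch.randn(batch, z_dim)
    x = generator(z)
    with torch.no_grad():
        y = victim(x).argmax(dim=1)             # labels come only from the attacked model
    # Substitute update: match the victim's labels on synthetic data.
    opt_sub.zero_grad()
    loss_sub = F.cross_entropy(substitute(x.detach()), y)
    loss_sub.backward()
    opt_sub.step()
    # Generator update: seek samples where the substitute still disagrees with the victim.
    opt_gen.zero_grad()
    loss_gen = -F.cross_entropy(substitute(x), y)
    loss_gen.backward()
    opt_gen.step()
    return loss_sub.item(), loss_gen.item()
```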
Plan To Predict: Learning an Uncertainty-Foreseeing Model for Model-Based Reinforcement Learning ; In Model-based Reinforcement Learning (MBRL), model learning is critical since an inaccurate model can bias policy learning by generating misleading samples. However, learning an accurate model can be difficult since the policy is continually updated, and the induced distribution over visited states used for model learning shifts accordingly. Prior methods alleviate this issue by quantifying the uncertainty of model-generated samples. However, these methods only quantify the uncertainty passively, after the samples are generated, rather than foreseeing the uncertainty before model trajectories fall into highly uncertain regions. The resulting low-quality samples can induce unstable learning targets and hinder the optimization of the policy. Moreover, while being learned to minimize one-step prediction errors, the model is generally used to predict for multiple steps, leading to a mismatch between the objectives of model learning and model usage. To this end, we propose Plan To Predict (P2P), an MBRL framework that treats the model rollout process as a sequential decision-making problem, by reversely considering the model as a decision maker and the current policy as the dynamics. In this way, the model can quickly adapt to the current policy and foresee the multi-step future uncertainty when generating trajectories. Theoretically, we show that the performance of P2P can be guaranteed by approximately optimizing a lower bound of the true environment return. Empirical results demonstrate that P2P achieves state-of-the-art performance on several challenging benchmark tasks.
Toy amphiphiles on the computer: What can we learn from generic models? ; Generic coarse-grained models are designed such that they are (i) simple and (ii) computationally efficient. They do not aim at representing particular materials, but classes of materials, hence they can offer insight into universal properties of these classes. Here we review generic models for amphiphilic molecules and discuss applications in studies of self-assembling nanostructures and the local structure of bilayer membranes, i.e., their phases and their interactions with nano-sized inclusions. Special attention is given to the comparison of simulations with elastic continuum models, which are, in some sense, generic models on a higher coarse-graining level. In many cases, it is possible to bridge quantitatively between generic particle models and continuum models, hence multiscale modeling works in principle. On the other hand, generic simulations can help to interpret experiments by providing information that is not accessible otherwise.
Support and Plausibility Degrees in Generalized Functional Models ; By discussing several examples, the theory of generalized functional models is shown to be very natural for modeling some situations of reasoning under uncertainty. A generalized functional model is a pair (f, P), where f is a function describing the interactions between a parameter variable, an observation variable, and a random source, and P is a probability distribution for the random source. Unlike traditional functional models, generalized functional models do not require that there be only one value of the parameter variable that is compatible with an observation and a realization of the random source. As a consequence, the results of the analysis of a generalized functional model are not expressed in terms of probability distributions, but rather by support and plausibility functions. The analysis of a generalized functional model is very logical and is inspired by ideas already put forward by R.A. Fisher in his theory of fiducial probability.
Effect of anisotropy on the generalized Chaplygin gas scalar field and its interaction with other dark energy models ; In this work, we establish a correspondence between the interacting holographic and new agegraphic dark energy models and the generalized Chaplygin gas model in a Bianchi type I universe. We then reconstruct the potential of the scalar field that describes the generalized Chaplygin cosmology. Cosmological solutions are obtained when the kinetic energy of the phantom field is of the order of the anisotropy and dominates over the potential energy of the field. We investigate observational constraints on the generalized Chaplygin gas, holographic, and new agegraphic dark energy models as the unification of dark matter and dark energy, by using the latest observational data. To do this, we focus on observational determinations of the expansion history H(z). It is shown that the HDE model fits better than the NADE and generalized Chaplygin gas models in an anisotropic universe. We then calculate the evolution of density perturbations in the linear regime for the three dark energy models and compare the results with the ΛCDM model. Finally, the analysis shows that increasing anisotropy leads to closer correspondence between the dark energy scalar field model and the observational data.
Generative and Discriminative Text Classification with Recurrent Neural Networks ; We empirically characterize the performance of discriminative and generative LSTM models for text classification. We find that although RNN-based generative models are more powerful than their bag-of-words ancestors (e.g., they account for conditional dependencies across words in a document), they have higher asymptotic error rates than discriminatively trained RNN models. However, we also find that generative models approach their asymptotic error rate more rapidly than their discriminative counterparts, the same pattern that Ng & Jordan (2001) proved holds for linear classification models that make more naive conditional independence assumptions. Building on this finding, we hypothesize that RNN-based generative classification models will be more robust to shifts in the data distribution. This hypothesis is confirmed in a series of experiments in zero-shot and continual learning settings that show that generative models substantially outperform discriminative models.
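The generative-classification recipe the paper evaluates amounts to Bayes' rule with one class-conditional language model per label; a toy sketch (interfaces assumed) is:

```python
import math

def generative_classify(doc_tokens, class_lms, class_priors):
    # class_lms[c](tokens) -> log p(tokens | c); black-box per-class language models
    best_label, best_score = None, -math.inf
    for c, lm in class_lms.items():
        score = lm(doc_tokens) + math.log(class_priors[c])  # log p(x|c) + log p(c)
        if score > best_score:
            best_label, best_score = c, score
    return best_label
```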
Learning Energy-Based Models as Generative ConvNets via Multigrid Modeling and Sampling ; This paper proposes a multigrid method for learning energy-based generative ConvNet models of images. For each grid, we learn an energy-based probabilistic model where the energy function is defined by a bottom-up convolutional neural network (ConvNet or CNN). Learning such a model requires generating synthesized examples from the model. Within each iteration of our learning algorithm, for each observed training image, we generate synthesized images at multiple grids by initializing the finite-step MCMC sampling from a minimal 1 x 1 version of the training image. The synthesized image at each subsequent grid is obtained by a finite-step MCMC initialized from the synthesized image generated at the previous coarser grid. After obtaining the synthesized examples, the parameters of the models at the multiple grids are updated separately and simultaneously based on the differences between the synthesized and observed examples. We show that this multigrid method can learn realistic energy-based generative ConvNet models, and it outperforms the original contrastive divergence (CD) and persistent CD.
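A schematic sketch (step sizes, grid sizes, and network interfaces are assumptions) of the multigrid synthesis loop: sample at the coarsest grid first, then initialize each finer grid's finite-step Langevin chain from the upsampled coarser synthesis:

```python
import torch
import torch.nn.functional as F

def langevin_sample(energy_net, x, n_steps=20, step_size=0.01):
    for _ in range(n_steps):
        x = x.detach().requires_grad_(True)
        grad = torch.autograd.grad(energy_net(x).sum(), x)[0]
        x = x - 0.5 * step_size**2 * grad + step_size * torch.randn_like(x)
    return x.detach()

def multigrid_synthesize(energy_nets, img, sizes=(16, 32, 64)):
    x = F.adaptive_avg_pool2d(img, 1)           # minimal 1x1 version of the training image
    for net, size in zip(energy_nets, sizes):   # one energy model per grid
        x = F.interpolate(x, size=size, mode='bilinear', align_corners=False)
        x = langevin_sample(net, x)             # finite-step chain at this grid
    return x
```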
On generalized residue network for deep learning of unknown dynamical systems ; We present a general numerical approach for learning unknown dynamical systems using deep neural networks (DNNs). Our method is built upon recent studies that identified the residue network (ResNet) as an effective neural network structure. In this paper, we present a generalized ResNet framework and broadly define the residue as the discrepancy between observation data and the prediction made by another model, which can be an existing coarse model or reduced-order model. In this case, the generalized ResNet serves as a model correction to the existing model and recovers the unresolved dynamics. When an existing coarse model is not available, we present numerical strategies for fast creation of coarse models, to be used in conjunction with the generalized ResNet. These coarse models are constructed using the same data set and thus do not require additional resources. The generalized ResNet is capable of learning the underlying unknown equations and producing predictions with higher accuracy than the standard ResNet structure. This is demonstrated via several numerical examples, including long-term prediction of a chaotic system.
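The model-correction idea reduces to a simple training setup; a minimal sketch (the coarse model and network sizes here are placeholders) is:

```python
import torch
import torch.nn as nn

def coarse_model(x):
    # stand-in for an existing coarse or reduced-order one-step predictor
    return x + 0.1 * torch.tanh(x)

class GeneralizedResNet(nn.Module):
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.correction = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, dim))

    def forward(self, x):
        # coarse prediction plus a learned correction for the unresolved dynamics
        return coarse_model(x) + self.correction(x)

def one_step_loss(model, x_now, x_next):
    # the network is effectively trained on the residue between data and the coarse model
    return ((model(x_now) - x_next) ** 2).mean()
```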
A Classifying Variational Autoencoder with Application to Polyphonic Music Generation ; The variational autoencoder (VAE) is a popular probabilistic generative model. However, one shortcoming of VAEs is that the latent variables cannot be discrete, which makes it difficult to generate data from different modes of a distribution. Here, we propose an extension of the VAE framework that incorporates a classifier to infer the discrete class of the modeled data. To model sequential data, we can combine our Classifying VAE with a recurrent neural network such as an LSTM. We apply this model to algorithmic music generation, where our model learns to generate musical sequences in different keys. Most previous work in this area avoids modeling key by transposing data into only one or two keys, as opposed to the 10 different keys in the original music. We show that our Classifying VAE and Classifying VAE+LSTM models outperform the corresponding non-classifying models in generating musical samples that stay in key. This benefit is especially apparent when trained on untransposed music data in the original keys.
Permutation Invariant Graph Generation via Score-Based Generative Modeling ; Learning generative models for graph-structured data is challenging because graphs are discrete, combinatorial, and the underlying data distribution is invariant to the ordering of nodes. However, most of the existing generative models for graphs are not invariant to the chosen ordering, which might lead to an undesirable bias in the learned distribution. To address this difficulty, we propose a permutation invariant approach to modeling graphs, using the recent framework of score-based generative modeling. In particular, we design a permutation equivariant, multi-channel graph neural network to model the gradient of the data distribution at the input graph (a.k.a. the score function). This permutation equivariant model of gradients implicitly defines a permutation invariant distribution for graphs. We train this graph neural network with score matching and sample from it with annealed Langevin dynamics. In our experiments, we first demonstrate the capacity of this new architecture in learning discrete graph algorithms. For graph generation, we find that our learning approach achieves better or comparable results to existing models on benchmark datasets.
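A condensed sketch (noise schedule and the score network's interface are assumptions) of annealed Langevin sampling for adjacency matrices; symmetrizing the noise and the state keeps the chain in the space of undirected graphs:

```python
import torch

def sym(m):
    return (m + m.transpose(-1, -2)) / 2

def annealed_langevin_graphs(score_net, n_nodes, sigmas, steps=50, eps=2e-5):
    # score_net(a, sigma) stands in for the permutation-equivariant GNN score model
    a = sym(torch.randn(1, n_nodes, n_nodes))
    for sigma in sigmas:                        # anneal from large to small noise levels
        step = eps * (sigma / sigmas[-1]) ** 2
        for _ in range(steps):
            noise = sym(torch.randn_like(a))
            a = a + 0.5 * step * score_net(a, sigma) + (step ** 0.5) * noise
    return a                                    # threshold entries to obtain a discrete graph
```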
Discovering Generative Models from Event Logs: Data-driven Simulation vs Deep Learning ; A generative model is a statistical model that is able to generate new data instances from previously observed ones. In the context of business processes, a generative model creates new execution traces from a set of historical traces, also known as an event log. Two families of generative process simulation models have been developed in previous work: data-driven simulation models and deep learning models. Until now, these two approaches have evolved independently, and their relative performance has not been studied. This paper fills this gap by empirically comparing a data-driven simulation technique with multiple deep learning techniques, which construct models capable of generating execution traces with timestamped events. The study sheds light on the relative strengths of both approaches and raises the prospect of developing hybrid approaches that combine these strengths.
Random-walk Based Generative Model for Classifying Document Networks ; Document networks are found in various collections of real-world data, such as citation networks, hyperlinked web pages, and online social networks. A large number of generative models have been proposed because they offer intuitive and useful pictures for analyzing document networks. Prominent examples are relational topic models, where documents are linked according to their topic similarities. However, existing generative models do not make full use of network structures because they are largely dependent on topic modeling of documents. In particular, the centrality of graph nodes is missing from the generative processes of previous models. In this paper, we propose a novel generative model for document networks that introduces random walkers on networks to integrate node centrality into the link generation process. The developed method is evaluated in semi-supervised classification tasks with real-world citation networks. We show that the proposed model outperforms existing probabilistic approaches, especially in detecting communities in connected networks.
Discrete Point Flow Networks for Efficient Point Cloud Generation ; Generative models have proven effective at modeling 3D shapes and their statistical variations. In this paper we investigate their application to point clouds, a 3D shape representation widely used in computer vision, for which, however, only a few generative models have yet been proposed. We introduce a latent variable model that builds on normalizing flows with affine coupling layers to generate 3D point clouds of an arbitrary size given a latent shape representation. To evaluate its benefits for shape modeling we apply this model to generation, autoencoding, and single-view shape reconstruction tasks. We improve over recent GAN-based models in terms of most metrics that assess generation and autoencoding. Compared to recent work based on continuous flows, our model offers a significant speedup in both training and inference times for similar or better performance. For single-view shape reconstruction we also obtain results on par with state-of-the-art voxel, point cloud, and mesh-based methods.
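For reference, a minimal affine coupling layer in the spirit described above (my sketch; conditioning on a per-point latent shape code is an assumption about shapes): half of the coordinates pass through unchanged and predict a scale and shift for the other half, keeping the Jacobian triangular:

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, dim=3, cond_dim=128, hidden=64):
        super().__init__()
        self.d = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.d + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.d)))

    def forward(self, x, shape_code):            # x: (B, N, dim), shape_code: (B, N, cond_dim)
        x1, x2 = x[..., :self.d], x[..., self.d:]
        log_s, t = self.net(torch.cat([x1, shape_code], dim=-1)).chunk(2, dim=-1)
        y2 = x2 * torch.exp(log_s) + t
        logdet = log_s.sum(dim=-1)               # log|det J| of the coupling transform
        return torch.cat([x1, y2], dim=-1), logdet

    def inverse(self, y, shape_code):
        y1, y2 = y[..., :self.d], y[..., self.d:]
        log_s, t = self.net(torch.cat([y1, shape_code], dim=-1)).chunk(2, dim=-1)
        return torch.cat([y1, (y2 - t) * torch.exp(-log_s)], dim=-1)
```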
Learning Contextual Representations for Semantic Parsing with Generation-Augmented Pre-Training ; Recently, there has been significant interest in learning contextual representations for various NLP tasks by leveraging large-scale text corpora to train large neural language models with self-supervised learning objectives, such as the Masked Language Model (MLM). However, based on a pilot study, we observe three issues of existing general-purpose language models when applied as text-to-SQL semantic parsers: they fail to detect column mentions in the utterances, fail to infer column mentions from cell values, and fail to compose complex SQL queries. To mitigate these issues, we present a model pretraining framework, Generation-Augmented Pre-training (GAP), that jointly learns representations of natural language utterances and table schemas by leveraging generation models to generate pretraining data. GAP MODEL is trained on 2M utterance-schema pairs and 30K utterance-schema-SQL triples, whose utterances are produced by generative models. Based on experimental results, neural semantic parsers that leverage GAP MODEL as a representation encoder obtain new state-of-the-art results on both the Spider and Criteria-to-SQL benchmarks.
General Robot Dynamics Learning and Gen2Real ; Acquiring dynamics is an essential topic in robot learning, but up-to-date methods, such as dynamics randomization, need to restart to check nominal parameters, generate simulation data, and train networks whenever they face different robots. To improve on this, we investigate general robot dynamics, its inverse models, and Gen2Real, which means transferring to reality. Our motivations are to build a model that learns the intrinsic dynamics of various robots and to lower the threshold of dynamics learning by enabling an amateur to obtain robot models without being trapped in details. This paper achieves generality by randomizing dynamics parameters, topology configurations, and model dimensions, which in sequence cover the property, the connection, and the number of robot links. A structure modified from GPT is applied to access the pretrained model of general dynamics. We also study various inverse models of dynamics to facilitate different applications. We go a step further and investigate a new concept, Gen2Real, to transfer simulated, general models to physical, specific robots. Simulation and experiment results demonstrate the validity of the proposed models and method.
Meta Internal Learning ; Internal learning for single-image generation is a framework where a generator is trained to produce novel images based on a single image. Since these models are trained on a single image, they are limited in their scale and application. To overcome these issues, we propose a meta-learning approach that enables training over a collection of images, in order to model the internal statistics of the sample image more effectively. In the presented meta-learning approach, a single-image GAN model is generated given an input image, via a convolutional feedforward hypernetwork f. This network is trained over a dataset of images, allowing for feature sharing among different models and for interpolation in the space of generative models. The generated single-image model contains a hierarchy of multiple generators and discriminators. It is therefore required to train the meta-learner in an adversarial manner, which requires careful design choices that we justify by a theoretical analysis. Our results show that the models obtained are as suitable as single-image GANs for many common image applications, significantly reduce the training time per image without loss in performance, and introduce novel capabilities, such as interpolation and feedforward modeling of novel images.
Improving Non-autoregressive Generation with Mixup Training ; While pretrained language models have achieved great success on various natural language understanding tasks, how to effectively leverage them for non-autoregressive generation tasks remains a challenge. To solve this problem, we present a non-autoregressive generation model based on pretrained transformer models. To bridge the gap between autoregressive and non-autoregressive models, we propose a simple and effective iterative training method called MIx Source and pseudo Target (MIST). Unlike other iterative decoding methods, which sacrifice inference speed to achieve better performance through multiple decoding iterations, MIST works in the training stage and has no effect on inference time. Our experiments on three generation benchmarks, including question generation, summarization, and paraphrase generation, show that the proposed framework achieves new state-of-the-art results for fully non-autoregressive models. We also demonstrate that our method can be applied to a variety of pretrained models. For instance, MIST based on the small pretrained model also obtains performance comparable with seq2seq models.
De Novo Molecular Generation with Stacked Adversarial Model ; Generating novel drug molecules with desired biological properties is a time-consuming and complex task. Conditional generative adversarial models have recently been proposed as promising approaches for de novo drug design. In this paper, we propose a new generative model that extends an existing adversarial autoencoder (AAE) based model by stacking two models together. Our stacked approach generates more valid molecules, as well as molecules that are more similar to known drugs. We break down this challenging task into two sub-problems: a first-stage model learns primitive features from the molecules and gene expression data, and a second-stage model then takes these features to learn properties of the molecules and refine more valid molecules. Experiments and comparison to baseline methods on the LINCS L1000 dataset demonstrate that our proposed model has promising performance for molecular generation.
Multilingual Generative Language Models for Zero-Shot Cross-Lingual Event Argument Extraction ; We present a study on leveraging multilingual pretrained generative language models for zero-shot cross-lingual event argument extraction (EAE). By formulating EAE as a language generation task, our method effectively encodes event structures and captures the dependencies between arguments. We design language-agnostic templates to represent the event argument structures, which are compatible with any language, hence facilitating cross-lingual transfer. Our proposed model fine-tunes multilingual pretrained generative language models to generate sentences that fill in the language-agnostic template with arguments extracted from the input passage. The model is trained on source languages and is then directly applied to target languages for event argument extraction. Experiments demonstrate that the proposed model outperforms the current state-of-the-art models on zero-shot cross-lingual EAE. Comprehensive studies and error analyses are presented to better understand the advantages and current limitations of using generative language models for zero-shot cross-lingual transfer in EAE.
Mix and Match: Learning-free Controllable Text Generation using Energy Language Models ; Recent work on controlled text generation has either required attribute-based fine-tuning of the base language model (LM), or has restricted the parameterization of the attribute discriminator to be compatible with the base autoregressive LM. In this work, we propose Mix and Match LM, a global score-based alternative for controllable text generation that combines arbitrary pretrained black-box models for achieving the desired attributes in the generated text, without involving any fine-tuning or structural assumptions about the black-box models. We interpret the task of controllable generation as drawing samples from an energy-based model whose energy values are a linear combination of scores from black-box models that are separately responsible for fluency, the control attribute, and faithfulness to any conditioning context. We use a Metropolis-Hastings sampling scheme to sample from this energy-based model using bidirectional context and global attribute features. We validate the effectiveness of our approach on various controlled generation and style-based text revision tasks, outperforming recently proposed methods that involve extra training, fine-tuning, or restrictive assumptions over the form of models.
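A schematic sketch of the sampling loop (mine; it uses a plain Metropolis rule with an implicitly symmetric proposal, whereas the paper's scheme also accounts for proposal probabilities): the energy is a weighted sum of black-box scores, and proposals could come from, e.g., a mask-and-refill move with a masked LM:

```python
import math
import random

def energy(text, scorers, weights):
    # scorers: black-box callables (fluency, attribute, faithfulness); lower is better
    return sum(w * s(text) for s, w in zip(scorers, weights))

def mix_and_match_sample(init_text, propose, scorers, weights, n_iters=200):
    x = init_text
    e_x = energy(x, scorers, weights)
    for _ in range(n_iters):
        y = propose(x)                          # e.g., mask one position and refill with an MLM
        e_y = energy(y, scorers, weights)
        if random.random() < min(1.0, math.exp(e_x - e_y)):  # accept downhill, sometimes uphill
            x, e_x = y, e_y
    return x
```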
Temporal Domain Generalization with Drift-Aware Dynamic Neural Networks ; Temporal domain generalization is a promising yet extremely challenging area where the goal is to learn models under temporally changing data distributions and generalize to unseen data distributions following the trends of the change. Progress in this area is challenged by (1) characterizing data distribution drift and its impacts on models, (2) expressiveness in tracking the model dynamics, and (3) theoretical guarantees on the performance. To address these challenges, we propose a Temporal Domain Generalization with Drift-Aware Dynamic Neural Network (DRAIN) framework. Specifically, we formulate the problem in a Bayesian framework that jointly models the relation between data and model dynamics. We then build a recurrent graph generation scenario to characterize the dynamic graph-structured neural networks learned across different time points. It captures the temporal drift of model parameters and data distributions, and can predict models in the future without the presence of future data. In addition, we explore theoretical guarantees on the model performance under the challenging temporal DG setting and provide theoretical analysis, including uncertainty and generalization error. Finally, extensive experiments on several real-world benchmarks with temporal drift demonstrate the effectiveness and efficiency of the proposed method.
Applying Regularized Schrödinger-Bridge-Based Stochastic Process in Generative Modeling ; Compared to the existing function-based models in deep generative modeling, the recently proposed diffusion models have achieved outstanding performance with a stochastic-process-based approach. However, a long sampling time is required for this approach due to the many timesteps needed for discretization. Schrödinger bridge (SB) based models attempt to tackle this problem by training bidirectional stochastic processes between distributions. However, they still have a slow sampling speed compared to generative models such as generative adversarial networks, and, due to the training of the bidirectional stochastic processes, they require a relatively long training time. Therefore, this study aims to reduce the number of timesteps and the training time required, and proposes regularization terms for the existing SB models to make the bidirectional stochastic processes consistent and stable with a reduced number of timesteps. The regularization terms are integrated into a single term to enable more efficient training in computation time and memory usage. Applying this regularized stochastic process to various generation tasks, the desired translations between different distributions are obtained, confirming the possibility of stochastic-process-based generative modeling with faster sampling speed. The code is available at https://github.com/KiUngSong/RSB.
Digital twins for city simulation: Automatic, efficient, and robust mesh generation for large-scale city modeling and simulation ; The concept of creating digital twins, connected digital models of physical systems, is gaining increasing attention for the modeling and simulation of whole cities. The basis for building a digital twin of a city is the generation of a 3D city model, often represented as a mesh. Creating and updating such models is a tedious process that requires manual work and considerable effort, especially in the modeling of building geometries. In the current paper, we present a novel algorithm and implementation for automatic, efficient, and robust mesh generation for large-scale city modeling and simulation. The algorithm relies on standard, publicly available data, in particular 2D cadastral maps (building footprints) and 3D point clouds obtained from aerial scanning. The algorithm generates LoD1.2 city models in the form of both triangular surface meshes, suitable for visualisation, and high-quality tetrahedral volume meshes, suitable for simulation. Our tests demonstrate good performance and scaling, and indicate good avenues for further optimization based on parallelisation. The long-term goal is a generic volume mesh generator for digital twins of cities that provides near real-time mesh manipulation at LoD2.x.
Leveraging Pretrained Models for Failure Analysis Triplets Generation ; Pretrained language models recently gained traction in the Natural Language Processing (NLP) domain for text summarization, generation, and question-answering tasks. This stems from the innovation introduced in Transformer models and their overwhelming performance compared with recurrent neural network models such as Long Short-Term Memory (LSTM). In this paper, we leverage the attention mechanism of pretrained causal language models such as the Transformer model for the downstream task of generating Failure Analysis Triplets (FATs), a sequence of steps for analyzing defective components in the semiconductor industry. We compare different transformer models for this generative task and observe that Generative Pretrained Transformer 2 (GPT2) outperformed the other transformer models on the failure analysis triplet generation (FATG) task. In particular, we observe that GPT2 (with 1.5B parameters) outperforms pretrained BERT, BART, and GPT3 by a large margin on ROUGE. Furthermore, we introduce the Levenshtein Sequential Evaluation metric (LESE) for better evaluation of the structured FAT data and show that it aligns more closely with human judgment than existing metrics.
Fast Graph Generation via Spectral Diffusion ; Generating graph-structured data is a challenging problem, which requires learning the underlying distribution of graphs. Various models, such as graph VAEs, graph GANs, and graph diffusion models, have been proposed to generate meaningful and reliable graphs, among which the diffusion models have achieved state-of-the-art performance. In this paper, we argue that running full-rank diffusion SDEs on the whole graph adjacency matrix space hinders diffusion models from learning graph topology generation, and hence significantly deteriorates the quality of generated graph data. To address this limitation, we propose an efficient yet effective Graph Spectral Diffusion Model (GSDM), which is driven by low-rank diffusion SDEs on the graph spectrum space. Our spectral diffusion model is further proven to enjoy a substantially stronger theoretical guarantee than standard diffusion models. Extensive experiments across various datasets demonstrate that our proposed GSDM is the state-of-the-art model, exhibiting both significantly higher generation quality and much lower computational consumption than the baselines.
Tensor Formulation of the General Linear Model with Einstein Notation ; The general linear model is a universally accepted method to conduct and test multiple linear regression models. Using this model, one has the ability to simultaneously regress covariates among different groups of data. Moreover, there are hundreds of applications and statistical tests associated with the general linear model. However, the conventional matrix formulation is relatively inelegant, which yields multiple difficulties, including slow computation speed due to a large number of computations, increased memory usage due to needlessly large data structures, and organizational inconsistency. This is due to the fundamental incongruence between the degrees of freedom of the information the data structures in the conventional formulation of the general linear model are intended to represent and the rank of the data structures themselves. Here, I briefly suggest an elegant reformulation of the general linear model which involves the use of tensors and multidimensional arrays, as opposed to the exclusively flat structures in the conventional formulation. To demonstrate the efficacy of this approach, I translate a few common applications of the general linear model from the conventional formulation to the tensor formulation.
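A small illustration (mine, not from the paper) of the tensor view: with Einstein notation, per-group X^T X and X^T Y are single einsum calls, and the regression is solved for all groups at once without assembling block-diagonal matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
G, n, p = 4, 100, 3                      # groups, observations per group, covariates
X = rng.normal(size=(G, n, p))           # design tensor: one design matrix per group
Y = rng.normal(size=(G, n))              # response tensor

XtX = np.einsum('gnp,gnq->gpq', X, X)    # per-group X^T X, shape (G, p, p)
XtY = np.einsum('gnp,gn->gp', X, Y)      # per-group X^T Y, shape (G, p)
beta = np.linalg.solve(XtX, XtY[..., None])[..., 0]  # per-group OLS estimates, shape (G, p)
```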
Geometric Latent Diffusion Models for 3D Molecule Generation ; Generative models, especially diffusion models (DMs), have achieved promising results for generating feature-rich geometries and advancing foundational science problems such as molecule design. Inspired by the recent huge success of Stable (latent) Diffusion models, we propose a novel and principled method for 3D molecule generation named Geometric Latent Diffusion Models (GeoLDM). GeoLDM is the first latent DM model for the molecular geometry domain, composed of autoencoders encoding structures into continuous latent codes and DMs operating in the latent space. Our key innovation is that, for modeling the 3D molecular geometries, we capture their critical roto-translational equivariance constraints by building a point-structured latent space with both invariant scalars and equivariant tensors. Extensive experiments demonstrate that GeoLDM can consistently achieve better performance on multiple molecule generation benchmarks, with up to 7% improvement for the valid percentage of large biomolecules. Results also demonstrate GeoLDM's higher capacity for controllable generation thanks to the latent modeling. Code is provided at https://github.com/MinkaiXu/GeoLDM.
PoET: A generative model of protein families as sequences-of-sequences ; Generative protein language models are a natural way to design new proteins with desired functions. However, current models are either difficult to direct to produce a protein from a specific family of interest, or must be trained on a large multiple sequence alignment (MSA) from the specific family of interest, making them unable to benefit from transfer learning across families. To address this, we propose the Protein Evolutionary Transformer (PoET), an autoregressive generative model of whole protein families that learns to generate sets of related proteins as sequences-of-sequences across tens of millions of natural protein sequence clusters. PoET can be used as a retrieval-augmented language model to generate and score arbitrary modifications conditioned on any protein family of interest, and can extrapolate from short context lengths to generalize well even for small families. This is enabled by a unique Transformer layer; we model tokens sequentially within sequences while attending between sequences order-invariantly, allowing PoET to scale to context lengths beyond those used during training. PoET outperforms existing protein language models and evolutionary sequence models for variant function prediction in extensive experiments on deep mutational scanning datasets, improving variant effect prediction across proteins of all MSA depths.
Generative Prompt Model for Weakly Supervised Object Localization ; Weakly supervised object localization (WSOL) remains challenging when learning object localization models from image category labels. Conventional methods that discriminatively train activation models ignore representative yet less discriminative object parts. In this study, we propose a generative prompt model (GenPromp), defining the first generative pipeline to localize less discriminative object parts by formulating WSOL as a conditional image denoising procedure. During training, GenPromp converts image category labels to learnable prompt embeddings, which are fed to a generative model to conditionally recover the input image with noise and learn representative embeddings. During inference, GenPromp combines the representative embeddings with discriminative embeddings queried from an off-the-shelf vision-language model for both representative and discriminative capacity. The combined embeddings are finally used to generate multi-scale high-quality attention maps, which facilitate localizing the full object extent. Experiments on CUB-200-2011 and ILSVRC show that GenPromp respectively outperforms the best discriminative models by 5.2% and 5.6% (Top-1 Loc), setting a solid baseline for WSOL with generative models. Code is available at https://github.com/callsys/GenPromp.
Generative Visual Question Answering ; Multi-modal tasks involving vision and language in deep learning continue to rise in popularity and are leading to the development of newer models that can generalize beyond the extent of their training data. The current models lack temporal generalization, which would enable models to adapt to changes in future data. This paper discusses a viable approach to creating an advanced Visual Question Answering (VQA) model that can produce successful results on temporal generalization. We propose a new dataset, GenVQA, utilizing images and captions from the VQAv2 and MS-COCO datasets to generate new images through stable diffusion. This augmented dataset is then used to test a combination of seven baseline and cutting-edge VQA models. Performance evaluation focuses on questions mirroring the original VQAv2 dataset, with the answers adjusted to the new images. This paper's purpose is to investigate the robustness of several successful VQA models and assess their performance on future data distributions. Model architectures are analyzed to identify common stylistic choices that improve generalization under temporal distribution shifts. This research highlights the importance of creating a large-scale future-shifted dataset. Such data can enhance the robustness of VQA models, allowing their future peers to have an improved ability to adapt to temporal distribution shifts.
The Five-Dollar Model: Generating Game Maps and Sprites from Sentence Embeddings ; The five-dollar model is a lightweight text-to-image generative architecture that generates low-dimensional images from an encoded text prompt. This model can successfully generate accurate and aesthetically pleasing content in low-dimensional domains, with limited amounts of training data. Despite the small size of both the model and the datasets, the generated images are still able to maintain the encoded semantic meaning of the textual prompt. We apply this model to three small datasets: pixel art video game maps, video game sprite images, and down-scaled emoji images, and apply novel augmentation strategies to improve the performance of our model on these limited datasets. We evaluate our model's performance using the cosine similarity score between text-image pairs computed by the CLIP ViT-B/32 model.
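The evaluation step mentioned last can be reproduced with the standard CLIP API; a sketch (usage assumed, not the authors' script) of scoring one text-image pair with CLIP ViT-B/32 cosine similarity:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_score(image: Image.Image, prompt: str) -> float:
    inputs = processor(text=[prompt], images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return float((img * txt).sum())          # cosine similarity of the pair
```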
An Autoethnographic Exploration of XAI in Algorithmic Composition ; Machine learning models are capable of generating complex music across a range of genres, from folk to classical music. However, current generative music AI models are typically difficult to understand and control in meaningful ways. Whilst research has started to explore how explainable AI (XAI) generative models might be created for music, no generative XAI models have been studied in music-making practice. This paper introduces an autoethnographic study of the use of the MeasureVAE generative music XAI model, with interpretable latent dimensions, trained on Irish folk music. Findings suggest that the exploratory nature of the music-making workflow foregrounds musical features of the training dataset rather than features of the generative model itself. The appropriation of an XAI model within an iterative workflow highlights the potential of XAI models to form part of a richer and more complex workflow than that for which they were initially designed.
Atom-by-atom protein generation and beyond with language models ; Protein language models learn powerful representations directly from sequences of amino acids. However, they are constrained to generate proteins with only the set of amino acids represented in their vocabulary. In contrast, chemical language models learn atom-level representations of smaller molecules that include every atom, bond, and ring. In this work, we show that chemical language models can learn atom-level representations of proteins, enabling protein generation unconstrained by the standard genetic code and far beyond it. In doing so, we show that language models can generate entire proteins atom by atom, effectively learning the multiple hierarchical layers of molecular information that define proteins, from their primary sequence to their secondary and tertiary structure. We demonstrate that language models are able to explore beyond protein space, generating proteins with modified side chains that form unnatural amino acids. Even further, we find that language models can explore chemical space and protein space simultaneously and generate novel examples of protein-drug conjugates. The results demonstrate the potential for biomolecular design at the atom level using language models.
Generative Design of Hardware-aware DNNs ; To efficiently run DNNs on the edge/cloud, many new DNN inference accelerators are being designed and deployed frequently. To enhance the resource efficiency of DNNs, model quantization is a widely used approach. However, different accelerators/HW have different resources, leading to the need for a specialized quantization strategy for each HW. Moreover, using the same quantization for every layer may be suboptimal, increasing the design space of possible quantization choices. This makes manual tuning infeasible. Recent work on automatically determining quantization for each layer is driven by optimization methods such as reinforcement learning. However, these approaches need to retrain the RL for every new HW platform. We propose a new way for autonomous quantization and HW-aware tuning. We propose a generative model, AQGAN, which takes a target accuracy as the condition and generates a suite of quantization configurations. With the conditional generative model, the user can autonomously generate different configurations with different targets at inference time. Moreover, we propose a simplified HW-tuning flow, which uses the generative model to generate proposals and executes simple selection based on the HW resource budget; this process is fast and interactive. We evaluate our model on five of the widely used efficient models on the ImageNet dataset. We compare with existing uniform quantization and state-of-the-art autonomous quantization methods. Our generative model shows competitive achieved accuracy, with around two degrees less search cost for each design point. Our generative model shows that the generated quantization configurations can lead to less than 3.5% error across all experiments.
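A schematic sketch (architecture and sizes assumed, not the AQGAN implementation) of a conditional generator that maps noise plus a target-accuracy condition to per-layer quantization choices:

```python
import torch
import torch.nn as nn

class QuantConfigGenerator(nn.Module):
    def __init__(self, n_layers, n_bit_choices=4, z_dim=16, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, n_layers * n_bit_choices))
        self.n_layers, self.n_bits = n_layers, n_bit_choices

    def forward(self, z, target_acc):            # target_acc: (batch, 1) condition
        logits = self.net(torch.cat([z, target_acc], dim=-1))
        return torch.softmax(logits.view(-1, self.n_layers, self.n_bits), dim=-1)

gen = QuantConfigGenerator(n_layers=20)
probs = gen(torch.randn(8, 16), torch.full((8, 1), 0.75))  # condition on a 75% accuracy target
config = probs.argmax(dim=-1)                   # one bit-width index per layer
```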
Recipe Generation from Unsegmented Cooking Videos ; This paper tackles recipe generation from unsegmented cooking videos, a task that requires agents to (1) extract key events in completing the dish and (2) generate sentences for the extracted events. Our task is similar to dense video captioning (DVC), which aims at detecting events thoroughly and generating sentences for them. However, unlike DVC, in recipe generation, recipe story awareness is crucial, and a model should output an appropriate number of key events in the correct order. We analyze the output of the DVC model and observe that, although (1) several events are adoptable as a recipe story, (2) the generated sentences for such events are not grounded in the visual content. Based on this, we hypothesize that we can obtain correct recipes by selecting oracle events from the output events of the DVC model and re-generating sentences for them. To achieve this, we propose a novel transformer-based joint approach of training an event selector and a sentence generator for selecting oracle events from the outputs of the DVC model and generating grounded sentences for the events, respectively. In addition, we extend the model by including ingredients to generate more accurate recipes. The experimental results show that the proposed method outperforms state-of-the-art DVC models. We also confirm that, by modeling the recipe in a story-aware manner, the proposed model outputs the appropriate number of events in the correct order.
DomainStudio: Fine-Tuning Diffusion Models for Domain-Driven Image Generation using Limited Data ; Denoising diffusion probabilistic models (DDPMs) have been proven capable of synthesizing high-quality images with remarkable diversity when trained on large amounts of data. Typical diffusion models and modern large-scale conditional generative models, like text-to-image generative models, are vulnerable to overfitting when fine-tuned on extremely limited data. Existing works have explored subject-driven generation using a reference set containing a few images. However, few prior works explore DDPM-based domain-driven generation, which aims to learn the common features of target domains while maintaining diversity. This paper proposes a novel DomainStudio approach to adapt DDPMs pretrained on large-scale source datasets to target domains using limited data. It is designed to keep the diversity of subjects provided by source domains and to obtain high-quality and diverse adapted samples in target domains. We propose to keep the relative distances between adapted samples to achieve considerable generation diversity. In addition, we further enhance the learning of high-frequency details for better generation quality. Our approach is compatible with both unconditional and conditional diffusion models. This work makes the first attempt to realize unconditional few-shot image generation with diffusion models, achieving better quality and greater diversity than current state-of-the-art GAN-based approaches. Moreover, this work also significantly relieves overfitting for conditional generation and realizes high-quality domain-driven generation, further expanding the applicable scenarios of modern large-scale text-to-image models.
Generative Modeling by Inclusive Neural Random Fields with Applications in Image Generation and Anomaly Detection ; Neural random fields (NRFs), referring to a class of generative models that use neural networks to implement potential functions in random fields (a.k.a. energy-based models), are not new but have received less attention, with slow progress. Different from various directed graphical models such as generative adversarial networks (GANs), NRFs provide an interesting family of undirected graphical models for generative modeling. In this paper we propose a new approach, the inclusive-NRF approach, to learning NRFs for continuous data (e.g., images), by introducing inclusive-divergence minimized auxiliary generators and developing stochastic gradient sampling in an augmented space. Based on the new approach, specific inclusive-NRF models are developed and thoroughly evaluated in two important generative modeling applications: image generation and anomaly detection. The proposed models consistently improve over state-of-the-art results in both applications. Remarkably, in addition to superior sample generation, one additional benefit of our inclusive-NRF approach is that, unlike GANs, it can directly provide (unnormalized) density estimates for sample evaluation. With these contributions and results, this paper significantly advances the learning and applications of NRFs to a new level, both theoretically and empirically, beyond what had been obtained before.
Learning Diverse Stochastic Human-Action Generators by Learning Smooth Latent Transitions ; Human-motion generation is a long-standing and challenging task due to the requirement of accurately modeling complex and diverse dynamic patterns. Most existing methods adopt sequence models such as RNNs to directly model transitions in the original action space. Due to high dimensionality and potential noise, such modeling of action transitions is particularly challenging. In this paper, we focus on skeleton-based action generation and propose to model smooth and diverse transitions on a latent space of action sequences with much lower dimensionality. Conditioned on a latent sequence, actions are generated by a frame-wise decoder shared by all latent action-poses. Specifically, an implicit RNN is defined to model smooth latent sequences, whose randomness (diversity) is controlled by noise from the input. Different from standard action-prediction methods, our model can generate action sequences from pure noise without any conditional action poses. Remarkably, it can also generate unseen actions from mixed classes during training. Our model is learned with a bidirectional generative-adversarial-net framework, which not only can generate diverse action sequences of a particular class or a mix of classes, but also learns to classify action sequences within the same model. Experimental results show the superiority of our method in both diverse action-sequence generation and classification, relative to existing methods.
Game of Learning Bloch Equation Simulations for MR Fingerprinting ; Purpose: This work proposes a novel approach to efficiently generate MR fingerprints for MR fingerprinting (MRF) problems based on the unsupervised deep learning model generative adversarial networks (GAN). Methods: The GAN model is adopted and modified for better convergence and performance, resulting in an MRF-specific model named GAN-MRF. The GAN-MRF model is trained, validated, and tested using different MRF fingerprints simulated from the Bloch equations with a certain MRF sequence. The performance and robustness of the model are further tested by using in vivo data collected on a 3 Tesla scanner from a healthy volunteer, together with MRF dictionaries of different sizes. T1 and T2 maps are generated and compared quantitatively. Results: The validation and testing curves for the GAN-MRF model show no evidence of high bias or high variance problems. The sample MRF fingerprints generated from the trained GAN-MRF model agree well with the benchmark fingerprints simulated from the Bloch equations. The in vivo T1 and T2 maps generated from the GAN-MRF fingerprints are in good agreement with those generated from the Bloch-simulated fingerprints, showing good performance and robustness of the proposed GAN-MRF model. Moreover, the MRF dictionary generation time is reduced from hours to sub-second for the testing dictionary. Conclusion: The GAN-MRF model enables fast and accurate generation of MRF fingerprints. It significantly shortens the MRF dictionary generation process and opens the door for real-time applications and sequence optimization problems.
GenMod: A generative modeling approach for spectral representation of PDEs with random inputs ; We propose a method for quantifying uncertainty in high-dimensional PDE systems with random parameters, where the number of solution evaluations is small. Parametric PDE solutions are often approximated using a spectral decomposition based on polynomial chaos expansions. For the class of systems we consider (i.e., high-dimensional with limited solution evaluations), the coefficients are given by an underdetermined linear system in a regression formulation. This implies that additional assumptions, such as sparsity of the coefficient vector, are needed to approximate the solution. Here, we present an approach where we assume the coefficients are close to the range of a generative model that maps from a low- to a high-dimensional space of coefficients. Our approach is inspired by recent work examining how generative models can be used for compressed sensing in systems with random Gaussian measurement matrices. Using results from PDE theory on coefficient decay rates, we construct an explicit generative model that predicts the polynomial chaos coefficient magnitudes. The algorithm we developed to find the coefficients, which we call GenMod, is composed of two main steps. First, we predict the coefficient signs using Orthogonal Matching Pursuit. Then, we assume the coefficients are within a sparse deviation from the range of a sign-adjusted generative model. This allows us to find the coefficients by solving a non-convex optimization problem over the input space of the generative model and the space of sparse vectors. We obtain theoretical recovery results for a Lipschitz continuous generative model and for a more specific generative model based on coefficient decay rate bounds. We examine three high-dimensional problems and show that, for all three examples, the generative model approach outperforms sparsity-promoting methods at small sample sizes.
JaCoText: A Pretrained Model for Java Code-Text Generation ; Pretrained transformer-based models have shown high performance in natural language generation tasks. However, a new wave of interest has surged in automatic programming language generation. This task consists of translating natural language instructions into programming code. Despite the fact that well-known pretrained models for language generation have achieved good performance in learning programming languages, effort is still needed in automatic code generation. In this paper, we introduce JaCoText, a model based on the Transformer neural network. It aims to generate Java source code from natural language text. JaCoText leverages the advantages of both natural language and code generation models. More specifically, we study some findings from the state of the art and use them to (1) initialize our model from powerful pretrained models, (2) explore additional pretraining on our Java dataset, (3) carry out experiments combining unimodal and bimodal data in training, and (4) scale the input and output length during the fine-tuning of the model. Experiments conducted on the CONCODE dataset show that JaCoText achieves new state-of-the-art results.
GMValuator Similarity-based Data Valuation for Generative Models ; Data valuation plays a crucial role in machine learning. Existing data valuation methods have primarily focused on discriminative models, neglecting generative models that have recently gained considerable attention. The few existing data valuation methods designed for deep generative models either concentrate on specific models or lack robustness in their outcomes. Moreover, their efficiency still reveals vulnerable shortcomings. To bridge these gaps, we formulate the data valuation problem in generative models from a similarity-matching perspective. Specifically, we introduce Generative Model Valuator GMValuator, the first training-free and model-agnostic approach to provide data valuation for generation tasks. It enables efficient data valuation through an innovative similarity-matching module, calibrates biased contributions by incorporating image quality assessment, and attributes credit to all training samples based on their contributions to the generated samples. Additionally, we introduce four evaluation criteria for assessing data valuation methods in generative models, aligned with the principles of plausibility and truthfulness. GMValuator is extensively evaluated on various datasets and generative architectures to demonstrate its effectiveness.
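A minimal sketch of similarity-based valuation in this spirit follows; crediting each training point by how closely it matches generated samples. The raw-feature distance, the credit weighting, and k=5 are assumptions for illustration, not GMValuator's actual module.

```python
# Sketch: credit each training sample by how often (and how closely) it is
# among the nearest matches of a generated sample. Distance choice and k
# are illustrative assumptions only.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def value_training_data(train_X, gen_X, k=5):
    """train_X: (n_train, d) training samples; gen_X: (n_gen, d) generated samples.
    Returns a normalized value score per training sample."""
    nn = NearestNeighbors(n_neighbors=k).fit(train_X)
    dist, idx = nn.kneighbors(gen_X)          # match each generated sample
    values = np.zeros(len(train_X))
    credit = 1.0 / (1.0 + dist)               # closer matches earn more credit
    for row_idx, row_credit in zip(idx, credit):
        values[row_idx] += row_credit
    return values / values.sum()              # normalize to a distribution
```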
Aligning Optimization Trajectories with Diffusion Models for Constrained Design Generation ; Generative models have had a profound impact on vision and language, paving the way for a new era of multimodal generative applications. While these successes have inspired researchers to explore using generative models in science and engineering to accelerate the design process and reduce the reliance on iterative optimization, challenges remain. Specifically, engineering optimization methods based on physics still outperform generative models when dealing with constrained environments where data is scarce and precision is paramount. To address these challenges, we introduce Diffusion Optimization Models DOM and Trajectory Alignment TA, a learning framework that demonstrates the efficacy of aligning the sampling trajectory of diffusion models with the optimization trajectory derived from traditional physics-based methods. This alignment ensures that the sampling process remains grounded in the underlying physical principles. Our method allows for generating feasible and high-performance designs in as few as two steps without the need for expensive preprocessing, external surrogate models, or additional labeled data. We apply our framework to structural topology optimization, a fundamental problem in mechanical design, evaluating its performance on in- and out-of-distribution configurations. Our results demonstrate that TA outperforms state-of-the-art deep generative models on in-distribution configurations and halves the inference computational cost. When coupled with a few steps of optimization, it also improves manufacturability for out-of-distribution conditions. By significantly improving performance and inference efficiency, DOM enables us to generate high-quality designs in just a few steps and guide them toward regions of high performance and manufacturability, paving the way for the widespread application of generative models in large-scale data-driven design.
Stable phantom-divide crossing in two-scalar models with matter ; We construct cosmological models with two scalar fields, which have the same structure as the ghost condensation model or the k-essence model. These models can describe a stable phantom crossing, in contrast with single-scalar tensor models, where an infinite instability occurs at the crossing of the phantom divide. We give a general formulation of the reconstruction in terms of the e-foldings N including matter, although in the previous two-scalar models, which are extensions of the scalar-tensor model, it was difficult to give a formulation of the reconstruction when matter is included. In the formulation of the reconstruction, we start with a model with some arbitrary functions and find the functions which generate the history of the expansion of the universe. We also give general arguments for the stability of the models and the reconstructed solution. The viability of a model is also investigated by comparison with the observational data.
Model Selection in High-Dimensional Misspecified Models ; Model selection is indispensable to high-dimensional sparse modeling in selecting the best set of covariates among a sequence of candidate models. Most existing work assumes implicitly that the model is correctly specified or of fixed dimensions. Yet model misspecification and high dimensionality are common in real applications. In this paper, we investigate two classical Kullback-Leibler divergence and Bayesian principles of model selection in the setting of high-dimensional misspecified models. Asymptotic expansions of these principles reveal that the effect of model misspecification is crucial and should be taken into account, leading to the generalized AIC and generalized BIC in high dimensions. With a natural choice of prior probabilities, we suggest the generalized BIC with prior probability, which involves a logarithmic factor of the dimensionality in penalizing model complexity. We further establish the consistency of the covariance contrast matrix estimator in a general setting. Our results and new method are supported by numerical studies.
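As a schematic of the criteria involved (not the paper's exact expressions, which additionally account for misspecification through covariance-contrast correction terms), the penalties can be compared as follows, with the high-dimensional variant carrying the logarithmic factor of the dimensionality mentioned above.

```latex
% Schematic comparison only; the paper's generalized AIC/BIC differ in
% detail. \hat{\ell}_n: maximized (possibly misspecified) log-likelihood,
% d: model dimension, n: sample size, p: ambient dimensionality.
\mathrm{AIC} = -2\hat{\ell}_n + 2d, \qquad
\mathrm{BIC} = -2\hat{\ell}_n + d\log n, \qquad
\mathrm{GBIC}_p \approx -2\hat{\ell}_n + d\log n + 2d\log p .
```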
On the Equivalence of Generative and Discriminative Formulations of the Sequential Dependence Model ; The sequential dependence model SDM is a popular retrieval model based on the theory of probabilistic graphical models. While it was originally introduced by Metzler and Croft as a Markov Random Field, a.k.a. a discriminative probabilistic model, in this paper we demonstrate that it is equivalent to a generative probabilistic model. To build a foundation for future retrieval models, this paper details the axiomatic underpinning of the SDM as both a discriminative and a generative probabilistic model. The only difference arises in whether model parameters are estimated in log-space or Multinomial-space. We demonstrate that parameter estimation with grid tuning negatively impacts the generative formulation, an effect that vanishes when parameters are estimated with coordinate-gradient descent. This is concerning, since empirical differences may be falsely attributed to improved models.
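For reference, the standard log-linear scoring form of the SDM (after Metzler and Croft) combines unigram, ordered-window, and unordered-window feature functions; it is the estimation space of these interpolation weights that the equivalence argument turns on.

```latex
% Standard SDM scoring function, shown for reference. f_T, f_O, f_U are
% log feature functions over query unigrams, ordered bigram windows, and
% unordered windows in document D; the lambdas are the parameters whose
% estimation (log- vs. Multinomial-space) the paper analyzes.
\mathrm{score}(Q, D) =
\lambda_T \sum_{q \in Q} f_T(q, D)
+ \lambda_O \sum_{(q_i, q_{i+1}) \in Q} f_O(q_i, q_{i+1}, D)
+ \lambda_U \sum_{(q_i, q_{i+1}) \in Q} f_U(q_i, q_{i+1}, D),
\qquad \lambda_T + \lambda_O + \lambda_U = 1 .
```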
Generalized partially linear models on Riemannian manifolds ; The generalized partially linear models on Riemannian manifolds are introduced. These models, like ordinary generalized linear models, are a generalization of partially linear models on Riemannian manifolds that allow for response variables with error distribution models other than a normal distribution. Partially linear models are particularly useful when some of the covariates of the model are elements of a Riemannian manifold, because the curvature of these spaces makes it difficult to define parametric models. The model was developed to address an interesting application, the prediction of children's garment fit based on 3D scanning of their bodies. For this reason, we focus on logistic and ordinal models and on the important and difficult case where the Riemannian manifold is the three-dimensional case of Kendall's shape space. An experimental study with a well-known 3D database is carried out to check the goodness of the procedure. Finally, it is applied to a 3D database obtained from an anthropometric survey of the Spanish child population. A comparative study with related techniques is carried out.
Neurally-Guided Procedural Models Amortized Inference for Procedural Graphics Programs using Neural Networks ; Probabilistic inference algorithms such as Sequential Monte Carlo SMC provide powerful tools for constraining procedural models in computer graphics, but they require many samples to produce desirable results. In this paper, we show how to create procedural models which learn how to satisfy constraints. We augment procedural models with neural networks which control how the model makes random choices based on the output it has generated thus far. We call such models neurally-guided procedural models. As a precomputation, we train these models to maximize the likelihood of example outputs generated via SMC. They are then used as efficient SMC importance samplers, generating high-quality results with very few samples. We evaluate our method on L-system-like models with image-based constraints. Given a desired quality threshold, neurally-guided models can generate satisfactory results up to 10x faster than unguided models.
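A minimal sketch of the core mechanism follows, assuming a fixed-size feature summary of the partial output and a categorical random choice; the network shape and training loop are illustrative, not the paper's architecture.

```python
# Sketch: a procedural model's random choice proposed by a small network
# conditioned on the partial output, trained to maximize the likelihood
# of choices recorded from SMC-accepted runs (amortized inference).
import torch
import torch.nn as nn

class GuidedChoice(nn.Module):
    def __init__(self, state_dim, n_options):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_options))

    def forward(self, partial_output_features):
        # Distribution over the model's next random choice, given what
        # has been generated so far.
        logits = self.net(partial_output_features)
        return torch.distributions.Categorical(logits=logits)

def train_step(guide, optimizer, states, choices):
    dist = guide(states)                     # batch of proposal distributions
    loss = -dist.log_prob(choices).mean()    # maximize likelihood of SMC outputs
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```

At inference time the learned distribution serves as the SMC importance proposal in place of the model's original uniform or hand-set choice probabilities.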
Mean squared displacement in a generalized Lévy walk model ; Lévy walks represent a class of stochastic models (space-time coupled continuous-time random walks) with applications ranging from laser cooling to the description of animal motion. The initial model was intended for the description of turbulent dispersion as given by Richardson's law. The existence of this Richardson regime in the original model was recently challenged in the work by T. Albers and G. Radons, Phys. Rev. Lett. 120, 104501 (2018): the mean squared displacement MSD in this model diverges, i.e. does not exist, in the regime where it presumably should reproduce Richardson's law. In the supplemental material to this work the authors present, but do not investigate in detail, a generalized model interpolating between the original one and the Drude-like models known to show no divergences. In the present work we give a detailed investigation of the ensemble MSD in this generalized model, show that the behavior of the MSD in this model is the same up to prefactors as in the original one in the domains where the MSD in the original model does exist, and investigate the conditions under which the MSD in the generalized model exists or diverges. Both ordinary and aged situations are considered.
Deep Generative Models for Reject Inference in Credit Scoring ; Credit scoring models based on accepted applications may be biased, and their consequences can have a statistical and economic impact. Reject inference is the process of attempting to infer the creditworthiness status of the rejected applications. In this research, we use deep generative models to develop two new semi-supervised Bayesian models for reject inference in credit scoring, in which we model the data-generating process to be dependent on a Gaussian mixture. The goal is to improve the classification accuracy of credit scoring models by adding rejected applications. Our proposed models infer the unknown creditworthiness of the rejected applications by exact enumeration of the two possible outcomes of the loan, default or non-default. The efficient stochastic gradient optimization technique used in deep generative models makes our models suitable for large data sets. Finally, the experiments in this research show that our proposed models perform better than classical and alternative machine learning models for reject inference in credit scoring.
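The exact-enumeration step can be sketched as follows; log_px_given_y and log_py stand in for the paper's Gaussian-mixture-based model components, which are not reproduced here.

```python
# Sketch: for a rejected application x, marginalize the generative model
# over the two unknown outcomes (default / non-default) by exact
# enumeration. The component log-densities are placeholders.
import torch

def log_marginal_rejected(x, log_px_given_y, log_py):
    """Sum out the unobserved creditworthiness label y in {0, 1}:
    log p(x) = log sum_y p(x | y) p(y)."""
    terms = torch.stack([log_px_given_y(x, y) + log_py(y) for y in (0, 1)])
    return torch.logsumexp(terms, dim=0)
```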
Improving Variational Autoencoder for Text Modelling with Timestep-Wise Regularisation ; The Variational Autoencoder VAE is a popular and powerful model applied to text modelling to generate diverse sentences. However, an issue known as posterior collapse or KL loss vanishing happens when the VAE is used in text modelling, where the approximate posterior collapses to the prior, and the model will totally ignore the latent variables and be degraded to a plain language model during text generation. Such an issue is particularly prevalent when RNN-based VAE models are employed for text modelling. In this paper, we propose a simple, generic architecture called Timestep-Wise Regularisation VAE TWR-VAE, which can effectively avoid posterior collapse and can be applied to any RNN-based VAE models. The effectiveness and versatility of our model are demonstrated in different tasks, including language modelling and dialogue response generation.
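A minimal sketch of a timestep-wise regularised loss is shown below, assuming Gaussian posteriors at every timestep and a standard-normal prior; shapes and weighting are illustrative rather than the paper's exact objective.

```python
# Sketch: impose a KL term at every RNN timestep rather than only at the
# final state, so the per-step posteriors cannot collapse to the prior
# wholesale. Standard-normal prior is an assumption for illustration.
import torch
import torch.nn.functional as F

def twr_vae_loss(recon_logits, targets, mu_t, logvar_t):
    """recon_logits: (T, B, V) decoder logits; targets: (T, B) token ids;
    mu_t, logvar_t: (T, B, z) posterior parameters at each timestep."""
    recon = F.cross_entropy(recon_logits.flatten(0, 1), targets.flatten())
    # KL(q(z_t | x_{<=t}) || N(0, I)), averaged over timesteps and batch
    kl_t = -0.5 * (1 + logvar_t - mu_t.pow(2) - logvar_t.exp()).sum(-1)
    return recon + kl_t.mean()
```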
K-essence Lagrangians of polytropic and logotropic unified dark matter and dark energy models ; We determine the k-essence Lagrangian of a relativistic barotropic fluid. The equation of state of the fluid can be specified in different manners depending on whether the pressure is expressed in terms of the energy density (model I), the rest-mass density (model II), or the pseudo rest-mass density for a complex scalar field in the Thomas-Fermi approximation (model III). In the nonrelativistic limit, these three formulations coincide. In the relativistic regime, they lead to different models that we study exhaustively. We provide general results valid for an arbitrary equation of state and show how the different models are connected to each other. For illustration, we specifically consider polytropic and logotropic dark fluids that have been proposed as unified dark matter and dark energy models. We recover the Born-Infeld action of the Chaplygin gas in models I and III and obtain the explicit expression of the reduced action of the logotropic dark fluid in models II and III. We also derive the two-fluid representation of the Chaplygin and logotropic models. Our general formalism can be applied to many other situations such as Bose-Einstein condensates with a φ^4 or more general self-interaction, dark matter superfluids, and mixed models.
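For orientation, the Born-Infeld form recovered for the Chaplygin gas can be written schematically as follows; sign and normalization conventions vary between references, so this is indicative rather than the paper's exact expression.

```latex
% Schematic Born-Infeld Lagrangian for the Chaplygin gas P = -A/\rho;
% X denotes the canonical kinetic term of the scalar field, and
% conventions (metric signature, normalization) vary across references.
P(X) = -\sqrt{A}\,\sqrt{1 - 2X},
\qquad X = \tfrac{1}{2}\, g^{\mu\nu} \partial_\mu \varphi\, \partial_\nu \varphi .
```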
Pay Attention Accuracy Versus Interpretability Tradeoff in Finetuned Diffusion Models ; The recent progress of diffusion models in terms of image quality has led to a major shift in research related to generative models. Current approaches often fine-tune pretrained foundation models using domain-specific text-to-image pairs. This approach is straightforward for X-ray image generation due to the high availability of radiology reports linked to specific images. However, current approaches hardly ever look at attention layers to verify whether the models understand what they are generating. In this paper, we discover an important tradeoff between image fidelity and interpretability in generative diffusion models. In particular, we show that fine-tuning text-to-image models with a learnable text encoder leads to a lack of interpretability of diffusion models. Finally, we demonstrate the interpretability of diffusion models by showing that keeping the language encoder frozen enables diffusion models to achieve state-of-the-art phrase grounding performance on certain diseases for a challenging multi-label segmentation task, without any additional training. Code and models will be available at https://github.com/MischaD/chestdistillation.
Explore and Exploit the Diverse Knowledge in Model Zoo for Domain Generalization ; The proliferation of pretrained models, as a result of advancements in pretraining techniques, has led to the emergence of a vast zoo of publicly available models. Effectively utilizing these resources to obtain models with robust out-of-distribution generalization capabilities for downstream tasks has become a crucial area of research. Previous research has primarily focused on identifying the most powerful models within the model zoo, neglecting to fully leverage the diverse inductive biases contained within. This paper argues that the knowledge contained in weaker models is valuable and presents a method for leveraging the diversity within the model zoo to improve out-of-distribution generalization capabilities. Specifically, we investigate the behaviors of various pretrained models across different domains of downstream tasks by characterizing the variations in their encoded representations in terms of two dimensions: diversity shift and correlation shift. This characterization enables us to propose a new algorithm for integrating diverse pretrained models, not limited to the strongest models, in order to achieve enhanced out-of-distribution generalization performance. Our proposed method demonstrates state-of-the-art empirical results on a variety of datasets, thus validating the benefits of utilizing diverse knowledge.
Research on an improved Conformer end-to-end Speech Recognition Model with R-Drop Structure ; To address the issue of poor generalization ability in end-to-end speech recognition models within deep learning, this study proposes a new Conformer-based speech recognition model called Conformer-R that incorporates the R-drop structure. This model combines the Conformer model, which has shown promising results in speech recognition, with the R-drop structure. By doing so, the model is able to effectively model both local and global speech information while also reducing overfitting through the use of the R-drop structure. This enhances the model's ability to generalize and improves overall recognition efficiency. The model was first pretrained on the AISHELL-1 and WenetSpeech datasets for general domain adaptation, and subsequently fine-tuned on computer-related audio data. Comparison tests with classic models such as LAS and Wenet were performed on the same test set, demonstrating the Conformer-R model's ability to effectively improve generalization.
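The R-drop regularizer itself is a well-documented technique and can be sketched as below: run the same batch through the network twice (two dropout masks) and penalize the symmetric KL between the two output distributions. The weight alpha is an illustrative value, not the paper's tuned setting.

```python
# Sketch of the R-drop regularizer: two stochastic forward passes,
# cross-entropy on both, plus a symmetric KL consistency term.
import torch
import torch.nn.functional as F

def r_drop_loss(model, x, targets, alpha=0.5):
    logits1, logits2 = model(x), model(x)        # two dropout-perturbed passes
    ce = 0.5 * (F.cross_entropy(logits1, targets) +
                F.cross_entropy(logits2, targets))
    p1, p2 = F.log_softmax(logits1, -1), F.log_softmax(logits2, -1)
    kl = 0.5 * (F.kl_div(p1, p2, log_target=True, reduction="batchmean") +
                F.kl_div(p2, p1, log_target=True, reduction="batchmean"))
    return ce + alpha * kl
```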
Image Generation and Translation with Disentangled Representations ; Generative models have made significant progress in the tasks of modeling complex data distributions such as natural images. The introduction of Generative Adversarial Networks GANs and autoencoders led to the possibility of training on big data sets in an unsupervised manner. However, for many generative models it is not possible to specify what kind of image should be generated, and it is not possible to translate existing images into new images of similar domains. Furthermore, models that can perform image-to-image translation often need distinct models for each domain, making it hard to scale these systems to multiple-domain image-to-image translation. We introduce a model that can do both, controllable image generation and image-to-image translation between multiple domains. We split our image representation into two parts encoding unstructured and structured information respectively. The latter is designed in a disentangled manner, so that different parts encode different image characteristics. We train an encoder to encode images into these representations and use a small amount of labeled data to specify what kind of information should be encoded in the disentangled part. A generator is trained to generate images from these representations using the characteristics provided by the disentangled part of the representation. Through this we can control what kind of images the generator generates, translate images between different domains, and even learn unknown data-generating factors while only using one single model.
Improving Model Compatibility of Generative Adversarial Networks by Boundary Calibration ; Generative Adversarial Networks GANs are a powerful family of models that learn an underlying distribution to generate synthetic data. Many existing studies of GANs focus on improving the realness of the generated image data for visual applications, and few of them concern improving the quality of the generated data for training other classifiers, a task known as the model compatibility problem. As a consequence, existing GANs often prefer generating 'easier' synthetic data that are far from the boundaries of the classifiers, and refrain from generating near-boundary data, which are known to play an important role in training the classifiers. To improve GANs in terms of model compatibility, we propose Boundary-Calibration GANs BCGANs, which leverage the boundary information from a set of pretrained classifiers using the original data. In particular, we introduce an auxiliary boundary-calibration loss BC-loss into the generator of GAN to match the statistics between the posterior distributions of original data and generated data with respect to the boundaries of the pretrained classifiers. The BC-loss is provably unbiased and can be easily coupled with different GAN variants to improve their model compatibility. Experimental results demonstrate that BCGANs not only generate realistic images like original GANs but also achieve superior model compatibility compared to the original GANs.
Fragment-based molecular generative model with high generalization ability and synthetic accessibility ; Deep generative models are attracting great attention for molecular design with desired properties. Most existing models generate molecules by sequentially adding atoms. This often yields generated molecules with weaker correlation to the target properties and low synthetic accessibility. Molecular fragments such as functional groups are more closely related to molecular properties and synthetic accessibility than atoms. Here, we propose a fragment-based molecular generative model which designs new molecules with target properties by sequentially adding molecular fragments to any given starting molecule. A key feature of our model is its high generalization ability in terms of property control and fragment types. The former becomes possible by learning the contribution of individual fragments to the target properties in an autoregressive manner. For the latter, we used a deep neural network that predicts the bonding probability of two molecules from the embedding vectors of the two molecules as input. The high synthetic accessibility of the generated molecules is implicitly considered while preparing the fragment library with the BRICS decomposition method. We show that the model can generate molecules with simultaneous control of multiple target properties at a high success rate. It also works equally well with unseen fragments even in the property range where the training data is rare, verifying its high generalization ability. As a practical application, we demonstrate that the model can generate potential inhibitors with high binding affinities against the 3CL protease of SARS-CoV-2 in terms of docking score.
Plug and Play Counterfactual Text Generation for Model Robustness ; Generating counterfactual test cases is an important backbone for testing NLP models and making them as robust and reliable as traditional software. In generating the test cases, a desired property is the ability to control the test-case generation in a flexible manner to test for a large variety of failure cases and to explain and repair them in a targeted manner. In this direction, significant progress has been made in prior works by manually writing rules for generating controlled counterfactuals. However, this approach requires heavy manual supervision and lacks the flexibility to easily introduce new controls. Motivated by the impressive flexibility of the plug-and-play approach of PPLM, we propose bringing the framework of plug-and-play to the counterfactual test-case generation task. We introduce CASPer, a plug-and-play counterfactual generation framework to generate test cases that satisfy goal attributes on demand. Our plug-and-play model can steer the test-case generation process given any attribute model without requiring attribute-specific training of the model. In experiments, we show that CASPer effectively generates counterfactual text that follows the steering provided by an attribute model while also being fluent, diverse and preserving the original content. We also show that the generated counterfactuals from CASPer can be used for augmenting the training data and thereby fixing and making the test model more robust.
Robust Preference Learning for Storytelling via Contrastive Reinforcement Learning ; Controlled automated story generation seeks to generate natural language stories satisfying constraints from natural language critiques or preferences. Existing methods to control for story preference utilize prompt engineering, which is labor-intensive and often inconsistent. They may also use logit-manipulation methods, which require annotated datasets to exist for the desired attributes. To address these issues, we first train a contrastive bi-encoder model to align stories with corresponding human critiques, named CARP, building a general-purpose preference model. This is subsequently used as a reward function to fine-tune a generative language model via reinforcement learning. However, simply fine-tuning a generative language model with a contrastive reward model does not always reliably result in a story generation system capable of generating stories that meet user preferences. To increase story generation robustness, we further fine-tune the contrastive reward model using a prompt-learning technique. A human participant study is then conducted comparing generations from our full system, ablations, and two baselines. We show that the full fine-tuning pipeline results in a story generator preferred over an LLM 20x as large, as well as over logit-based methods. This motivates the use of contrastive learning for general-purpose human preference modeling.
ChatGPT or Human Detect and Explain. Explaining Decisions of Machine Learning Model for Detecting Short ChatGPT-generated Text ; ChatGPT has the ability to generate grammatically flawless and seemingly human replies to different types of questions from various domains. The number of its users and of its applications is growing at an unprecedented rate. Unfortunately, use and abuse come hand in hand. In this paper, we study whether a machine learning model can be effectively trained to accurately distinguish between original human text and seemingly human (that is, ChatGPT-generated) text, especially when this text is short. Furthermore, we employ an explainable artificial intelligence framework to gain insight into the reasoning behind the model trained to differentiate between ChatGPT-generated and human-generated text. The goal is to analyze the model's decisions and determine if any specific patterns or characteristics can be identified. Our study focuses on short online reviews, conducting two experiments comparing human-generated and ChatGPT-generated text. The first experiment involves ChatGPT text generated from custom queries, while the second experiment involves text generated by rephrasing original human-generated reviews. We fine-tune a Transformer-based model and use it to make predictions, which are then explained using SHAP. We compare our model with a perplexity-score-based approach and find that disambiguation between human- and ChatGPT-generated reviews is more challenging for the ML model when using rephrased text. However, our proposed approach still achieves an accuracy of 79%. Using explainability, we observe that ChatGPT's writing is polite, without specific details, uses fancy and atypical vocabulary, is impersonal, and typically does not express feelings.
StyleAvatar3D Leveraging Image-Text Diffusion Models for High-Fidelity 3D Avatar Generation ; The recent advancements in image-text diffusion models have stimulated research interest in large-scale 3D generative models. Nevertheless, the limited availability of diverse 3D resources presents significant challenges to learning. In this paper, we present a novel method for generating high-quality, stylized 3D avatars that utilizes pretrained image-text diffusion models for data generation and a Generative Adversarial Network GAN-based 3D generation network for training. Our method leverages the comprehensive priors of appearance and geometry offered by image-text diffusion models to generate multi-view images of avatars in various styles. During data generation, we employ poses extracted from existing 3D models to guide the generation of multi-view images. To address the misalignment between poses and images in the data, we investigate view-specific prompts and develop a coarse-to-fine discriminator for GAN training. We also delve into attribute-related prompts to increase the diversity of the generated avatars. Additionally, we develop a latent diffusion model within the style space of StyleGAN to enable the generation of avatars based on image inputs. Our approach demonstrates superior performance over current state-of-the-art methods in terms of visual quality and diversity of the produced avatars.
Bias Assessment and Mitigation in LLM-based Code Generation ; Utilizing state-of-the-art Large Language Models LLMs, automatic code generation models play a pivotal role in enhancing the productivity and efficiency of software development coding procedures. As the adoption of LLMs becomes more widespread in software coding ecosystems, a pressing issue has emerged: does the generated code contain social biases, such as those related to age, gender, and race? This issue concerns the integrity, fairness, and ethical foundation of software applications that depend on the code generated by these models, yet it is underexplored in the literature. This paper presents a novel bias assessment framework that is specifically designed for code generation tasks. Based on this framework, we conduct an extensive evaluation of the bias of nine state-of-the-art LLM-based code generation models. Our findings reveal that, first, 31.45% to 79.93% of the code functions generated by our evaluated code generation models are biased, and the functionality of 9.68% to 37.37% of the code functions is affected by the bias, which means that biases not only exist in code generation models but, in some cases, directly affect the functionality of the generated code, posing risks of unintended and possibly harmful software behaviors. To mitigate bias from code generation models, we propose three mitigation strategies, which can decrease the biased code ratio to a very low level of 0.4% to 4.57%.
Multiscale sequence modeling with a learned dictionary ; We propose a generalization of neural network sequence models. Instead of predicting one symbol at a time, our multiscale model makes predictions over multiple, potentially overlapping multi-symbol tokens. A variation of the byte-pair encoding BPE compression algorithm is used to learn the dictionary of tokens that the model is trained with. When applied to language modelling, our model has the flexibility of character-level models while maintaining many of the performance benefits of word-level models. Our experiments show that this model performs better than a regular LSTM on language modeling tasks, especially for smaller models.
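The dictionary-learning step builds on BPE, whose core merge loop is easy to sketch; real BPE implementations differ in tokenization and tie-breaking details, so this is illustrative only.

```python
# Minimal sketch of learning a BPE-style token dictionary: repeatedly
# merge the most frequent adjacent pair of symbols across the corpus.
from collections import Counter

def learn_bpe(corpus, num_merges):
    seqs = [list(word) for word in corpus]       # start from characters
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for seq in seqs:
            pairs.update(zip(seq, seq[1:]))
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]
        merges.append((a, b))
        new_seqs = []
        for seq in seqs:                         # replace each (a, b) with "ab"
            out, i = [], 0
            while i < len(seq):
                if i + 1 < len(seq) and seq[i] == a and seq[i + 1] == b:
                    out.append(a + b); i += 2
                else:
                    out.append(seq[i]); i += 1
            new_seqs.append(out)
        seqs = new_seqs
    return merges
```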
Model Complexity of Deep Learning A Survey ; Model complexity is a fundamental problem in deep learning. In this paper, we conduct a systematic overview of the latest studies on model complexity in deep learning. The model complexity of deep learning can be categorized into expressive capacity and effective model complexity. We review the existing studies on those two categories along four important factors, including model framework, model size, optimization process and data complexity. We also discuss the applications of deep learning model complexity, including understanding model generalization, model optimization, and model selection and design. We conclude by proposing several interesting future directions.
Rethinking the Knowledge Distillation From the Perspective of Model Calibration ; Recent years have witnessed dramatic improvements in knowledge distillation, which can generate a compact student model for better efficiency while retaining the model effectiveness of the teacher model. Previous studies find that more accurate teachers do not necessarily make for better teachers due to a mismatch of abilities. In this paper, we aim to analyze the phenomenon from the perspective of model calibration. We find that a larger teacher model may be too overconfident, so the student model cannot effectively imitate it. However, after simple model calibration of the teacher model, the size of the teacher model has a positive correlation with the performance of the student model.
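One simple way to realize "calibrate the teacher, then distill" is classic temperature scaling followed by a standard KD loss; the combination shown here (scaling teacher logits by both the calibration and distillation temperatures) is an illustrative sketch, not necessarily the paper's exact recipe.

```python
# Sketch: fit a single temperature on held-out teacher logits (Guo et
# al.'s temperature scaling), then distill from the calibrated teacher.
import torch
import torch.nn.functional as F

def fit_temperature(val_logits, val_labels, steps=200, lr=0.01):
    """One scalar T > 0, optimized to minimize NLL on held-out data."""
    log_t = torch.zeros(1, requires_grad=True)
    opt = torch.optim.Adam([log_t], lr=lr)
    for _ in range(steps):
        loss = F.cross_entropy(val_logits / log_t.exp(), val_labels)
        opt.zero_grad(); loss.backward(); opt.step()
    return log_t.exp().item()

def kd_loss(student_logits, teacher_logits, T_calib, T_kd=4.0):
    # Distill from the calibrated (softened) teacher distribution.
    teacher = F.softmax(teacher_logits / (T_calib * T_kd), dim=-1)
    student = F.log_softmax(student_logits / T_kd, dim=-1)
    return F.kl_div(student, teacher, reduction="batchmean") * T_kd ** 2
```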
The matrix model for dessins d'enfants ; We present the matrix models that are the generating functions for branched covers of the complex projective line ramified over 0, 1, and ∞ (Grothendieck's dessins d'enfants) of fixed genus, degree, and ramification profile at infinity. For general ramifications at other points, the model is the two-logarithm matrix model with the external field studied previously by one of the authors (L.Ch.) and K. Palamarchuk. It lies in the class of the generalised Kontsevich models GKM, thus being the Kadomtsev-Petviashvili KP hierarchy tau-function, and, upon a shift of times, this model is equivalent to a Hermitian one-matrix model with a general potential whose coefficients are related to the KP times by a Miwa-type transformation. The original model therefore enjoys a topological recursion and can be solved in terms of shifted moments of the standard Hermitian one-matrix model at all genera of the topological expansion. We also derive the matrix model for clean Belyi morphisms, which turns out to be the Kontsevich-Penner model introduced by the authors and Yu. Makeenko. Its partition function is also a KP hierarchy tau-function, and this model is in turn equivalent to a Hermitian one-matrix model with a general potential. Finally, we prove that the generating function for general two-profile Belyi morphisms is a GKM, thus proving that it is also a KP hierarchy tau-function in proper times.
On the Discrepancy between Density Estimation and Sequence Generation ; Many sequence-to-sequence generation tasks, including machine translation and text-to-speech, can be posed as estimating the density p(y|x) of the output y given the input x. Given this interpretation, it is natural to evaluate sequence-to-sequence models using conditional log-likelihood on a test set. However, the goal of sequence-to-sequence generation (or structured prediction) is to find the best output ŷ given an input x, and each task has its own downstream metric R that scores a model output by comparing it against a set of references y*, i.e., R(ŷ, y*; x). While we hope that a model that excels in density estimation also performs well on the downstream metric, the exact correlation has not been studied for sequence generation tasks. In this paper, by comparing several density estimators on five machine translation tasks, we find that the correlation between rankings of models based on log-likelihood and BLEU varies significantly depending on the range of the model families being compared. First, log-likelihood is highly correlated with BLEU when we consider models within the same family (e.g. autoregressive models, or latent variable models with the same parameterization of the prior). However, we observe no correlation between rankings of models across different families: (1) among non-autoregressive latent variable models, a flexible prior distribution is better at density estimation but gives worse generation quality than a simple prior, and (2) autoregressive models offer the best translation performance overall, while latent variable models with a normalizing flow prior give the highest held-out log-likelihood across all datasets. Therefore, we recommend using a simple prior for the latent variable non-autoregressive model when fast generation speed is desired.
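The ranking comparison at the heart of the study reduces to a rank correlation between two per-model score lists; a small sketch with dummy values follows.

```python
# Sketch of the ranking comparison: correlate per-model held-out
# log-likelihoods with per-model BLEU scores. Values are dummies.
from scipy.stats import spearmanr

log_likelihoods = [-50.2, -48.9, -47.5, -51.0]   # hypothetical per-model scores
bleu_scores     = [24.1, 25.3, 26.0, 23.2]

rho, pval = spearmanr(log_likelihoods, bleu_scores)
print(f"Spearman rho = {rho:.2f} (p = {pval:.3f})")
```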
Relieve the H0 tension with a new coupled generalized three-form dark energy model ; In this work we propose a new coupled generalized three-form dark energy model, in which dark energy is represented by a three-form field and the other components are represented by ideal fluids. We first perform a dynamical analysis on the new model and obtain four fixed points, including a saddle point representing a radiation-dominated Universe, a saddle point representing a matter-dominated Universe, and two attractors representing two dark-energy-dominated Universes. We then use observational data, including cosmic microwave background CMB data, baryon acoustic oscillation BAO data, and Type Ia supernova SN Ia data, to constrain the model parameters of the coupled generalized three-form dark energy model. For comparison, we also consider the coupled three-form dark energy model, the generalized three-form dark energy model, and the ΛCDM model; we find that the coupled generalized three-form dark energy model is the only model that can reduce the H0 tension to a more acceptable level, with H0 = 70.1 +1.5/-1.4 km/s/Mpc, which is consistent with R19 at the 2.0σ confidence level. We also investigate the best-fit dynamical behavior of the coupled generalized three-form dark energy model and show that our model is equivalent to a quintom dark energy model, in which dark energy, at early epochs, behaves like some form of early dark energy with a small positive equation of state.
A Tree Adjoining Grammar Representation for Models Of Stochastic Dynamical Systems ; Model structure and complexity selection remains a challenging problem in system identification, especially for parametric nonlinear models. Many Evolutionary Algorithm EA based methods have been proposed in the literature for estimating model structure and complexity. In most cases, the proposed methods are devised for estimating structure and complexity within a specified model class, and hence these methods do not extend to other model structures without significant changes. In this paper, we propose a Tree Adjoining Grammar TAG for stochastic parametric models. TAGs can be used to generate models in an EA framework while imposing desirable structural constraints and incorporating prior knowledge. The proposed TAG can systematically generate models ranging from FIRs to polynomial NARMAX models. Furthermore, we demonstrate that TAGs can be easily extended to more general model classes, such as the nonlinear Box-Jenkins model class, enabling the realization of flexible and automatic model structure and complexity selection via EA.
Maximum Entropy Model Rollouts Fast Model Based Policy Optimization without Compounding Errors ; Model usage is the central challenge of model-based reinforcement learning. Although dynamics models based on deep neural networks provide good generalization for single-step prediction, this ability is overexploited when they are used to predict long-horizon trajectories, due to compounding errors. In this work, we propose a Dyna-style model-based reinforcement learning algorithm, which we call Maximum Entropy Model Rollouts MEMR. To eliminate the compounding errors, we only use our model to generate single-step rollouts. Furthermore, we propose to generate diverse model rollouts by non-uniform sampling of the environment states, such that the entropy of the model rollouts is maximized. We mathematically derive the maximum entropy sampling criterion for one data case under a Gaussian prior. To satisfy this criterion, we propose to utilize a prioritized experience replay. Our preliminary experiments in challenging locomotion benchmarks show that our approach achieves the same sample efficiency as the best model-based algorithms, matches the asymptotic performance of the best model-free algorithms, and significantly reduces the computation requirements of other model-based methods.
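The single-step rollout generation can be sketched as below; the stored priorities stand in for the entropy-based criterion, and the dynamics model, policy, and replay structures are illustrative placeholders.

```python
# Sketch of MEMR-style rollout generation: sample starting states
# non-uniformly from the replay buffer (prioritized, as a stand-in for
# the maximum-entropy criterion) and expand each by exactly one model
# step, so model errors never compound.
import numpy as np

def generate_rollouts(replay_states, priorities, dynamics_model, policy, n):
    """replay_states: (N, state_dim) array; priorities: (N,) positive weights."""
    probs = priorities / priorities.sum()            # prioritized sampling
    idx = np.random.choice(len(replay_states), size=n, p=probs)
    rollouts = []
    for s in replay_states[idx]:
        a = policy(s)
        s_next, r = dynamics_model(s, a)             # one step only
        rollouts.append((s, a, r, s_next))
    return rollouts
```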
Model Extraction and Defenses on Generative Adversarial Networks ; Model extraction attacks aim to duplicate a machine learning model through query access to a target model. Early studies mainly focus on discriminative models. Despite the success, model extraction attacks against generative models are less well explored. In this paper, we systematically study the feasibility of model extraction attacks against generative adversarial networks GANs. Specifically, we first define accuracy and fidelity on model extraction attacks against GANs. Then we study model extraction attacks against GANs from the perspective of accuracy extraction and fidelity extraction, according to the adversary's goals and background knowledge. We further conduct a case study where an adversary can transfer knowledge of the extracted model, which steals a state-of-the-art GAN trained with more than 3 million images, to new domains to broaden the scope of applications of model extraction attacks. Finally, we propose effective defense techniques to safeguard GANs, considering a trade-off between the utility and security of GAN models.
Timed Model-Based Mutation Operators for Simulink Models ; Model-based mutation analysis is a recent research area, and real-time system testing can benefit from using model mutants. Model-based mutation testing MBMT is a particular branch of model-based testing. It generates faulty versions of a model using mutation operators to evaluate and improve test cases. Mutation testing is an effective way to ensure software correctness and has been applied to various application areas. Simulink is a vital modeling language for real-time systems. This paper introduces Simulink model mutation analysis to improve Model-in-the-Loop MIL testing. We propose a set of Simulink mutation operators based on AUTOSAR, which reflects the temporal correctness when a Simulink model is mapped to Operating System tasks. We implement a mutation framework that generates mutants for implicit-clock Simulink models. Finally, we demonstrate how this framework generates mutants to reveal task interference issues in the simulation. Our work integrates the Simulink model with timed systems to better support mutation testing automation.
FlexibleSUSY A spectrum generator generator for supersymmetric models ; We introduce FlexibleSUSY, a Mathematica and C++ package, which generates a fast, precise C++ spectrum generator for any SUSY model specified by the user. The generated code is designed with both speed and modularity in mind, making it easy to adapt and extend with new features. The model is specified by supplying the superpotential, gauge structure and particle content in a SARAH model file; specific boundary conditions, e.g. at the GUT, weak or intermediate scales, are defined in a separate FlexibleSUSY model file. From these model files, FlexibleSUSY generates C++ code for self-energies, tadpole corrections, renormalization group equations RGEs and electroweak symmetry breaking EWSB conditions, and combines them with numerical routines for solving the RGEs and EWSB conditions simultaneously. The resulting spectrum generator is then able to solve for the spectrum of the model, including loop-corrected pole masses, consistent with user-specified boundary conditions. The modular structure of the generated code allows for individual components to be replaced with an alternative if available. FlexibleSUSY has been carefully designed to grow as alternative solvers and calculators are added. Predefined models include the MSSM, NMSSM, E6SSM, USSM, R-symmetric models and models with right-handed neutrinos.
Generative Models for Network Neuroscience Prospects and Promise ; Network neuroscience is the emerging discipline concerned with investigating the complex patterns of interconnections found in neural systems, and with identifying principles with which to understand them. Within this discipline, one particularly powerful approach is network generative modeling, in which wiring rules are algorithmically implemented to produce synthetic network architectures with the same properties as observed in empirical network data. Successful models can highlight the principles by which a network is organized and potentially uncover the mechanisms by which it grows and develops. Here we review the prospects and promise of generative models for network neuroscience. We begin with a primer on network generative models, with a discussion of compressibility and predictability, utility in intuiting mechanisms, and a short history of their use in network science broadly. We then discuss generative models in practice and application, paying particular attention to the critical need for cross-validation. Next, we review generative models of biological neural networks, both at the cellular and large-scale level, and across a variety of species including C. elegans, Drosophila, mouse, rat, cat, macaque, and human. We offer a careful treatment of a few relevant distinctions, including differences between generative models and null models, sufficiency and redundancy, inferring and claiming mechanism, and functional and structural connectivity. We close with a discussion of future directions, outlining exciting frontiers both in empirical data collection efforts as well as in method and theory development that, together, further the utility of the generative network modeling approach for network neuroscience.
GAN-Leaks A Taxonomy of Membership Inference Attacks against Generative Models ; Deep learning has achieved overwhelming success, spanning from discriminative models to generative models. In particular, deep generative models have facilitated a new level of performance in a myriad of areas, ranging from media manipulation to sanitized dataset generation. Despite the great success, the potential risks of privacy breach caused by generative models have not been analyzed systematically. In this paper, we focus on membership inference attacks against deep generative models that reveal information about the training data used for victim models. Specifically, we present the first taxonomy of membership inference attacks, encompassing not only existing attacks but also our novel ones. In addition, we propose the first generic attack model that can be instantiated in a large range of settings and is applicable to various kinds of deep generative models. Moreover, we provide a theoretically grounded attack calibration technique, which consistently boosts the attack performance in all cases, across different attack settings, data modalities, and training configurations. We complement the systematic analysis of attack performance with a comprehensive experimental study that investigates the effectiveness of various attacks w.r.t. model type and training configurations, over three diverse application scenarios (i.e., images, medical data, and location data).
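A representative full black-box attack of the kind such taxonomies cover can be sketched as a reconstruction-distance score; the generator interface (z_dim, sample) is a hypothetical placeholder, and threshold selection is left to calibration.

```python
# Sketch of a distance-based black-box membership inference attack:
# members of the training set tend to be reconstructed more closely by
# the generator's samples than non-members.
import numpy as np

def membership_score(query, generator, n_samples=10000):
    """Higher score = more likely the query was in the training data.
    generator.z_dim and generator.sample are assumed interfaces."""
    z = np.random.randn(n_samples, generator.z_dim)
    synth = generator.sample(z)                        # (n_samples, d)
    dists = np.linalg.norm(synth - query[None, :], axis=1)
    return -dists.min()                                # negative reconstruction distance
```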
Adversarial Attacks Against Deep Generative Models on Data A Survey ; Deep generative models have gained much attention given their ability to generate data for applications as varied as healthcare, financial technology, surveillance, and many more, the most popular models being generative adversarial networks and variational autoencoders. Yet, as with all machine learning models, there is ever the concern over security breaches and privacy leaks, and deep generative models are no exception. These models have advanced so rapidly in recent years that work on their security is still in its infancy. In an attempt to audit the current and future threats against these models, and to provide a roadmap for defense preparations in the short term, we prepared this comprehensive and specialized survey on the security and privacy preservation of GANs and VAEs. Our focus is on the inner connection between attacks and model architectures and, more specifically, on five components of deep generative models: the training data, the latent code, the generators/decoders of GANs/VAEs, the discriminators/encoders of GANs/VAEs, and the generated data. For each model, component and attack, we review the current research progress and identify the key challenges. The paper concludes with a discussion of possible future attacks and research directions in the field.
Are You Robert or RoBERTa Deceiving Online Authorship Attribution Models Using Neural Text Generators ; Recently, there has been a rise in the development of powerful pretrained natural language models, including GPT-2, Grover, and XLM. These models have shown state-of-the-art capabilities towards a variety of different NLP tasks, including question answering, content summarisation, and text generation. Alongside this, there have been many studies focused on online authorship attribution AA, that is, the use of models to identify the authors of online texts. Given the power of natural language models in generating convincing texts, this paper examines the degree to which these language models can generate texts capable of deceiving online AA models. Experimenting with both blog and Twitter data, we utilise GPT-2 language models to generate texts using the existing posts of online users. We then examine whether these AI-based text generators are capable of mimicking authorial style to such a degree that they can deceive typical AA models. From this, we find that current AI-based text generators are able to successfully mimic authorship, showing capabilities towards this on both datasets. Our findings, in turn, highlight the current capacity of powerful natural language models to generate original online posts capable of mimicking authorial style sufficiently to deceive popular AA methods; a key finding given the proposed role of AA in real-world applications such as spam detection and forensic investigation.
Diversity vs. Recognizability Human-like generalization in one-shot generative models ; Robust generalization to new concepts has long remained a distinctive feature of human intelligence. However, recent progress in deep generative models has now led to neural architectures capable of synthesizing novel instances of unknown visual concepts from a single training example. Yet, a more precise comparison between these models and humans is not possible because existing performance metrics for generative models (i.e., FID, IS, likelihood) are not appropriate for the one-shot generation scenario. Here, we propose a new framework to evaluate one-shot generative models along two axes: sample recognizability vs. diversity (i.e., intra-class variability). Using this framework, we perform a systematic evaluation of representative one-shot generative models on the Omniglot handwritten dataset. We first show that GAN-like and VAE-like models fall on opposite ends of the diversity-recognizability space. Extensive analyses of the effect of key model parameters further revealed that spatial attention and context integration have a linear contribution to the diversity-recognizability trade-off. In contrast, disentanglement transports the model along a parabolic curve that could be used to maximize recognizability. Using the diversity-recognizability framework, we were able to identify models and parameters that closely approximate human data.
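The two axes can be sketched as simple metrics; the choice of feature space, distance, and classifier here are illustrative assumptions, not the paper's exact instantiation.

```python
# Sketch of the two axes: recognizability as a held-out classifier's
# accuracy on one-shot generations, diversity as mean intra-class
# pairwise distance in some embedding space.
import numpy as np

def recognizability(classifier, samples, target_class):
    """Fraction of generated samples a held-out classifier assigns correctly."""
    preds = classifier.predict(samples)
    return float(np.mean(preds == target_class))

def diversity(features):
    """features: (n, d) embeddings of samples generated for one concept."""
    diffs = features[:, None, :] - features[None, :, :]
    d = np.linalg.norm(diffs, axis=-1)          # (n, n) pairwise distances
    n = len(features)
    return d.sum() / (n * (n - 1))              # mean off-diagonal distance
```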
Generalizing to new geometries with Geometry-Aware Autoregressive Models GAAMs for fast calorimeter simulation ; Generation of simulated detector response to collision products is crucial to data analysis in particle physics, but computationally very expensive. One subdetector, the calorimeter, dominates the computational time due to the high granularity of its cells and the complexity of the interactions. Generative models can provide more rapid sample production, but currently require significant effort to optimize performance for specific detector geometries, often requiring many models to describe the varying cell sizes and arrangements, without the ability to generalize to other geometries. We develop a geometry-aware autoregressive model, which learns how the calorimeter response varies with geometry, and is capable of generating simulated responses to unseen geometries without additional training. The geometry-aware model outperforms a baseline unaware model by over 50% in several metrics, such as the Wasserstein distance between the generated and the true distributions of key quantities which summarize the simulated response. A single geometry-aware model could replace the hundreds of generative models currently designed for calorimeter simulation by physicists analyzing data collected at the Large Hadron Collider. For the study of future detectors, such a foundational model will be a crucial tool, dramatically reducing the large upfront investment usually needed to develop generative calorimeter models.
Learning Joint 2D 3D Diffusion Models for Complete Molecule Generation ; Designing new molecules is essential for drug discovery and material science. Recently, deep generative models that aim to model molecule distributions have made promising progress in narrowing down the chemical research space and generating high-fidelity molecules. However, current generative models only focus on modeling either 2D bonding graphs or 3D geometries, which are two complementary descriptors for molecules. The lack of the ability to jointly model both limits the improvement of generation quality and further downstream applications. In this paper, we propose a new joint 2D and 3D diffusion model JODO that generates complete molecules with atom types, formal charges, bond information, and 3D coordinates. To capture the correlation between molecular graphs and geometries in the diffusion process, we develop a Diffusion Graph Transformer to parameterize the data prediction model that recovers the original data from noisy data. The Diffusion Graph Transformer interacts node and edge representations based on our relational attention mechanism, while simultaneously propagating and updating scalar features and geometric vectors. Our model can also be extended for inverse molecular design targeting single or multiple quantum properties. In our comprehensive evaluation pipeline for unconditional joint generation, the experimental results show that JODO remarkably outperforms the baselines on the QM9 and GEOM-Drugs datasets. Furthermore, our model excels in few-step fast sampling, as well as in inverse molecule design and molecular graph generation. Our code is available at https://github.com/GRAPH-0/JODO.
Asking Questions the Human Way Scalable Question-Answer Generation from Text Corpus ; The ability to ask questions is important in both human and machine intelligence. Learning to ask questions helps knowledge acquisition, improves question-answering and machine reading comprehension tasks, and helps a chatbot keep the conversation flowing with a human. Existing question generation models are ineffective at generating a large amount of high-quality question-answer pairs from unstructured text, since, given an answer and an input passage, question generation is inherently a one-to-many mapping. In this paper, we propose Answer-Clue-Style-aware Question Generation ACS-QG, which aims at automatically generating high-quality and diverse question-answer pairs from unlabeled text corpus at scale by imitating the way a human asks questions. Our system consists of (i) an information extractor, which samples from the text multiple types of assistive information to guide question generation; (ii) neural question generators, which generate diverse and controllable questions, leveraging the extracted assistive information; and (iii) a neural quality controller, which removes low-quality generated data based on text entailment. We compare our question generation models with existing approaches and resort to voluntary human evaluation to assess the quality of the generated question-answer pairs. The evaluation results suggest that our system dramatically outperforms state-of-the-art neural question generation models in terms of generation quality, while being scalable in the meantime. With models trained on a relatively smaller amount of data, we can generate 2.8 million quality-assured question-answer pairs from a million sentences found in Wikipedia.
Unpaired Multi-Domain Image Generation via Regularized Conditional GANs ; In this paper, we study the problem of multi-domain image generation, the goal of which is to generate pairs of corresponding images from different domains. With the recent development in generative models, image generation has achieved great progress and has been applied to various computer vision tasks. However, multi-domain image generation may not achieve the desired performance due to the difficulty of learning the correspondence between images of different domains, especially when the information of paired samples is not given. To tackle this problem, we propose Regularized Conditional GAN RegCGAN, which is capable of learning to generate corresponding images in the absence of paired training data. RegCGAN is based on the conditional GAN, and we introduce two regularizers to guide the model to learn the corresponding semantics of different domains. We evaluate the proposed model on several tasks for which paired training data is not given, including the generation of edges and photos, the generation of faces with different attributes, etc. The experimental results show that our model can successfully generate corresponding images for all these tasks, while outperforming the baseline methods. We also introduce an approach for applying RegCGAN to unsupervised domain adaptation.
Text2Action Generative Adversarial Synthesis from Language to Action ; In this paper, we propose a generative model which learns the relationship between language and human action in order to generate a human action sequence given a sentence describing human behavior. The proposed generative model is a generative adversarial network GAN, which is based on the sequence-to-sequence SEQ2SEQ model. Using the proposed generative network, we can synthesize various actions for a robot or a virtual agent using a text encoder recurrent neural network RNN and an action decoder RNN. The proposed generative network is trained from 29,770 pairs of actions and sentence annotations extracted from MSR-Video-to-Text MSR-VTT, a large-scale video dataset. We demonstrate that the network can generate human-like actions which can be transferred to a Baxter robot, such that the robot performs an action based on a provided sentence. Results show that the proposed generative network correctly models the relationship between language and action and can generate a diverse set of actions from the same sentence.
DP-GAN: Diversity-Promoting Generative Adversarial Network for Generating Informative and Diversified Text ; Existing text generation methods tend to produce repeated and boring expressions. To tackle this problem, we propose a new text generation model, called the Diversity-Promoting Generative Adversarial Network (DP-GAN). The proposed model assigns low reward to repeatedly generated text and high reward to novel and fluent text, encouraging the generator to produce diverse and informative text. Moreover, we propose a novel language-model-based discriminator, which can better distinguish novel text from repeated text without the saturation problem of existing classifier-based discriminators. The experimental results on review generation and dialogue generation tasks demonstrate that our model can generate substantially more diverse and informative text than existing baselines. The code is available at https://github.com/lancopku/DPGAN.
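The key idea, low reward for repeated text and high reward for novel fluent text, can be sketched with a language-model discriminator whose per-token negative log-likelihood serves as the novelty signal. This exact reward form is an assumption for illustration; the paper's reward may differ in detail.

    import torch

    def novelty_reward(discriminator_lm, token_ids):
        """Sketch of a DP-GAN-style reward: the discriminator is a language model,
        and text it finds too predictable (e.g., repeated phrases) earns low reward."""
        with torch.no_grad():
            logits = discriminator_lm(token_ids[:, :-1])   # predict the next tokens
            logp = torch.log_softmax(logits, dim=-1)
            token_logp = logp.gather(-1, token_ids[:, 1:].unsqueeze(-1)).squeeze(-1)
        # High negative log-likelihood = surprising, novel text = high reward.
        return (-token_logp).mean(dim=-1)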
Personalized Patent Claim Generation and Measurement ; This work-in-progress paper proposes a framework to generate and measure personalized patent claims. The objective is to help inventors conceive better inventions by learning from relevant inventors. Patent claim generation is a way of augmented inventing for inventors. Such patent claim generation leverages recent transfer learning in the Deep Learning field, particularly state-of-the-art Transformer-based models. In terms of system implementation, we plan to build an autocomplete function for patent claim drafting. The autocomplete function is analyzed from four different perspectives: extent of generation, generative direction, proximity of generation, and constraint in generation. Technically, the framework is composed of two Transformer models: one for text generation and the other for quality measurement. Specifically, patent claim generation is based on the GPT-2 model, and the measurement of personalization is based on the BERT model. The training data is inventor-centric and comes from the Inventors Endpoint API provided by the USPTO.
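A GPT-2 autocomplete of this kind can be prototyped directly with the Hugging Face transformers library. The prompt and decoding parameters below are illustrative only; the paper would use a checkpoint fine-tuned on patent claims rather than the base "gpt2" weights.

    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")  # a claim-finetuned checkpoint in practice

    prompt = "1. A method for wireless communication, comprising:"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_length=80,
        do_sample=True,
        top_p=0.95,
        pad_token_id=tokenizer.eos_token_id,
    )
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))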
QURIOUS: Question Generation Pretraining for Text Generation ; Recent trends in natural language processing using pretraining have shifted focus towards pretraining and fine-tuning approaches for text generation. Often the focus has been on task-agnostic approaches that generalize the language modeling objective. We propose question generation as a pretraining method, which better aligns with text generation objectives. Our text generation models pretrained with this method are better at understanding the essence of the input and are better language models for the target task. When evaluated on two text generation tasks, abstractive summarization and answer-focused question generation, our models achieve state-of-the-art performance in terms of automatic metrics. Human evaluators also found our summaries and generated questions to be more natural, concise, and informative.
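At its core, the pretraining step is a standard sequence-to-sequence objective over (passage, question) pairs. A minimal sketch of one training step follows; the encoder-decoder interface of `model` is an assumption.

    import torch.nn.functional as F

    def qg_pretraining_step(model, passage_ids, question_ids, optimizer):
        """One QURIOUS-style pretraining step: generate the question from the passage."""
        logits = model(input_ids=passage_ids, decoder_input_ids=question_ids[:, :-1])
        loss = F.cross_entropy(
            logits.reshape(-1, logits.size(-1)),   # (batch * seq, vocab)
            question_ids[:, 1:].reshape(-1),       # teacher-forced, shifted targets
        )
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()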
BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language Generation ; Recent advances in deep learning techniques have enabled machines to generate cohesive open-ended text when prompted with a sequence of words as context. While these models now empower many downstream applications, from conversation bots to automatic storytelling, they have been shown to generate texts that exhibit social biases. To systematically study and benchmark social biases in open-ended language generation, we introduce the Bias in Open-Ended Language Generation Dataset (BOLD), a large-scale dataset that consists of 23,679 English text generation prompts for bias benchmarking across five domains: profession, gender, race, religion, and political ideology. We also propose new automated metrics for toxicity, psycholinguistic norms, and text gender polarity to measure social biases in open-ended text generation from multiple angles. An examination of text generated from three popular language models reveals that the majority of these models exhibit larger social bias than human-written Wikipedia text across all domains. With these results, we highlight the need to benchmark biases in open-ended language generation and caution users of language generation models on downstream tasks to be cognizant of these embedded prejudices.
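An audit loop over such a prompt set is straightforward to sketch. The JSON layout, the generate function, and the toxicity classifier below are all placeholders, not BOLD's actual file format or tooling.

    import json

    def bold_style_toxicity_rate(generate, toxicity_score, prompts_path, threshold=0.5):
        """Sketch of a BOLD-style audit; file layout and scorer are assumptions."""
        with open(prompts_path) as f:
            prompts_by_domain = json.load(f)  # e.g., {"profession": ["The engineer ..."], ...}
        rates = {}
        for domain, prompts in prompts_by_domain.items():
            # Fraction of continuations flagged as toxic in each domain.
            toxic = sum(toxicity_score(generate(p)) > threshold for p in prompts)
            rates[domain] = toxic / len(prompts)
        return rates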
MotionDiffuse: Text-Driven Human Motion Generation with Diffusion Model ; Human motion modeling is important for many modern graphics applications, which typically require professional skills. In order to remove the skill barriers for laymen, recent motion generation methods can directly generate human motions conditioned on natural language. However, it remains challenging to achieve diverse and fine-grained motion generation with various text inputs. To address this problem, we propose MotionDiffuse, the first diffusion-model-based text-driven motion generation framework, which demonstrates several desired properties over existing methods. (1) Probabilistic Mapping: instead of a deterministic language-motion mapping, MotionDiffuse generates motions through a series of denoising steps in which variations are injected. (2) Realistic Synthesis: MotionDiffuse excels at modeling complicated data distributions and generating vivid motion sequences. (3) Multi-Level Manipulation: MotionDiffuse responds to fine-grained instructions on body parts and supports arbitrary-length motion synthesis with time-varied text prompts. Our experiments show MotionDiffuse outperforms existing SoTA methods by convincing margins on text-driven motion generation and action-conditioned motion generation. A qualitative analysis further demonstrates MotionDiffuse's controllability for comprehensive motion generation. Homepage: https://mingyuanzhang.github.io/projects/MotionDiffuse.html
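The "probabilistic mapping" property is the standard denoising-diffusion sampler conditioned on a text embedding. Below is a generic DDPM-style sampling loop for a text-conditioned motion model; the noise schedule and variance choice are textbook assumptions, not MotionDiffuse's exact configuration.

    import torch

    @torch.no_grad()
    def sample_motion(denoiser, text_emb, steps, betas, shape):
        """Generic DDPM-style sampler: denoiser(x, t, text_emb) predicts the noise."""
        alphas = 1.0 - betas
        alpha_bars = torch.cumprod(alphas, dim=0)
        x = torch.randn(shape)                         # start from pure noise
        for t in reversed(range(steps)):
            eps = denoiser(x, t, text_emb)             # predict the injected noise
            mean = (x - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) \
                   / torch.sqrt(alphas[t])
            noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
            x = mean + torch.sqrt(betas[t]) * noise    # variations injected per step
        return x                                       # (batch, frames, pose_dim)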
DiffusionHPC: Generating Synthetic Images with Realistic Humans ; Recent text-to-image generative models have exhibited remarkable abilities in generating high-fidelity and photorealistic images. However, despite the visually impressive results, these models often struggle to preserve plausible human structure in their generations. For this reason, while generative models have shown promising results in aiding downstream image recognition tasks by generating large volumes of synthetic data, they remain infeasible for improving downstream human pose perception and understanding. In this work, we propose Diffusion model with Human Pose Correction (Diffusion-HPC), a text-conditioned method that generates photorealistic images with plausibly posed humans by injecting prior knowledge about human body structure. We show that Diffusion-HPC effectively improves the realism of human generations. Furthermore, as the generations are accompanied by 3D meshes that serve as ground truths, Diffusion-HPC's generated image-mesh pairs are well-suited for the downstream human mesh recovery task, where a shortage of 3D training data has long been an issue.
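Read as a pipeline, the method amounts to: draft an image, recover a body mesh, then re-generate conditioned on the rendered body structure. The sketch below only captures that control flow; every component interface is an assumption.

    def diffusion_hpc_style_sample(text, text2img, pose_estimator, render, cond_diffusion):
        """Sketch of a Diffusion-HPC-style pipeline; all interfaces are assumptions."""
        draft = text2img(text)                    # initial, possibly implausible human
        mesh = pose_estimator(draft)              # recover a 3D body mesh as the prior
        # Re-generate conditioned on both the text and the rendered body structure.
        image = cond_diffusion(text, render(mesh))
        return image, mesh                        # the mesh doubles as a ground truth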
CCLAP: Controllable Chinese Landscape Painting Generation via Latent Diffusion Model ; With the development of deep generative models, recent years have seen great success in Chinese landscape painting generation. However, few works focus on controllable Chinese landscape painting generation, due to the lack of data and limited modeling capabilities. In this work, we propose a controllable Chinese landscape painting generation method named CCLAP, which can generate paintings with specific content and style based on the Latent Diffusion Model. Specifically, it consists of two cascaded modules, i.e., a content generator and a style aggregator. The content generator module guarantees that the content of generated paintings is specific to the input text, while the style aggregator module generates paintings in a style corresponding to a reference image. Moreover, a new dataset of Chinese landscape paintings named CLAP is collected for comprehensive evaluation. Both the qualitative and quantitative results demonstrate that our method achieves state-of-the-art performance, especially in artful composition and artistic conception. Code is available at https://github.com/RobinWZQ/CCLAP.
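The cascade reduces to two calls: a text-conditioned latent-diffusion stage followed by a reference-conditioned style stage. The wrapper below is a sketch of that composition only; the module interfaces are assumptions.

    def cclap_style_generate(text, reference_image, content_generator, style_aggregator):
        """Sketch of CCLAP's two cascaded modules; interfaces are assumptions."""
        # Module 1: latent-diffusion content generation specific to the input text.
        content = content_generator(text)
        # Module 2: transfer the style of the reference painting onto the content.
        return style_aggregator(content, reference_image)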
Generated Graph Detection ; Graph generative models are becoming increasingly effective for data distribution approximation and data augmentation, but they have also raised public concerns about malicious misuse and misinformation broadcasting, much as Deepfake visual and auditory media have done to society. Hence, it is essential to regulate the prevalence of generated graphs. To tackle this problem, we pioneer the formulation of the generated graph detection problem, i.e., distinguishing generated graphs from real ones. We propose the first framework to systematically investigate a set of sophisticated models and their performance in four classification scenarios. Each scenario switches between seen and unseen datasets/generators during testing, to get closer to real-world settings and progressively challenge the classifiers. Extensive experiments show that all the models are qualified for generated graph detection, with specific models having advantages in specific scenarios. Given the classifiers' validated generality and their insensitivity to unseen datasets/generators, we draw the safe conclusion that our solution can remain effective for a decent while in curbing misuse of generated graphs.
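A minimal detector in this spirit is a binary graph classifier. The sketch below uses PyTorch Geometric; the two-layer GCN architecture and hidden size are assumptions, not the paper's exact models.

    import torch
    import torch.nn.functional as F
    from torch_geometric.nn import GCNConv, global_mean_pool

    class GeneratedGraphDetector(torch.nn.Module):
        """Binary classifier: real graph vs. generated graph."""
        def __init__(self, in_dim, hidden=64):
            super().__init__()
            self.conv1 = GCNConv(in_dim, hidden)
            self.conv2 = GCNConv(hidden, hidden)
            self.head = torch.nn.Linear(hidden, 2)

        def forward(self, x, edge_index, batch):
            x = F.relu(self.conv1(x, edge_index))
            x = F.relu(self.conv2(x, edge_index))
            x = global_mean_pool(x, batch)   # one embedding per graph
            return self.head(x)              # logits: [real, generated]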
CompoNet: Learning to Generate the Unseen by Part Synthesis and Composition ; Data-driven generative modeling has made remarkable progress by leveraging the power of deep neural networks. A recurring challenge is how to enable a model to generate a rich variety of samples from the entire target distribution, rather than only from a distribution confined to the training data. In other words, we would like the generative model to go beyond the observed samples and learn to generate "unseen", yet still plausible, data. In our work, we present CompoNet, a generative neural network for 2D or 3D shapes that is based on a part-based prior, where the key idea is for the network to synthesize shapes by varying both the shape parts and their compositions. Treating a shape not as an unstructured whole, but as a re-composable set of deformable parts, adds a combinatorial dimension to the generative process and enriches the diversity of the output, encouraging the generator to venture more into the "unseen". We show that our part-based model generates a richer variety of plausible shapes than baseline generative models. To this end, we introduce two quantitative metrics to evaluate the diversity of a generative model and to assess how well the generated data covers both the training data and unseen data from the same target distribution. Code is available at https://github.com/nschor/CompoNet.
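Diversity and coverage metrics of this kind are typically built on nearest-neighbor distances between sample sets. The two functions below are a generic sketch of such measures, not necessarily the paper's exact definitions; inputs are tensors of samples, one per row after flattening.

    import torch

    def coverage(real, generated, eps=0.1):
        """Fraction of real samples with a generated neighbor within eps
        (a generic notion, not necessarily CompoNet's exact metric)."""
        d = torch.cdist(real.flatten(1), generated.flatten(1))  # pairwise distances
        return (d.min(dim=1).values < eps).float().mean().item()

    def diversity(generated):
        """Mean pairwise distance among generated samples."""
        d = torch.cdist(generated.flatten(1), generated.flatten(1))
        n = d.size(0)
        return (d.sum() / (n * (n - 1))).item()  # exclude the zero diagonal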