Phase transitions in self-dual generalizations of the Baxter-Wu model ; We study two types of generalized Baxter-Wu models, by means of transfer-matrix and Monte Carlo techniques. The first generalization allows for different couplings in the up and down triangles, and the second generalization is to a q-state spin model with three-spin interactions. Both generalizations lead to self-dual models, so that the probable locations of the phase transitions follow. Our numerical analysis confirms that phase transitions occur at the self-dual points. For both generalizations of the Baxter-Wu model, the phase transitions appear to be discontinuous.
A generalization of the Virasoro algebra to arbitrary dimensions ; Colored tensor models generalize matrix models in higher dimensions. They admit a 1/N expansion dominated by spherical topologies and exhibit a critical behavior strongly reminiscent of matrix models. In this paper we generalize the colored tensor models to colored models with generic interaction, derive the Schwinger-Dyson equations in the large-N limit, and analyze the associated algebra of constraints satisfied at leading order by the partition function. We show that the constraints form a Lie algebra indexed by trees, yielding a generalization of the Virasoro algebra in arbitrary dimensions.
A Note on the Identifiability of Generalized Linear Mixed Models ; I present here a simple proof that, under general regularity conditions, the standard parametrization of the generalized linear mixed model is identifiable. The proof is based on the assumptions of generalized linear mixed models on the first- and second-order moments and some general mild regularity conditions, and is therefore extensible to quasi-likelihood-based generalized linear models. In particular, binomial and Poisson mixed models with a dispersion parameter are identifiable when equipped with the standard parametrization.
Bias and Generalization in Deep Generative Models: An Empirical Study ; In high-dimensional settings, density estimation algorithms rely crucially on their inductive bias. Despite recent empirical success, the inductive bias of deep generative models is not well understood. In this paper we propose a framework to systematically investigate bias and generalization in deep generative models of images. Inspired by experimental methods from cognitive psychology, we probe each learning algorithm with carefully designed training datasets to characterize when and how existing models generate novel attributes and their combinations. We identify similarities to human psychology and verify that these patterns are consistent across commonly used models and architectures.
Multi-Task Learning with Language Modeling for Question Generation ; This paper explores the task of answer-aware question generation. Based on the attention-based pointer-generator model, we propose to incorporate an auxiliary task of language modeling to help question generation in a hierarchical multi-task learning structure. Our joint-learning model enables the encoder to learn a better representation of the input sequence, which guides the decoder to generate more coherent and fluent questions. On both the SQuAD and MARCO datasets, our multi-task learning model boosts the performance, achieving state-of-the-art results. Moreover, human evaluation further confirms the high quality of our generated questions.
Flow Plugin Network for conditional generation ; Generative models have gained many researchers' attention in recent years, resulting in models such as StyleGAN for human face generation or PointFlow for 3D point cloud generation. However, by default, we cannot control their sampling process, i.e., we cannot generate a sample with a specific set of attributes. The current approach is model retraining with additional inputs and a different architecture, which requires time and computational resources. We propose a novel approach that enables the generation of objects with a given set of attributes without retraining the base model. For this purpose, we utilize the normalizing flow models Conditional Masked Autoregressive Flow and Conditional Real NVP as a Flow Plugin Network (FPN).
Multimodel inference through projections in model space ; Information criteria have had a profound impact on modern ecological science. They allow researchers to estimate which probabilistic approximating models are closest to the generating process. Unfortunately, information criterion comparison does not tell how good the best model is. Nor do practitioners routinely test the reliability (e.g., error rates) of information criterion-based model selection. In this work, we show that these two shortcomings can be resolved by extending a key observation from Hirotugu Akaike's original work. Standard information criterion analysis considers only the divergences of each model from the generating process; it ignores that there are also estimable divergence relationships amongst all of the approximating models. We then show that using both sets of divergences, a model space can be constructed that includes an estimated location for the generating process. Thus, not only can an analyst determine which model is closest to the generating process, they can also determine how close to the generating process the best approximating model is. Properties of the generating process estimated from these projections are more accurate than those estimated by model averaging. The applications of our findings extend to all areas of science where model selection through information criteria is done.
Conditional Generative Adversarial Nets ; Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multimodal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels.
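As a concrete illustration of the conditioning mechanism described above, here is a minimal PyTorch sketch, assuming MNIST-scale 28x28 inputs and 10 classes; the layer sizes and one-hot encoding are illustrative choices, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

NOISE_DIM, NUM_CLASSES, IMG_DIM = 100, 10, 28 * 28

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + NUM_CLASSES, 256), nn.ReLU(),
            nn.Linear(256, IMG_DIM), nn.Tanh())

    def forward(self, z, y_onehot):
        # The condition y is simply concatenated with the noise vector.
        return self.net(torch.cat([z, y_onehot], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(IMG_DIM + NUM_CLASSES, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1), nn.Sigmoid())

    def forward(self, x, y_onehot):
        # The same condition is fed to the discriminator alongside the image.
        return self.net(torch.cat([x, y_onehot], dim=1))

G, D = Generator(), Discriminator()
z = torch.randn(16, NOISE_DIM)
y = nn.functional.one_hot(torch.randint(0, NUM_CLASSES, (16,)), NUM_CLASSES).float()
fake = G(z, y)      # digits conditioned on class labels
score = D(fake, y)  # discriminator also sees the labels
```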
Neural Academic Paper Generation ; In this work, we tackle the problem of structured text generation, specifically academic paper generation in LaTeX, inspired by the surprisingly good results of basic character-level language models. Our motivation is to use more recent and advanced methods of language modeling on a more complex dataset of LaTeX source files to generate realistic academic papers. Our first contribution is preparing a dataset of LaTeX source files of recent open-source computer vision papers. Our second contribution is experimenting with recent methods of language modeling and text generation, such as Transformer and Transformer-XL, to generate consistent LaTeX code. We report cross-entropy and bits-per-character (BPC) results of the trained models, and we also discuss interesting points on some examples of the generated LaTeX code.
A Generative Modeling Approach Using Quantum Gates ; In recent years, quantum computing has emerged as a promising technology for solving complex computational problems. Generative modeling is a technique that allows us to learn and generate new data samples similar to the original dataset. In this paper, we propose a generative modeling approach using quantum gates to generate new samples from a given dataset. We start with a brief introduction to quantum computing and generative modeling. Then, we describe our proposed approach, which involves encoding the dataset into quantum states and using quantum gates to manipulate these states to generate new samples. We also provide mathematical details of our approach and demonstrate its effectiveness through experimental results on various datasets.
Efficient response surface methods based on generic surrogate models ; Surrogate models are used for global approximation of responses generated by expensive computer experiments like CFD applications. In this paper, we make use of structural similarities which are shared by a class of related problems. We identify these structures by applying statistical shape models. They are used to build a generic surrogate model approximation to sample data of a new problem of the same class. In a variable fidelity framework the generic surrogate model is combined with the sample data to generate an efficient and globally accurate interpolation model, which requires less costly sample evaluations than ordinary response surface methods. We demonstrate our method with an aerodynamic test case and show that it significantly improves the approximation quality.
Solving the generalized Higgs model from the generalized CRS model ; In this paper, we reveal a direct relation between the generalized one-dimensional Carinena-Ranada-Santander (CRS) model and the radial part of the two-dimensional generalized Higgs model. By this relation, we construct a series of quasi-exact solutions for the two-dimensional Higgs model from a solved generalized CRS model.
S4-symmetric four-generation models for charged leptons ; We propose S4-symmetric four-generation models for charged leptons. Although an S4-symmetric four-generation model has already been proposed, that model contains some additional symmetries. We construct four-generation models for charged leptons with the only requirement being exact S4 symmetry. It turns out that at least one of the models is consistent with observations of charged lepton masses and predicts the mass of the fourth-generation charged lepton to be 556 GeV.
Discriminative Viewer Identification using Generative Models of Eye Gaze ; We study the problem of identifying viewers of arbitrary images based on their eye gaze. Psychological research has derived generative stochastic models of eye movements. In order to exploit this background knowledge within a discriminatively trained classification model, we derive Fisher kernels from different generative models of eye gaze. Experimentally, we find that the performance of the classifier strongly depends on the underlying generative model. Using an SVM with Fisher kernel improves the classification performance over the underlying generative model.
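For reference, the Fisher-kernel construction this line of work relies on takes the standard form (Jaakkola and Haussler): each observation is represented by the gradient of the generative model's log-likelihood, and the kernel compares these gradients.

```latex
% Fisher score: gradient of the generative model's log-likelihood at x
U_x = \nabla_{\theta} \log p(x \mid \theta)
% Fisher kernel, with F the Fisher information matrix of the model
K(x_i, x_j) = U_{x_i}^{\top} F^{-1} U_{x_j},
\qquad F = \mathbb{E}_{x \sim p(x \mid \theta)}\!\left[ U_x U_x^{\top} \right]
```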
A Reparameterized Discrete Diffusion Model for Text Generation ; This work studies discrete diffusion probabilistic models with applications to natural language generation. We derive an alternative yet equivalent formulation of sampling from discrete diffusion processes and leverage this insight to develop a family of reparameterized discrete diffusion models. The derived generic framework is highly flexible, offers a fresh perspective on the generation process in discrete diffusion models, and features more effective training and decoding techniques. We conduct extensive experiments to evaluate the text generation capability of our model, demonstrating significant improvements over existing diffusion models.
Deep Generative Models for Physiological Signals: A Systematic Literature Review ; In this paper, we present a systematic literature review on deep generative models for physiological signals, particularly electrocardiogram, electroencephalogram, photoplethysmogram and electromyogram. Compared to the existing review papers, we present the first review that summarizes the recent state-of-the-art deep generative models. By analysing the state-of-the-art research related to deep generative models along with their main applications and challenges, this review contributes to the overall understanding of these models applied to physiological signals. Additionally, by highlighting the employed evaluation protocols and the most used physiological databases, this review facilitates the assessment and benchmarking of deep generative models.
Molecular De Novo Design through Deep Reinforcement Learning ; This work introduces a method to tune a sequence-based generative model for molecular de novo design that, through augmented episodic likelihood, can learn to generate structures with certain specified desirable properties. We demonstrate how this model can execute a range of tasks such as generating analogues to a query structure and generating compounds predicted to be active against a biological target. As a proof of principle, the model is first trained to generate molecules that do not contain sulphur. As a second example, the model is trained to generate analogues to the drug Celecoxib, a technique that could be used for scaffold hopping or library expansion starting from a single molecule. Finally, when tuning the model towards generating compounds predicted to be active against the dopamine receptor type 2, the model generates structures of which more than 95% are predicted to be active, including experimentally confirmed actives that have not been included in either the generative model or the activity prediction model.
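The abstract does not spell out the tuning objective, but one common concrete form of augmented episodic likelihood (used in REINVENT-style fine-tuning, and sketched here under that assumption) scores sampled sequences and regresses the agent's likelihood toward a prior likelihood shifted by the score; `agent`, `prior`, and `score_fn` are hypothetical placeholders, not the paper's API.

```python
import torch

def finetune_step(agent, prior, score_fn, optimizer, batch_size=64, sigma=60.0):
    # Sample sequences (e.g., SMILES strings) from the current agent policy;
    # `sample` returning (sequences, log-likelihoods) is an assumed API.
    seqs, agent_loglik = agent.sample(batch_size)
    with torch.no_grad():
        prior_loglik = prior.log_likelihood(seqs)  # assumed API
        scores = score_fn(seqs)                    # desirability in [0, 1]
    # Augmented likelihood: prior likelihood shifted by the scored desirability.
    augmented = prior_loglik + sigma * scores
    # Pull the agent's likelihood toward the augmented target.
    loss = torch.mean((augmented - agent_loglik) ** 2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```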
Knowledge-Based Regularization in Generative Modeling ; Prior domain knowledge can greatly help in learning generative models. However, it is often too costly to hard-code prior knowledge as a specific model architecture, so we often have to use general-purpose models. In this paper, we propose a method to incorporate prior knowledge of feature relations into the learning of general-purpose generative models. To this end, we formulate a regularizer that makes the marginals of a generative model follow a prescribed relative dependence of features. It can be incorporated into off-the-shelf learning methods of many generative models, including variational autoencoders and generative adversarial networks, as its gradients can be computed using standard backpropagation techniques. We show the effectiveness of the proposed method with experiments on multiple types of datasets and generative models.
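One minimal differentiable instance of such a regularizer, sketched under the assumption that the prescribed feature relation is a target correlation between a pair of generated features (the paper's exact formulation may differ):

```python
import torch

def dependence_regularizer(x_gen, i, j, target_corr):
    """x_gen: (batch, features) samples drawn from the generative model."""
    xi = x_gen[:, i] - x_gen[:, i].mean()
    xj = x_gen[:, j] - x_gen[:, j].mean()
    corr = (xi * xj).mean() / (xi.std() * xj.std() + 1e-8)
    # Penalize deviation from the prescribed dependence; gradients flow
    # back into the generator through x_gen via standard backpropagation.
    return (corr - target_corr) ** 2

# Usage (hypothetical): loss = model_loss + lam * dependence_regularizer(x, 0, 3, 0.8)
```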
A Skeleton-Based Model for Promoting Coherence Among Sentences in Narrative Story Generation ; Narrative story generation is a challenging problem because it requires the generated sentences to have tight semantic connections, which has not been well studied by most existing generative models. To address this problem, we propose a skeleton-based model to promote the coherence of generated stories. Different from traditional models that generate a complete sentence at a stroke, the proposed model first generates the most critical phrases, called the skeleton, and then expands the skeleton into a complete and fluent sentence. The skeleton is not manually defined, but learned by a reinforcement learning method. Compared to state-of-the-art models, our skeleton-based model generates significantly more coherent text according to human evaluation and automatic evaluation. The G-score is improved by 20.1% in the human evaluation. The code is available at https://github.com/lancopku/Skeleton-Based-Generation-Model
Unsupervised Source Separation By Steering Pretrained Music Models ; We showcase an unsupervised method that repurposes deep models trained for music generation and music tagging for audio source separation, without any retraining. An audio generation model is conditioned on an input mixture, producing a latent encoding of the audio used to generate audio. This generated audio is fed to a pretrained music tagger that creates source labels. The cross-entropy loss between the tag distribution for the generated audio and a predefined distribution for an isolated source is used to guide gradient ascent in the unchanging latent space of the generative model. This system does not update the weights of the generative model or the tagger, and relies only on moving through the generative model's latent space to produce separated sources. We use OpenAI's Jukebox as the pretrained generative model, and we couple it with four kinds of pretrained music taggers (two architectures and two tagging datasets). Experimental results on two source separation datasets show this approach can produce separation estimates for a wider variety of sources than any tested supervised or unsupervised system. This work points to the vast and heretofore untapped potential of large pretrained music models for audio-to-audio tasks like source separation.
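A minimal sketch of the steering loop, with `generator` and `tagger` as stand-ins for the frozen pretrained models; their `encode`/`decode` and log-probability interfaces are assumed, not the actual Jukebox API.

```python
import torch

def separate(generator, tagger, mixture, target_tag_dist, steps=200, lr=0.05):
    # Latent encoding of the mixture; `encode`/`decode` are assumed APIs.
    z = generator.encode(mixture).detach().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        audio = generator.decode(z)   # weights stay frozen throughout
        log_tags = tagger(audio)      # assumed to return log-probabilities
        # Cross-entropy between the tagger's output and the isolated source's
        # predefined tag distribution steers the search through latent space.
        loss = -(target_tag_dist * log_tags).sum()
        opt.zero_grad()
        loss.backward()               # gradients reach only z, not the models
        opt.step()
    return generator.decode(z).detach()
```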
MedDiff: Generating Electronic Health Records using an Accelerated Denoising Diffusion Model ; Due to patient privacy protection concerns, machine learning research in healthcare has been undeniably slower and more limited than in other application domains. High-quality, realistic, synthetic electronic health records (EHRs) can be leveraged to accelerate methodological developments for research purposes while mitigating privacy concerns associated with data sharing. The current state-of-the-art model for synthetic EHR generation is the generative adversarial network, which is notoriously difficult to train and can suffer from mode collapse. Denoising Diffusion Probabilistic Models, a class of generative models inspired by statistical thermodynamics, have recently been shown to generate high-quality synthetic samples in certain domains. It is unknown whether these can generalize to the generation of large-scale, high-dimensional EHRs. In this paper, we present a novel generative model based on diffusion models that is the first successful application of this model class to electronic health records. Our model proposes a mechanism to perform class-conditional sampling to preserve label information. We also introduce a new sampling strategy to accelerate the inference speed. We empirically show that our model outperforms existing state-of-the-art synthetic EHR generation methods.
RNN-based Generative Model for Fine-Grained Sketching ; Deep generative models have shown great promise when it comes to synthesising novel images. While they can generate images that look convincing at a higher level, generating fine-grained details is still a challenge. In order to foster research on more powerful generative approaches, this paper proposes a novel task: generative modelling of 2D tree skeletons. Trees are an interesting shape class because they exhibit complexity and variations that are well-suited to measure the ability of a generative model to generate detailed structures. We propose a new dataset for this task and demonstrate that state-of-the-art generative models fail to synthesise realistic images on our benchmark, even though they perform well on current datasets like MNIST digits. Motivated by these results, we propose a novel network architecture based on combining a variational autoencoder using Recurrent Neural Networks with a convolutional discriminator. The network, error metrics and training procedure are adapted to the task of fine-grained sketching. Through quantitative and perceptual experiments, we show that our model outperforms previous work and that our dataset is a valuable benchmark for generative models. We will make our dataset publicly available.
Exploring the Effectiveness of Large Language Models in Generating Unit Tests ; A code generation model generates code by taking a prompt from a code comment, existing code, or a combination of both. Although code generation models (e.g., GitHub Copilot) are increasingly being adopted in practice, it is unclear whether they can successfully be used for unit test generation without fine-tuning. To fill this gap, we investigated how well three generative models (CodeGen, Codex, and GPT-3.5) can generate test cases. We used two benchmarks (HumanEval and EvoSuite SF110) to investigate the effect of context generation on the unit test generation process. We evaluated the models based on compilation rates, test correctness, coverage, and test smells. We found that the Codex model achieved above 80% coverage for the HumanEval dataset, but no model had more than 2% coverage for the EvoSuite SF110 benchmark. The generated tests also suffered from test smells, such as Duplicated Asserts and Empty Tests.
On the Robustness of Generative Retrieval Models: An Out-of-Distribution Perspective ; Recently, we have witnessed generative retrieval gaining increasing attention in the information retrieval (IR) field; it retrieves documents by directly generating their identifiers. So far, much effort has been devoted to developing effective generative retrieval models, and less attention has been paid to the robustness perspective. When a new retrieval paradigm enters real-world applications, it is also critical to measure out-of-distribution (OOD) generalization, i.e., how generative retrieval models would generalize to new distributions. To answer this question, we first define OOD robustness from three perspectives in retrieval problems: (1) query variations; (2) unforeseen query types; and (3) unforeseen tasks. Based on this taxonomy, we conduct empirical studies to analyze the OOD robustness of several representative generative retrieval models against dense retrieval models. The empirical results indicate that the OOD robustness of generative retrieval models requires enhancement. We hope studying the OOD robustness of generative retrieval models will be advantageous to the IR community.
Autoregressive Diffusion Model for Graph Generation ; Diffusion-based graph generative models have recently obtained promising results for graph generation. However, existing diffusion-based graph generative models are mostly one-shot generative models that apply Gaussian diffusion in the dequantized adjacency matrix space. Such a strategy can suffer from difficulty in model training, slow sampling speed, and incapability of incorporating constraints. We propose an autoregressive diffusion model for graph generation. Unlike existing methods, we define a node-absorbing diffusion process that operates directly in the discrete graph space. For forward diffusion, we design a diffusion ordering network, which learns a data-dependent node absorbing ordering from graph topology. For reverse generation, we design a denoising network that uses the reverse node ordering to efficiently reconstruct the graph by predicting, one node at a time, the type of the new node and its edges with previously denoised nodes. Based on the permutation invariance of graphs, we show that the two networks can be jointly trained by optimizing a simple lower bound of the data likelihood. Our experiments on six diverse generic graph datasets and two molecule datasets show that our model achieves better or comparable generation performance to previous state-of-the-art methods, and meanwhile enjoys fast generation speed.
A Brief Survey of Associations Between Meta-Learning and General AI ; This paper briefly reviews the history of meta-learning and describes its contribution to general AI. Meta-learning improves model generalization capacity and devises general algorithms potentially applicable to both in-distribution and out-of-distribution tasks. General AI replaces task-specific models with general algorithmic systems, introducing a higher level of automation in solving diverse tasks using AI. We summarize the main contributions of meta-learning to the developments in general AI, including memory modules, meta-learners, co-evolution, curiosity, forgetting, and AI-generating algorithms. We present connections between meta-learning and general AI and discuss how meta-learning can be used to formulate general AI algorithms.
Normal Factor Graphs as Probabilistic Models ; We present a new probabilistic modelling framework based on the recent notion of the normal factor graph (NFG). We show that the proposed NFG models and their transformations unify some existing models such as factor graphs, convolutional factor graphs, and cumulative distribution networks. The two subclasses of the NFG models, namely the constrained and generative models, exhibit a duality in their dependence structure. Transformation of NFG models further extends the power of this modelling framework. We point out the well-known NFG representations of parity and generator realizations of a linear code as generative and constrained models, and comment on a more prevailing duality in this context. Finally, we address the algorithmic aspect of computing the exterior function of NFGs and the inference problem on NFGs.
A Generative Parser with a Discriminative Recognition Algorithm ; Generative models defining joint distributions over parse trees and sentences are useful for parsing and language modeling, but impose restrictions on the scope of features and are often outperformed by discriminative models. We propose a framework for parsing and language modeling which marries a generative model with a discriminative recognition model in an encoder-decoder setting. We provide interpretations of the framework based on expectation maximization and variational inference, and show that it enables parsing and language modeling within a single implementation. On the English Penn Treebank, our framework obtains competitive performance on constituency parsing while matching the state-of-the-art single-model language modeling score.
Generative Cooperative Net for Image Generation and Data Augmentation ; How to build a good model for image generation given an abstract concept is a fundamental problem in computer vision. In this paper, we explore a generative model for the task of generating unseen images with desired features. We propose the Generative Cooperative Net (GCN) for image generation. The idea is similar to generative adversarial networks, except that the two networks are trained to work cooperatively. Our experiments on handwritten digit generation and facial expression generation show that GCN's two cooperative counterparts, the generator and the classifier, can work together nicely and achieve promising results. We also discovered a use of such a generative model as a data-augmentation tool. Our experiment applying this method to a recognition task shows that it is very effective compared to other existing methods. It is easy to set up and could help generate a very large synthesized dataset.
Video Generation Beyond a Single Clip ; We tackle the long video generation problem, i.e., generating videos beyond the output length of video generation models. Due to computation resource constraints, video generation models can only generate video clips that are relatively short compared with the length of real videos. Existing works apply a sliding window approach to generate long videos at inference time, which is often limited to generating recurrent events or homogeneous content. To generate long videos covering diverse content and multiple events, we propose to use additional guidance to control the video generation process. We further present a two-stage approach to the problem, which allows us to utilize existing video generation models to generate high-quality videos within a small time window while modeling the video holistically based on the input guidance. The proposed approach is complementary to existing efforts on video generation, which focus on generating realistic video within a fixed time window. Extensive experiments on challenging real-world videos validate the benefit of the proposed method, which improves over the state-of-the-art by up to 9.5% in objective metrics and is preferred by users more than 80% of the time.
Directed Beam Search: Plug-and-Play Lexically Constrained Language Generation ; Large pretrained language models are capable of generating realistic text. However, controlling these models so that the generated text satisfies lexical constraints, i.e., contains specific words, is a challenging problem. Given that state-of-the-art language models are too large to be trained from scratch in a manageable time, it is desirable to control these models without retraining them. Methods capable of doing this are called plug-and-play. Recent plug-and-play methods have been successful in constraining small bidirectional language models as well as forward models in tasks with a restricted search space, e.g., machine translation. However, controlling large transformer-based models to meet lexical constraints without retraining them remains a challenge. In this work, we propose Directed Beam Search (DBS), a plug-and-play method for lexically constrained language generation. Our method can be applied to any language model, is easy to implement and can be used for general language generation. In our experiments we use DBS to control GPT-2. We demonstrate its performance on keyword-to-phrase generation and we obtain comparable results to a state-of-the-art non-plug-and-play model for lexically constrained story generation.
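As a rough plug-and-play illustration (not the exact DBS scoring rule, which the abstract does not specify), lexical guidance can be injected at decoding time by boosting the scores of guide tokens before each beam step:

```python
import torch

def guided_logits(logits, guide_token_ids, boost=5.0):
    """logits: (vocab,) next-token scores from a frozen language model."""
    logits = logits.clone()
    logits[guide_token_ids] += boost  # nudge decoding toward guide words
    return logits

# In a beam-search loop, hypotheses would be ranked with these adjusted
# scores, and the boost dropped once a guide word has been generated.
```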
A Stochastic Model for Block Segmentation of Images Based on the Quadtree and the Bayes Code for It ; In information theory, lossless compression of general data is based on an explicit assumption of a stochastic generative model on target data. However, in lossless image compression, researchers have mainly focused on the coding procedure that outputs the coded sequence from the input image, and the assumption of the stochastic generative model is implicit. In these studies, it is difficult to confirm the information-theoretic optimality of the coding procedure with respect to the stochastic generative model. Hence, in this paper, we propose a novel stochastic generative model of images by redefining the implicit stochastic generative model in a previous coding procedure. It is based on the quadtree, so that our model effectively represents the variable-block-size segmentation of images. Then, we construct the Bayes code optimal for the proposed stochastic generative model. In general, the computational cost to calculate the posterior distribution required in the Bayes code increases exponentially with the image size. However, we introduce an efficient algorithm to calculate it in polynomial order of the image size without loss of optimality. Some experiments are performed to confirm the flexibility of the proposed stochastic model and the efficiency of the introduced algorithm.
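The quadtree idea can be sketched as a simple stochastic segmentation model: each block either stays whole or recursively splits into four sub-blocks. The split probability and minimum block size below are illustrative assumptions, not the paper's parameters.

```python
import random

def sample_quadtree(x, y, size, p=0.4, min_size=4):
    """Return a list of (x, y, size) blocks covering a size-by-size image."""
    if size > min_size and random.random() < p:
        half = size // 2
        blocks = []
        for dx in (0, half):
            for dy in (0, half):
                blocks += sample_quadtree(x + dx, y + dy, half, p, min_size)
        return blocks
    return [(x, y, size)]

blocks = sample_quadtree(0, 0, 64)  # one random variable-block-size segmentation
```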
Extrapolating Multilingual Understanding Models as Multilingual Generators ; Multilingual understanding models (or encoder-based models), pretrained via masked language modeling, have achieved promising results on many language understanding tasks (e.g., mBERT). However, these non-autoregressive (NAR) models still struggle to generate high-quality texts compared with autoregressive (AR) models. Considering that encoder-based models have the advantages of efficient generation and self-correction, this paper explores methods to endow multilingual understanding models with generation abilities to obtain a unified model. Specifically, we start from a multilingual encoder (XLM-R) and propose a Semantic-Guided Alignment-then-Denoising (SGA) approach to adapt an encoder into a multilingual generator with a small number of new parameters. Experiments show that the proposed approach is an effective adaptation method, outperforming widely used initialization-based methods with gains of 9.4 BLEU on machine translation, 8.1 Rouge-L on question generation, and 5.5 METEOR on story generation on XLM-R-large. On the other hand, we observe that XLM-R is still inferior to mBART in supervised settings despite better results in zero-shot settings, indicating that more exploration is required to make understanding models strong generators.
Disease Mapping with Generative Models ; Disease mapping focuses on learning about areal units presenting high relative risk. Disease mapping models for disease counts specify Poisson regressions in relative risks compared with the expected counts. These models typically incorporate spatial random effects to accomplish spatial smoothing. Fitting of these models customarily computes expected disease counts via internal standardization. This places the data on both sides of the model, i.e., the counts are on the left side but they are also used to obtain the expected counts on the right side. As a result, these internally standardized models are incoherent and not generative; probabilistically, they could not produce the observed data. Here, we argue for adopting the direct generative model for disease counts. We model disease incidence instead of relative risks, using a generalized logistic regression, and extract relative risks after model fitting. We also extend the generative model to dynamic settings. We compare the generative models with internally standardized models through simulated datasets and a well-examined lung cancer morbidity dataset from Ohio. Each model is a spatial smoother and they smooth the data similarly with regard to relative risks. However, the generative models tend to provide tighter credible intervals. Since the generative specification is no more difficult to fit, is coherent, and is at least as good inferentially, we suggest it should be the model of choice for spatial disease mapping.
Keeping it Simple: Language Models can learn Complex Molecular Distributions ; Deep generative models of molecules have grown immensely in popularity; trained on relevant datasets, these models are used to search through chemical space. The downstream utility of generative models for the inverse design of novel functional compounds depends on their ability to learn a training distribution of molecules. The simplest example is a language model that takes the form of a recurrent neural network and generates molecules using a string representation. More sophisticated are graph generative models, which sequentially construct molecular graphs and typically achieve state-of-the-art results. However, recent work has shown that language models are more capable than once thought, particularly in the low-data regime. In this work, we investigate the capacity of simple language models to learn distributions of molecules. For this purpose, we introduce several challenging generative modeling tasks by compiling especially complex distributions of molecules. On each task, we evaluate the ability of language models as compared with two widely used graph generative models. The results demonstrate that language models are powerful generative models, capable of adeptly learning complex molecular distributions and yielding better performance than the graph models. Language models can accurately generate distributions of the highest-scoring penalized LogP molecules in ZINC15, multimodal molecular distributions, as well as the largest molecules in PubChem.
Mutation Models: Learning to Generate Levels by Imitating Evolution ; Search-based procedural content generation (PCG) is a well-known method for level generation in games. Its key advantage is that it is generic and able to satisfy functional constraints. However, due to the heavy computational costs of running these algorithms online, search-based PCG is rarely utilized for real-time generation. In this paper, we introduce mutation models, a new type of iterative level generator based on machine learning. We train a model to imitate the evolutionary process and use the trained model to generate levels. This trained model is able to modify noisy levels sequentially to create better levels without the need for a fitness function during inference. We evaluate our trained models on a 2D maze generation task. We compare several different versions of the method: training the models either at the end of evolution (normal evolution) or every 100 generations (assisted evolution), and using the model as a mutation function during evolution. Using the assisted evolution process, the final trained models are able to generate mazes with a success rate of 99% and a high diversity of 86%. The trained model is many times faster than the evolutionary process it was trained on. This work opens the door to a new way of learning level generators guided by an evolutionary process, enabling the automatic creation of generators with specifiable constraints and objectives that are fast enough for runtime deployment in games.
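A minimal sketch of inference with a trained mutation model, assuming a hypothetical `mutate` interface: start from random noise and repeatedly apply the learned mutation, with no fitness function in the loop.

```python
import numpy as np

def generate_level(mutation_model, shape=(16, 16), iterations=20):
    level = np.random.randint(0, 2, size=shape)  # noisy initial maze
    for _ in range(iterations):
        # The model imitates one step of evolution: propose an improved level.
        level = mutation_model.mutate(level)     # assumed API
    return level
```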
De-Biasing Generative Models using Counterfactual Methods ; Variational autoencoders (VAEs) and other generative methods have garnered growing interest not just for their generative properties but also for the ability to disentangle a low-dimensional latent variable space. However, few existing generative models take causality into account. We propose a new decoder-based framework named the Causal Counterfactual Generative Model (CCGM), which includes a partially trainable causal layer in which a part of a causal model can be learned without significantly impacting reconstruction fidelity. By learning the causal relationships between image semantic labels or tabular variables, we can analyze biases, intervene on the generative model, and simulate new scenarios. Furthermore, by modifying the causal structure, we can generate samples outside the domain of the original training data and use such counterfactual models to de-bias datasets. Thus, datasets with known biases can still be used to train the causal generative model and learn the causal relationships, but we can produce de-biased datasets on the generative side. Our proposed method combines a causal latent space VAE model with specific modifications to emphasize causal fidelity, enabling finer control over the causal layer and the ability to learn a robust intervention framework. We explore how better disentanglement of causal learning and encoding/decoding generates higher causal intervention quality. We also compare our model against similar research to demonstrate the need for explicit generative de-biasing beyond interventions. Our initial experiments show that our model can generate images and tabular data with high fidelity to the causal framework, and can accommodate explicit de-biasing that ignores undesired relationships in the causal data, compared to the baseline.
An Extended Symbol Table Infrastructure to Manage the Composition of Output-Specific Generator Information ; Code generation is regarded as an essential part of model-driven development (MDD) to systematically transform abstract models into concrete code. One current challenge of template-based code generation is that output-specific information, i.e., information about the generated source code, is not explicitly modeled and, thus, not accessible during code generation. Existing approaches try to either parse the generated output or store it in a data structure before writing it into a file. In this paper, we propose a first approach to explicitly model parts of the generated output. These modeled parts are stored in a symbol table for efficient management. During code generation this information can be accessed to ensure that the composition of the overall generated source code is valid. We achieve this goal by creating a domain model of relevant generator output information, extending the symbol table to store this information, and adapting the overall code generation process.
Composable Generative Models ; Generative modeling has recently seen many exciting developments with the advent of deep generative architectures such as Variational Auto-Encoders (VAE) or Generative Adversarial Networks (GAN). The ability to draw synthetic i.i.d. observations with the same joint probability distribution as a given dataset has a wide range of applications including representation learning, compression or imputation. It appears that it also has many applications in privacy-preserving data analysis, especially when used in conjunction with differential privacy techniques. This paper focuses on synthetic data generation models with privacy-preserving applications in mind. It introduces a novel architecture, the Composable Generative Model (CGM), that is state-of-the-art in tabular data generation. Any conditional generative model can be used as a subcomponent of the CGM, including CGMs themselves, allowing the generation of numerical and categorical data as well as images, text, or time series. The CGM has been evaluated on 13 datasets (6 standard and 7 simulated) and compared to 14 recent generative models. It beats the state of the art in tabular data generation by a significant margin.
Analogy Generation by Prompting Large Language Models: A Case Study of InstructGPT ; We propose a novel application of prompting Pretrained Language Models (PLMs) to generate analogies and study how to design effective prompts for two task settings: generating a source concept analogous to a given target concept (aka Analogous Concept Generation, or ACG), and generating an explanation of the similarity between a given pair of target concept and source concept (aka Analogous Explanation Generation, or AEG). We found that it is feasible to prompt InstructGPT to generate meaningful analogies, and the best prompts tend to be precise imperative statements, especially with a low temperature setting. We also systematically analyzed the sensitivity of the InstructGPT model to prompt design, temperature, and injected spelling errors, and found that the model is particularly sensitive to certain variations (e.g., questions vs. imperative statements). Further, we conducted human evaluation on 1.4k of the generated analogies and found that the quality of generations varies substantially by model size. The largest InstructGPT model can achieve human-level performance at generating meaningful analogies for a given target, while there is still room for improvement on the AEG task.
GlyphDiffusion: Text Generation as Image Generation ; Diffusion models have become a new generative paradigm for text generation. Considering the discrete categorical nature of text, in this paper we propose GlyphDiffusion, a novel diffusion approach for text generation via text-guided image generation. Our key idea is to render the target text as a glyph image containing visual language content. In this way, conditional text generation can be cast as a glyph image generation task, and it is then natural to apply continuous diffusion models to discrete texts. Specifically, we utilize a cascaded architecture (i.e., a base and a super-resolution diffusion model) to generate high-fidelity glyph images, conditioned on the input text. Furthermore, we design a text grounding module to transform and refine the visual language content from generated glyph images into the final texts. In experiments over four conditional text generation tasks and two classes of metrics (i.e., quality and diversity), GlyphDiffusion achieves comparable or even better results than several baselines, including pretrained language models. Our model also makes significant improvements compared to recent diffusion models.
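The rendering step at the heart of this approach can be sketched with Pillow: draw the target text onto a canvas so that a continuous image diffusion model can operate on it. The canvas size and default font here are assumptions, not the paper's settings.

```python
from PIL import Image, ImageDraw

def render_glyph_image(text, size=(256, 64)):
    img = Image.new("L", size, color=255)  # white canvas
    draw = ImageDraw.Draw(img)
    draw.text((4, 4), text, fill=0)        # black text, default font
    return img

glyph = render_glyph_image("hello world")
# A text grounding module would later read the generated glyph image
# back into discrete tokens.
```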
Diffusion idea exploration for art generation ; Cross-modal learning tasks have picked up pace in recent times. With a plethora of applications in diverse areas, generation of novel content using multiple modalities of data has remained a challenging problem. To address this, various generative modelling techniques have been proposed for specific tasks. Novel and creative image generation is one important aspect for industrial applications which could serve as an arm for novel content generation. Techniques proposed previously used Generative Adversarial Networks (GANs), autoregressive models and Variational Autoencoders (VAEs) for accomplishing similar tasks. These approaches are limited in their capability to produce images guided by either text instructions or rough sketch images, decreasing the overall performance of the image generator. We used state-of-the-art diffusion models to generate creative art by primarily leveraging text with additional support of rough sketches. Diffusion starts with a pattern of random dots and slowly converts that pattern into a design image using the guiding information fed into the model. Diffusion models have recently outperformed other generative models in image generation tasks using cross-modal data as guiding information. The initial experiments for this task of novel image generation demonstrated promising qualitative results.
Generalized Linear Models with Structured Sparsity Estimators ; In this paper, we introduce structured sparsity estimators in Generalized Linear Models. Structured sparsity estimators in the least squares loss were recently introduced by Stucky and van de Geer (2018) for fixed design and normal errors. We extend their results to debiased structured sparsity estimators with a Generalized Linear Model based loss. Structured sparsity estimation means penalized loss functions with a possible sparsity structure used in the chosen norm. These include the weighted group lasso, the lasso, and norms generated from convex cones. The significant difficulty is that it is not clear how to prove two oracle inequalities. The first one is for the initial penalized Generalized Linear Model estimator. Since it is not clear how a particular feasible weighted nodewise regression may fit in an oracle inequality for the penalized Generalized Linear Model, we need a second oracle inequality to get oracle bounds for the approximate inverse of the sample estimate of the second-order partial derivative of the Generalized Linear Model. Our contributions are fivefold: 1. We generalize the existing oracle inequality results in penalized Generalized Linear Models by proving the underlying conditions rather than assuming them. One of the key issues is the proof of a sample one-point margin condition and its use in an oracle inequality. 2. Our results cover even non-sub-Gaussian errors and regressors. 3. We provide a feasible weighted nodewise regression proof which generalizes the results in the literature from a simple l1-norm usage to norms generated from convex cones. 4. We realize that norms used in feasible nodewise regression proofs should be weaker than or equal to the norms in the penalized Generalized Linear Model loss. 5. We can debias the first-step estimator by getting an approximate inverse of the singular sample second-order partial derivative of the Generalized Linear Model loss.
DiffGAR: Model-Agnostic Restoration from Generative Artifacts Using Image-to-Image Diffusion Models ; Recent generative models show impressive results in photo-realistic image generation. However, artifacts often inevitably appear in the generated results, leading to downgraded user experience and reduced performance in downstream tasks. This work aims to develop a plugin post-processing module for diverse generative models, which can faithfully restore images from diverse generative artifacts. This is challenging because: (1) unlike traditional degradation patterns, generative artifacts are non-linear and the transformation function is highly complex; (2) there are no readily available artifact-image pairs; (3) different from model-specific anti-artifact methods, a model-agnostic framework views the generator as a black-box machine and has no access to the architecture details. In this work, we first design a group of mechanisms to simulate generative artifacts of popular generators (i.e., GANs, autoregressive models, and diffusion models), given real images. Second, we implement the model-agnostic anti-artifact framework as an image-to-image diffusion model, due to its advantage in generation quality and capacity. Finally, we design a conditioning scheme for the diffusion model to enable both blind and non-blind image restoration. A guidance parameter is also introduced to allow for a trade-off between restoration accuracy and image quality. Extensive experiments show that our method significantly outperforms previous approaches on the proposed datasets and real-world artifact images.
Generating Images with Multimodal Language Models ; We propose a method to fuse frozen text-only large language models (LLMs) with pretrained image encoder and decoder models, by mapping between their embedding spaces. Our model demonstrates a wide suite of multimodal capabilities: image retrieval, novel image generation, and multimodal dialogue. Ours is the first approach capable of conditioning on arbitrarily interleaved image and text inputs to generate coherent image and text outputs. To achieve strong performance on image generation, we propose an efficient mapping network to ground the LLM to an off-the-shelf text-to-image generation model. This mapping network translates hidden representations of text into the embedding space of the visual models, enabling us to leverage the strong text representations of the LLM for visual outputs. Our approach outperforms baseline generation models on tasks with longer and more complex language. In addition to novel image generation, our model is also capable of image retrieval from a prespecified dataset, and decides whether to retrieve or generate at inference time. This is done with a learnt decision module which conditions on the hidden representations of the LLM. Our model exhibits a wider range of capabilities compared to prior multimodal language models. It can process image-and-text inputs, and produce retrieved images, generated images, and generated text, outperforming non-LLM-based generation models across several text-to-image tasks that measure context dependence.
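A hedged sketch of the mapping-network idea: a small trainable module that projects LLM hidden states into a short sequence of conditioning vectors for a frozen text-to-image generator. All dimensions here are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class MappingNetwork(nn.Module):
    def __init__(self, llm_dim=4096, img_cond_dim=768, num_tokens=4):
        super().__init__()
        self.num_tokens, self.img_cond_dim = num_tokens, img_cond_dim
        self.proj = nn.Linear(llm_dim, img_cond_dim * num_tokens)

    def forward(self, llm_hidden):  # (batch, llm_dim) hidden states
        out = self.proj(llm_hidden)
        # A short sequence of conditioning vectors in the frozen
        # text-to-image model's embedding space.
        return out.view(-1, self.num_tokens, self.img_cond_dim)

mapper = MappingNetwork()
cond = mapper(torch.randn(2, 4096))  # shape: (2, 4, 768)
```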
Structural Guidance for Transformer Language Models ; Transformer-based language models pretrained on large amounts of text data have proven remarkably successful in learning generic transferable linguistic representations. Here we study whether structural guidance leads to more human-like systematic linguistic generalization in Transformer language models without resorting to pretraining on very large amounts of data. We explore two general ideas. The Generative Parsing idea jointly models the incremental parse and word sequence as part of the same sequence modeling task. The Structural Scaffold idea guides the language model's representation via an additional structure loss that separately predicts the incremental constituency parse. We train the proposed models, along with a vanilla Transformer language model baseline, on a 14-million-token and a 46-million-token subset of the BLLIP dataset, and evaluate the models' syntactic generalization performance on SG Test Suites and sized BLiMP. Experimental results across the two benchmarks suggest converging evidence that generative structural supervision can induce more robust and human-like linguistic generalization in Transformer language models without the need for data-intensive pretraining.
Understanding and Improving Zero-shot Multi-hop Reasoning in Generative Question Answering ; Generative question answering (QA) models generate answers to questions either solely based on the parameters of the model (the closed-book setting) or by additionally retrieving relevant evidence (the open-book setting). Generative QA models can answer some relatively complex questions, but the mechanism through which they do so is still poorly understood. We perform several studies aimed at better understanding the multi-hop reasoning capabilities of generative QA models. First, we decompose multi-hop questions into multiple corresponding single-hop questions, and find marked inconsistency in QA models' answers on these pairs of ostensibly identical question chains. Second, we find that models lack zero-shot multi-hop reasoning ability: when trained only on single-hop questions, models generalize poorly to multi-hop questions. Finally, we demonstrate that it is possible to improve models' zero-shot multi-hop reasoning capacity through two methods that approximate real multi-hop natural language (NL) questions: training on either the concatenation of single-hop questions or on logical forms (SPARQL). In sum, these results demonstrate that multi-hop reasoning does not emerge naturally in generative QA models, but can be encouraged by advances in training or modeling techniques.
Generative data-driven approaches for stochastic subgrid parameterizations in an idealized ocean model ; Subgrid parameterizations of mesoscale eddies continue to be in demand for climate simulations. These subgrid parameterizations can be powerfully designed using physics- and/or data-driven methods, with uncertainty quantification. For example, Guillaumin and Zanna (2021) proposed a Machine Learning (ML) model that predicts subgrid forcing and its local uncertainty. The major assumption and potential drawback of this model is the statistical independence of stochastic residuals between grid points. Here, we aim to improve the simulation of stochastic forcing with generative ML models, such as the Generative Adversarial Network (GAN) and the Variational Autoencoder (VAE). Generative models learn the distribution of subgrid forcing conditioned on the resolved flow directly from data, and they can produce new samples from this distribution. Generative models can potentially capture not only the spatial correlation but any statistically significant property of subgrid forcing. We test the proposed stochastic parameterizations offline and online in an idealized ocean model. We show that generative models are able to predict subgrid forcing and its uncertainty with spatially correlated stochastic forcing. Online simulations for a range of resolutions demonstrated that generative models are superior to the baseline ML model at the coarsest resolution.
Top-Down Tree-Structured Text Generation ; Text generation is a fundamental building block in natural language processing tasks. Existing sequential models perform autoregression directly over the text sequence and have difficulty generating long sentences of complex structures. This paper advocates a simple approach that treats sentence generation as a tree-generation task. By explicitly modelling syntactic structures in a constituent syntactic tree and performing top-down, breadth-first tree generation, our model fixes dependencies appropriately and performs implicit global planning. This is in contrast to a transition-based depth-first generation process, which has difficulty dealing with incomplete texts when parsing and also does not incorporate future contexts in planning. Our preliminary results on two generation tasks and one parsing task demonstrate that this is an effective strategy.
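The top-down, breadth-first generation order can be sketched with a simple queue: siblings at one level are emitted before any of their children, so the model always sees the full partial tree above the frontier. The `expand` function below is a placeholder for the learned prediction step.

```python
from collections import deque

def generate_tree(expand, root_label="S", max_nodes=100):
    root = {"label": root_label, "children": []}
    queue, count = deque([root]), 1
    while queue and count < max_nodes:
        node = queue.popleft()
        # The learned model would predict the (possibly empty) child
        # sequence of each frontier node; `expand` stands in for it.
        for child_label in expand(node):
            child = {"label": child_label, "children": []}
            node["children"].append(child)
            queue.append(child)
            count += 1
    return root

# Toy expansion rule for illustration: "S" -> ["NP", "VP"], leaves otherwise.
tree = generate_tree(lambda n: {"S": ["NP", "VP"]}.get(n["label"], []))
```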
Boosting Generative Models by Leveraging Cascaded Meta-Models ; Deep generative models are effective methods of modeling data. However, it is not easy for a single generative model to faithfully capture the distributions of complex data such as images. In this paper, we propose an approach for boosting generative models, which cascades meta-models together to produce a stronger model. Any hidden-variable meta-model (e.g., RBM and VAE) that supports likelihood evaluation can be leveraged. We derive a decomposable variational lower bound of the boosted model, which allows each meta-model to be trained separately and greedily. Besides, our framework can be extended to semi-supervised boosting, where the boosted model learns a joint distribution of data and labels. Finally, we combine our boosting framework with the multiplicative boosting framework, which further improves the learning power of generative models.
A Survey on Graph Diffusion Models: Generative AI in Science for Molecule, Protein and Material ; Diffusion models have become a new SOTA generative modeling method in various fields, and multiple survey works already provide an overall review of them. With the number of articles on diffusion models increasing exponentially in the past few years, there is an increasing need for surveys of diffusion models in specific fields. In this work, we are committed to conducting a survey on graph diffusion models. Even though our focus is to cover the progress of diffusion models on graphs, we first briefly summarize how other generative modeling methods are used for graphs. After that, we introduce the mechanism of diffusion models in various forms, which facilitates the discussion on graph diffusion models. The applications of graph diffusion models mainly fall into the category of AI-generated content (AIGC) in science, for which we mainly focus on how graph diffusion models are utilized for generating molecules and proteins, but also cover other cases, including materials design. Moreover, we discuss the issue of evaluating diffusion models in the graph domain and the existing challenges.
Simplicity in cosmology: add virialisation, remove Lambda, keep classical GR ; Present-day extragalactic observations are mostly rather well-modelled by a general-relativistic model, the Lambda CDM model. The model appears to surpass the limits of known physics by requiring that the Universe be dominated by dark energy. However, the model sacrifices physical simplicity in favour of applied mathematical simplicity. A physically simpler, general-relativistic alternative to the Lambda CDM model is described here, along with preliminary observational checks. Thus, it will be argued that extragalactic observations such as the distance-modulus-redshift relation of type Ia supernovae are well-modelled within classical general relativity, without the addition of new physics.
TC-VAE: Uncovering Out-of-Distribution Data Generative Factors ; Uncovering data generative factors is the ultimate goal of disentanglement learning. Although many works have proposed disentangling generative models able to uncover the underlying generative factors of a dataset, so far none has been able to uncover OOD generative factors, i.e., factors of variation that are not explicitly shown in the dataset. Moreover, the datasets used to validate these models are synthetically generated using a balanced mixture of some predefined generative factors, implicitly assuming that generative factors are uniformly distributed across the datasets. However, real datasets do not exhibit this property. In this work we analyse the effect of using datasets with unbalanced generative factors, providing qualitative and quantitative results for widely used generative models. Moreover, we propose TC-VAE, a generative model optimized using a lower bound of the joint total correlation between the learned latent representations and the input data. We show that the proposed model is able to uncover OOD generative factors on different datasets and outperforms, on average, the related baselines in terms of downstream disentanglement metrics.
Generative Meta-Learning for Zero-Shot Relation Triplet Extraction ; The zero-shot relation triplet extraction (ZeroRTE) task aims to extract relation triplets from a piece of text with unseen relation types. The seminal work adopts a pretrained generative model to generate synthetic samples for new relations. However, current generative models lack an optimization process for model generalization across different tasks during training, and thus have limited generalization capability. For this reason, we propose a novel generative meta-learning framework which exploits the 'learning-to-learn' ability of meta-learning to boost the generalization capability of generative models. Specifically, we first design a task-aware generative model which can learn general knowledge by forcing the optimization process to be conducted across multiple tasks. Based on it, we then present three generative meta-learning approaches designated for three typical meta-learning categories. Extensive experimental results demonstrate that our framework achieves a new state-of-the-art performance for the ZeroRTE task.
MolDiff: Addressing the Atom-Bond Inconsistency Problem in 3D Molecule Diffusion Generation ; Deep generative models have recently achieved superior performance in 3D molecule generation. Most of them first generate atoms and then add chemical bonds based on the generated atoms in a post-processing manner. However, there might be no valid bond solution for the atoms generated this way, since their locations are fixed without considering potential bonds. We define this problem as the atom-bond inconsistency problem and claim it is the main reason current approaches generate unrealistic 3D molecules. To overcome this problem, we propose a new diffusion model called MolDiff which can generate atoms and bonds simultaneously while still maintaining their consistency by explicitly modeling the dependence between their relationships. We evaluated the generation ability of our proposed model and the quality of the generated molecules using criteria related to both geometry and chemical properties. The empirical studies showed that our model outperforms previous approaches, achieving a threefold improvement in success rate and generating molecules with significantly better quality.
A Systematic Survey on Deep Generative Models for Graph Generation ; Graphs are important data representations for describing objects and their relationships, and they appear in a wide diversity of real-world scenarios. As a critical problem in this area, graph generation considers learning the distribution of given graphs and generating more novel graphs. Owing to their wide range of applications, generative models for graphs have a rich history; traditionally, however, they are hand-crafted and capable of modeling only a few statistical properties of graphs. Recent advances in deep generative models for graph generation are an important step towards improving the fidelity of generated graphs and pave the way for new kinds of applications. This article provides an extensive overview of the literature in the field of deep generative models for graph generation. First, the formal definition of deep generative models for graph generation and the preliminary knowledge are provided. Second, taxonomies of deep generative models for unconditional and conditional graph generation are proposed, respectively; the existing works in each category are compared and analyzed. After that, an overview of the evaluation metrics in this specific domain is provided. Finally, the applications that deep graph generation enables are summarized and five promising future research directions are highlighted.
Finding Reproduction Numbers for Epidemic Models and Predator-Prey Models of Arbitrary Finite Dimension Using the Generalized Linear Chain Trick ; Reproduction numbers, like the basic reproduction number $\mathcal{R}_0$, play an important role in the analysis and application of dynamic models, including contagion models and ecological population models. One difficulty in deriving these quantities is that they must be computed on a model-by-model basis, since it is typically impractical to obtain general reproduction number expressions applicable to a family of related models, especially if these are of different dimensions. For example, this is typically the case for SIR-type infectious disease models derived using the linear chain trick (LCT). Here we show how to find general reproduction number expressions for such model families (which vary in their number of state variables) using the next generation operator approach in conjunction with the generalized linear chain trick (GLCT). We further show how the GLCT enables modelers to draw insights from these results by leveraging theory and intuition from continuous time Markov chains (CTMCs) and their absorption time distributions (i.e., phase-type probability distributions). To do this, we first review the GLCT and other connections between mean-field ODE model assumptions, CTMCs, and phase-type distributions. We then apply this technique to find reproduction numbers for two sets of models: a family of generalized SEIRS models of arbitrary finite dimension, and a generalized family of finite dimensional predator-prey Rosenzweig-MacArthur type models. These results highlight the utility of the GLCT for the derivation and analysis of mean-field ODE models, especially when used in conjunction with theory from CTMCs and their associated phase-type distributions.
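For concreteness, the next-generation-matrix computation underlying such reproduction numbers can be sketched on the simplest SEIR case (one E and one I stage); the GLCT generalizes exactly this construction to arbitrary chains of stages. Here F collects new-infection terms and V the transition terms of the infected subsystem, linearized at the disease-free state, and $\mathcal{R}_0$ is the spectral radius of $FV^{-1}$. Parameter values are illustrative only.

```python
import numpy as np

beta, sigma, gamma = 0.6, 1/4, 1/7   # transmission, E->I progression, recovery rates
# Infected compartments ordered (E, I).
F = np.array([[0.0, beta],           # new infections enter E via contact with I
              [0.0, 0.0]])
V = np.array([[sigma, 0.0],          # outflow from E
              [-sigma, gamma]])      # inflow to I from E, outflow via recovery
K = F @ np.linalg.inv(V)             # next generation matrix
R0 = max(abs(np.linalg.eigvals(K)))
print(f"R0 = {R0:.3f}")              # equals beta/gamma for this model (4.2 here)
```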
PTSG: a test generation tool based on the extended finite state machine ; The extended finite state machine (EFSM) is one of the most popular modeling approaches for model-based testing. However, EFSM-based test case generation is susceptible to the infeasible (inexecutable) path problem, which stems from conflicts between the predicates (guards) of transitions in a path. Therefore, in order to derive feasible test cases, a test generation algorithm needs to dynamically acquire information about the model and verify the feasibility of a generated test path through simulated execution of the model. The traditional method of constructing executable models by hard-coding each EFSM model under test has limitations: it is inflexible, time-consuming, and error-prone. To address this issue, this paper develops an open-source test generation tool for testing EFSM-specified systems, PTSG, to support the automatic generation of executable test cases. It decouples the EFSM model description, parsing, and simulated execution from the test generation algorithm, which can effectively improve the efficiency and quality of test generation. In particular, PTSG first uses a well-designed JSON syntax to describe the specific EFSM under test. Then, based on the model description file, it uses lexical and syntactic parsers to dynamically extract model information and construct various model objects in memory, such as state configurations and transitions. Finally, the tool provides a series of service interfaces to support model information acquisition, transition feasibility evaluation, and model simulation. A case study of test sequence generation for the SCP protocol model demonstrates the capability and promise of PTSG for generating executable test cases.
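The core mechanism can be illustrated with a toy example: an EFSM described in JSON, parsed into transition objects, and a candidate path checked for feasibility by simulating guard evaluation and variable updates. The JSON schema and the guard/update syntax below are invented for illustration; PTSG's actual format differs.

```python
import json

MODEL = json.loads("""
{
  "initial": "Idle",
  "variables": {"retries": 0},
  "transitions": [
    {"name": "t1", "src": "Idle", "dst": "Sending",
     "guard": "retries < 3", "update": "retries = retries + 1"},
    {"name": "t2", "src": "Sending", "dst": "Idle",
     "guard": "True", "update": "retries = 0"}
  ]
}
""")

def path_is_feasible(model, path):
    """Simulate a sequence of transition names; reject on any failed guard."""
    state, env = model["initial"], dict(model["variables"])
    by_name = {t["name"]: t for t in model["transitions"]}
    for name in path:
        t = by_name[name]
        # Guards/updates are plain Python expressions purely for illustration.
        if t["src"] != state or not eval(t["guard"], {}, env):
            return False
        exec(t["update"], {}, env)   # apply variable updates
        state = t["dst"]
    return True

print(path_is_feasible(MODEL, ["t1", "t2", "t1"]))  # True
```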
Non-Proportional Odds Models are Widely Dispensable: Sparser Modeling Based on Parametric and Additive Location-Shift Approaches ; The potential of location-shift models to find adequate models between the proportional odds model and the non-proportional odds model is investigated. It is demonstrated that these models are very useful in ordinal modeling. While proportional odds models are often too simple, non-proportional odds models are typically unnecessarily complicated and seem widely dispensable. The class of location-shift models is also extended to allow for smooth effects. The additive location-shift model contains two functions for each explanatory variable, one for the location and one for the dispersion. It is much sparser than hard-to-handle additive models with category-specific covariate functions, but more flexible than common vector generalized additive models.
Hybrid modeling: Applications in real-time diagnosis ; Reduced-order models that accurately abstract high-fidelity models and enable faster simulation are vital for real-time, model-based diagnosis applications. In this paper, we outline a novel hybrid modeling approach that combines machine-learning-inspired models and physics-based models to generate reduced-order models from high-fidelity models. We use such models for real-time diagnosis applications. Specifically, we have developed machine-learning-inspired representations to generate reduced-order component models that preserve, in part, the physical interpretation of the original high-fidelity component models. To ensure the accuracy, scalability, and numerical stability of the learning algorithms when training the reduced-order models, we use optimization platforms featuring automatic differentiation. Training data is generated by simulating the high-fidelity model. We showcase our approach in the context of fault diagnosis of a rail switch system. We present three new model abstractions whose complexities, both in the number of equations and in simulation time, are two orders of magnitude smaller than the complexity of the high-fidelity model. The numerical experiments and results demonstrate the efficacy of the proposed hybrid modeling approach.
Reducing the Computational Cost of Deep Generative Models with Binary Neural Networks ; Deep generative models provide a powerful set of tools to understand real-world data. But as these models improve, they increase in size and complexity, so their computational cost in memory and execution time grows. Using binary weights in neural networks is one method which has shown promise in reducing this cost. However, whether binary neural networks can be used in generative models is an open problem. In this work we show, for the first time, that we can successfully train generative models which utilize binary neural networks. This reduces the computational cost of the models massively. We develop a new class of binary weight normalization, and provide insights for architecture designs of these binarized generative models. We demonstrate that two state-of-the-art deep generative models, the ResNet VAE and Flow models, can be binarized effectively using these techniques. We train binary models that achieve loss values close to those of the regular models but are 90-94% smaller in size, and also allow significant speed-ups in execution time.
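A minimal PyTorch sketch of the general mechanism behind binarized weights: a sign function in the forward pass, a straight-through estimator (STE) in the backward pass, and a learned real-valued per-output scale in the spirit of weight normalization. This illustrates the standard technique, not the paper's exact "binary weight normalization" scheme.

```python
import torch
import torch.nn as nn

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w):
        return torch.sign(w)                 # forward: +/-1 weights
    @staticmethod
    def backward(ctx, g):
        return g                             # backward: pass gradient straight through

class BinaryLinear(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.w = nn.Parameter(torch.randn(d_out, d_in) * 0.1)
        self.scale = nn.Parameter(torch.ones(d_out, 1))  # real-valued scale
    def forward(self, x):
        wb = BinarizeSTE.apply(self.w) * self.scale      # scaled binary weights
        return x @ wb.t()

layer = BinaryLinear(16, 8)
y = layer(torch.randn(4, 16))
y.sum().backward()                            # gradients reach self.w via the STE
print(y.shape, layer.w.grad is not None)
```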
Heuristic-based Mining of Service Behavioral Models from Interaction Traces ; Software behavioral models have proven useful for emulating and testing software systems. Many techniques have been proposed to infer behavioral models of software systems from their interaction traces. The quality of the inferred model is critical to its successful use. While generalization is necessary to deduce concise behavioral models, existing inference techniques, in general, over-generalize what behavior is valid. Imprecise models include many spurious behaviors, and thus compromise the effectiveness of their use. In this paper, we propose a novel technique that increases the accuracy of the behavioral model inferred from interaction traces. The essence of our approach is heuristic-based generalization and truthful minimization. The set of heuristics includes patterns to match input traces and generalize them towards concise model representations. Furthermore, we adopt a truthful minimization technique to merge these generalized traces. The key insight of our approach is to infer a concise behavioral model without compromising its accuracy. We present an empirical evaluation of how our approach improves upon the state-of-the-art specification inference techniques. The results show that our approach mines models with 100% precision and recall, with a limited computation overhead.
LLM-grounded Diffusion: Enhancing Prompt Understanding of Text-to-Image Diffusion Models with Large Language Models ; Recent advancements in text-to-image generation with diffusion models have yielded remarkable results synthesizing highly realistic and diverse images. However, these models still encounter difficulties when generating images from prompts that demand spatial or common sense reasoning. We propose to equip diffusion models with enhanced reasoning capabilities by using off-the-shelf pretrained large language models (LLMs) in a novel two-stage generation process. First, we adapt an LLM to be a text-guided layout generator through in-context learning. When provided with an image prompt, an LLM outputs a scene layout in the form of bounding boxes along with corresponding individual descriptions. Second, we steer a diffusion model with a novel controller to generate images conditioned on the layout. Both stages utilize frozen pretrained models without any LLM or diffusion model parameter optimization. We validate the superiority of our design by demonstrating its ability to outperform the base diffusion model in accurately generating images according to prompts that necessitate both language and spatial reasoning. Additionally, our method naturally allows dialog-based scene specification and is able to handle prompts in a language that is not well-supported by the underlying diffusion model.
Physics-guided training of GAN to improve accuracy in airfoil design synthesis ; Generative adversarial networks (GANs) have recently been used for the design synthesis of mechanical shapes. A GAN sometimes outputs physically unreasonable shapes. For example, when a GAN model is trained to output airfoil shapes that exhibit required aerodynamic performance, significant errors occur in the performance values. This is because the GAN model only considers the data but does not consider the aerodynamic equations that lie behind the data. This paper proposes physics-guided training of the GAN model to guide the model to learn physical validity. Physical validity is computed using general-purpose software located outside the neural network model. Such general-purpose software cannot be used in physics-informed neural network frameworks, because physical equations must be implemented inside the neural network models. Additionally, a limitation of generative models is that the output data are similar to the training data, so completely new shapes cannot be generated. However, because the proposed model is guided by a physical model and does not use a training dataset, it can generate completely new shapes. Numerical experiments show that the proposed model drastically improves the accuracy. Moreover, the output shapes differ from those of the training dataset but still satisfy physical validity, overcoming the limitations of existing GAN models.
A note on locally optimal designs for generalized linear models with restricted support ; Optimal designs for generalized linear models require prior knowledge of the regression parameters. At certain values of the parameters, we propose particular assumptions which allow us to derive a locally optimal design for a model without intercept from a locally optimal design for the corresponding model with intercept, and vice versa. Applications to Poisson and logistic models and extensions to nonlinear models are provided.
A Generalized Framework of Sequence Generation with Application to Undirected Sequence Models ; Undirected neural sequence models such as BERT (Devlin et al., 2019) have received renewed interest due to their success on discriminative natural language understanding tasks such as question answering and natural language inference. The problem of generating sequences directly from these models has received relatively little attention, in part because generating from undirected models departs significantly from conventional monotonic generation in directed sequence models. We investigate this problem by proposing a generalized model of sequence generation that unifies decoding in directed and undirected models. The proposed framework models the process of generation rather than the resulting sequence, and under this framework, we derive various neural sequence models as special cases, such as autoregressive, semi-autoregressive, and refinement-based non-autoregressive models. This unification enables us to adapt decoding algorithms originally developed for directed sequence models to undirected sequence models. We demonstrate this by evaluating various handcrafted and learned decoding strategies on a BERT-like machine translation model (Lample & Conneau, 2019). The proposed approach achieves constant-time translation results on par with linear-time translation results from the same undirected sequence model, while both are competitive with the state-of-the-art on WMT'14 English-German translation.
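The "generation as a process" view can be sketched as a loop that (1) selects which positions to reveal and (2) fills them, so that left-to-right and easy-first decoding are simply different position-selection policies. The scorer below is a random stand-in for a masked language model's per-position confidences; it illustrates the unifying decode loop, not the paper's learned coordinate-selection models.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, MASK, L = 20, -1, 8

def fake_mlm_scores(seq):
    """Stand-in for an MLM: per-position logits over the vocabulary."""
    return rng.random((len(seq), VOCAB))

def decode(policy, steps_per_iter=1):
    seq = np.full(L, MASK)
    while (seq == MASK).any():
        logits = fake_mlm_scores(seq)
        masked = np.flatnonzero(seq == MASK)
        if policy == "left_to_right":          # reveal leftmost masked slot(s)
            pick = masked[:steps_per_iter]
        elif policy == "easy_first":           # reveal the most confident slots
            conf = logits[masked].max(axis=1)
            pick = masked[np.argsort(-conf)[:steps_per_iter]]
        seq[pick] = logits[pick].argmax(axis=1)
    return seq

print(decode("left_to_right"))                 # autoregressive special case
print(decode("easy_first", steps_per_iter=2))  # semi-autoregressive flavour
```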
CoderEval: A Benchmark of Pragmatic Code Generation with Generative Pre-trained Models ; Code generation models based on the pre-training and fine-tuning paradigm have been increasingly attempted by both academia and industry, resulting in well-known industrial models such as Codex, CodeGen, and PanGu-Coder. To validate the performance of these models, multiple existing benchmarks (e.g., AiXBench and HumanEval) have been proposed, which include only cases of generating a standalone function, i.e., a function that invokes or accesses only built-in functions and standard libraries. However, standalone functions constitute only about 30% of the functions in real open-source projects. To assess a model's performance for pragmatic code generation (i.e., code generation for real settings of open-source or proprietary code), in this paper we propose a benchmark named CoderEval for pragmatic code generation with generative pre-trained models. Compared with the widely used HumanEval benchmark from OpenAI, CoderEval can be used to assess the performance of models on pragmatic code generation beyond just generating standalone functions. Through the evaluation of three publicly available models (CodeGen, PanGu-Coder, and Codex) on CoderEval, we analyze and discuss the current progress and future directions of pragmatic code generation with generative pre-trained models.
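Benchmarks of this kind typically report the unbiased pass@k estimator of Chen et al. (2021): given n sampled solutions per task of which c pass the tests, pass@k estimates the probability that at least one of k samples is correct. A minimal sketch (the specific n, c values are illustrative, and whether CoderEval uses exactly this estimator is an assumption here):

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimator 1 - C(n-c, k)/C(n, k), computed stably as a product."""
    if n - c < k:
        return 1.0               # too few failures to fill k samples: certain success
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# e.g. 200 samples per task, 23 of which pass the unit tests:
for k in (1, 5, 10):
    print(k, round(pass_at_k(200, 23, k), 4))
```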
ImagenHub: Standardizing the evaluation of conditional image generation models ; Recently, a myriad of conditional image generation and editing models have been developed to serve different downstream tasks, including text-to-image generation, text-guided image editing, subject-driven image generation, control-guided image generation, etc. However, we observe huge inconsistencies in experimental conditions (datasets, inference, and evaluation metrics) that render fair comparisons difficult. This paper proposes ImagenHub, a one-stop library to standardize the inference and evaluation of all the conditional image generation models. First, we define seven prominent tasks and curate high-quality evaluation datasets for them. Second, we build a unified inference pipeline to ensure fair comparison. Third, we design two human evaluation scores, Semantic Consistency and Perceptual Quality, along with comprehensive guidelines to evaluate generated images. We train expert raters to evaluate the model outputs based on the proposed metrics. Our human evaluation achieves a high inter-worker agreement: on 76% of the models, Krippendorff's alpha is higher than 0.4. We comprehensively evaluated a total of around 30 models and observed three key takeaways: (1) the existing models' performance is generally unsatisfying except for text-guided image generation and subject-driven image generation, with 74% of models achieving an overall score lower than 0.5; (2) we examined the claims from published papers and found that 83% of them hold, with a few exceptions; (3) none of the existing automatic metrics has a Spearman's correlation higher than 0.2, except for subject-driven image generation. Moving forward, we will continue our efforts to evaluate newly published models and update our leaderboard to keep track of the progress in conditional image generation.
Generative models and Bayesian inversion using Laplace approximation ; The Bayesian approach to solving inverse problems relies on the choice of a prior. This critical ingredient allows the formulation of expert knowledge or physical constraints in a probabilistic fashion and plays an important role in the success of the inference. Recently, Bayesian inverse problems have been solved using generative models as highly informative priors. Generative models are a popular tool in machine learning for generating data whose properties closely resemble those of a given database. Typically, the generated distribution of data is embedded in a low-dimensional manifold. For the inverse problem, a generative model is trained on a database that reflects the properties of the sought solution, such as typical structures of the tissue in the human brain in magnetic resonance (MR) imaging. The inference is carried out in the low-dimensional manifold determined by the generative model, which strongly reduces the dimensionality of the inverse problem. However, this procedure produces a posterior that admits no Lebesgue density in the actual variables, and the accuracy reached can strongly depend on the quality of the generative model. For linear Gaussian models we explore an alternative Bayesian inference based on probabilistic generative models which is carried out in the original high-dimensional space. A Laplace approximation is employed to analytically derive the required prior probability density function induced by the generative model. Properties of the resulting inference are investigated. Specifically, we show that derived Bayes estimates are consistent, in contrast to the approach employing the low-dimensional manifold of the generative model. The MNIST data set is used to construct numerical experiments which confirm our theoretical findings.
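A toy sketch of the linear Gaussian setting: linearize a generator g(z) around a point z0 (Laplace-style) to obtain a Gaussian prior on x in the full data space, x ~ N(g(z0), J J^T + delta*I) with J the Jacobian of g, then combine it with observations y = A x + noise in closed form. The generator, dimensions, and linearization point are stand-ins, not the paper's construction in detail.

```python
import numpy as np

rng = np.random.default_rng(1)
dz, dx, dy = 3, 10, 6
W = rng.normal(size=(dx, dz))

def g(z):                                    # toy nonlinear generator
    return np.tanh(W @ z)

def jacobian(f, z, eps=1e-5):                # finite-difference Jacobian of f at z
    cols = [(f(z + eps * e) - f(z - eps * e)) / (2 * eps) for e in np.eye(len(z))]
    return np.stack(cols, axis=1)

z0 = np.zeros(dz)
J = jacobian(g, z0)
mu_x = g(z0)
Sigma_x = J @ J.T + 1e-3 * np.eye(dx)        # Laplace-induced prior covariance on x

A = rng.normal(size=(dy, dx))
sigma2 = 0.05                                # observation noise variance
y = A @ g(rng.normal(size=dz)) + np.sqrt(sigma2) * rng.normal(size=dy)

# Conjugate update: Gaussian posterior of x given y in the ambient space.
K = Sigma_x @ A.T @ np.linalg.inv(A @ Sigma_x @ A.T + sigma2 * np.eye(dy))
x_post = mu_x + K @ (y - A @ mu_x)
print(x_post.shape)                          # Bayes estimate without the manifold restriction
```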
AMPERE: AMR-Aware Prefix for Generation-Based Event Argument Extraction Model ; Event argument extraction (EAE) identifies event arguments and their specific roles for a given event. Recent advancements in generation-based EAE models have shown great performance and generalizability over classification-based models. However, existing generation-based EAE models mostly focus on problem reformulation and prompt design, without incorporating additional information that has been shown to be effective for classification-based models, such as the abstract meaning representation (AMR) of the input passages. Incorporating such information into generation-based models is challenging due to the heterogeneous nature of the natural language form prevalently used in generation-based models and the structured form of AMRs. In this work, we study strategies to incorporate AMR into generation-based EAE models. We propose AMPERE, which generates AMR-aware prefixes for every layer of the generation model. Thus, the prefix introduces AMR information to the generation-based EAE model and then improves the generation. We also introduce an adjusted copy mechanism to AMPERE to help overcome potential noise brought by the AMR graph. Comprehensive experiments and analyses on the ACE2005 and ERE datasets show that AMPERE can achieve 4%-10% absolute F1 score improvements with reduced training data, and it is in general powerful across different training sizes.
Probability Link Models with Symmetric Information Divergence ; This paper introduces link functions for transforming one probability distribution to another such that the Kullback-Leibler and Rényi divergences between the two distributions are symmetric. Two general classes of link models are proposed. The first model links two survival functions and is applicable to models such as the proportional odds and change point models, which are used in survival analysis and reliability modeling. A prototype application involving the proportional odds model demonstrates advantages of symmetric divergence measures over asymmetric measures for assessing the efficacy of features and for model averaging purposes. The advantages include providing unique ranks for models and unique information weights for model averaging, with half the computation required by asymmetric divergences. The second model links two cumulative probability distribution functions. This model produces generalized location models, which are continuous counterparts of binary probability models such as the probit and logit models. Examples include the generalized probit and logit models, which have appeared in the survival analysis literature, and a generalized Laplace model and a generalized Student-t model, which are survival time models corresponding to the respective binary probability models. Lastly, extensions to symmetric divergence between survival functions and conditions for copula dependence information are presented.
On the Equivalence Between High-Order Network-Influence Frameworks: General-Threshold, Hypergraph-Triggering, and Logic-Triggering Models ; In this paper, we study several high-order network-influence-propagation frameworks and their connection to classical network diffusion frameworks such as the triggering model and the general threshold model. In one framework, we use hyperedges to represent many-to-one influence (the collective influence of a group of nodes on another node) and define the hypergraph triggering model as a natural extension of the classical triggering model. In another framework, we use monotone Boolean functions to capture the diverse logic underlying many-to-one influence behaviors, and extend the triggering model to the Boolean-function triggering model. We prove that the Boolean-function triggering model, even with refined details of influence logic, is equivalent to the hypergraph triggering model, and both are equivalent to the general threshold model. Moreover, the general threshold model is optimal in the number of parameters, among all models with the same expressive power. We further extend these three equivalent models by introducing correlations among influence propagations on different nodes. Surprisingly, we discover that while the correlated hypergraph-based model is still equivalent to the correlated Boolean-function-based model, the correlated general threshold model is more restrictive than the two high-order models. Our study sheds light on high-order network-influence propagation by providing new insight into group influence behaviors in existing models, as well as diverse modeling tools for understanding influence propagation in networks.
SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient ; As a new way of training generative models, Generative Adversarial Nets (GAN), which use a discriminative model to guide the training of the generative model, have enjoyed considerable success in generating real-valued data. However, they have limitations when the goal is to generate sequences of discrete tokens. A major reason is that the discrete outputs from the generative model make it difficult to pass the gradient update from the discriminative model to the generative model. Also, the discriminative model can only assess a complete sequence, while for a partially generated sequence it is non-trivial to balance its current score and the future score once the entire sequence has been generated. In this paper, we propose a sequence generation framework, called SeqGAN, to solve these problems. Modeling the data generator as a stochastic policy in reinforcement learning (RL), SeqGAN bypasses the generator differentiation problem by directly performing gradient policy updates. The RL reward signal comes from the GAN discriminator judging a complete sequence, and is passed back to the intermediate state-action steps using Monte Carlo search. Extensive experiments on synthetic data and real-world tasks demonstrate significant improvements over strong baselines.
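The core update can be sketched in a few lines of PyTorch: treat the generator as a stochastic policy, sample a discrete sequence, score the complete sequence with the discriminator, and apply REINFORCE. For brevity, the intermediate Monte Carlo rollouts are omitted here and the terminal reward is spread uniformly over all steps; all architectures are toy stand-ins.

```python
import torch
import torch.nn as nn

V, T, H = 12, 6, 32                      # vocab size, sequence length, hidden dim
gen_rnn = nn.GRU(V, H, batch_first=True)
gen_out = nn.Linear(H, V)
disc = nn.Sequential(nn.Linear(T * V, 64), nn.ReLU(),
                     nn.Linear(64, 1), nn.Sigmoid())
opt = torch.optim.Adam(list(gen_rnn.parameters()) + list(gen_out.parameters()), 1e-3)

x = torch.zeros(1, 1, V)                 # start token
h, log_probs, tokens = None, [], []
for _ in range(T):                       # sample one sequence from the policy
    o, h = gen_rnn(x, h)
    dist = torch.distributions.Categorical(logits=gen_out(o[:, -1]))
    a = dist.sample()
    log_probs.append(dist.log_prob(a))
    tokens.append(a)
    x = nn.functional.one_hot(a, V).float().unsqueeze(1)

seq = nn.functional.one_hot(torch.stack(tokens, 1), V).float()   # (1, T, V)
reward = disc(seq.flatten(1)).detach()   # GAN discriminator as the reward signal
loss = -(torch.stack(log_probs).sum() * reward)  # REINFORCE policy gradient
opt.zero_grad(); loss.sum().backward(); opt.step()
print(float(reward))
```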
One-Shot Generalization in Deep Generative Models ; Humans have an impressive ability to reason about new concepts and experiences from just a single example. In particular, humans have an ability for one-shot generalization: an ability to encounter a new concept, understand its structure, and then be able to generate compelling alternative variations of the concept. We develop machine learning systems with this important capacity by developing new deep generative models, models that combine the representational power of deep learning with the inferential power of Bayesian reasoning. We develop a class of sequential generative models that are built on the principles of feedback and attention. These two characteristics lead to generative models that are among the state-of-the-art in density estimation and image generation. We demonstrate the one-shot generalization ability of our models using three tasks: unconditional sampling, generating new exemplars of a given concept, and generating new exemplars of a family of concepts. In all cases our models are able to generate compelling and diverse samples, having seen new examples just once, providing an important class of general-purpose models for one-shot machine learning.
On the Biometric Capacity of Generative Face Models ; There has been tremendous progress in generating realistic faces with high fidelity over the past few years. Despite this progress, a crucial question remains unanswered: given a generative face model, how many unique identities can it generate? In other words, what is the biometric capacity of the generative face model? A scientific basis for answering this question will benefit evaluating and comparing different generative face models and establish an upper bound on their scalability. This paper proposes a statistical approach to estimate the biometric capacity of generated face images in a hyperspherical feature space. We employ our approach on multiple generative models, including unconditional generators like StyleGAN, Latent Diffusion Model, and Generated Photos, as well as DCFace, a class-conditional generator. We also estimate capacity w.r.t. demographic attributes such as gender and age. Our capacity estimates indicate that (a) under the ArcFace representation at a false acceptance rate (FAR) of 0.1%, StyleGAN3 and DCFace have capacity upper bounds of $1.43\times10^6$ and $1.190\times10^4$, respectively; (b) the capacity reduces drastically as we lower the desired FAR, with estimates of $1.796\times10^4$ and 562 at FARs of 1% and 10%, respectively, for StyleGAN3; (c) there is no discernible disparity in capacity w.r.t. gender; and (d) for some generative models, there is an appreciable disparity in capacity w.r.t. age. Code is available at https://github.com/human-analysis/capacity-generative-face-models.
Human Action Generation with Generative Adversarial Networks ; Inspired by the recent advances in generative models, we introduce a human action generation model in order to generate a consecutive sequence of human motions to formulate novel actions. We propose a framework of an autoencoder and a generative adversarial network (GAN) to produce multiple and consecutive human actions conditioned on the initial state and the given class label. The proposed model is trained in an end-to-end fashion, where the autoencoder is jointly trained with the GAN. The model is trained on the NTU RGB+D dataset and we show that the proposed model can generate different styles of actions. Moreover, the model can successfully generate a sequence of novel actions given different action labels as conditions. The conventional human action prediction and generation models lack those features, which are essential for practical applications.
Data augmentation for low-resource sentiment analysis using generative adversarial networks ; Sentiment analysis is a task that may suffer from a lack of data in certain cases, as the datasets are often generated and annotated by humans. In cases where data is inadequate for training discriminative models, generative models may aid training via data augmentation. Generative Adversarial Networks (GANs) are one such model that has advanced the state of the art in several tasks, including image and text generation. In this paper, I train GAN models on low-resource datasets, then use them for the purpose of data augmentation towards improving sentiment classifier generalization. Given the constraints of limited data, I explore various techniques to train the GAN models. I also present an analysis of the quality of generated GAN data as more training data for the GAN is made available. In this analysis, the generated data is evaluated as a test set against a model trained on real data points, as well as a training set to train classification models. Finally, I also conduct a visual analysis by projecting the generated and the real data into a two-dimensional space using the t-Distributed Stochastic Neighbor Embedding (t-SNE) method.
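The visual-analysis step can be sketched directly with scikit-learn: embed real and generated feature vectors jointly into 2D with t-SNE and plot them together. The "generated" data below is synthetic noise standing in for GAN samples, and the feature dimensions are arbitrary.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(200, 50))
fake = rng.normal(0.3, 1.1, size=(200, 50))     # stand-in for GAN output

emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(
    np.vstack([real, fake]))                     # joint 2D embedding

plt.scatter(*emb[:200].T, s=8, label="real")
plt.scatter(*emb[200:].T, s=8, label="generated")
plt.legend(); plt.title("t-SNE of real vs. generated samples")
plt.show()
```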
Adversarial Code Learning for Image Generation ; We introduce the adversarial code learning (ACL) module, which improves overall image generation performance for several types of deep models. Instead of performing posterior distribution modeling in the pixel space of generators, ACL aims to jointly learn a latent code with another image encoder/inference net, with a prior noise as its input. We conduct the learning in an adversarial learning process, which bears a close resemblance to the original GAN but again shifts the learning from image spaces to prior and latent code spaces. ACL is a portable module that brings up much more flexibility and possibilities in generative model designs. First, it allows flexibility to convert non-generative models like autoencoders and standard classification models to decent generative models. Second, it enhances existing GANs' performance by generating meaningful codes and images from any part of the prior. We have incorporated our ACL module with the aforementioned frameworks and have performed experiments on synthetic, MNIST, CIFAR-10, and CelebA datasets. Our models have achieved significant improvements which demonstrate the generality of our approach for image generation tasks.
Self-planning Code Generation with Large Language Models ; Although large language models have demonstrated impressive ability in code generation, they still struggle to address the complicated intents provided by humans. It is widely acknowledged that humans typically employ planning to decompose complex problems and schedule solution steps prior to implementation. Thus we introduce planning into code generation to help the model understand complex intent and reduce the difficulty of problem solving. This paper proposes a self-planning code generation method with large language models, which consists of two phases, namely a planning phase and an implementation phase. Specifically, in the planning phase, the language model plans out the solution steps from the intent combined with in-context learning. Then it enters the implementation phase, where the model generates code step by step, guided by the solution steps. The effectiveness of self-planning code generation has been rigorously evaluated on multiple code generation datasets, and the results demonstrate a marked superiority over naive direct generation approaches with language models. The improvement in performance is substantial, highlighting the significance of self-planning in code generation tasks.
FullFormer: Generating Shapes Inside Shapes ; Implicit generative models have been widely employed to model 3D data and have recently proven successful in encoding and generating high-quality 3D shapes. This work builds upon these models and alleviates current limitations by presenting the first implicit generative model that facilitates the generation of complex 3D shapes with rich internal geometric details. To achieve this, our model uses unsigned distance fields to represent nested 3D surfaces, allowing learning from non-watertight mesh data. We propose a transformer-based autoregressive model for 3D shape generation that leverages context-rich tokens from vector-quantized shape embeddings. The generated tokens are decoded into an unsigned distance field which is rendered into a novel 3D shape exhibiting a rich internal structure. We demonstrate that our model achieves state-of-the-art point cloud generation results on the popular 'Cars', 'Planes', and 'Chairs' classes of the ShapeNet dataset. Additionally, we curate a dataset that exclusively comprises shapes with realistic internal details from the 'Cars' class of ShapeNet and demonstrate our method's efficacy in generating these shapes with internal geometry.
Efficient and Degree-Guided Graph Generation via Discrete Diffusion Modeling ; Diffusion-based generative graph models have been proven effective in generating high-quality small graphs. However, they need to be more scalable for generating large graphs containing thousands of nodes with desired graph statistics. In this work, we propose EDGE, a new diffusion-based generative graph model that addresses generative tasks with large graphs. To improve computation efficiency, we encourage graph sparsity by using a discrete diffusion process that randomly removes edges at each time step and finally obtains an empty graph. EDGE only focuses on a portion of the nodes in the graph at each denoising step. It makes many fewer edge predictions than previous diffusion-based models. Moreover, EDGE admits explicit modeling of the node degrees of the graphs, further improving the model performance. The empirical study shows that EDGE is much more efficient than competing methods and can generate large graphs with thousands of nodes. It also outperforms baseline models in generation quality: graphs generated by our approach have graph statistics more similar to those of the training graphs.
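The forward (noising) direction of such a discrete process is easy to sketch: independently remove each remaining edge with a per-step probability, so the graph decays toward the empty graph; the trained model learns to reverse this. The edge density, step count, and removal schedule below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
n, steps, p_remove = 30, 10, 0.3

A = np.triu(rng.random((n, n)) < 0.2, k=1)       # random upper-triangular adjacency
A = A | A.T                                      # symmetrize (undirected graph)

for t in range(steps):
    keep = rng.random((n, n)) >= p_remove        # per-edge removal coin flips
    keep = np.triu(keep, 1)
    keep = keep | keep.T
    A = A & keep                                 # surviving edges after step t
    print(f"step {t+1}: {A.sum() // 2} edges remain")
```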
HumanNorm: Learning Normal Diffusion Model for High-quality and Realistic 3D Human Generation ; Recent text-to-3D methods employing diffusion models have made significant advancements in 3D human generation. However, these approaches face challenges due to the limitations of the text-to-image diffusion model, which lacks an understanding of 3D structures. Consequently, these methods struggle to achieve high-quality human generation, resulting in smooth geometry and cartoon-like appearances. In this paper, we observe that fine-tuning text-to-image diffusion models with normal maps enables their adaptation into text-to-normal diffusion models, which enhances the 2D perception of 3D geometry while preserving the priors learned from large-scale datasets. Therefore, we propose HumanNorm, a novel approach for high-quality and realistic 3D human generation by learning a normal diffusion model, including a normal-adapted diffusion model and a normal-aligned diffusion model. The normal-adapted diffusion model can generate high-fidelity normal maps corresponding to prompts with view-dependent text. The normal-aligned diffusion model learns to generate color images aligned with the normal maps, thereby transforming physical geometry details into realistic appearance. Leveraging the proposed normal diffusion model, we devise a progressive geometry generation strategy and a coarse-to-fine texture generation strategy to enhance the efficiency and robustness of 3D human generation. Comprehensive experiments substantiate our method's ability to generate 3D humans with intricate geometry and realistic appearances, significantly outperforming existing text-to-3D methods in both geometry and texture quality. The project page of HumanNorm is https://humannorm.github.io.
Intersecting delocalized p-branes ; The model considered in this paper generalizes a supergravity-type model to the case of delocalized membrane sources. A generalization of the intersecting p-brane solution with delocalized membranes is presented.
A Generalized Markov-Chain Modelling Approach to (1,λ)-ES Linear Optimization ; The manuscript generalizes several recent results of the second author concerning Markov-chain modelling of (1,λ)-ES linear optimization.
On the notions of dimension and transcendence degree for models of ZFC ; We define notions of generic dimension and generic transcendence degree between models of ZFC and give some examples.
On the notion of generic cut for models of ZFC ; We define the notion of generic cut between models of ZFC and give some examples.
LOGAN: Membership Inference Attacks Against Generative Models ; Generative models estimate the underlying distribution of a dataset to generate realistic samples according to that distribution. In this paper, we present the first membership inference attacks against generative models: given a data point, the adversary determines whether or not it was used to train the model. Our attacks leverage Generative Adversarial Networks (GANs), which combine a discriminative and a generative model, to detect overfitting and recognize inputs that were part of training datasets, using the discriminator's capacity to learn statistical differences in distributions. We present attacks based on both white-box and black-box access to the target model, against several state-of-the-art generative models, over datasets of complex representations of faces (LFW), objects (CIFAR-10), and medical images (Diabetic Retinopathy). We also discuss the sensitivity of the attacks to different training parameters and their robustness against mitigation strategies, finding that defenses are either ineffective or lead to significantly worse performance of the generative models in terms of training stability and/or sample quality.
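The white-box variant of the idea can be sketched in a few lines: if the adversary can query the GAN's discriminator, training members tend to receive higher scores due to overfitting, so ranking candidate points by discriminator output and taking the top-scoring ones recovers likely members. The "discriminator" below is a toy stand-in built to mimic an overfitted score.

```python
import numpy as np

rng = np.random.default_rng(0)
members = rng.normal(0, 1, size=(100, 5))        # points used for "training"
non_members = rng.normal(0, 1, size=(100, 5))

def discriminator(x):
    # Toy overfitted discriminator: score decays with distance to the member cloud.
    d = np.linalg.norm(x[:, None, :] - members[None, :, :], axis=-1).min(axis=1)
    return np.exp(-d) + 0.05 * rng.normal(size=len(x))

candidates = np.vstack([members, non_members])
scores = discriminator(candidates)
predicted = np.argsort(-scores)[:100]            # top-100 scores flagged as members
accuracy = np.mean(predicted < 100)              # fraction that are true members
print(f"attack accuracy: {accuracy:.2f}")
```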
Generative Models for Security: Attacks, Defenses, and Opportunities ; Generative models learn the distribution of data from a sample dataset and can then generate new data instances. Recent advances in deep learning have brought forth improvements in generative model architectures, and some state-of-the-art models can in some cases produce outputs realistic enough to fool humans. We survey recent research at the intersection of security and privacy and generative models. In particular, we discuss the use of generative models in adversarial machine learning, in helping automate or enhance existing attacks, and as building blocks for defenses in contexts such as intrusion detection, biometrics spoofing, and malware obfuscation. We also describe the use of generative models in diverse applications such as fairness in machine learning, privacy-preserving data synthesis, and steganography. Finally, we discuss new threats due to generative models: the creation of synthetic media such as deepfakes that can be used for disinformation.
An Integrated Approach for Keyphrase Generation via Exploring the Power of Retrieval and Extraction ; In this paper, we present a novel integrated approach for keyphrase generation (KG). Unlike previous works which are purely extractive or generative, we first propose a new multi-task learning framework that jointly learns an extractive model and a generative model. Besides extracting keyphrases, the output of the extractive model is also employed to rectify the copy probability distribution of the generative model, such that the generative model can better identify important contents from the given document. Moreover, we retrieve similar documents for the given document from the training data and use their associated keyphrases as external knowledge for the generative model to produce more accurate keyphrases. For further exploiting the power of extraction and retrieval, we propose a neural-based merging module to combine and re-rank the predicted keyphrases from the enhanced generative model, the extractive model, and the retrieved keyphrases. Experiments on the five KG benchmarks demonstrate that our integrated approach outperforms the state-of-the-art methods.
How Readable is Model-generated Code? Examining Readability and Visual Inspection of GitHub Copilot ; Background: Recent advancements in large language models have motivated the practical use of such models in code generation and program synthesis. However, little is known about the effects of such tools on code readability and visual attention in practice. Objective: In this paper, we focus on GitHub Copilot to address the issues of readability and visual inspection of model-generated code. Readability and low complexity are vital aspects of good source code, and visual inspection of generated code is important in light of automation bias. Method: Through a human experiment (n=21) we compare model-generated code to code written completely by human programmers. We use a combination of static code analysis and human annotators to assess code readability, and we use eye tracking to assess the visual inspection of code. Results: Our results suggest that model-generated code is comparable in complexity and readability to code written by human pair programmers. At the same time, eye tracking data suggests, to a statistically significant level, that programmers direct less visual attention to model-generated code. Conclusion: Our findings highlight that reading code is more important than ever, and programmers should beware of complacency and automation bias with model-generated code.
DiffusER: Discrete Diffusion via Edit-based Reconstruction ; In text generation, models that generate text from scratch one token at a time are currently the dominant paradigm. Despite being performant, these models lack the ability to revise existing text, which limits their usability in many practical scenarios. We look to address this with DiffusER (Diffusion via Edit-based Reconstruction), a new edit-based generative model for text based on denoising diffusion models, a class of models that use a Markov chain of denoising steps to incrementally generate data. DiffusER is not only a strong generative model in general, rivalling autoregressive models on several tasks spanning machine translation, summarization, and style transfer; it can also perform other varieties of generation that standard autoregressive models are not well-suited for. For instance, we demonstrate that DiffusER makes it possible for a user to condition generation on a prototype, or an incomplete sequence, and continue revising based on previous edit steps.
SeqDiffuSeq: Text Diffusion with Encoder-Decoder Transformers ; The diffusion model, a new generative modelling paradigm, has achieved great success in image, audio, and video generation. However, considering the discrete categorical nature of text, it is not trivial to extend continuous diffusion models to natural language, and text diffusion models are less studied. Sequence-to-sequence text generation is one of the essential natural language processing topics. In this work, we apply diffusion models to approach sequence-to-sequence text generation and explore whether the superior generation performance of diffusion models can transfer to the natural language domain. We propose SeqDiffuSeq, a text diffusion model for sequence-to-sequence generation. SeqDiffuSeq uses an encoder-decoder Transformer architecture to model the denoising function. To improve generation quality, SeqDiffuSeq combines the self-conditioning technique and a newly proposed adaptive noise schedule technique. The adaptive noise schedule keeps the difficulty of denoising evenly distributed across time steps and assigns exclusive noise schedules to tokens at different positional orders. Experimental results illustrate good performance on sequence-to-sequence generation in terms of text quality and inference time.
Counterfactual Edits for Generative Evaluation ; Evaluation of generative models has been an underrepresented field despite the surge of generative architectures. Most recent models are evaluated upon rather obsolete metrics which suffer from robustness issues, while being unable to assess further aspects of visual quality, such as compositionality and logic of synthesis. At the same time, the explainability of generative models remains a limited, though important, research direction, with several current attempts requiring access to the inner functionalities of generative models. Contrary to prior literature, we view generative models as a black box, and we propose a framework for the evaluation and explanation of synthesized results based on concepts instead of pixels. Our framework exploits knowledge-based counterfactual edits that underline which objects or attributes should be inserted, removed, or replaced from generated images to approach their ground truth conditioning. Moreover, global explanations produced by accumulating local edits can also reveal what concepts a model cannot generate in total. The application of our framework on various models designed for the challenging tasks of Story Visualization and Scene Synthesis verifies the power of our approach in the model-agnostic setting.
Generating symbolic music using diffusion models ; Denoising Diffusion Probabilistic models have emerged as simple yet very powerful generative models. Unlike other generative models, diffusion models do not suffer from mode collapse or require a discriminator to generate highquality samples. In this paper, a diffusion model that uses a binomial prior distribution to generate piano rolls is proposed. The paper also proposes an efficient method to train the model and generate samples. The generated music has coherence at time scales up to the length of the training piano roll segments. The paper demonstrates how this model is conditioned on the input and can be used to harmonize a given melody, complete an incomplete piano roll, or generate a variation of a given piece. The code is publicly shared to encourage the use and development of the method by the community.
A Framework for Demonstrating Practical Quantum Advantage: Racing Quantum against Classical Generative Models ; Generative modeling has seen a rising interest in both classical and quantum machine learning, and it represents a promising candidate for obtaining a practical quantum advantage in the near term. In this study, we build on a previously proposed framework for evaluating the generalization performance of generative models, and we establish the first quantitative comparative race towards practical quantum advantage (PQA) between classical and quantum generative models, namely Quantum Circuit Born Machines (QCBMs), Transformers (TFs), Recurrent Neural Networks (RNNs), Variational Autoencoders (VAEs), and Wasserstein Generative Adversarial Networks (WGANs). After defining four types of PQA scenarios, we focus on what we refer to as potential PQA, aiming to compare quantum models with the best-known classical algorithms for the task at hand. We let the models race in a well-defined and application-relevant competition setting, where we illustrate and demonstrate our framework on a 20-variable (qubit) generative modeling task. Our results suggest that QCBMs are more efficient in the data-limited regime than the other state-of-the-art classical generative models. Such a feature is highly desirable in a wide range of real-world applications where the available data is scarce.
Phoenix: A Federated Generative Diffusion Model ; Generative AI has made impressive strides in enabling users to create diverse and realistic visual content such as images, videos, and audio. However, training generative models on large centralized datasets can pose challenges in terms of data privacy, security, and accessibility. Federated learning (FL) is an approach that uses decentralized techniques to collaboratively train a shared deep learning model while retaining the training data on individual edge devices to preserve data privacy. This paper proposes a novel method for training a Denoising Diffusion Probabilistic Model (DDPM) across multiple data sources using FL techniques. Diffusion models, a newly emerging class of generative models, show promising results in achieving images of superior quality compared with Generative Adversarial Networks (GANs). Our proposed method, Phoenix, is an unconditional diffusion model that leverages strategies to improve the data diversity of generated samples even when trained on data with statistical heterogeneity, i.e., non-IID (non-independent and identically distributed) data. We demonstrate how our approach outperforms the default diffusion model in an FL setting. These results indicate that high-quality samples can be generated by maintaining data diversity, preserving privacy, and reducing communication between data sources, offering exciting new possibilities in the field of generative AI.
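The federated ingredient can be sketched as FedAvg-style aggregation of locally trained denoiser weights: each client runs ordinary diffusion-style training locally and only parameters travel to the server. The tiny MLP "denoiser" and the simplified noise-prediction loss below are placeholders for a real DDPM U-Net and objective, not Phoenix's actual architecture.

```python
import copy
import torch
import torch.nn as nn

def make_denoiser():
    return nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 16))

global_model = make_denoiser()

def local_update(model, data, steps=5):
    model = copy.deepcopy(model)                 # client trains its own copy
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    for _ in range(steps):                       # simplified noise-prediction loss
        noise = torch.randn_like(data)
        loss = ((model(data + noise) - noise) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return model.state_dict()

clients = [torch.randn(32, 16) + i for i in range(3)]   # non-IID client data
for rnd in range(2):
    states = [local_update(global_model, d) for d in clients]
    avg = {k: torch.stack([s[k] for s in states]).mean(0) for k in states[0]}
    global_model.load_state_dict(avg)            # FedAvg: average the parameters
print("federated rounds complete")
```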
Generalized Chaplygin gas model: Cosmological consequences and statefinder diagnosis ; The generalized Chaplygin gas (GCG) model in a spatially flat universe is investigated. The cosmological consequences of the GCG model, including the evolution of the EoS parameter, the deceleration parameter, and the dimensionless Hubble parameter, are calculated. We show that the GCG model behaves as a general quintessence model. The GCG model can also represent the pressureless CDM model at early times and the cosmological constant model at late times. The dependence of the transition from decelerated to accelerated expansion on the parameters of the model is investigated. The statefinder parameters r and s in this model are derived, and the evolutionary trajectories in the s-r plane are plotted. Finally, based on current observational data, we plot the evolutionary trajectories in the s-r and q-r planes for the best-fit values of the parameters of the GCG model. It is shown that although there are similarities between the GCG model and other forms of Chaplygin gas in the statefinder plane, the distance of this model from the LambdaCDM fixed point in the s-r diagram is shorter compared with the standard Chaplygin gas model.
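For reference, the textbook GCG relations that analyses of this kind start from, written here in generic notation that may differ from the paper's: the GCG equation of state and its exact density evolution in a flat FRW background, which interpolate between pressureless matter at early times (small a, w -> 0) and a cosmological constant at late times (large a, w -> -1).

```latex
% Generic GCG relations (A, B > 0 and 0 < alpha <= 1 are model parameters,
% a is the scale factor); standard forms, quoted here for illustration.
p_{\mathrm{GCG}} = -\frac{A}{\rho_{\mathrm{GCG}}^{\alpha}}, \qquad
\rho_{\mathrm{GCG}}(a) = \left[A + B\,a^{-3(1+\alpha)}\right]^{\frac{1}{1+\alpha}}, \qquad
w_{\mathrm{GCG}}(a) = -\frac{A}{A + B\,a^{-3(1+\alpha)}} .
```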
Biosignal Generation and Latent Variable Analysis with Recurrent Generative Adversarial Networks ; The effectiveness of biosignal generation and data augmentation with biosignal generative models based on generative adversarial networks (GANs), which are a type of deep learning technique, was demonstrated in our previous paper. GAN-based generative models only learn the projection between a random distribution as input data and the distribution of training data. Therefore, the relationship between input and generated data is unclear, and the characteristics of the data generated from this model cannot be controlled. This study proposes a method for generating time-series data based on GANs and explores their ability to generate biosignals with certain classes and characteristics. Moreover, in the proposed method, latent variables are analyzed using canonical correlation analysis (CCA) to represent the relationship between input and generated data as canonical loadings. Using these loadings, we can control the characteristics of the data generated by the proposed method. The influence of class labels on generated data is analyzed by feeding data interpolated between two class labels into the generator of the proposed GANs. The CCA of the latent variables is shown to be an effective method of controlling the generated data characteristics. We are able to model the distribution of the time-series data without requiring domain-dependent knowledge using the proposed method. Furthermore, it is possible to control the characteristics of these data by analyzing the model trained using the proposed method. To the best of our knowledge, this work is the first to generate biosignals using GANs while controlling the characteristics of the generated data.
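A sketch of the analysis step with scikit-learn: relate the generator's latent inputs to features of the generated signals via CCA, then inspect loadings to see which latent dimensions control which output traits. The "generator" here is a fixed random map standing in for a trained GAN.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
Z = rng.normal(size=(300, 8))                  # latent inputs fed to the generator
M = rng.normal(size=(8, 20))
X = np.tanh(Z @ M) + 0.1 * rng.normal(size=(300, 20))  # generated-signal features

cca = CCA(n_components=3)
Zc, Xc = cca.fit_transform(Z, X)               # paired canonical variates

# Canonical loadings: correlation of each latent dimension with each canonical
# variate, indicating which inputs drive which generated characteristics.
loadings = np.array([[np.corrcoef(Z[:, i], Zc[:, j])[0, 1]
                      for j in range(3)] for i in range(8)])
print(np.round(loadings, 2))
```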
Abelian Extension of Standard Model with Four Generations ; An abelian gauge extension of the Standard Model is proposed with a fourth generation. The fourth generation fermions obtain their masses from a heavier Higgs doublet which makes no tree-level contributions to the first three generations' masses. Light neutrino masses of the first three generations continue to have a type I seesaw explanation, whereas the fourth generation neutrino turns out to be a heavy Dirac neutrino. In the minimal version of such a model with no off-diagonal Yukawa couplings between the fourth and the first three generations, such a heavy Dirac neutrino can be long-lived on cosmological time scales. In this model the stated LHC exclusion range $120\,\mathrm{GeV} < m_H < 600\,\mathrm{GeV}$ on the lighter Higgs, placed in the context of the generic fourth generation standard model, is evaded. Also, the Dirac fourth generation neutrino in this model, if stable, would constitute up to 1% of the cold dark matter in the Universe.
Localized Text-to-Image Generation for Free via Cross Attention Control ; Despite the tremendous success of text-to-image generative models, localized text-to-image generation (that is, generating objects or features at specific locations in an image while maintaining a consistent overall generation) still requires either explicit training or substantial additional inference time. In this work, we show that localized generation can be achieved by simply controlling cross attention maps during inference. With no additional training, model architecture modification, or inference time overhead, our proposed cross attention control (CAC) provides new open-vocabulary localization abilities to standard text-to-image models. CAC also enhances models that are already trained for localized generation when deployed at inference time. Furthermore, to assess localized text-to-image generation performance automatically, we develop a standardized suite of evaluations using large pretrained recognition models. Our experiments show that CAC improves localized generation performance with various types of location information, ranging from bounding boxes to semantic segmentation maps, and enhances the compositional capability of state-of-the-art text-to-image generative models.
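The bounding-box case of such attention control can be sketched in NumPy: bias the image-to-text attention so that a chosen text token can only receive attention from image patches inside its target region. The shapes and the hard-masking rule below are illustrative, not the paper's exact formulation.

```python
import numpy as np

def masked_cross_attention(Q, K, token_regions, grid):
    """Q: (P, d) patch queries; K: (T, d) token keys;
    token_regions[t] = (x0, y0, x1, y1) box in [0,1]^2, or None (unconstrained)."""
    P = Q.shape[0]
    logits = Q @ K.T / np.sqrt(Q.shape[1])       # (P, T) attention logits
    ys, xs = np.divmod(np.arange(P), grid)       # patch coordinates on a grid
    for t, box in enumerate(token_regions):
        if box is None:
            continue
        x0, y0, x1, y1 = box
        inside = ((xs + 0.5) / grid >= x0) & ((xs + 0.5) / grid <= x1) \
               & ((ys + 0.5) / grid >= y0) & ((ys + 0.5) / grid <= y1)
        logits[~inside, t] = -1e9                # forbid attention outside the box
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    return w / w.sum(axis=1, keepdims=True)      # row-normalized attention map

rng = np.random.default_rng(0)
A = masked_cross_attention(rng.normal(size=(64, 16)), rng.normal(size=(4, 16)),
                           [None, (0.0, 0.0, 0.5, 0.5), None, None], grid=8)
print(A.shape, A[0].sum())                       # (64, 4), each row sums to 1
```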