General Relativistic Non-Neutral White Dwarf Stars ; We generalize the recent Newtonian two-component charged fluid models for white dwarf stars of Krivoruchenko, Nadyozhin and Yudin and of Hund and Kiessling to the context of general relativity. We compare the equations and numerical solutions of these models. We extend to the general relativistic setting the non-neutrality results and bounds on the stellar charge obtained by Hund and Kiessling.
BV quantization of a generic degenerate quadratic Lagrangian ; Generalizing the Yang-Mills gauge theory, we provide the BV quantization of a field model with a generic almost-regular quadratic Lagrangian by use of the fact that the configuration space of such a field model is split into the gauge-invariant and gauge-fixing parts.
On Friedmann's universes ; There is a perfect concordance between Friedmann's cosmological models and the corresponding Newtonian models with purely gravitational interactions and negligible pressure. This renders quite intuitive the fact that in general relativity no motion of bodies generates gravitational waves.
Blow up of solutions to the generalized Keller-Segel model ; The existence and nonexistence of global-in-time solutions is studied for a class of equations generalizing the chemotaxis model of Keller and Segel. These equations involve Lévy diffusion operators and general potential-type nonlinear terms.
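For orientation, a representative equation of this type (written here under simplifying assumptions; the paper treats more general Lévy diffusion operators and potential-type couplings) is the parabolic-elliptic Keller-Segel system with fractional diffusion,

\[
  \partial_t u + (-\Delta)^{\alpha/2} u + \nabla \cdot \bigl( u \, \nabla \varphi \bigr) = 0,
  \qquad
  \Delta \varphi + u = 0,
  \qquad 0 < \alpha \le 2,
\]

which reduces to the classical parabolic-elliptic Keller-Segel model when α = 2.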
An A-based cofibrantly generated model category ; We develop a cofibrantly generated model category structure in the category of topological spaces in which the weak equivalences are the A-weak equivalences and such that the generalized CW(A)-complexes are cofibrant objects. With this structure the exponential law turns out to be a Quillen adjunction.
Recurrence and non-ergodicity in generalized wind-tree models ; In this paper, we consider generalized wind-tree models and Z^d-covers over compact translation surfaces. Under suitable hypotheses, we prove recurrence of the linear flow in a generic direction and non-ergodicity of the Lebesgue measure.
XGGM: Graph Generative Modeling for Out-of-Distribution Generalization in Visual Question Answering ; Encouraging progress has been made towards Visual Question Answering (VQA) in recent years, but it is still challenging to enable VQA models to adaptively generalize to out-of-distribution (OOD) samples. Intuitively, recompositions of existing visual concepts (i.e., attributes and objects) can generate compositions unseen in the training set, which will promote VQA models to generalize to OOD samples. In this paper, we formulate OOD generalization in VQA as a compositional generalization problem and propose a graph generative modeling-based training scheme (XGGM) to implicitly model the problem. XGGM leverages graph generative modeling to iteratively generate a relation matrix and node representations for the predefined graph that utilizes attribute-object pairs as nodes. Furthermore, to alleviate the unstable training issue in graph generative modeling, we propose a gradient distribution consistency loss to constrain the data distribution with adversarial perturbations and the generated distribution. The baseline VQA model (LXMERT) trained with the XGGM scheme achieves state-of-the-art OOD performance on two standard VQA OOD benchmarks, i.e., VQA-CP v2 and GQA-OOD. Extensive ablation studies demonstrate the effectiveness of XGGM components. Code is available at https://github.com/jingjing12110/xggm.
Generative Audio Synthesis with a Parametric Model ; We use a parametric representation of audio to train a generative model, in the interest of obtaining more flexible control over the generated sound.
Generalized permutations related to the degenerate Eulerian numbers ; In this work we propose a combinatorial model that generalizes the standard definition of permutation. Our model generalizes the degenerate Eulerian polynomials and numbers of Carlitz from 1979 and provides missing combinatorial proofs for some relations on the degenerate Eulerian numbers.
Parallel Synthesis for Autoregressive Speech Generation ; Autoregressive models have achieved outstanding performance in neural speech synthesis tasks. Though they can generate highly natural human speech, the iterative generation inevitably makes the synthesis time proportional to the utterance's length, leading to low efficiency. Many works were dedicated to generating the whole speech time sequence in parallel and proposed GAN-based, flow-based, and score-based models. This paper proposes a new approach to autoregressive generation. Instead of iteratively predicting samples in a time sequence, the proposed model performs frequency-wise autoregressive generation (FAR) and bit-wise autoregressive generation (BAR) to synthesize speech. In FAR, a speech utterance is first split into different frequency subbands. The proposed model generates a subband conditioned on the previously generated one. A full-band speech signal can then be reconstructed by using these generated subbands and a synthesis filter bank. Similarly, in BAR, an 8-bit quantized signal is generated iteratively from the first bit. By redesigning the autoregressive method to compute in domains other than the time domain, the number of iterations in the proposed model is no longer proportional to the utterance's length but to the number of subbands/bits. The inference efficiency is hence significantly increased. Besides, a post-filter is employed to sample audio signals from output posteriors, and its training objective is designed based on the characteristics of the proposed autoregressive methods. The experimental results show that the proposed model is able to synthesize speech faster than real-time without GPU acceleration. Compared with the baseline autoregressive and non-autoregressive models, the proposed model achieves better MOS and shows good generalization ability while synthesizing 44 kHz speech or utterances from unseen speakers.
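To make the FAR idea concrete, here is a minimal, self-contained PyTorch sketch; the SubbandPredictor and the naive summation used to recombine subbands are illustrative stand-ins for the paper's actual architecture and synthesis filter bank.

import torch
import torch.nn as nn

class SubbandPredictor(nn.Module):
    # Toy model: predicts the next frequency subband from the previous one.
    def __init__(self, frames: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(frames, hidden), nn.Tanh(), nn.Linear(hidden, frames)
        )

    def forward(self, prev_band: torch.Tensor) -> torch.Tensor:
        return self.net(prev_band)

def generate_far(model: nn.Module, n_bands: int, frames: int) -> torch.Tensor:
    bands = [torch.zeros(1, frames)]            # zero "band" as the initial condition
    for _ in range(n_bands):
        bands.append(model(bands[-1]))          # band k conditioned on band k-1
    sub_bands = torch.stack(bands[1:], dim=1)   # (batch, n_bands, frames)
    # Placeholder recombination; a real system uses a synthesis filter bank.
    return sub_bands.sum(dim=1)

waveform = generate_far(SubbandPredictor(frames=200), n_bands=8, frames=200)
print(waveform.shape)                           # torch.Size([1, 200])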
MEGA: Multilingual Evaluation of Generative AI ; Generative AI models have impressive performance on many Natural Language Processing tasks such as language understanding, reasoning, and language generation. One of the most important questions that is being asked by the AI community today is about the capabilities and limits of these models, and it is clear that evaluating generative AI is very challenging. Most studies on generative Large Language Models (LLMs) are restricted to English, and it is unclear how capable these models are at understanding and generating other languages. We present the first comprehensive benchmarking of generative LLMs, MEGA, which evaluates models on standard NLP benchmarks, covering 8 diverse tasks and 33 typologically diverse languages. We also compare the performance of generative LLMs to state-of-the-art (SOTA) non-autoregressive models on these tasks to determine how well generative models perform compared to the previous generation of LLMs. We present a thorough analysis of the performance of models across languages and discuss some of the reasons why generative LLMs are currently not optimal for all languages. We create a framework for evaluating generative LLMs in the multilingual setting and provide directions for future progress in the field.
On Attribution of Deepfakes ; Progress in generative modelling, especially generative adversarial networks, has made it possible to efficiently synthesize and alter media at scale. Malicious individuals now rely on these machine-generated media, or deepfakes, to manipulate social discourse. In order to ensure media authenticity, existing research is focused on deepfake detection. Yet, the adversarial nature of frameworks used for generative modeling suggests that progress towards detecting deepfakes will enable more realistic deepfake generation. Therefore, it comes as no surprise that developers of generative models are under the scrutiny of stakeholders dealing with misinformation campaigns. At the same time, generative models have a lot of positive applications. As such, there is a clear need to develop tools that ensure the transparent use of generative modeling, while minimizing the harm caused by malicious applications. Our technique optimizes over the source of entropy of each generative model to probabilistically attribute a deepfake to one of the models. We evaluate our method on the seminal example of face synthesis, demonstrating that our approach achieves 97.62% attribution accuracy and is less sensitive to perturbations and adversarial examples. We discuss the ethical implications of our work, identify where our technique can be used, and highlight that a more meaningful legislative framework is required for a more transparent and ethical use of generative modeling. Finally, we argue that model developers should be capable of claiming plausible deniability and propose a second framework to do so; this allows a model developer to produce evidence that they did not produce media that they are being accused of having produced.
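One way to read "optimizing over the source of entropy" is latent-space inversion; the following is a minimal, self-contained sketch of that reading (the toy generators, optimizer settings, and reconstruction loss are assumptions, not the paper's exact procedure): the image is attributed to the candidate generator whose latent input reconstructs it best.

import torch

def attribute(image, generators, steps=200, lr=0.05):
    losses = []
    for gen in generators:
        z = torch.zeros(1, gen.latent_dim, requires_grad=True)  # generator's entropy source
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = torch.nn.functional.mse_loss(gen(z), image)
            loss.backward()
            opt.step()
        losses.append(loss.item())
    return min(range(len(generators)), key=lambda i: losses[i])

class ToyGen(torch.nn.Module):
    # Stand-in "generator": a linear map from latent space to image space.
    def __init__(self, latent_dim: int = 8, out_dim: int = 32):
        super().__init__()
        self.latent_dim = latent_dim
        self.map = torch.nn.Linear(latent_dim, out_dim)
    def forward(self, z):
        return self.map(z)

gens = [ToyGen(), ToyGen()]
target = gens[1](torch.randn(1, 8)).detach()
print(attribute(target, gens))   # typically attributes the sample to generator 1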
Text-Free Prosody-Aware Generative Spoken Language Modeling ; Speech pre-training has primarily demonstrated efficacy on classification tasks, while its capability of generating novel speech, similar to how GPT-2 can generate coherent paragraphs, has barely been explored. Generative Spoken Language Modeling (GSLM) (Lakhotia et al., 2021) is the only prior work addressing the generative aspects of speech pre-training, which replaces text with discovered phone-like units for language modeling and shows the ability to generate meaningful novel sentences. Unfortunately, despite eliminating the need for text, the units used in GSLM discard most of the prosodic information. Hence, GSLM fails to leverage prosody for better comprehension and does not generate expressive speech. In this work, we present a prosody-aware generative spoken language model (pGSLM). It is composed of a multi-stream transformer language model (MS-TLM) of speech, represented as discovered unit and prosodic feature streams, and an adapted HiFi-GAN model converting MS-TLM outputs to waveforms. We devise a series of metrics for prosody modeling and generation, and reuse metrics from GSLM for content modeling. Experimental results show that the pGSLM can utilize prosody to improve both prosody and content modeling, and also generate natural, meaningful, and coherent speech given a spoken prompt. Audio samples can be found at https://speechbot.github.io/pgslm. Codes and models are available at https://github.com/pytorch/fairseq/tree/main/examples/textless_nlp/pgslm.
Procedural Generalization by Planning with Self-Supervised World Models ; One of the key promises of model-based reinforcement learning is the ability to generalize using an internal model of the world to make predictions in novel environments and tasks. However, the generalization ability of model-based agents is not well understood because existing work has focused on model-free agents when benchmarking generalization. Here, we explicitly measure the generalization ability of model-based agents in comparison to their model-free counterparts. We focus our analysis on MuZero (Schrittwieser et al., 2020), a powerful model-based agent, and evaluate its performance on both procedural and task generalization. We identify three factors of procedural generalization (planning, self-supervised representation learning, and procedural data diversity) and show that by combining these techniques, we achieve state-of-the-art generalization performance and data efficiency on Procgen (Cobbe et al., 2019). However, we find that these factors do not always provide the same benefits for the task generalization benchmarks in Meta-World (Yu et al., 2019), indicating that transfer remains a challenge and may require different approaches than procedural generalization. Overall, we suggest that building generalizable agents requires moving beyond the single-task, model-free paradigm and towards self-supervised model-based agents that are trained in rich, procedural, multi-task environments.
Operationalizing Specifications, In Addition to Test Sets for Evaluating Constrained Generative Models ; In this work, we present some recommendations on the evaluation of state-of-the-art generative models for constrained generation tasks. The progress on generative models has been rapid in recent years. These large-scale models have had three impacts: firstly, the fluency of generation in both language and vision modalities has rendered common average-case evaluation metrics much less useful in diagnosing system errors. Secondly, the same substrate models now form the basis of a number of applications, driven both by the utility of their representations as well as phenomena such as in-context learning, which raise the abstraction level of interacting with such models. Thirdly, the user expectations around these models and their feted public releases have made the technical challenge of out-of-domain generalization much less excusable in practice. Subsequently, our evaluation methodologies haven't adapted to these changes. More concretely, while the associated utility and methods of interacting with generative models have expanded, a similar expansion has not been observed in their evaluation practices. In this paper, we argue that the scale of generative models could be exploited to raise the abstraction level at which evaluation itself is conducted, and provide recommendations for the same. Our recommendations are based on leveraging specifications as a powerful instrument to evaluate generation quality and are readily applicable to a variety of tasks.
Observational constraints on generalized Chaplygin gas model ; The generalized Chaplygin gas model with parameter space α > 1 is studied in this paper. Some reasonable physical constraints are added to justify the use of the larger parameter space. The Type Ia supernova data and age data of some clusters are then used to fit the model. We find that the parameters have bimodal distributions. For the generalized Chaplygin gas model, we also find that fewer free parameters fit the data better. The best fit model is the spatially flat model with baryons. The best fit parameters are Ω_m0 = 0.044, w_c0 = 0.881 and α = 1.57. The transition redshift is z_T = 0.395.
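For reference, the generalized Chaplygin gas is standardly defined by the equation of state below; solving the continuity equation gives the density evolution used in such fits (these are the standard relations for this class of models, not results specific to the paper):

\[
  p = -\frac{A}{\rho^{\alpha}},
  \qquad
  \rho(a) = \rho_{0}\left[ A_{s} + \left(1 - A_{s}\right) a^{-3(1+\alpha)} \right]^{\frac{1}{1+\alpha}},
  \qquad
  A_{s} \equiv \frac{A}{\rho_{0}^{\,1+\alpha}} .
\]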
Generation of new classes of integrable quantum and statistical models ; A scheme based on a unifying q-deformed algebra and associated with a generalized Lax operator is proposed for generating integrable quantum and statistical models. As important applications we derive known as well as novel quantum models and obtain new series of vertex models related to q-spin, q-boson and their hybrid combinations. Generic q, q a root of unity and q → 1 yield different classes of integrable models. Exact solutions through the algebraic Bethe ansatz are formulated for all models in a unified way.
Grand Unification with Three Generations in Free Fermionic String Models ; We examine the problem of constructing three-generation free fermionic string models with grand unified gauge groups. We attempt the construction of G × G models, where G is a grand unified group realized at level 1. This structure allows those Higgs representations to appear which are necessary to break the symmetry down to the standard model gauge group. For G = SO(10), we find only models with an even number of generations. However, for G = SU(5) we find a number of 3-generation models.
A Generalized Higgs Model ; The Higgs model is generalized so that in addition to the radial Higgs field there are fields which correspond to the thermasy and entropy. The model is further generalized to include state and sign parameters. A reduction to the standard Higgs model is given, and it is shown how to break symmetry using a nonzero VEV (vacuum expectation value). A 'fluid rotation' can be performed on the standard Higgs model to give a model dependent on the entropy and thermasy and with a constant mass.
Multivariate Generalized Gaussian Process Models ; We propose a family of multivariate Gaussian process models for correlated outputs, based on assuming that the likelihood function takes the generic form of the multivariate exponential family distribution (EFD). We denote this model as a multivariate generalized Gaussian process model, and derive Taylor and Laplace algorithms for approximate inference on the generic model. By instantiating the EFD with specific parameter functions, we obtain two novel GP models and corresponding inference algorithms for correlated outputs: (1) a von Mises GP for angle regression; and (2) a Dirichlet GP for regressing on the multinomial simplex.
Mixmaster model is associated to Borcherds algebra ; The problem of integrability of the mixmaster model as a dynamical system with finite degrees of freedom is investigated. The model belongs to the class of pseudo-Euclidean generalized Toda chains. It is presented as a quasi-homogeneous system after transformations of phase variables. An application of the method of obtaining Kovalevskaya exponents to the model leads to the generalized Adler-van Moerbeke formula on root vectors. A generalized Cartan matrix is constructed with the use of simple root vectors in Minkowski space. The mixmaster model is associated to a Borcherds algebra. The known hyperbolic Kac-Moody algebra of the Chitre billiard model is obtained by using three spacelike, non-isotropic root vectors.
On the decomposition of Generalized Additive Independence models ; The GAI (Generalized Additive Independence) model proposed by Fishburn is a generalization of the additive utility model, which need not satisfy mutual preferential independence. Its great generality, however, makes its application and study difficult. We consider a significant subclass of GAI models, namely the discrete 2-additive GAI models, and provide for this class a decomposition into nonnegative monotone terms. This decomposition allows a reduction from exponential to quadratic complexity in any optimization problem involving discrete 2-additive models, making them usable in practice.
An Architecture for Deep, Hierarchical Generative Models ; We present an architecture which lets us train deep, directed generative models with many layers of latent variables. We include deterministic paths between all latent variables and the generated output, and provide a richer set of connections between computations for inference and generation, which enables more effective communication of information throughout the model during training. To improve performance on natural images, we incorporate a lightweight autoregressive model in the reconstruction distribution. These techniques permit end-to-end training of models with 10 layers of latent variables. Experiments show that our approach achieves state-of-the-art performance on standard image modelling benchmarks, can expose latent class structure in the absence of label information, and can provide convincing imputations of occluded regions in natural images.
Learning to generate one-sentence biographies from Wikidata ; We investigate the generation of one-sentence Wikipedia biographies from facts derived from Wikidata slot-value pairs. We train a recurrent neural network sequence-to-sequence model with attention to select facts and generate textual summaries. Our model incorporates a novel secondary objective that helps ensure it generates sentences that contain the input facts. The model achieves a BLEU score of 41, improving significantly upon the vanilla sequence-to-sequence model and scoring roughly twice that of a simple template baseline. Human preference evaluation suggests the model is nearly as good as the Wikipedia reference. Manual analysis explores content selection, suggesting the model can trade the ability to infer knowledge against the risk of hallucinating incorrect information.
A general solution to the preferential selection model ; We provide a general analytic solution to Herbert Simon's 1955 model for time-evolving novelty functions. This has far-reaching consequences: Simon's is a precursor model for Barabási's 1999 preferential attachment model for growing social networks, and our general abstraction of it considers attachment more broadly as a form of link selection. We show that any system which can be modeled as instances of types (i.e., occurrence data, frequencies) can be generatively modeled and simulated from a distributional perspective with an exceptionally high degree of accuracy.
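A minimal simulation sketch of the underlying Simon (1955) process, under the standard assumptions that a new type appears with fixed probability alpha and that otherwise an existing occurrence is copied (so types are selected in proportion to their current frequency):

import random
from collections import Counter

def simulate_simon(steps: int, alpha: float, seed: int = 0) -> Counter:
    rng = random.Random(seed)
    occurrences = [0]                  # start with a single occurrence of type 0
    next_type = 1
    for _ in range(steps):
        if rng.random() < alpha:       # innovation: a brand-new type appears
            occurrences.append(next_type)
            next_type += 1
        else:                          # selection proportional to frequency
            occurrences.append(rng.choice(occurrences))
    return Counter(occurrences)

counts = simulate_simon(steps=100_000, alpha=0.05)
print(len(counts), counts.most_common(3))   # number of distinct types, largest types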
Conditional Constrained Graph Variational Autoencoders for Molecule Design ; In recent years, deep generative models for graphs have been used to generate new molecules. These models have produced good results, leading to several proposals in the literature. However, these models may have trouble learning some of the complex laws governing the chemical world. In this work, we explore the usage of the histogram of atom valences to drive the generation of molecules in such models. We present the Conditional Constrained Graph Variational Autoencoder (CCGVAE), a model that implements this key idea in a state-of-the-art model and shows improved results on several evaluation metrics on two commonly adopted datasets for molecule generation.
Searching for Search Errors in Neural Morphological Inflection ; Neural sequence-to-sequence models are currently the predominant choice for language generation tasks. Yet, on word-level tasks, exact inference of these models reveals the empty string is often the global optimum. Prior works have speculated this phenomenon is a result of the inadequacy of neural models for language generation. However, in the case of morphological inflection, we find that the empty string is almost never the most probable solution under the model. Further, greedy search often finds the global optimum. These observations suggest that the poor calibration of many neural models may stem from characteristics of a specific subset of tasks rather than general ill-suitedness of such models for language generation.
Topic-Sensitive Neural Headline Generation ; Neural models have recently been used in text summarization, including headline generation. The model can be trained using a set of document-headline pairs. However, the model does not explicitly consider topical similarities and differences of documents. We suggest categorizing documents into various topics so that documents within the same topic are similar in content and share similar summarization patterns. Taking advantage of topic information of documents, we propose a topic-sensitive neural headline generation model. Our model can generate more accurate summaries guided by document topics. We test our model on the LCSTS dataset, and experiments show that our method outperforms other baselines on each topic and achieves state-of-the-art performance.
Latent space generative model for bipartite networks ; Generative network models are extremely useful for understanding the mechanisms that operate in network formation and are widely used across several areas of knowledge. However, when it comes to bipartite networks (a class of network frequently encountered in social systems), generative models are practically nonexistent. Here, we propose a latent space generative model for bipartite networks growing in a hyperbolic plane. It is an extension of a model previously proposed for one-mode networks, based on a maximum entropy approach. We show that, by reproducing bipartite structural properties, such as degree distributions and small cycles, bipartite networks can be better modelled and one-mode projected network properties can be naturally assessed.
Dual-track Music Generation using Deep Learning ; Music generation is always interesting in the sense that there is no formalized recipe. In this work, we propose a novel dual-track architecture for generating classical piano music, which is able to model the inter-dependency of left-hand and right-hand piano music. In particular, we experimented with many different neural network models as well as different representations of music, and the results show that our proposed model outperforms all other tested methods. Besides, we deployed some special policies for model training and generation, which contributed remarkably to the model performance. Finally, under two evaluation methods, we compared our models with the MuseGAN project and true music.
Modern French Poetry Generation with RoBERTa and GPT-2 ; We present a novel neural model for modern poetry generation in French. The model consists of two pretrained neural models that are fine-tuned for the poem generation task. The encoder of the model is a RoBERTa-based one while the decoder is based on GPT-2. This way the model can benefit from the superior natural language understanding performance of RoBERTa and the good natural language generation performance of GPT-2. Our evaluation shows that the model can create French poetry successfully. On a 5-point scale, the lowest score of 3.57 was given by human judges to typicality and emotionality of the output poetry, while the best score of 3.79 was given to understandability.
On type III generalized half logistic distribution ; It is well known that generalized models are attracting the attention of researchers in recent times because of their flexibility. In particular, the logistic model has been generalized and applied by many authors, while the half logistic distribution has not received much attention in terms of its generalization. In this paper, we consider a generalized form of the half logistic model called the type III generalized half logistic distribution. We obtain its probability density function, cumulative distribution function, nth moment, median, mode and 100p-percentage points, and the order statistics of the generalized distribution are established.
Chinese Poetry Generation with Flexible Styles ; Research has shown that sequence-to-sequence neural models, particularly those with the attention mechanism, can successfully generate classical Chinese poems. However, neural models are not capable of generating poems that match specific styles, such as the impulsive style of Li Bai, a famous poet in the Tang Dynasty. This work proposes a memory-augmented neural model to enable the generation of style-specific poetry. The key idea is a memory structure that stores how poems with a desired style were generated by humans, and uses similar fragments to adjust the generation. We demonstrate that the proposed algorithm generates poems with flexible styles, including styles of a particular era and an individual poet.
Investigating Under- and Overfitting in Wasserstein Generative Adversarial Networks ; We investigate under- and overfitting in Generative Adversarial Networks (GANs), using discriminators unseen by the generator to measure generalization. We find that the model capacity of the discriminator has a significant effect on the generator's model quality, and that the generator's poor performance coincides with the discriminator underfitting. Contrary to our expectations, we find that generators with large model capacities relative to the discriminator do not show evidence of overfitting on CIFAR-10, CIFAR-100, and CelebA.
Judge a Sentence by Its Content to Generate Grammatical Errors ; Data sparsity is a well-known problem for grammatical error correction (GEC). Generating synthetic training data is one widely proposed solution to this problem, and has allowed models to achieve state-of-the-art (SOTA) performance in recent years. However, these methods often generate unrealistic errors, or aim to generate sentences with only one error. We propose a learning-based two-stage method for synthetic data generation for GEC that relaxes this constraint on sentences containing only one error. Errors are generated in accordance with sentence merit. We show that a GEC model trained on our synthetically generated corpus outperforms models trained on synthetic data from prior work.
Automatic Locally Robust Estimation with Generated Regressors ; Many economic and causal parameters of interest depend on generated regressors, including structural parameters in models with endogenous variables estimated by control functions and in models with sample selection. Inference with generated regressors is complicated by the very complex expression for influence functions and asymptotic variances. To address this problem, we propose automatic Locally Robust/debiased GMM estimators in a general setting with generated regressors. Importantly, we allow for the generated regressors to be generated by machine learners, such as Random Forests, Neural Nets, Boosting, and many others. We use our results to construct novel Doubly Robust estimators for the Counterfactual Average Structural Function and Average Partial Effects in models with endogeneity and sample selection, respectively.
A General Equivalence Theorem for Crossover Designs under Generalized Linear Models ; With the help of Generalized Estimating Equations, we identify locally D-optimal crossover designs for generalized linear models. We adopt the variance of parameters of interest as the objective function, which is minimized using constrained optimization to obtain optimal crossover designs. In this case, the traditional general equivalence theorem could not be used directly to check the optimality of obtained designs. In this manuscript, we derive a corresponding general equivalence theorem for crossover designs under generalized linear models.
Improve Language Modelling for Code Completion through Statement Level Language Model based on Statement Embedding Generated by BiLSTM ; Language models such as RNN, LSTM or other variants have been widely used as generative models in natural language processing. In recent years, treating source code as natural language, parsing source code into a token sequence, and using a language model such as LSTM to train on that sequence have become state-of-the-art methods for obtaining a generative model that solves the problem of code completion. However, for source code with hundreds of statements, the traditional LSTM model or the attention-based LSTM model fails to capture the long-term dependencies of source code. In this paper, we propose a novel statement-level language model (SLM) which uses BiLSTM to generate the embedding for each statement. The standard LSTM is adopted in SLM to iterate and accumulate the embedding of each statement in context to help predict the next code token. A statement-level attention mechanism is also adopted in the model. The proposed model SLM is aimed at token-level code completion. The experiments on inner-project and cross-project data sets indicate that the newly proposed statement-level language model with attention mechanism (SLM) outperforms all other state-of-the-art models in the token-level code completion task.
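A minimal PyTorch sketch of this kind of two-level architecture (the layer sizes, pooling, and overall structure here are assumptions for illustration, not the authors' exact model): a BiLSTM encodes the tokens of each statement into a statement embedding, and a standard LSTM runs over the statement embeddings to provide the context used to predict the next token.

import torch
import torch.nn as nn

class StatementLevelLM(nn.Module):
    def __init__(self, vocab_size: int, emb: int = 64, hidden: int = 128):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, emb)
        self.stmt_encoder = nn.LSTM(emb, hidden // 2, bidirectional=True,
                                    batch_first=True)
        self.context_lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, statements: list) -> torch.Tensor:
        # statements: list of (1, n_tokens_i) LongTensors, one per statement
        stmt_vecs = []
        for toks in statements:
            enc, _ = self.stmt_encoder(self.tok_emb(toks))
            stmt_vecs.append(enc.mean(dim=1))           # pool token states into one vector
        context = torch.stack(stmt_vecs, dim=1)         # (1, n_statements, hidden)
        summary, _ = self.context_lstm(context)
        return self.out(summary[:, -1])                 # logits for the next token

model = StatementLevelLM(vocab_size=1000)
stmts = [torch.randint(0, 1000, (1, 7)) for _ in range(3)]
print(model(stmts).shape)                               # torch.Size([1, 1000])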
Framework for Converting Mechanistic Network Models to Probabilistic Models ; There are two prominent paradigms for the modeling of networks: in the first, referred to as the mechanistic approach, one specifies a set of domain-specific mechanistic rules that are used to grow or evolve the network over time; in the second, referred to as the probabilistic approach, one describes a model that specifies the likelihood of observing a given network. Mechanistic models are scalable and, in select cases, allow for analytical solutions for some of their properties, whereas probabilistic models have inferential tools available. Mechanistic models are appealing because they capture scientific processes that are hypothesized to be responsible for network generation. We introduce a generic framework for converting a mechanistic network model to a probabilistic network model. The proposed framework makes it possible to identify the essential network properties and their joint probability distribution for mechanistic network models, which enables addressing questions such as whether two mechanistic models generate networks with identical distributions of properties of interest, or whether a network property, such as clustering, is over- or under-represented in the generated networks compared to a reference model. The proposed framework is intended to bridge some of the gap that currently exists between mechanistic and probabilistic network models.
Combinatorial and accessible weak model categories ; In a previous work, we have introduced a weakening of Quillen model categories called weak model categories. They still allow all the usual constructions of model category theory, but are easier to construct and are in some sense better behaved. In this paper we continue to develop their general theory by introducing combinatorial and accessible weak model categories. We give simple necessary and sufficient conditions under which such a weak model category can be extended into a left and/or right semi-model category. As an application, we recover Cisinski-Olschok theory and generalize it to weak and semi-model categories. We also provide general existence theorems for both left and right Bousfield localization of combinatorial and accessible weak model structures, which, combined with the results above, give existence results for left and right Bousfield localization of combinatorial and accessible left and right semi-model categories, generalizing previous results of Barwick. Surprisingly, we show that any left or right Bousfield localization of an accessible or combinatorial Quillen model category always exists, without properness assumptions, and is simultaneously both a left and a right semi-model category, without necessarily being a Quillen model category itself.
Multi-Objective De Novo Drug Design with Conditional Graph Generative Model ; Recently, deep generative models have revealed themselves as a promising way of performing de novo molecule design. However, previous research has focused mainly on generating SMILES strings instead of molecular graphs. Although current graph generative models are available, they are often too general and computationally expensive, which restricts their application to molecules with small sizes. In this work, a new de novo molecular design framework is proposed, based on a type of sequential graph generator that does not use atom-level recurrent units. Compared with previous graph generative models, the proposed method is much more tuned for molecule generation and has been scaled up to cover significantly larger molecules in the ChEMBL database. It is shown that the graph-based model outperforms SMILES-based models in a variety of metrics, especially in the rate of valid outputs. For drug design tasks, a conditional graph generative model is employed. This method offers higher flexibility compared to previous fine-tuning-based approaches and is suitable for generation based on multiple objectives. This approach is applied to solve several drug design problems, including the generation of compounds containing a given scaffold, generation of compounds with specific drug-likeness and synthetic accessibility requirements, as well as generating dual inhibitors against JNK3 and GSK3β. Results show high enrichment rates for outputs satisfying the given requirements.
Transferable Universal Adversarial Perturbations Using Generative Models ; Deep neural networks tend to be vulnerable to adversarial perturbations, which, when added to a natural image, can fool a respective model with high confidence. Recently, the existence of image-agnostic perturbations, also known as universal adversarial perturbations (UAPs), was discovered. However, existing UAPs still lack a sufficiently high fooling rate when being applied to an unknown target model. In this paper, we propose a novel deep learning technique for generating more transferable UAPs. We utilize a perturbation generator and some given pretrained networks, so-called source models, to generate UAPs using the ImageNet dataset. Due to the similar feature representation of various model architectures in the first layer, we propose a loss formulation that focuses on the adversarial energy only in the respective first layer of the source models. This supports the transferability of our generated UAPs to any other target model. We further empirically analyze our generated UAPs and demonstrate that these perturbations generalize very well towards different target models. Surpassing the current state of the art in both fooling rate and model transferability, we can show the superiority of our proposed approach. Using our generated non-targeted UAPs, we obtain an average fooling rate of 93.36% on the source models (state of the art: 82.16%). Generating our UAPs on the deep ResNet-152, we obtain about a 12% absolute fooling rate advantage vs. cutting-edge methods on VGG-16 and VGG-19 target models.
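A minimal sketch of the first-layer "adversarial energy" idea (the direct optimization of a single perturbation tensor, the random stand-in batches, the loss form, and the L_inf budget are simplifying assumptions; the paper trains a perturbation generator on ImageNet):

import torch
import torchvision

# Source model; in practice load pretrained ImageNet weights
# (weights="IMAGENET1K_V1"); weights=None keeps the sketch self-contained.
model = torchvision.models.vgg16(weights=None).eval()
first_layer = model.features[0]                 # first conv layer of VGG-16

delta = torch.zeros(1, 3, 224, 224, requires_grad=True)   # the universal perturbation
opt = torch.optim.Adam([delta], lr=0.01)
eps = 10 / 255                                  # L_inf budget on the perturbation

for step in range(10):                          # stand-in for ImageNet batches
    images = torch.rand(4, 3, 224, 224)
    opt.zero_grad()
    energy = (first_layer(images + delta) - first_layer(images)).norm()
    (-energy).backward()                        # maximize the first-layer change
    opt.step()
    with torch.no_grad():
        delta.clamp_(-eps, eps)                 # keep the perturbation bounded

print(float(delta.abs().max()))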
Prospects for Declarative Mathematical Modeling of Complex Biological Systems ; Declarative modeling uses symbolic expressions to represent models. With such expressions one can formalize high-level mathematical computations on models that would be difficult or impossible to perform directly on a lower-level simulation program, in a general-purpose programming language. Examples of such computations on models include model analysis, relatively general-purpose model-reduction maps, and the initial phases of model implementation, all of which should preserve or approximate the mathematical semantics of a complex biological model. The potential advantages are particularly relevant in the case of developmental modeling, wherein complex spatial structures exhibit dynamics at molecular, cellular, and organogenic levels to relate genotype to multicellular phenotype. Multiscale modeling can benefit from both the expressive power of declarative modeling languages and the application of model reduction methods to link models across scales. Based on previous work, here we define declarative modeling of complex biological systems by defining the operator algebra semantics of an increasingly powerful series of declarative modeling languages, including reaction-like dynamics of parameterized and extended objects; we define semantics-preserving implementation and semantics-approximating model reduction transformations; and we outline a meta-hierarchy for organizing declarative models and the mathematical methods that can fruitfully manipulate them.
Canonical Formalism for a 2n-Dimensional Model with Topological Mass Generation ; The four-dimensional model with topological mass generation that was found by Dvali, Jackiw and Pi has recently been generalized to any even number of dimensions (2n dimensions) in a nontrivial manner in which a Stueckelberg-type mass term is introduced [S. Deguchi and S. Hayakawa, Phys. Rev. D 77, 045003 (2008), arXiv:0711.1446]. The present paper deals with a self-contained model, called here a modified hybrid model, proposed in this 2n-dimensional generalization and considers the canonical formalism for this model. For the sake of convenience, the canonical formalism itself is studied for a model equivalent to the modified hybrid model by following the recipe for treating constrained Hamiltonian systems. This formalism is applied to the canonical quantization of the equivalent model in order to clarify observable and unobservable particles in the model. The equivalent model with a gauge-fixing term is converted to the modified hybrid model with a corresponding gauge-fixing term in a Becchi-Rouet-Stora-Tyutin (BRST) invariant manner. Thereby it is shown that the Chern-Pontryagin density behaves as an observable massive particle or field. The topological mass generation is thus verified at the quantum-theoretical level.
Generalized Poland-Scheraga denaturation model and two-dimensional renewal processes ; The Poland-Scheraga model describes the denaturation transition of two complementary (in particular, equally long) strands of DNA, and it has enjoyed remarkable success both for quantitative modeling purposes and at a more theoretical level. The solvable character of the homogeneous version of the model is one of the features to which its success is due. In the biophysical literature a generalization of the model, allowing different lengths and non-complementarity of the strands, has been considered, and the solvable character extends to this substantial generalization. We present a mathematical analysis of the homogeneous generalized Poland-Scheraga model. Our approach is based on the fact that such a model is a homogeneous pinning model based on a bivariate renewal process, much like the basic Poland-Scheraga model is a pinning model based on a univariate, i.e. standard, renewal. We present a complete analysis of the free energy singularities, which include the localization-delocalization critical point and, in general, other critical points that have been only partially captured in the physical literature. We also obtain precise estimates on the path properties of the model.
GEN Model: An Alternative Approach to Deep Neural Network Models ; In this paper, we introduce an alternative approach, namely the GEN (Genetic Evolution Network) model, to the deep learning models. Instead of building one single deep model, GEN adopts a genetic-evolutionary learning strategy to build a group of unit models generation by generation. Significantly different from the well-known representation learning models with extremely deep structures, the unit models covered in GEN are of a much shallower architecture. In the training process, from each generation, a subset of unit models will be selected based on their performance to evolve and generate the child models in the next generation. GEN has significant advantages compared with existing deep representation learning models in terms of learning effectiveness, efficiency, and interpretability of the learning process and learned results. Extensive experiments have been done on diverse benchmark datasets, and the experimental results have demonstrated the outstanding performance of GEN compared with the state-of-the-art baseline methods in both effectiveness and efficiency.
A New Generative Statistical Model for Graphs: The Latent Order Logistic (LOLOG) Model ; Full probability models are critical for the statistical modeling of complex networks, and yet there are few general, flexible and widely applicable generative methods. We propose a new family of probability models motivated by the idea of network growth, which we call the Latent Order Logistic (LOLOG) model. LOLOG is a fully general framework capable of describing any probability distribution over graph configurations, though not all distributions are easily expressible or estimable as a LOLOG. We develop inferential procedures based on Monte Carlo Method of Moments, Generalized Method of Moments and variational inference. To show the flexibility of the model framework, we show how so-called scale-free networks can be modeled as LOLOGs via preferential attachment. The advantages of LOLOG in terms of avoidance of degeneracy, ease of sampling, and model flexibility are illustrated. Connections with the popular Exponential-family Random Graph Model (ERGM) are also explored, and we find that they are identical in the case of dyadic independence. Finally, we apply the model to a social network of collaboration within a corporate law firm, a friendship network among adolescent students, and the friendship relations in an online social network.
Non-Hermitian generalizations of extended Su-Schrieffer-Heeger models ; Non-Hermitian generalizations of the Su-Schrieffer-Heeger (SSH) models with higher periods of the hopping coefficients, called the SSH3 and SSH4 models, are analyzed. The conventional construction of the winding number fails for the Hermitian SSH3 model, but the non-Hermitian generalization leads to a topological system due to a point gap on the complex plane. The non-Hermitian SSH3 model thus has a winding number and exhibits the non-Hermitian skin effect. Moreover, the SSH3 model has two types of localized states and a zero-energy state associated with special symmetries. The total Zak phase of the SSH3 model exhibits quantization, and its finite value indicates coexistence of the two types of localized states. Meanwhile, the SSH4 model resembles the SSH model, and its non-Hermitian generalization also exhibits the non-Hermitian skin effect. A careful analysis of the non-Hermitian SSH4 model with different boundary conditions shows the bulk-boundary correspondence is restored with the help of the generalized Brillouin zone or the real-space winding number. The physics of the non-Hermitian SSH3 and SSH4 models may be tested in cold-atom or other simulators.
A Survey of Diffusion-Based Image Generation Models: Issues and Their Solutions ; Recently, there has been significant progress in the development of large models. Following the success of ChatGPT, numerous language models have been introduced, demonstrating remarkable performance. Similar advancements have also been observed in image generation models, such as Google's Imagen model, OpenAI's DALL-E 2, and Stable Diffusion models, which have exhibited impressive capabilities in generating images. However, similar to large language models, these models still encounter unresolved challenges. Fortunately, the availability of open-source Stable Diffusion models and their underlying mathematical principles has enabled the academic community to extensively analyze the performance of current image generation models and make improvements based on this Stable Diffusion framework. This survey aims to examine the existing issues and the current solutions pertaining to image generation models.
Using reference models in variable selection ; Variable selection, or more generally, model reduction is an important aspect of the statistical workflow aiming to provide insights from data. In this paper, we discuss and demonstrate the benefits of using a reference model in variable selection. A reference model acts as a noise filter on the target variable by modeling its data generating mechanism. As a result, using the reference model predictions in the model selection procedure reduces variability and improves stability, leading to improved model selection performance. Assuming that a Bayesian reference model describes the true distribution of future data well, the theoretically preferred usage of the reference model is to project its predictive distribution onto a reduced model, leading to the projection predictive variable selection approach. Alternatively, reference models may also be used in an ad hoc manner in combination with common variable selection methods. In several numerical experiments, we investigate the performance of the projective prediction approach as well as alternative variable selection methods with and without reference models. Our results indicate that the use of reference models generally translates into better and more stable variable selection. Additionally, we demonstrate that the projection predictive approach shows superior performance compared to alternative variable selection methods, independently of whether or not they use reference models.
Vector Learning for Cross-Domain Representations ; Recently, generative adversarial networks have gained a lot of popularity for image generation tasks. However, such models are associated with complex learning mechanisms and demand very large relevant datasets. This work borrows concepts from image and video captioning models to form an image generative framework. The model is trained in a similar fashion as a recurrent captioning model and uses the learned weights for image generation. This is done in an inverse direction, where the input is a caption and the output is an image. The vector representations of the sentence and frames are extracted from an encoder-decoder model which is initially trained on similar sentence and image pairs. Our model conditions image generation on a natural language caption. We leverage a sequence-to-sequence model to generate synthetic captions that have the same meaning, to make image generation robust. One key advantage of our method is that traditional image captioning datasets can be used for synthetic sentence paraphrases. Results indicate that images generated through multiple captions are better at capturing the semantic meaning of the family of captions.
The Effects of Invertibility on the Representational Complexity of Encoders in Variational Autoencoders ; Training and using modern neural-network-based latent-variable generative models like Variational Autoencoders often requires simultaneously training a generative direction along with an inferential (encoding) direction, which approximates the posterior distribution over the latent variables. Thus, the question arises: how complex does the inferential model need to be in order to be able to accurately model the posterior distribution of a given generative model? In this paper, we identify an important property of the generative map impacting the required size of the encoder. We show that if the generative map is strongly invertible, in a sense we suitably formalize, the inferential model need not be much more complex. Conversely, we prove that there exist non-invertible generative maps for which the encoding direction needs to be exponentially larger, under standard assumptions in computational complexity. Importantly, we do not require the generative model to be layer-wise invertible, which a lot of the related literature assumes and which isn't satisfied by many architectures used in practice (e.g., convolution- and pooling-based networks). Thus, we provide theoretical support for the empirical wisdom that learning deep generative models is harder when data lies on a low-dimensional manifold.
Paraphrase Generation with Latent Bag of Words ; Paraphrase generation is a longstanding important problem in natural language processing. In addition, recent progress in deep generative models has shown promising results on discrete latent variables for text generation. Inspired by variational autoencoders with discrete latent structures, in this work, we propose a latent bag of words (BOW) model for paraphrase generation. We ground the semantics of a discrete latent variable by the BOW from the target sentences. We use this latent variable to build a fully differentiable content planning and surface realization model. Specifically, we use source words to predict their neighbors and model the target BOW with a mixture of softmax. We use Gumbel top-k reparameterization to perform differentiable subset sampling from the predicted BOW distribution. We retrieve the sampled word embeddings and use them to augment the decoder and guide its generation search space. Our latent BOW model not only enhances the decoder, but also exhibits clear interpretability. We show the model interpretability with regard to (i) unsupervised learning of word neighbors and (ii) the step-by-step generation procedure. Extensive experiments demonstrate the transparent and effective generation process of this model. Our code can be found at https://github.com/FranxYao/dgm_latent_bow.
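A minimal sketch of the Gumbel top-k sampling step (only the sampling trick itself; the paper's fully differentiable relaxation and embedding retrieval are not shown): perturbing the predicted BOW logits with Gumbel noise and keeping the k largest entries draws k distinct words, without replacement, from the corresponding softmax distribution.

import torch

def gumbel_topk(logits: torch.Tensor, k: int) -> torch.Tensor:
    gumbel = -torch.log(-torch.log(torch.rand_like(logits)))   # Gumbel(0,1) noise
    return torch.topk(logits + gumbel, k).indices

vocab_logits = torch.randn(1, 10_000)        # predicted bag-of-words scores
sampled_word_ids = gumbel_topk(vocab_logits, k=20)
print(sampled_word_ids.shape)                # torch.Size([1, 20])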
MEGATRON-CNTRL: Controllable Story Generation with External Knowledge Using Large-Scale Language Models ; Existing pre-trained large language models have shown unparalleled generative capabilities. However, they are not controllable. In this paper, we propose MEGATRON-CNTRL, a novel framework that uses large-scale language models and adds control to text generation by incorporating an external knowledge base. Our framework consists of a keyword predictor, a knowledge retriever, a contextual knowledge ranker, and a conditional text generator. As we do not have access to ground-truth supervision for the knowledge ranker, we make use of weak supervision from sentence embeddings. The empirical results show that our model generates more fluent, consistent, and coherent stories with less repetition and higher diversity compared to prior work on the ROC story dataset. We showcase the controllability of our model by replacing the keywords used to generate stories and re-running the generation process. Human evaluation results show that 77.5% of these stories are successfully controlled by the new keywords. Furthermore, by scaling our model from 124 million to 8.3 billion parameters, we demonstrate that larger models improve both the quality of generation (from 74.5% to 93.0% for consistency) and controllability (from 77.5% to 91.5%).
ProphetNet-Ads: A Looking Ahead Strategy for Generative Retrieval Models in Sponsored Search Engine ; In a sponsored search engine, generative retrieval models have recently been proposed to mine relevant advertisement keywords for users' input queries. Generative retrieval models generate outputs token by token on a path of the target library's prefix tree (Trie), which guarantees that all of the generated outputs are legal and covered by the target library. In actual use, we found several typical problems caused by Trie-constrained searching length. In this paper, we analyze these problems and propose a looking ahead strategy for generative retrieval models named ProphetNet-Ads. ProphetNet-Ads improves the retrieval ability by directly optimizing the Trie-constrained searching space. We build a dataset from a real-world sponsored search engine and carry out experiments to analyze different generative retrieval models. Compared with the Trie-based LSTM generative retrieval model proposed recently, our single-model result and integrated result improve the recall by 15.58% and 18.8%, respectively, with beam size 5. Case studies further demonstrate how these problems are alleviated by ProphetNet-Ads clearly.
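A minimal sketch of Trie-constrained decoding (the toy keyword library and greedy scorer below are illustrative assumptions; real systems use beam search with a trained generative retrieval model): at each step the candidate tokens are restricted to the children of the current Trie node, so every finished output is a keyword in the target library.

# Toy target library stored as a prefix tree (Trie) over tokens.
trie = {"buy": {"shoes": {}, "flowers": {"online": {}}}}

def constrained_greedy(score_fn, trie, max_len=5):
    node, output = trie, []
    for _ in range(max_len):
        if not node:                        # reached a leaf: keyword is complete
            break
        allowed = list(node.keys())         # only tokens continuing a library entry
        scores = score_fn(output, allowed)
        best = max(allowed, key=lambda tok: scores[tok])
        output.append(best)
        node = node[best]
    return output

# Stand-in scorer; a generative retrieval model would supply these scores.
toy_scores = lambda prefix, allowed: {tok: len(tok) for tok in allowed}
print(constrained_greedy(toy_scores, trie))   # ['buy', 'flowers', 'online']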
Generative Capacity of Probabilistic Protein Sequence Models ; Potts models and variational autoencoders (VAEs) have recently gained popularity as generative protein sequence models (GPSMs) to explore fitness landscapes and predict the effect of mutations. Despite encouraging results, quantitative characterization and comparison of GPSM-generated probability distributions is still lacking. It is currently unclear whether GPSMs can faithfully reproduce the complex multi-residue mutation patterns observed in natural sequences arising due to epistasis. We develop a set of sequence statistics to assess the generative capacity of three GPSMs of recent interest: the pairwise Potts Hamiltonian, the VAE, and the site-independent model, using natural and synthetic datasets. We show that the generative capacity of the Potts Hamiltonian model is the largest, in that the higher-order mutational statistics generated by the model agree with those observed for natural sequences. In contrast, we show that the VAE's generative capacity lies between the pairwise Potts and site-independent models. Importantly, our work measures GPSM generative capacity in terms of higher-order sequence covariation statistics which we have developed, and provides a new framework for evaluating and interpreting GPSM accuracy that emphasizes the role of epistasis.
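A minimal sketch of one standard pairwise covariation statistic computed from a multiple sequence alignment (the connected correlation C_ij(a,b) = f_ij(a,b) - f_i(a) f_j(b); this is an assumed, textbook choice and not necessarily the higher-order statistics developed in the paper):

import numpy as np

def pairwise_covariation(msa: np.ndarray, n_states: int = 21) -> np.ndarray:
    # msa: integer array of shape (n_sequences, length), entries in [0, n_states)
    n_seq, length = msa.shape
    onehot = np.eye(n_states)[msa]                      # (n_seq, L, q)
    f_i = onehot.mean(axis=0)                           # single-site frequencies
    f_ij = np.einsum("nia,njb->ijab", onehot, onehot) / n_seq   # pair frequencies
    return f_ij - np.einsum("ia,jb->ijab", f_i, f_i)    # connected correlations

msa = np.random.randint(0, 21, size=(500, 30))          # toy alignment
C = pairwise_covariation(msa)
print(C.shape)                                          # (30, 30, 21, 21)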
Super-resolution of spin configurations based on flow-based generative models ; We present a super-resolution method for spin systems using a flow-based generative model, which is a deep generative model with a reversible neural network architecture. Starting from spin configurations on a two-dimensional square lattice, our model generates spin configurations of a larger lattice. As a flow-based generative model precisely estimates the distribution of the generated configurations, it can be combined with Monte Carlo simulation to generate large lattice configurations according to the Boltzmann distribution. Hence, the long-range correlation on a large configuration is reduced into a shorter one through the flow-based generative model. This alleviates the critical slowing down near the critical temperature. We demonstrate an 8-fold increase of the lattice size in the linear dimensions by applying our super-resolution scheme repeatedly. We numerically show that by performing simulations for 16 × 16 configurations, our model can sample lattice configurations at 128 × 128 on which the thermal average of physical quantities is in good agreement with the one evaluated by the traditional Metropolis-Hastings Monte Carlo simulation.
Controllable Text Generation with Neurally-Decomposed Oracle ; We propose a general and efficient framework to control autoregressive generation models with a NeurAlly-Decomposed Oracle (NADO). Given a pre-trained base language model and a sequence-level boolean oracle function, we propose to decompose the oracle function into token-level guidance to steer the base model in text generation. Specifically, the token-level guidance is approximated by a neural model trained with examples sampled from the base model, demanding no additional auxiliary labeled data. Based on posterior regularization, we present the closed-form optimal solution to incorporate the token-level guidance into the base model for controllable generation. We further provide a theoretical analysis of how the approximation quality of NADO affects the controllable generation results. Experiments conducted on two applications, (1) text generation with lexical constraints and (2) machine translation with formality control, demonstrate that our framework efficiently guides the base model towards the given oracle while maintaining high generation quality.
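A minimal sketch of the token-level re-weighting idea (a simplified reading with made-up tensors; the paper derives the exact closed form from posterior regularization): the base model's next-token distribution is re-weighted by the estimated probability that the oracle will ultimately be satisfied if a given token is chosen, relative to that probability for the current prefix.

import torch

def guided_next_token_dist(base_logits, guidance_probs, prefix_prob, eps=1e-8):
    # base_logits:    (vocab,) logits from the base language model
    # guidance_probs: (vocab,) estimated P(oracle satisfied | prefix + token)
    # prefix_prob:    scalar   estimated P(oracle satisfied | prefix)
    base = torch.softmax(base_logits, dim=-1)
    weighted = base * guidance_probs / (prefix_prob + eps)
    return weighted / weighted.sum()             # renormalize to a distribution

vocab = 50
dist = guided_next_token_dist(torch.randn(vocab), torch.rand(vocab), prefix_prob=0.5)
print(float(dist.sum()))                         # 1.0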
Towards Universal Fake Image Detectors that Generalize Across Generative Models ; With generative models proliferating at a rapid rate, there is a growing need for general purpose fake image detectors. In this work, we first show that the existing paradigm, which consists of training a deep network for realvsfake classification, fails to detect fake images from newer breeds of generative models when trained to detect GAN fake images. Upon analysis, we find that the resulting classifier is asymmetrically tuned to detect patterns that make an image fake. The real class becomes a sink class holding anything that is not fake, including generated images from models not accessible during training. Building upon this discovery, we propose to perform realvsfake classification without learning; i.e., using a feature space not explicitly trained to distinguish real from fake images. We use nearest neighbor and linear probing as instantiations of this idea. When given access to the feature space of a large pretrained visionlanguage model, the very simple baseline of nearest neighbor classification has surprisingly good generalization ability in detecting fake images from a wide variety of generative models; e.g., it improves upon the SoTA by 15.07 mAP and 25.90 acc when tested on unseen diffusion and autoregressive models.
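The nearest-neighbor baseline can be stated in a few lines. The sketch below assumes a frozen feature extractor has already mapped real images, known generated images, and query images to vectors; the random "features" are placeholders, not the actual vision-language embeddings.

```python
import numpy as np

def nearest_neighbor_fake_detector(real_feats, fake_feats, query_feats):
    """
    real_feats, fake_feats: feature banks of known real / generated images, shape (N, d),
        extracted with a frozen pretrained encoder (no training on the real-vs-fake task).
    query_feats: features of images to classify, shape (M, d).
    Returns 1 ('fake') or 0 ('real') depending on which bank holds the nearest neighbor.
    """
    def min_dist(queries, bank):
        # Pairwise Euclidean distances, then the distance to the closest bank element.
        d = np.linalg.norm(queries[:, None, :] - bank[None, :, :], axis=-1)
        return d.min(axis=1)

    return (min_dist(query_feats, fake_feats) < min_dist(query_feats, real_feats)).astype(int)

# Toy illustration with random 8-dimensional "features".
rng = np.random.default_rng(1)
real = rng.normal(0.0, 1.0, size=(50, 8))
fake = rng.normal(2.0, 1.0, size=(50, 8))
queries = np.vstack([rng.normal(0.0, 1.0, size=(5, 8)), rng.normal(2.0, 1.0, size=(5, 8))])
print(nearest_neighbor_fake_detector(real, fake, queries))   # expected: mostly 0s then 1s
```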
Conditional Generation from Unconditional Diffusion Models using Denoiser Representations ; Denoising diffusion models have gained popularity as a generative modeling technique for producing highquality and diverse images. Applying these models to downstream tasks requires conditioning, which can take the form of text, class labels, or other forms of guidance. However, providing conditioning information to these models can be challenging, particularly when annotations are scarce or imprecise. In this paper, we propose adapting pretrained unconditional diffusion models to new conditions using the learned internal representations of the denoiser network. We demonstrate the effectiveness of our approach on various conditional generation tasks, including attributeconditioned generation and maskconditioned generation. Additionally, we show that augmenting the Tiny ImageNet training set with synthetic images generated by our approach improves the classification accuracy of ResNet baselines by up to 8. Our approach provides a powerful and flexible way to adapt diffusion models to new conditions and generate highquality augmented data for various conditional generation tasks.
SequenceMatch: Imitation Learning for Autoregressive Sequence Modelling with Backtracking ; In many domains, autoregressive models can attain high likelihood on the task of predicting the next observation. However, this maximum-likelihood (MLE) objective does not necessarily match a downstream use case of autoregressively generating high-quality sequences. The MLE objective weights sequences proportionally to their frequency under the data distribution, with no guidance for the model's behaviour out of distribution (OOD), leading to compounding error during autoregressive generation. In order to address this compounding error problem, we formulate sequence generation as an imitation learning (IL) problem. This allows us to minimize a variety of divergences between the distribution of sequences generated by an autoregressive model and sequences from a dataset, including divergences with weight on OOD generated sequences. The IL framework also allows us to incorporate backtracking by introducing a backspace action into the generation process. This further mitigates the compounding error problem by allowing the model to revert a sampled token if it takes the sequence OOD. Our resulting method, SequenceMatch, can be implemented without adversarial training or major architectural changes. We identify the SequenceMatch-χ² divergence as a more suitable training objective for autoregressive models which are used for generation. We show that empirically, SequenceMatch training leads to improvements over MLE on text generation with language models.
Using Motif Transitions for Temporal Graph Generation ; Graph generative models are highly important for sharing surrogate data and benchmarking purposes. Realworld complex systems often exhibit dynamic nature, where the interactions among nodes change over time in the form of a temporal network. Most temporal network generation models extend the static graph generation models by incorporating temporality in the generation process. More recently, temporal motifs are used to generate temporal networks with better success. However, existing models are often restricted to a small set of predefined motif patterns due to the high computational cost of counting temporal motifs. In this work, we develop a practical temporal graph generator, Motif Transition Model MTM, to generate synthetic temporal networks with realistic global and local features. Our key idea is modeling the arrival of new events as temporal motif transition processes. We first calculate the transition properties from the input graph and then simulate the motif transition processes based on the transition probabilities and transition rates. We demonstrate that our model consistently outperforms the baselines with respect to preserving various global and local temporal graph statistics and runtime performance.
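The core simulation step ("simulate the motif transition processes based on the transition probabilities and transition rates") can be illustrated as a generic continuous-time Markov chain over motif states. The transition matrix, rates, and state labels below are toy values, not quantities estimated from any real temporal graph, and the sketch omits MTM's graph-construction details.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_motif_transitions(P, rates, start_state, horizon):
    """
    Generic continuous-time Markov chain sketch of a motif-transition process:
    P[i, j]  - probability that motif state i transitions to motif state j,
    rates[i] - rate (1 / expected waiting time) of leaving state i.
    Both would be estimated from the input temporal graph; here they are toy values.
    Returns a list of (timestamp, state) events up to the time horizon.
    """
    t, state, events = 0.0, start_state, []
    while True:
        t += rng.exponential(1.0 / rates[state])    # waiting time until the next transition
        if t > horizon:
            break
        state = rng.choice(len(rates), p=P[state])  # which motif the current motif turns into
        events.append((t, state))
    return events

# Toy 3-state example (e.g., empty pair -> single edge -> reciprocated pair).
P = np.array([[0.1, 0.8, 0.1],
              [0.3, 0.2, 0.5],
              [0.6, 0.3, 0.1]])
rates = np.array([1.0, 2.0, 0.5])
print(simulate_motif_transitions(P, rates, start_state=0, horizon=10.0)[:5])
```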
An Accurate Graph Generative Model with Tunable Features ; A graph is a very common and powerful data structure used for modeling communication and social networks. Models that generate graphs with arbitrary features are important basic technologies in repeated simulations of networks and prediction of topology changes. Although existing generative models for graphs are useful for providing graphs similar to real-world graphs, graph generation models with tunable features have been less explored in the field. Previously, we proposed GraphTune, a generative model for graphs that continuously tunes specific graph features of generated graphs while maintaining most of the features of a given graph dataset. However, the tuning accuracy of graph features in GraphTune has not been sufficient for practical applications. In this paper, we propose a method to improve the accuracy of GraphTune by adding a new mechanism that feeds back errors in the graph features of generated graphs, and by training the components alternately and independently. Experiments on a real-world graph dataset showed that the features of the generated graphs are accurately tuned compared with conventional models.
Unbiased Face Synthesis With Diffusion Models Are We There Yet ; Texttoimage diffusion models have achieved widespread popularity due to their unprecedented image generation capability. In particular, their ability to synthesize and modify human faces has spurred research into using generated face images in both training data augmentation and model performance assessments. In this paper, we study the efficacy and shortcomings of generative models in the context of face generation. Utilizing a combination of qualitative and quantitative measures, including embeddingbased metrics and user studies, we present a framework to audit the characteristics of generated faces conditioned on a set of social attributes. We applied our framework on faces generated through stateoftheart texttoimage diffusion models. We identify several limitations of face image generation that include faithfulness to the text prompt, demographic disparities, and distributional shifts. Furthermore, we present an analytical model that provides insights into how training data selection contributes to the performance of generative models.
LiDAR Data Synthesis with Denoising Diffusion Probabilistic Models ; Generative modeling of 3D LiDAR data is an emerging task with promising applications for autonomous mobile robots, such as scalable simulation, scene manipulation, and sparse-to-dense completion of LiDAR point clouds. Existing approaches have shown the feasibility of image-based LiDAR data generation using deep generative models while still struggling with the fidelity of generated data and training instability. In this work, we present R2DM, a novel generative model for LiDAR data that can generate diverse and high-fidelity 3D scene point clouds based on the image representation of range and reflectance intensity. Our method is based on denoising diffusion probabilistic models (DDPMs), which have demonstrated impressive results among generative model frameworks and have progressed significantly in recent years. To effectively train DDPMs on the LiDAR domain, we first conduct an in-depth analysis of data representation, training objective, and spatial inductive bias. Based on our designed model R2DM, we also introduce a flexible LiDAR completion pipeline using the powerful properties of DDPMs. We demonstrate that our method outperforms the baselines on the generation task of the KITTI-360 and KITTI-Raw datasets and the upsampling task of the KITTI-360 dataset. Our code and pretrained weights will be available at https://github.com/kazuto1011/r2dm.
Sensitivity Analysis of the MCRF Model to Different Transiogram Joint Modeling Methods for Simulating Categorical Spatial Variables ; Markov chain geostatistics is a methodology for simulating categorical fields. Its fundamental model for conditional simulation is the Markov chain random field (MCRF) model, and its basic spatial correlation measure is the transiogram. There are different ways to obtain transiogram models (i.e., continuous-lag transiograms) for MCRF simulation based on sample data and expert knowledge: the linear interpolation method, the mathematical model joint-fitting method, and a mixed method of the former two. Two case studies were conducted to show how simulated results, including optimal prediction maps and simulated realization maps, would respond to the different sets of transiogram models generated by the three transiogram joint modeling methods. Results show that the three transiogram joint modeling methods are applicable; the MCRF model is generally not very sensitive to the transiogram models produced by different methods, especially when sample data are sufficient to generate reliable experimental transiograms; and the differences between overall simulation accuracies based on different sets of transiogram models are not significant. However, some minor classes show obvious improvement in simulation accuracy when theoretical transiogram models generated by mathematical model fitting with expert knowledge are used for minor classes. In general, this study indicates that methods for deriving transiogram models from experimental transiograms can perform well in conditional simulations of categorical soil variables when meaningful experimental transiograms can be estimated. Employing mathematical models for transiogram modeling of minor classes provides a way to incorporate expert knowledge and improve the simulation accuracy of minor classes.
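The contrast between the joint modeling methods can be illustrated with a small sketch: fitting one common mathematical transiogram form (an exponential model) to experimental transiogram values versus linearly interpolating those values. The lag values, "experimental" transiogram numbers, and the exponential form are illustrative assumptions, not data or formulas from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def exponential_transiogram(h, sill, a):
    """One common mathematical model for a cross-transiogram: rises from 0 toward its sill."""
    return sill * (1.0 - np.exp(-3.0 * h / a))

# Synthetic "experimental" transiogram estimated from sample data at a few lags (illustrative only).
lags = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 160.0])
experimental = np.array([0.12, 0.21, 0.33, 0.41, 0.44, 0.45])

# Mathematical-model joint fitting for one class pair: estimate sill and range parameter a.
(sill_hat, a_hat), _ = curve_fit(exponential_transiogram, lags, experimental, p0=[0.5, 50.0])

# Linear interpolation alternative: evaluate the experimental transiogram at arbitrary lags directly.
dense_lags = np.linspace(0.0, 160.0, 33)
linear_model = np.interp(dense_lags, lags, experimental)
fitted_model = exponential_transiogram(dense_lags, sill_hat, a_hat)
print(sill_hat, a_hat)
```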
Leveraging Evolution Dynamics to Generate Benchmark Complex Networks with Community Structures ; The past decade has seen tremendous growth in the field of Complex Social Networks. Several network generation models have been extensively studied to develop an understanding of how real world networks evolve over time. Two important applications of these models are to study the evolution dynamics and processes that shape a network, and to generate benchmark networks with known community structures. Research has been conducted in both these directions, relatively independent of the other. This creates a disjunct between real world networks and the networks generated as benchmarks to study community detection algorithms. In this paper, we propose to study both these application areas together. We introduce a network generation model which is based on evolution dynamics of real world networks and, it can generate networks with community structures that can be used as benchmark graphs. We study the behaviour of different community detection algorithms based on the proposed model and compare it with other models to generate benchmark graphs. Results suggest that the proposed model can generate networks which are not only structurally similar to real world networks but can be used to generate networks with varying community sizes and topologies.
Learning Inverse Mapping by Autoencoder-based Generative Adversarial Nets ; The inverse mapping of the generator of GANs (Generative Adversarial Nets) has great potential value. Hence, some works have been developed to construct the inverse function of the generator by direct learning or adversarial learning. While the results are encouraging, the problem is highly challenging, and the existing ways of training inverse models of GANs have many disadvantages, such as being hard to train or having poor performance. For these reasons, we propose a new approach that uses an inverse generator (IG) model as the encoder and a pretrained generator (G) as the decoder of an autoencoder network to train the IG model. In the proposed model, the difference between the input and output of the autoencoder, both of which are generated images of the pretrained GAN's generator, is directly minimized. This optimization method can overcome the difficulty of training an inverse model of a non-one-to-one function. We also apply the inverse model of GANs' generators to image searching and translation. The experimental results show that the proposed approach works better than the traditional approaches in image searching.
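A minimal sketch of this training objective, assuming toy fully-connected architectures and hyperparameters rather than the paper's actual networks: the pretrained generator G is frozen, and the inverse generator IG is optimized so that G(IG(G(z))) reconstructs G(z).

```python
import torch
import torch.nn as nn

# Illustrative dimensions and architectures (assumptions, not the paper's models).
LATENT_DIM, IMG_DIM = 64, 784

G = nn.Sequential(nn.Linear(LATENT_DIM, 256), nn.ReLU(), nn.Linear(256, IMG_DIM), nn.Tanh())
for p in G.parameters():          # the generator is pretrained and kept fixed
    p.requires_grad_(False)

IG = nn.Sequential(nn.Linear(IMG_DIM, 256), nn.ReLU(), nn.Linear(256, LATENT_DIM))
optimizer = torch.optim.Adam(IG.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

for step in range(1000):
    z = torch.randn(128, LATENT_DIM)
    with torch.no_grad():
        x_fake = G(z)             # generated image used as the autoencoder input
    x_rec = G(IG(x_fake))         # encode with IG, decode with the frozen G
    loss = loss_fn(x_rec, x_fake) # minimize the input-output difference directly
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```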
MolGAN: An implicit generative model for small molecular graphs ; Deep generative models for graph-structured data offer a new angle on the problem of chemical synthesis: by optimizing differentiable models that directly generate molecular graphs, it is possible to sidestep expensive search procedures in the discrete and vast space of chemical structures. We introduce MolGAN, an implicit, likelihood-free generative model for small molecular graphs that circumvents the need for expensive graph matching procedures or node ordering heuristics of previous likelihood-based methods. Our method adapts generative adversarial networks (GANs) to operate directly on graph-structured data. We combine our approach with a reinforcement learning objective to encourage the generation of molecules with specific desired chemical properties. In experiments on the QM9 chemical database, we demonstrate that our model is capable of generating close to 100% valid compounds. MolGAN compares favorably both to recent proposals that use string-based SMILES representations of molecules and to a likelihood-based method that directly generates graphs, albeit being susceptible to mode collapse. Code is available at https://github.com/nicoladecao/MolGAN
Deep Structured Generative Models ; Deep generative models have shown promising results in generating realistic images, but it is still non-trivial to generate images with complicated structures. The main reason is that most of the current generative models fail to explore the structures in the images, including spatial layout and semantic relations between objects. To address this issue, we propose a novel deep structured generative model which boosts generative adversarial networks (GANs) with the aid of structure information. In particular, the layout or structure of the scene is encoded by a stochastic and-or graph (sAOG), in which the terminal nodes represent single objects and edges represent relations between objects. With the sAOG appropriately harnessed, our model can successfully capture the intrinsic structure in the scenes and generate images of complicated scenes accordingly. Furthermore, a detection network is introduced to infer scene structures from an image. Experimental results demonstrate the effectiveness of our proposed method on both modeling the intrinsic structures and generating realistic images.
Unsupervised Primitive Discovery for Improved 3D Generative Modeling ; 3D shape generation is a challenging problem due to the highdimensional output space and complex part configurations of realworld objects. As a result, existing algorithms experience difficulties in accurate generative modeling of 3D shapes. Here, we propose a novel factorized generative model for 3D shape generation that sequentially transitions from coarse to fine scale shape generation. To this end, we introduce an unsupervised primitive discovery algorithm based on a higherorder conditional random field model. Using the primitive parts for shapes as attributes, a parameterized 3D representation is modeled in the first stage. This representation is further refined in the next stage by adding fine scale details to shape. Our results demonstrate improved representation ability of the generative model and better quality samples of newly generated 3D shapes. Further, our primitive generation approach can accurately parse common objects into a simplified representation.
Parameterization of Forced Isotropic Turbulent Flow using Autoencoders and Generative Adversarial Networks ; Autoencoders and generative neural network models have recently gained popularity in fluid mechanics due to their spontaneity and low processing time compared with high-fidelity CFD simulations. Autoencoders are used as model order reduction tools in applications of fluid mechanics by compressing input high-dimensional data using an encoder to map the input space into a lower-dimensional latent space. Meanwhile, generative models such as Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs) are proving effective in generating solutions to chaotic models with high 'randomness', such as turbulent flows. In this study, forced isotropic turbulent flow is generated by parameterizing it into some basic statistical characteristics. The models are trained on pre-simulated data that depend on these characteristics, and the flow generation is then controlled by varying these parameters. The latent vectors pushed through the generator models, such as the decoders and generators, contain independent entries which can be used to create different outputs with similar properties. The use of a neural-network-based architecture removes the need for dependency on the classical mesh-based Navier-Stokes equation estimation which is prominent in many CFD software packages.
Generative Models from the perspective of Continual Learning ; Which generative model is the most suitable for Continual Learning? This paper aims at evaluating and comparing generative models on disjoint sequential image generation tasks. We investigate how several models learn and forget, considering various strategies: rehearsal, regularization, generative replay and fine-tuning. We used two quantitative metrics to estimate the generation quality and memory ability. We experiment with sequential tasks on three commonly used benchmarks for Continual Learning: MNIST, Fashion MNIST and CIFAR10. We found that, among all models, the original GAN performs best and, among Continual Learning strategies, generative replay outperforms all other methods. Even if we found satisfactory combinations on MNIST and Fashion MNIST, training generative models sequentially on CIFAR10 is particularly unstable, and remains a challenge. Our code is available online at https://github.com/TLESORT/GenerativeContinualLearning.
Distributional Discrepancy: A Metric for Unconditional Text Generation ; The purpose of unconditional text generation is to train a model with real sentences, then generate novel sentences of the same quality and diversity as the training data. However, when different metrics are used for comparing the methods of unconditional text generation, contradictory conclusions are drawn. The difficulty is that both the diversity and quality of the samples should be considered simultaneously when the models are evaluated. To solve this problem, a novel metric of distributional discrepancy (DD) is designed to evaluate generators based on the discrepancy between the generated and real training sentences. However, DD cannot be computed directly because the distribution of real sentences is unavailable. Thus, we propose a method for estimating DD by training a neural-network-based text classifier. For comparison, three existing metrics, bilingual evaluation understudy (BLEU) versus self-BLEU, language model score versus reverse language model score, and Fréchet embedding distance, along with the proposed DD, are used to evaluate two popular generative models of long short-term memory and generative pretrained transformer 2 on both syntactic and real data. Experimental results show that DD is significantly better than the three existing metrics for ranking these generative models.
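One plausible classifier-based estimator of such a discrepancy (a sketch under stated assumptions, not necessarily the paper's exact definition): train a classifier to separate real from generated sentences, then measure how far its held-out accuracy exceeds chance, since an indistinguishable generator leaves the best classifier at 50%. The probabilities and labels below are synthetic.

```python
import numpy as np

def estimate_distributional_discrepancy(clf_prob_real, labels):
    """
    clf_prob_real: classifier's predicted probability of 'real' on a held-out set, shape (N,),
                   from a text classifier trained to separate real (1) from generated (0) text.
    labels: true 0/1 labels for the same sentences.
    Returns 0 when the two distributions are indistinguishable, up to 1 when fully separable.
    """
    accuracy = np.mean((clf_prob_real > 0.5).astype(int) == labels)
    return max(0.0, 2.0 * accuracy - 1.0)

# Toy example: a weak classifier on nearly indistinguishable distributions gives a small value.
rng = np.random.default_rng(3)
labels = rng.integers(0, 2, size=1000)
probs = np.clip(0.5 + 0.05 * (labels - 0.5) + rng.normal(0, 0.1, size=1000), 0, 1)
print(estimate_distributional_discrepancy(probs, labels))
```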
THINK A Novel Conversation Model for Generating Grammatically Correct and Coherent Responses ; Many existing conversation models that are based on the encoderdecoder framework have focused on ways to make the encoder more complicated to enrich the context vectors so as to increase the diversity and informativeness of generated responses. However, these approaches face two problems. First, the decoder is too simple to effectively utilize the previously generated information and tends to generate duplicated and selfcontradicting responses. Second, the complex encoder tends to generate diverse but incoherent responses because the complex context vectors may deviate from the original semantics of context. In this work, we proposed a conversation model named THINK Teamwork generation Hover around Impressive Noticeable Keywords to make the decoder more complicated and avoid generating duplicated and selfcontradicting responses. The model simplifies the context vectors and increases the coherence of generated responses in a reasonable way. For this model, we propose Teamwork generation framework and Semantics Extractor. Compared with other baselines, both automatic and human evaluation showed the advantages of our model.
MOCHA A MultiTask Training Approach for Coherent Text Generation from Cognitive Perspective ; Teaching neural models to generate narrative coherent texts is a critical problem. Recent pretrained language models have achieved promising results, but there is still a gap between human written texts and machinegenerated outputs. In this work, we propose a novel multitask training strategy for coherent text generation grounded on the cognitive theory of writing, which empowers the model to learn essential subskills needed for writing including planning and reviewing besides endtoend generation. We extensively evaluate our model on three openended generation tasks including story generation, news article writing and argument generation. Experiments show that our model achieves better results on both fewshot and fullysupervised settings than strong baselines, and human evaluations confirm that our model can generate more coherent outputs.
LayoutDM Transformerbased Diffusion Model for Layout Generation ; Automatic layout generation that can synthesize highquality layouts is an important tool for graphic design in many applications. Though existing methods based on generative models such as Generative Adversarial Networks GANs and Variational AutoEncoders VAEs have progressed, they still leave much room for improving the quality and diversity of the results. Inspired by the recent success of diffusion models in generating highquality images, this paper explores their potential for conditional layout generation and proposes Transformerbased Layout Diffusion Model LayoutDM by instantiating the conditional denoising diffusion probabilistic model DDPM with a purely transformerbased architecture. Instead of using convolutional neural networks, a transformerbased conditional Layout Denoiser is proposed to learn the reverse diffusion process to generate samples from noised layout data. Benefitting from both transformer and DDPM, our LayoutDM is of desired properties such as highquality generation, strong sample diversity, faithful distribution coverage, and stationary training in comparison to GANs and VAEs. Quantitative and qualitative experimental results show that our method outperforms stateoftheart generative models in terms of quality and diversity.
Large Language Models are Effective Table-to-Text Generators, Evaluators, and Feedback Providers ; Large language models (LLMs) have shown remarkable ability in controllable text generation. However, the potential of LLMs in generating text from structured tables remains largely underexplored. In this paper, we study the capabilities of LLMs for table-to-text generation tasks, particularly aiming to investigate their performance in generating natural language statements that can be logically entailed by a provided table. First, we investigate how LLMs compare to state-of-the-art table-to-text fine-tuned models, and demonstrate that LLMs can generate statements with higher faithfulness compared with previous state-of-the-art fine-tuned models. Given this finding, we next explore whether LLMs can serve as faithfulness-level automated evaluation metrics. Through human evaluation, we show that evaluation metrics adopted from LLMs correlate better with human judgments compared with existing faithfulness-level metrics. Finally, we demonstrate that LLMs using chain-of-thought prompting can generate high-fidelity natural language feedback for other table-to-text models' generations, providing insights for future work regarding the distillation of text generation capabilities from LLMs to smaller models.
Benchmarking Large Language Model Capabilities for Conditional Generation ; Pretrained large language models (PLMs) underlie most new developments in natural language processing. They have shifted the field from application-specific model pipelines to a single model that is adapted to a wide range of tasks. Autoregressive PLMs like GPT-3 or PaLM, alongside techniques like few-shot learning, have additionally shifted the output modality to generation instead of classification or regression. Despite their ubiquitous use, the generation quality of language models is rarely evaluated when these models are introduced. Additionally, it is unclear how existing generation tasks, while they can be used to compare systems at a high level, relate to the real-world use cases for which people have been adopting them. In this work, we discuss how to adapt existing application-specific generation benchmarks to PLMs and provide an in-depth, empirical study of the limitations and capabilities of PLMs in natural language generation tasks along dimensions such as scale, architecture, input and output language. Our results show that PLMs differ in their applicability to different data regimes and their generalization to multiple languages, and inform which PLMs to use for a given generation task setup. We share best practices to be taken into consideration when benchmarking generation capabilities during the development of upcoming PLMs.
Generative Forests ; Tabular data represents one of the most prevalent forms of data. When it comes to data generation, many approaches would learn a density for the data generation process but would not necessarily end up with a sampler, even less so one that is exact with respect to the underlying density. A second issue concerns models: while complex modeling based on neural nets thrives in image or text generation, less is known about powerful generative models for tabular data. A third problem is the visible chasm, on tabular data, between training algorithms for supervised learning with remarkable properties (e.g. boosting) and a comparative lack of guarantees when it comes to data generation. In this paper, we tackle these three problems, introducing new tree-based generative models convenient for density modeling and tabular data generation that improve on the modeling capabilities of recent proposals, and a training algorithm which simplifies the training setting of previous approaches and displays boosting-compliant convergence. This algorithm has the convenient property of relying on a supervised training scheme that can be implemented with a few tweaks to the most popular induction scheme for decision trees with two classes. Experiments are provided on missing data imputation and on comparing generated data to real data, displaying the quality of the results obtained by our approach, in particular against the state of the art.
Theory of Superselection Sectors for Generalized Ising models ; We apply the theory of superselection sectors, in the same way as done by G. Mack and V. Schomerus for the Ising model, to generalizations of this model described by J. Fröhlich and T. Kerler.
Path Integral Solubility of a General TwoDimensional Model ; The solubility of a general two dimensional model, which reduces to various models in different limits, is studied within the path integral formalism. Various subtleties and interesting features are pointed out.
Three generation Distler-Kachru models ; Distler-Kachru models which yield three generations of chiral fermions with gauge group SO(10) are found. These models have mirror partners.
The Bag Model of Nuclei ; The basic assumptions and the general results of our bag model for nuclei are presented in detail. Nuclei are treated within a unified integration of mean field theory and the MIT bag model.
On the evolution in the configuration model ; We give precise estimates on the number of active/inactive half-edges in the configuration model used to generate random regular graphs. This is obtained by analyzing a more general urn model with negative eigenvalues.
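For readers unfamiliar with the construction, the sketch below generates a random d-regular multigraph by pairing half-edges uniformly at random and records how the number of still-unmatched (active) half-edges evolves; the paper's contribution is the precise analysis of such counts via an urn model, not the simulation itself.

```python
import numpy as np

rng = np.random.default_rng(4)

def configuration_model_regular(n, d):
    """
    Pair half-edges uniformly at random to build a random d-regular multigraph on n vertices
    (n * d must be even). Returns the edge list and the trajectory of the number of active
    (still unmatched) half-edges as the pairing proceeds.
    """
    assert (n * d) % 2 == 0
    half_edges = np.repeat(np.arange(n), d)   # each vertex owns d half-edges
    rng.shuffle(half_edges)                   # a uniformly random perfect matching of half-edges
    edges, active_counts = [], []
    active = len(half_edges)
    for k in range(0, len(half_edges), 2):
        edges.append((half_edges[k], half_edges[k + 1]))  # match two half-edges into one edge
        active -= 2
        active_counts.append(active)
    return edges, active_counts

edges, active_counts = configuration_model_regular(n=10, d=3)
print(len(edges), active_counts[:5])   # 15 edges; active half-edges decrease by 2 per step
```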
Learning Inference Models for Computer Vision ; Computer vision can be understood as the ability to perform inference on image data. Breakthroughs in computer vision technology are often marked by advances in inference techniques. This thesis proposes novel inference schemes and demonstrates applications in computer vision. We propose inference techniques for both generative and discriminative vision models. The use of generative models in vision is often hampered by the difficulty of posterior inference. We propose techniques for improving inference in MCMC sampling and messagepassing inference. Our inference strategy is to learn separate discriminative models that assist Bayesian inference in a generative model. Experiments on a range of generative models show that the proposed techniques accelerate the inference process andor converge to better solutions. A main complication in the design of discriminative models is the inclusion of prior knowledge. We concentrate on CNN models and propose a generalization of standard spatial convolutions to bilateral convolutions. We generalize the existing use of bilateral filters and then propose new neural network architectures with learnable bilateral filters, which we call Bilateral Neural Networks'. Experiments demonstrate the use of the bilateral networks on a wide range of image and video tasks and datasets. In summary, we propose techniques for better inference in several vision models ranging from inverse graphics to freely parameterized neural networks. In generative models, our inference techniques alleviate some of the crucial hurdles in Bayesian posterior inference, paving new ways for the use of model based machine learning in vision. In discriminative CNN models, the proposed filter generalizations aid in the design of new neural network architectures that can handle sparse highdimensional data as well as provide a way to incorporate prior knowledge into CNNs.
Infinite forcing and the generic multiverse ; In this article we present a technique for selecting models of set theory that are complete in a model-theoretic sense. Specifically, we will apply Robinson's infinite forcing to the collections of models of ZFC obtained by Cohen forcing. This technique will be used to suggest a unified perspective on generic absoluteness principles.
Uniform bounds for ruin probability in Multidimensional Risk Model ; In this paper we consider some generalizations of the classical ddimensional Brownian risk model. This contribution derives some nonasymptotic bounds for simultaneous ruin probabilities of interest. In addition, we obtain nonasymptotic bounds also for the case of general trend functions and convolutions of our original risk model.
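As a point of comparison for such bounds, a crude time-discretized Monte Carlo estimate of a simultaneous ruin probability for a two-dimensional Brownian risk model can be written in a few lines. The trend, threshold, correlation, horizon, and the particular notion of "simultaneous ruin" (both components exceeding their thresholds before time T, not necessarily at the same instant) are illustrative assumptions; the paper derives analytic non-asymptotic bounds rather than simulations.

```python
import numpy as np

rng = np.random.default_rng(5)

def simultaneous_ruin_probability(u, c, corr, T=1.0, n_steps=2000, n_paths=20000):
    """
    Monte Carlo estimate (time-discretized, hence a slight underestimate) of the probability
    that both components of a correlated 2-d Brownian risk model  X_i(t) = W_i(t) - c_i * t
    exceed their thresholds u_i at some point before time T.
    """
    dt = T / n_steps
    L = np.linalg.cholesky(np.array([[1.0, corr], [corr, 1.0]]))
    ruined_both = 0
    for _ in range(n_paths):
        increments = rng.normal(size=(n_steps, 2)) @ L.T * np.sqrt(dt)
        path = np.cumsum(increments, axis=0) - c * dt * np.arange(1, n_steps + 1)[:, None]
        ruined_both += int((path[:, 0] > u[0]).any() and (path[:, 1] > u[1]).any())
    return ruined_both / n_paths

print(simultaneous_ruin_probability(u=np.array([1.0, 1.5]), c=np.array([0.5, 0.5]), corr=0.3))
```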
Learning Robust Representations Of Generative Models Using SetBased Artificial Fingerprints ; With recent progress in deep generative models, the problem of identifying synthetic data and comparing their underlying generative processes has become an imperative task for various reasons, including fighting visual misinformation and source attribution. Existing methods often approximate the distance between the models via their sample distributions. In this paper, we approach the problem of fingerprinting generative models by learning representations that encode the residual artifacts left by the generative models as unique signals that identify the source models. We consider these unique traces a.k.a. artificial fingerprints as representations of generative models, and demonstrate their usefulness in both the discriminative task of source attribution and the unsupervised task of defining a similarity between the underlying models. We first extend the existing studies on fingerprints of GANs to four representative classes of generative models VAEs, Flows, GANs and scorebased models, and demonstrate their existence and attributability. We then improve the stability and attributability of the fingerprints by proposing a new learning method based on setencoding and contrastive training. Our setencoder, unlike existing methods that operate on individual images, learns fingerprints from a textitset of images. We demonstrate improvements in the stability and attributability through comparisons to stateoftheart fingerprint methods and ablation studies. Further, our method employs contrastive training to learn an implicit similarity between models. We discover latent families of generative models using this metric in a standard hierarchical clustering algorithm.
Exploring Generative Neural Temporal Point Process ; Temporal point processes (TPPs) are commonly used to model asynchronous event sequences featuring occurrence timestamps, and are revealed by probabilistic models conditioned on historical impacts. While many previous works have focused on the 'goodness-of-fit' of TPP models by maximizing the likelihood, their predictive performance is unsatisfactory, which means the timestamps generated by the models are far apart from the true observations. Recently, deep generative models such as denoising diffusion and score matching models have achieved great progress in image generation tasks by demonstrating their capability of generating samples of high quality. However, there are no complete and unified works exploring and studying the potential of generative models in the context of event occurrence modeling for TPPs. In this work, we try to fill the gap by designing a unified generative framework for neural temporal point processes (GNTPP) to explore their feasibility and effectiveness, and to further improve the models' predictive performance. Besides, in terms of measuring the historical impacts, we revise the attentive models which summarize influence from historical events with an adaptive reweighting term considering events' type relation and time intervals. Extensive experiments have been conducted to illustrate the improved predictive capability of GNTPP with a line of generative probabilistic decoders, and the performance gain from the revised attention. To the best of our knowledge, this is the first work that adapts generative models in a complete unified framework and studies their effectiveness in the context of TPPs. Our codebase, including all the methods given in Section 5.1.1, is available at https://github.com/BIRDTAO/GNTPP. We hope the code framework can facilitate future research in neural TPPs.
Flops and minimal models for generalized pairs ; We show that given any two minimal models of a generalized lc pair, there exist small birational models which are connected by a sequence of symmetric flops. We also present some applications.
Multicolored dimer models in one dimension: lattice paths and generalized Rogers-Ramanujan identities ; We define and study multicolored dimer models on a segment and on a circle. The multivariate generating functions for the dimer models satisfy recurrence relations similar to the one for Fibonacci numbers. We give closed formulae for the generating functions. We show that, in the large size limit with specializations of the formal variables, the generating functions exhibit the summations appearing in generalized Rogers-Ramanujan identities. Further, the generating functions of the dimer models have infinite product formulae for general values of the formal variables in the large size limit. These formulae are generalizations of Rogers-Ramanujan identities for multiple variables. We also give several other specializations which exhibit simple combinatorial formulae. The analysis of the correlation functions, which we call emptiness formation probabilities and moments, leads to the application of the formal power series associated to the Dyck, Motzkin and Schröder paths to the generating functions for the dimer models. We give descriptions of the generating functions of finite size in terms of these combinatorial objects, Dyck and Motzkin paths with statistics. We have three additional results. First, the convoluted generating functions for Fibonacci, Catalan and Motzkin numbers are shown to be expressed as generating functions of Fibonacci, Dyck and Motzkin words with weights given by binomial coefficients. The second is a weight-preserving correspondence between a Motzkin path and a set of Dyck paths. The third is a connection of the generating functions for the dimer models to the generating functions of independent sets of special classes of graphs.
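The Fibonacci-like recurrence mentioned above can be illustrated by counting configurations of non-overlapping dimers in k colors on a segment of n sites, where uncovered sites are allowed; the paper's multivariate generating functions refine such counts with formal variables, and this plain counting is an illustrative simplification.

```python
def colored_dimer_count(n, k):
    """
    Number of ways to place non-overlapping dimers, each in one of k colors, on a segment
    of n sites (uncovered sites are allowed). Satisfies the Fibonacci-like recurrence
    a(n) = a(n-1) + k * a(n-2): the last site is either empty, or covered by a colored dimer.
    For k = 1 this reproduces the Fibonacci numbers.
    """
    a_prev, a_curr = 1, 1          # a(0) = 1 (empty segment), a(1) = 1 (a single empty site)
    for _ in range(2, n + 1):
        a_prev, a_curr = a_curr, a_curr + k * a_prev
    return a_curr

print([colored_dimer_count(n, 1) for n in range(1, 8)])   # 1, 2, 3, 5, 8, 13, 21
print([colored_dimer_count(n, 2) for n in range(1, 8)])   # 1, 3, 5, 11, 21, 43, 85
```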
From Text to Source Results in Detecting Large Language ModelGenerated Content ; The widespread use of Large Language Models LLMs, celebrated for their ability to generate humanlike text, has raised concerns about misinformation and ethical implications. Addressing these concerns necessitates the development of robust methods to detect and attribute text generated by LLMs. This paper investigates CrossModel Detection, evaluating whether a classifier trained to distinguish between source LLMgenerated and humanwritten text can also detect text from a target LLM without further training. The study comprehensively explores various LLM sizes and families, and assesses the impact of conversational finetuning techniques on classifier generalization. The research also delves into Model Attribution, encompassing source model identification, model family classification, and model size classification. Our results reveal several key findings a clear inverse relationship between classifier effectiveness and model size, with larger LLMs being more challenging to detect, especially when the classifier is trained on data from smaller models. Training on data from similarly sized LLMs can improve detection performance from larger models but may lead to decreased performance when dealing with smaller models. Additionally, model attribution experiments show promising results in identifying source models and model families, highlighting detectable signatures in LLMgenerated text. Overall, our study contributes valuable insights into the interplay of model size, family, and training data in LLM detection and attribution.
Generalized exponential function and discrete growth models ; Here we show that a particular one-parameter generalization of the exponential function is suitable to unify most of the popular one-species discrete population dynamics models into a simple formula. A physical interpretation is given to this newly introduced parameter in the context of the continuous Richards model, and this interpretation remains valid for the discrete case. From the discretization of the continuous Richards model (a generalization of the Gompertz and Verhulst models), one obtains a generalized logistic map, and we briefly study its properties. Notice, however, that the physical interpretation of the introduced parameter remains valid for the discrete case. Next, we generalize the scramble competition theta-Ricker discrete model and analytically calculate the fixed points as well as their stability. In contrast to previous generalizations, from the generalized theta-Ricker model one is able to retrieve either scramble or contest models.
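For orientation, the sketch below iterates the standard theta-Ricker map, the scramble-competition model that the abstract's one-parameter generalization extends; the growth rate, carrying capacity, and theta values are illustrative, and the generalized (unified) map itself is not reproduced here.

```python
import numpy as np

def theta_ricker_trajectory(n0, r, K, theta, steps):
    """
    Iterate the standard theta-Ricker map  N_{t+1} = N_t * exp(r * (1 - (N_t / K)**theta)).
    theta = 1 recovers the classical Ricker model.
    """
    traj = [n0]
    for _ in range(steps):
        n = traj[-1]
        traj.append(n * np.exp(r * (1.0 - (n / K) ** theta)))
    return np.array(traj)

# Illustrative parameters: small r converges to the carrying capacity K, larger r can oscillate.
print(theta_ricker_trajectory(n0=0.1, r=0.8, K=1.0, theta=1.0, steps=10).round(3))
print(theta_ricker_trajectory(n0=0.1, r=2.8, K=1.0, theta=1.0, steps=10).round(3))
```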
A Logic-based Approach to Generatively Defined Discriminative Modeling ; Conditional random fields (CRFs) are usually specified by graphical models, but in this paper we propose to use probabilistic logic programs and specify them generatively. Our intention is, first, to provide a unified approach to CRFs for complex modeling through the use of a Turing complete language and, second, to offer a convenient way of realizing generative-discriminative pairs in machine learning, so as to compare generative and discriminative models and choose the best model. We implemented our approach as the D-PRISM language by modifying PRISM, a logic-based probabilistic modeling language for generative modeling, while exploiting its dynamic programming mechanism for efficient probability computation. We tested D-PRISM with logistic regression, a linear-chain CRF and a CRF-CFG and empirically confirmed their excellent discriminative performance compared to their generative counterparts, i.e. naive Bayes, an HMM and a PCFG. We also introduce new CRF models, CRF-BNCs and CRF-LCGs. They are CRF versions of Bayesian network classifiers and probabilistic left-corner grammars respectively, and are easily implementable in D-PRISM. We empirically show that they outperform their generative counterparts as expected.
Scaffoldbased molecular design using graph generative model ; Searching new molecules in areas like drug discovery often starts from the core structures of candidate molecules to optimize the properties of interest. The way as such has called for a strategy of designing molecules retaining a particular scaffold as a substructure. On this account, our present work proposes a scaffoldbased molecular generative model. The model generates molecular graphs by extending the graph of a scaffold through sequential additions of vertices and edges. In contrast to previous related models, our model guarantees the generated molecules to retain the given scaffold with certainty. Our evaluation of the model using unseen scaffolds showed the validity, uniqueness, and novelty of generated molecules as high as the case using seen scaffolds. This confirms that the model can generalize the learned chemical rules of adding atoms and bonds rather than simply memorizing the mapping from scaffolds to molecules during learning. Furthermore, despite the restraint of fixing core structures, our model could simultaneously control multiple molecular properties when generating new molecules.
Exact rankreduction of network models ; With the advent of the big data era, generative models of complex networks are becoming elusive from direct computational simulation. We present an exact, linearalgebraic reduction scheme of generative models of networks. By exploiting the bilinear structure of the matrix representation of the generative model, we separate its null eigenspace, and reduce the exact description of the generative model to a smaller vector space. After reduction, we group generative models in universality classes according to their rank and metric signature, and work out, in a computationally affordable way, their relevant properties e.g., spectrum. The reduction also provides the environment for a simplified computation of their properties. The proposed scheme works for any generative model admitting a matrix representation, and will be very useful in the study of dynamical processes on networks, as well as in the understanding of generative models to come, according to the provided classification.
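A minimal sketch of the linear-algebraic ingredients described above, under the assumption that the model's matrix representation is symmetric: eigendecompose it, discard the null eigenspace, and read off the rank and metric signature. The expected adjacency matrix of a small stochastic block model is used here as an illustrative matrix representation, not the paper's formalism.

```python
import numpy as np

def rank_and_signature(M, tol=1e-10):
    """
    For a symmetric matrix representation M of a generative network model, separate the
    null eigenspace and report the rank, the number of positive eigenvalues, the number of
    negative eigenvalues (the metric signature), and a reduced-space description.
    """
    eigvals, eigvecs = np.linalg.eigh(M)
    keep = np.abs(eigvals) > tol                                  # discard the null eigenspace
    reduced = eigvecs[:, keep] * np.sqrt(np.abs(eigvals[keep]))   # smaller vector-space description
    n_pos, n_neg = int((eigvals > tol).sum()), int((eigvals < -tol).sum())
    return int(keep.sum()), n_pos, n_neg, reduced

# Illustrative matrix representation: expected adjacency of a 2-block stochastic block model.
block_probs = np.array([[0.6, 0.1],
                        [0.1, 0.4]])
sizes = [5, 5]
membership = np.repeat([0, 1], sizes)
expected_adjacency = block_probs[np.ix_(membership, membership)]
rank, n_pos, n_neg, reduced = rank_and_signature(expected_adjacency)
print(rank, n_pos, n_neg, reduced.shape)   # rank-2 reduction of a 10x10 model representation
```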
Conditioning Deep Generative Raw Audio Models for Structured Automatic Music ; Existing automatic music generation approaches that feature deep learning can be broadly classified into two types raw audio models and symbolic models. Symbolic models, which train and generate at the note level, are currently the more prevalent approach; these models can capture longrange dependencies of melodic structure, but fail to grasp the nuances and richness of raw audio generations. Raw audio models, such as DeepMind's WaveNet, train directly on sampled audio waveforms, allowing them to produce realisticsounding, albeit unstructured music. In this paper, we propose an automatic music generation methodology combining both of these approaches to create structured, realisticsounding compositions. We consider a Long Short Term Memory network to learn the melodic structure of different styles of music, and then use the unique symbolic generations from this model as a conditioning input to a WaveNetbased raw audio generator, creating a model for automatic, novel music. We then evaluate this approach by showcasing results of this work.
CATGen Improving Robustness in NLP Models via Controlled Adversarial Text Generation ; NLP models are shown to suffer from robustness issues, i.e., a model's prediction can be easily changed under small perturbations to the input. In this work, we present a Controlled Adversarial Text Generation CATGen model that, given an input text, generates adversarial texts through controllable attributes that are known to be invariant to task labels. For example, in order to attack a model for sentiment classification over product reviews, we can use the product categories as the controllable attribute which would not change the sentiment of the reviews. Experiments on realworld NLP datasets demonstrate that our method can generate more diverse and fluent adversarial texts, compared to many existing adversarial text generation approaches. We further use our generated adversarial examples to improve models through adversarial training, and we demonstrate that our generated attacks are more robust against model retraining and different model architectures.
Generating Math Word Problems from Equations with Topic Controlling and Commonsense Enforcement ; Recent years have seen significant advancement in text generation tasks with the help of neural language models. However, there exists a challenging task: generating math problem text based on mathematical equations, which has made little progress so far. In this paper, we present a novel equation-to-problem text generation model. In our model, 1) we propose a flexible scheme to effectively encode math equations, and we then enhance the equation encoder with a Variational Autoencoder (VAE); 2) given a math equation, we perform topic selection, followed by which a dynamic topic memory mechanism is introduced to restrict the topic distribution of the generator; 3) to avoid the commonsense violations common in traditional generation models, we pretrain word embeddings with a background knowledge graph (KG), and we link decoded words to related words in the KG, targeting the injection of background knowledge into our model. We evaluate our model through both automatic metrics and human evaluation; experiments demonstrate that our model outperforms the baseline and previous models in both accuracy and richness of the generated problem text.