diff --git "a/data/arxiv/11-100/0_0.jsonl" "b/data/arxiv/11-100/0_0.jsonl" new file mode 100644--- /dev/null +++ "b/data/arxiv/11-100/0_0.jsonl" @@ -0,0 +1,166 @@ +{"text":"abstract: Existing methods for image-sentence matching have supported at most two languages at a time, and learn completely separate representations for each language. In this paper we investigate a more challenging version of the bidirectional image-sentence retrieval task where the sentences can be from many different languages. We introduce an approach which learns a universal embedding between all languages, and then relates this universal embedding to images. By using this shared language embedding, we minimize the number of language-specific parameters in our network. This provides two benefits over prior work. First, our approach can scale to support many languages since it shares most of its parameters across all languages. Second, since the all parameters matching the text and image features are shared, languages with fewer annotations can take advantage of the good representation learned using annotations from other (more abundant) language data. In addition, we show that Machine Translation can provide additional supervisions and significantly improve performances on languages with fewer annotations. We demonstrate the effectiveness of our approach on the MSCOCO and Multi30K datasets, containing a total of six different languages, and improve mean recall by up to 20.2% on a single language.\nauthor: ID:\ntitle: Multilingual Image-Sentence Retrieval\n\nCongratulations on having a paper selected for inclusion in an AAAI Press proceedings or technical report! This document details the requirements necessary to get your accepted paper published using PDFLaTeX. If you are using Microsoft Word, instructions are provided in a different document. AAAI Press does not support any other formatting software.\n\nThe instructions herein are provided as a general guide for experienced LaTeX users. If you do not know how to use LaTeX, please obtain assistance locally. AAAI cannot provide you with support and the accompanying style files are **not** guaranteed to work. If the results you obtain are not in accordance with the specifications you received, you must correct your source file to achieve the correct result.\n\nThese instructions are generic. Consequently, they do not include specific dates, page charges, and so forth. Please consult your specific written conference instructions for details regarding your submission. Please review the entire document for specific instructions that might apply to your particular situation. 
All authors must comply with the following:\n\n- You must use the 2020 AAAI Press LaTeX style file and the aaai.bst bibliography style file, which are located in the 2020 AAAI Author Kit (aaai20.sty and aaai.bst).\n\n- You must complete, sign, and return by the deadline the AAAI copyright form (unless directed by AAAI Press to use the AAAI Distribution License instead).\n\n- You must read and format your paper source and PDF according to the formatting instructions for authors.\n\n- You must submit your electronic files and abstract using our electronic submission form **on time.**\n\n- You must pay any required page or formatting charges to AAAI Press so that they are received by the deadline.\n\n- You must check your paper before submitting it, ensuring that it compiles without error, and complies with the guidelines found in the AAAI Author Kit.\n\n# Copyright\n\nAll papers submitted for publication by AAAI Press must be accompanied by a valid signed copyright form. There are no exceptions to this requirement. You must send us the original version of this form. However, to meet the deadline, you may fax (1-650-321-4457) or scan and e-mail the form (email@example.com) to AAAI by the submission deadline, and then mail the original via postal mail to the AAAI office. If you fail to send in a signed copyright or permission form, we will be unable to publish your paper. There are **no exceptions** to this policy.You will find PDF versions of the AAAI copyright and permission to distribute forms in the AAAI AuthorKit.\n\n# Formatting Requirements in Brief\n\nWe need source and PDF files that can be used in a variety of ways and can be output on a variety of devices. The design and appearance of the paper is strictly governed by the aaai style file (aaai20.sty). **You must not make any changes to the aaai style file, nor use any commands, packages, style files, or macros within your own paper that alter that design, including, but not limited to spacing, floats, margins, fonts, font size, and appearance.** AAAI imposes requirements on your source and PDF files that must be followed. Most of these requirements are based on our efforts to standardize conference manuscript properties and layout. All papers submitted to AAAI for publication will be recompiled for standardization purposes. 
Consequently, every paper submission must comply with the following requirements:\n\n> - Your .tex file must compile in PDFLaTeX \u2014 ( you may not include .ps or .eps figure files.)\n>\n> - All fonts must be embedded in the PDF file \u2014 including includes your figures.\n>\n> - Modifications to the style file, whether directly or via commands in your document may not ever be made, most especially when made in an effort to avoid extra page charges or make your paper fit in a specific number of pages.\n>\n> - No type 3 fonts may be used (even in illustrations).\n>\n> - You may not alter the spacing above and below captions, figures, headings, and subheadings.\n>\n> - You may not alter the font sizes of text elements, footnotes, heading elements, captions, or title information (for references and tables and mathematics, please see the the limited exceptions provided herein).\n>\n> - You may not alter the line spacing of text.\n>\n> - Your title must follow Title Case capitalization rules (not sentence case).\n>\n> - Your .tex file must include completed metadata to pass-through to the PDF (see PDFINFO below)\n>\n> - LaTeX documents must use the Times or Nimbus font package (you may not use Computer Modern for the text of your paper).\n>\n> - No LaTeX 209 documents may be used or submitted.\n>\n> - Your source must not require use of fonts for non-Roman alphabets within the text itself. If your paper includes symbols in other languages (such as, but not limited to, Arabic, Chinese, Hebrew, Japanese, Thai, Russian and other Cyrillic languages), you must restrict their use to bit-mapped figures. Fonts that require non-English language support (CID and Identity-H) must be converted to outlines or 300 dpi bitmap or removed from the document (even if they are in a graphics file embedded in the document).\n>\n> - Two-column format in AAAI style is required for all papers.\n>\n> - The paper size for final submission must be US letter without exception.\n>\n> - The source file must exactly match the PDF.\n>\n> - The document margins may not be exceeded (no overfull boxes).\n>\n> - The number of pages and the file size must be as specified for your event.\n>\n> - No document may be password protected.\n>\n> - Neither the PDFs nor the source may contain any embedded links or bookmarks (no hyperref or navigator packages).\n>\n> - Your source and PDF must not have any page numbers, footers, or headers (no pagestyle commands).\n>\n> - Your PDF must be compatible with Acrobat 5 or higher.\n>\n> - Your LaTeX source file (excluding references) must consist of a **single** file (use of the \"input\" command is not allowed.\n>\n> - Your graphics must be sized appropriately outside of LaTeX (do not use the \"clip\" or \"trim\" command) .\n\nIf you do not follow these requirements, you will be required to correct the deficiencies and resubmit the paper. A resubmission fee will apply.\n\n# What Files to Submit\n\nYou must submit the following items to ensure that your paper is published:\n\n- A fully-compliant PDF file that includes PDF metadata.\n\n- Your LaTeX source file submitted as a **single** .tex file (do not use the \"input\" command to include sections of your paper \u2014 every section must be in the single source file). 
(The only allowable exception is .bib file, which should be included separately).\n\n- The bibliography (.bib) file(s).\n\n- Your source must compile on our system, which includes only standard LaTeX 2018-2019 TeXLive support files.\n\n- Only the graphics files used in compiling paper.\n\n- The LaTeX-generated files (e.g. .aux, .bbl file, PDF, etc.).\n\nYour LaTeX source will be reviewed and recompiled on our system (if it does not compile, you will be required to resubmit, which will incur fees). **Do not submit your source in multiple text files.** Your single LaTeX source file must include all your text, your bibliography (formatted using aaai.bst), and any custom macros.\n\nYour files should work without any supporting files (other than the program itself) on any computer with a standard LaTeX distribution.\n\n**Do not send files that are not actually used in the paper.** We don't want you to send us any files not needed for compiling your paper, including, for example, this instructions file, unused graphics files, style files, additional material sent for the purpose of the paper review, and so forth.\n\n**Do not send supporting files that are not actually used in the paper.** We don't want you to send us any files not needed for compiling your paper, including, for example, this instructions file, unused graphics files, style files, additional material sent for the purpose of the paper review, and so forth.\n\n**Obsolete style files.** The commands for some common packages (such as some used for algorithms), may have changed. Please be certain that you are not compiling your paper using old or obsolete style files. **Final Archive.** Place your PDF and source files in a single archive which should be compressed using .zip. The final file size may not exceed 10 MB. Name your source file with the last (family) name of the first author, even if that is not you.\n\n# Using LaTeX to Format Your Paper\n\nThe latest version of the AAAI style file is available on AAAI's website. Download this file and place it in the TeX\u00a0search path. Placing it in the same directory as the paper should also work. You must download the latest version of the complete AAAI Author Kit so that you will have the latest instruction set and style file.\n\n## Document Preamble\n\nIn the LaTeX source for your paper, you **must** place the following lines as shown in the example in this subsection. This command set-up is for three authors. Add or subtract author and address lines as necessary, and uncomment the portions that apply to you. In most instances, this is all you need to do to format your paper in the Times font. The helvet package will cause Helvetica to be used for sans serif. These files are part of the PSNFSS2e package, which is freely available from many Internet sites (and is often part of a standard installation).\n\nLeave the setcounter for section number depth commented out and set at 0 unless you want to add section numbers to your paper. If you do add section numbers, you must uncomment this line and change the number to 1 (for section numbers), or 2 (for section and subsection numbers). The style file will not work properly with numbering of subsubsections, so do not use a number higher than 2.\n\nIf (and only if) your author title information will not fit within the specified height allowed, put \\setlength \\titlebox2.5in in your preamble. Increase the height until the height error disappears from your log. 
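\n\nAs a point of reference, the following is a minimal sketch of a preamble along the lines described above; the commented lines and the titlebox value are illustrative only, and everything should be checked against the aaai20.sty and sample files shipped in the Author Kit.\n\n```latex\n\\documentclass[letterpaper]{article} % US letter, not A4\n\\usepackage{aaai20}   % 2020 AAAI style file from the Author Kit\n\\usepackage{times}    % Times (or Nimbus) for the body text\n\\usepackage{helvet}   % Helvetica for sans serif\n\\usepackage{courier}  % Courier for the typewriter font\n\\usepackage{graphicx} % needed for \\includegraphics\n% \\setcounter{secnumdepth}{0} % leave commented out unless you need section numbers\n% \\setlength\\titlebox{2.5in}  % only if the author-title block does not fit\n```\n\n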
You may not use the \\setlength command elsewhere in your paper, and it may not be used to reduce the height of the author-title box.\n\n### The Following Must Appear in Your Preamble\n\n> \n\n## Preparing Your Paper\n\nAfter the preamble above, you should prepare your paper as follows:\n\n> \n\n### The Following Must Conclude Your Document\n\n> \n\n## Inserting Document Metadata with LaTeX\n\nPDF files contain document summary information that enables us to create an Acrobat index (pdx) file, and also allows search engines to locate and present your paper more accurately. *Document metadata for author and title are REQUIRED.* You may not apply any script or macro to implementation of the title, author, and metadata information in your paper.\n\n*Important:* Do not include *any* LaTeX code or nonascii characters (including accented characters) in the metadata. The data in the metadata must be completely plain ascii. It may not include slashes, accents, linebreaks, unicode, or any LaTeX commands. Type the title exactly as it appears on the paper (minus all formatting). Input the author names in the order in which they appear on the paper (minus all accents), separating each author by a comma. You may also include keywords in the optional Keywords field.\n\n> \n\n## Commands and Packages That May Not Be Used\n\n```latex\n\\begin{table*}[t]\\centering\n\\caption{Commands that must not be used.}\\smallskip\n%\\resizebox{0.95\\textwidth}{!}{ % If your table exceeds the column or page width, use this command to reduce it slightly\n\\begin{tabular}{l|l|l|l}\n\\textbackslash abovecaption & \n\\textbackslash abovedisplay & \n\\textbackslash addevensidemargin & \n\\textbackslash addsidemargin \\\\ \n\\textbackslash addtolength & \n\\textbackslash baselinestretch & \n\\textbackslash belowcaption & \n\\textbackslash belowdisplay \\\\ \n\\textbackslash break &\n\\textbackslash clearpage & \n\\textbackslash clip & \n\\textbackslash columnsep \\\\ \n\\textbackslash float & \n\\textbackslash input & \n\\textbackslash input & \n\\textbackslash linespread \\\\ \n\\textbackslash newpage &\n\\textbackslash pagebreak & \n\\textbackslash renewcommand & \n\\textbackslash setlength \\\\ \n\\textbackslash text height & \n\\textbackslash tiny & \n\\textbackslash top margin & \n\\textbackslash trim \\\\ \n\\textbackslash vskip\\{- & \n\\textbackslash vspace\\{- \\\\ \n\\end{tabular}\n%}\n\\label{table1}\n\\end{table*}\n```\n\n```latex\n\\resizebox{.95\\columnwidth}{!}{\n\\smallskip\\begin{tabular}{l|l|l|l}\nauthblk & babel & caption & cjk\\\\\ndvips & epsf & epsfig & euler\\\\\nfloat & fullpage & geometry & graphics\\\\\nhyperref & layout & linespread & lmodern\\\\\nmaltepaper & natbib & navigator & pdfcomment\\\\\npsfig & pstricks & t1enc & titlesec\\\\\ntocbind & ulem\\\\\n\\end{tabular}\n}\n```\n\nThere are a number of packages, commands, scripts, and macros that are incompatable with aaai20.sty. The common ones are listed in tables and . Generally, if a command, package, script, or macro alters floats, margins, fonts, sizing, linespacing, or the presentation of the references and citations, it is unacceptable. Note that negative vskip and vspace may not be used except in certain rare occurances, and may never be used around tables, figures, captions, sections, subsections, subsections, or references.\n\n## Page Breaks\n\nFor your final camera ready copy, you must not use any page break commands. References must flow directly after the text without breaks. 
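\n\nTying together the preamble, metadata, and document-skeleton requirements from the subsections above, the following is a hedged sketch of how the pieces typically fit; every title, author, and keyword string below is a placeholder and must be plain ASCII, typed exactly as it appears on the paper.\n\n```latex\n% in the preamble: required PDF metadata (plain ASCII only, no LaTeX commands)\n\\pdfinfo{\n/Title (Formatting Instructions for Authors Using LaTeX)\n/Author (Author One, Author Two)\n/Keywords (optional, comma-separated keywords)\n}\n\n\\begin{document}\n\\maketitle\n\\begin{abstract}\nOne-paragraph abstract; do not indent it further and do not cite references here.\n\\end{abstract}\n% ... body of the paper ...\n% bibliography goes at the very end, just before \\end{document}\n\\end{document}\n```\n\n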
Note that some conferences require references to be on a separate page during the review process. AAAI Press, however, does not require this condition for the final paper.\n\n## Paper Size, Margins, and Column Width\n\nPapers must be formatted to print in two-column format on 8.5 x 11 inch US letter-sized paper. The margins must be exactly as follows:\n\n- Top margin: .75 inches\n\n- Left margin: .75 inches\n\n- Right margin: .75 inches\n\n- Bottom margin: 1.25 inches\n\nThe default paper size in most installations of LaTeX is A4. However, because we require that your electronic paper be formatted in US letter size, the preamble we have provided includes commands that alter the default to US letter size. Please note that using any other package to alter page size (such as, but not limited to the Geometry package) will result in your final paper being returned to you for correction and payment of a resubmission fee.\n\n### Column Width and Margins.\n\nTo ensure maximum readability, your paper must include two columns. Each column should be 3.3 inches wide (slightly more than 3.25 inches), with a .375 inch (.952 cm) gutter of white space between the two columns. The aaai20.sty file will automatically create these columns for you.\n\n## Overlength Papers\n\nIf your paper is too long, turn on \\frenchspacing, which will reduce the space after periods. Next, shrink the size of your graphics. Use \\centering instead of \\begin{center} in your figure environment. For mathematical environments, you may reduce fontsize **but not below 6.5 point**. You may also alter the size of your bibliography by inserting \\fontsize{9.5pt}{10.5pt} \\selectfont right before the bibliography (the minimum size is \\fontsize{9.0pt}{10.0pt}.\n\nCommands that alter page layout are forbidden. These include \\columnsep, \\topmargin, \\topskip, \\textheight, \\textwidth, \\oddsidemargin, and \\evensizemargin (this list is not exhaustive). If you alter page layout, you will be required to pay the page fee *plus* a reformatting fee. Other commands that are questionable and may cause your paper to be rejected include \\parindent, and \\parskip. Commands that alter the space between sections are forbidden. The title sec package is not allowed. Regardless of the above, if your paper is obviously \"squeezed\" it is not going to to be accepted. Options for reducing the length of a paper include reducing the size of your graphics, cutting text, or paying the extra page charge (if it is offered).\n\n## Figures\n\nYour paper must compile in PDFLaTeX. Consequently, all your figures must be .jpg, .png, or .pdf. You may not use the .gif (the resolution is too low), .ps, or .eps file format for your figures.\n\nWhen you include your figures, you must crop them **outside** of LaTeX. The command \\includegraphics\\*\\[clip=true, viewport 0 0 10 10\\]... might result in a PDF that looks great, but the image is **not really cropped.** The full image can reappear (and obscure whatever it is overlapping) when page numbers are applied or color space is standardized. Figures , and display some unwanted results that often occur.\n\nDo not use minipage to group figures. Additionally, the font and size of figure captions must be 10 point roman. Do not make them smaller, bold, or italic. (Individual words may be italicized if the context requires differentiation.)\n\n## Type Font and Size\n\nYour paper must be formatted in Times Roman or Nimbus. 
We will not accept papers formatted using Computer Modern or Palatino or some other font as the text or heading typeface. Sans serif, when used, should be Courier. Use Symbol or Lucida or Computer Modern for *mathematics only.*\n\nDo not use type 3 fonts for any portion of your paper, including graphics. Type 3 bitmapped fonts are designed for fixed resolution printers. Most print at 300 dpi even if the printer resolution is 1200 dpi or higher. They also often cause high resolution imagesetter devices and our PDF indexing software to crash. Consequently, AAAI will not accept electronic files containing obsolete type 3 fonts. Files containing those fonts (even in graphics) will be rejected.\n\nFortunately, there are effective workarounds that will prevent your file from embedding type 3 bitmapped fonts. The easiest workaround is to use the required times, helvet, and courier packages with LaTeX2e. (Note that papers formatted in this way will still use Computer Modern for the mathematics. To make the math look good, you'll either have to use Symbol or Lucida, or you will need to install type 1 Computer Modern fonts \u2014 for more on these fonts, see the section \"Obtaining Type 1 Computer Modern.\")\n\nIf you are unsure if your paper contains type 3 fonts, view the PDF in Acrobat Reader. The Properties\/Fonts window will display the font name, font type, and encoding properties of all the fonts in the document. If you are unsure if your graphics contain type 3 fonts (and they are PostScript or encapsulated PostScript documents), create PDF versions of them, and consult the properties window in Acrobat Reader.\n\nThe default size for your type must be ten-point with twelve-point leading (line spacing). Start all pages (except the first) directly under the top margin. (See the next section for instructions on formatting the title page.) Indent ten points when beginning a new paragraph, unless the paragraph begins directly below a heading or subheading.\n\n### Obtaining Type 1 Computer Modern for LaTeX.\n\nIf you use Computer Modern for the mathematics in your paper (you cannot use it for the text) you may need to download type 1 Computer fonts. They are available without charge from the American Mathematical Society: http:\/\/www.ams.org\/tex\/type1-fonts.html.\n\n### Nonroman Fonts\n\nIf your paper includes symbols in other languages (such as, but not limited to, Arabic, Chinese, Hebrew, Japanese, Thai, Russian and other Cyrillic languages), you must restrict their use to bit-mapped figures.\n\n## Title and Authors\n\nYour title must appear in mixed case (nouns, pronouns, and verbs are capitalized) near the top of the first page, centered over both columns in sixteen-point bold type (twenty-four point leading). This style is called \"mixed case,\" which means that means all verbs (including short verbs like be, is, using,and go), nouns, adverbs, adjectives, and pronouns should be capitalized, (including both words in hyphenated terms), while articles, conjunctions, and prepositions are lower case unless they directly follow a colon or long dash. Author's names should appear below the title of the paper, centered in twelve-point type (with fifteen point leading), along with affiliation(s) and complete address(es) (including electronic mail address if available) in nine-point roman type (the twelve point leading). (If the title is long, or you have many authors, you may reduce the specified point sizes by up to two points.) 
You should begin the two-column format when you come to the abstract.\n\n### Formatting Author Information\n\nAuthor information can be set in a number of different styles, depending on the number of authors and the number of affiliations you need to display. In formatting your author information, however, you may not use a table nor may you employ the \\authorblk.sty package. For several authors from the same institution, please just separate with commas:\n\n> \n\nIf the names do not fit well on one line use:\n\n> \n\nFor two (or three) authors from different institutions, use \\And:\n\n> \n\nTo start a separate \"row\" of authors, use \\AND:\n\nIf the title and author information does not fit in the area allocated, place \\setlength\\titlebox{*height*} after the \\documentclass line where {*height*} is 2.5in or greater. (This one of the only allowable uses of the setlength command. Check with AAAI Press before using it elsewhere.)\n\n### Formatting Author Information \u2014 Alternative Method\n\nIf your paper has a large number of authors from different institutions, you may use the following alternative method for displaying the author information.\n\n> \n\nNote that you should break the author list before it extends into the right column margin. Put a line break, followed by \\bf \\Large to put the second line of authors in the same font and size as the first line (you may not make authors names smaller to save space.) Affiliations can be broken with a simple line break (\\\\).\n\n## LaTeX Copyright Notice\n\nThe copyright notice automatically appears if you use aaai20.sty. It has been hardcoded and may not be disabled.\n\n## Credits\n\nAny credits to a sponsoring agency should appear in the acknowledgments section, unless the agency requires different placement. If it is necessary to include this information on the front page, use \\thanks in either the \\author or \\title commands. For example:\n\n> \n\nMultiple \\thanks commands can be given. Each will result in a separate footnote indication in the author or title with the corresponding text at the botton of the first column of the document. Note that the \\thanks command is fragile. You will need to use \\protect.\n\nPlease do not include \\pubnote commands in your document.\n\n## Abstract\n\nFollow the example commands in this document for creation of your abstract. The command \\begin{abstract} will automatically indent the text block. Please do not indent it further. Do not include references in your abstract!\n\n## Page Numbers\n\nDo not **ever** print any page numbers on your paper. The use of \\pagestyle is forbidden.\n\n## Text \n\nThe main body of the paper must be formatted in black, ten-point Times Roman with twelve-point leading (line spacing). You may not reduce font size or the linespacing. Commands that alter font size or line spacing (including, but not limited to baselinestretch, baselineshift, linespread, and others) are expressly forbidden. In addition, you may not use color in the text.\n\n## Citations\n\nCitations within the text should include the author's last name and year, for example (Newell 1980). Append lower-case letters to the year in cases of ambiguity. Multiple authors should be treated as follows: (Feigenbaum and Engelmore 1988) or (Ford, Hayes, and Glymour 1992). In the case of four or more authors, list only the first author, followed by et al. (Ford et al. 
1997).\n\n## Extracts\n\nLong quotations and extracts should be indented ten points from the left and right margins.\n\n> This is an example of an extract or quotation. Note the indent on both sides. Quotation marks are not necessary if you offset the text in a block like this, and properly identify and cite the quotation in the text.\n\n## Footnotes\n\nAvoid footnotes as much as possible; they interrupt the reading of the text. When essential, they should be consecutively numbered throughout with superscript Arabic numbers. Footnotes should appear at the bottom of the page, separated from the text by a blank line space and a thin, half-point rule.\n\n## Headings and Sections\n\nWhen necessary, headings should be used to separate major sections of your paper. Remember, you are writing a short paper, not a lengthy book! An overabundance of headings will tend to make your paper look more like an outline than a paper. The aaai.sty package will create headings for you. Do not alter their size nor their spacing above or below.\n\n### Section Numbers\n\nThe use of section numbers in AAAI Press papers is optional. To use section numbers in LaTeX, uncomment the setcounter line in your document preamble and change the 0 to a 1 or 2. Section numbers should not be used in short poster papers.\n\n### Section Headings.\n\nSections should be arranged and headed as follows:\n\n### Acknowledgments.\n\nThe acknowledgments section, if included, appears after the main body of text and is headed \"Acknowledgments.\" This section includes acknowledgments of help from associates and colleagues, credits to sponsoring agencies, financial support, and permission to publish. Please acknowledge other contributors, grant support, and so forth, in this section. Do not put acknowledgments in a footnote on the first page. If your grant agency requires acknowledgment of the grant on page 1, limit the footnote to the required statement, and put the remaining acknowledgments at the back. Please try to limit acknowledgments to no more than three sentences.\n\n### Appendices.\n\nAny appendices follow the acknowledgments, if included, or after the main body of text if no acknowledgments appear.\n\n### References\n\nThe references section should be labeled \"References\" and should appear at the very end of the paper (don't end the paper with references, and then put a figure by itself on the last page). A sample list of references is given later on in these instructions. Please use a consistent format for references. Poorly prepared or sloppy references reflect badly on the quality of your paper and your research. Please prepare complete and accurate citations.\n\n## Illustrations and Figures\n\nFigures, drawings, tables, and photographs should be placed throughout the paper near the place where they are first discussed. Do not group them together at the end of the paper. If placed at the top or bottom of the paper, illustrations may run across both columns. Figures must not invade the top, bottom, or side margin areas. Figures must be inserted using the \\usepackage{graphicx}. Number figures sequentially, for example, figure 1, and so on.\n\nThe illustration number and caption should appear under the illustration. Labels, and other text with the actual illustration must be at least nine-point type.\n\nIf your paper includes illustrations that are not compatible with PDFTeX (such as .eps or .ps documents), you will need to convert them. The epstopdf package will usually work for eps files. 
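\n\nAs one concrete pattern consistent with the figure guidelines above (the file name is a placeholder, and the graphic is assumed to have been cropped and sized outside of LaTeX):\n\n```latex\n\\begin{figure}[t]\n\\centering\n% example-figure.pdf is a placeholder; use a pre-cropped .pdf, .png, or .jpg\n\\includegraphics[width=0.95\\columnwidth]{example-figure}\n\\caption{Caption in ten-point roman type, placed under the figure.}\n\\label{fig:example}\n\\end{figure}\n```\n\n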
You will need to convert your ps files to PDF however.\n\n### Low-Resolution Bitmaps.\n\nYou may not use low-resolution (such as 72 dpi) screen-dumps and GIF files\u2014these files contain so few pixels that they are always blurry, and illegible when printed. If they are color, they will become an indecipherable mess when converted to black and white. This is always the case with gif files, which should never be used. The resolution of screen dumps can be increased by reducing the print size of the original file while retaining the same number of pixels. You can also enlarge files by manipulating them in software such as PhotoShop. Your figures should be 300 dpi when incorporated into your document.\n\n### LaTeX Overflow.\n\nLaTeX users please beware: LaTeX will sometimes put portions of the figure or table or an equation in the margin. If this happens, you need to scale the figure or table down, or reformat the equation. **Check your log file!** You must fix any overflow into the margin (that means no overfull boxes in LaTeX). **Nothing is permitted to intrude into the margin or gutter.**\n\nThe most efficient and trouble-free way to fix overfull boxes in graphics is with the following command:\n\n> \n\n### Using Color.\n\nUse of color is restricted to figures only. It must be WACG 2.0 compliant. (That is, the contrast ratio must be greater than 4.5:1 no matter the font size.) It must be CMYK, NOT RGB. It may never be used for any portion of the text of your paper. The archival version of your paper will be printed in black and white and grayscale.The web version must be readable by persons with disabilities. Consequently, because conversion to grayscale can cause undesirable effects (red changes to black, yellow can disappear, and so forth), we strongly suggest you avoid placing color figures in your document. If you do include color figures, you must (1) use the CMYK (not RGB) colorspace and (2) be mindful of readers who may happen to have trouble distinguishing colors. Your paper must be decipherable without using color for distinction.\n\n### Drawings.\n\nWe suggest you use computer drawing software (such as Adobe Illustrator or, (if unavoidable), the drawing tools in Microsoft Word) to create your illustrations. Do not use Microsoft Publisher. These illustrations will look best if all line widths are uniform (half- to two-point in size), and you do not create labels over shaded areas. Shading should be 133 lines per inch if possible. Use Times Roman or Helvetica for all figure call-outs. **Do not use hairline width lines** \u2014 be sure that the stroke width of all lines is at least .5 pt. Zero point lines will print on a laser printer, but will completely disappear on the high-resolution devices used by our printers.\n\n### Photographs and Images.\n\nPhotographs and other images should be in grayscale (color photographs will not reproduce well; for example, red tones will reproduce as black, yellow may turn to white, and so forth) and set to a minimum of 300 dpi. Do not prescreen images.\n\n### Resizing Graphics.\n\nResize your graphics **before** you include them with LaTeX. You may **not** use trim or clip options as part of your \\includegraphics command. Resize the media box of your PDF using a graphics program instead.\n\n### Fonts in Your Illustrations\n\nYou must embed all fonts in your graphics before including them in your LaTeX document.\n\n## References\n\nThe AAAI style includes a set of definitions for use in formatting references with BibTeX. 
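\n\nA hedged sketch of how these definitions are typically wired in at the end of the paper follows; the .bib file name is a placeholder, and the optional \\fontsize line may only be used to reduce the reference size within the stated bounds.\n\n```latex\n% at the end of the paper, just before \\end{document}\n\\fontsize{9.5pt}{10.5pt}\\selectfont % optional; never smaller than 9.0pt/10.0pt\n\\bibliographystyle{aaai}\n\\bibliography{myrefs} % myrefs.bib is a placeholder for your BibTeX file(s)\n```\n\n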
These definitions make the bibliography style fairly close to the one specified below. To use these definitions, you also need the BibTeX style file \"aaai.bst,\" available in the AAAI Author Kit on the AAAI web site. Then, at the end of your paper but before \\enddocument, you need to put the following lines:\n\n> \n\nPlease note that you are required to use \\bibliographystyle{aaai} for your references. You may not use named, plain, apalike, acm, ieeetr, siam, chicago, or any other style. Use of natbib is also not acceptable. (In addition to natbib, the aaai20.sty file is also incompatible with the hyperref and navigator packages. If you use either, your references will be garbled and your paper will be returned to you.) If you used natbib commands, an imprecise workaround is available (although it does not always work). You may put the following in your preamble (after removing \\usepackage{natbib}\n\n> \n>\n> \\newcommand{\\citet}\\[1\\]{\\citeauthor{#1}\u00a0\\shortcite{#1}} \\newcommand{\\citep}{\\cite} \\newcommand{\\citealp}\\[1\\]{\\citeauthor{#1}\u00a0\\citeyear{#1}}\n\nReferences may be the same size as surrounding text. However, in this section (only), you may reduce the size to \\small if your paper exceeds the allowable number of pages. Making it any smaller than 9 point with 10 point linespacing, however, is not allowed. A more precise and exact method of reducing the size of your references minimally is by means of the following command:\n\n> \\fontsize{9.8pt}{10.8pt} \\selectfont\n\nYou must reduce the size equally for both font size and line spacing, and may not reduce the size beyond {9.0pt}{10.0pt}.\n\nThe list of files in the \\bibliography command should be the names of your BibTeX source files (that is, the .bib files referenced in your paper).\n\nThe following commands are available for your use in citing references:\n\n> *\\cite:* Cites the given reference(s) with a full citation. This appears as \"(Author Year)\" for one reference, or \"(Author Year; Author Year)\" for multiple references. \n> *\\shortcite:* Cites the given reference(s) with just the year. This appears as \"(Year)\" for one reference, or \"(Year; Year)\" for multiple references. \n> *\\citeauthor:* Cites the given reference(s) with just the author name(s) and no parentheses. \n> *\\citeyear:* Cites the given reference(s) with just the date(s) and no parentheses.\n\nFormatted bibliographies should look like the following examples.\n\n*Book with Multiple Authors* \nEngelmore, R., and Morgan, A. eds. 1986. *Blackboard Systems.* Reading, Mass.: Addison-Wesley.\n\n*Journal Article* \nRobinson, A. L. 1980a. New Ways to Make Microcircuits Smaller. *Science* 208: 1019\u20131026.\n\n*Magazine Article* \nHasling, D. W.; Clancey, W. J.; and Rennels, G. R. 1983. Strategic Explanations in Consultation. *The International Journal of Man-Machine Studies* 20(1): 3\u201319.\n\n*Proceedings Paper Published by a Society* \nClancey, W. J. 1983. Communication, Simulation, and Intelligent Agents: Implications of Personal Intelligent Machines for Medical Education. In *Proceedings of the Eighth International Joint Conference on Artificial Intelligence,* 556\u2013560. Menlo Park, Calif.: International Joint Conferences on Artificial Intelligence, Inc.\n\n*Proceedings Paper Published by a Press or Publisher* \nClancey, W. J. 1984. Classification Problem Solving. In *Proceedings of the Fourth National Conference on Artificial Intelligence,* 49\u201354. 
Menlo Park, Calif.: AAAI Press.\n\n*University Technical Report* \nRice, J. 1986. Poligon: A System for Parallel Problem Solving, Technical Report, KSL-86-19, Dept. of Computer Science, Stanford Univ.\n\n*Dissertation or Thesis* \nClancey, W. J. 1979. Transfer of Rule-Based Expertise through a Tutorial Dialogue. Ph.D. diss., Dept. of Computer Science, Stanford Univ., Stanford, Calif.\n\n*Forthcoming Publication* \nClancey, W. J. 2021. The Engineering of Qualitative Models. Forthcoming.\n\nFor the most up to date version of the AAAI reference style, please consult the *AI Magazine* Author Guidelines at \n\n# Proofreading Your PDF\n\nPlease check all the pages of your PDF file. The most commonly forgotten element is the acknowledgements \u2014 especially the correct grant number. Authors also commonly forget to add the metadata to the source, use the wrong reference style file, or don't follow the capitalization rules or comma placement for their author-title information properly. A final common problem is text (expecially equations) that runs into the margin. You will need to fix these common errors before submitting your file.\n\n# Improperly Formatted Files \n\nIn the past, AAAI has corrected improperly formatted files submitted by the authors. Unfortunately, this has become an increasingly burdensome expense that we can no longer absorb (we are charged double for papers that require reformatting). Consequently, if your file is improperly formatted, it will probably be returned to you by the outside Production agency. If that happens, you will be required to fix your file and pay a resubmission fee.\n\n## LaTeX 209 Warning\n\nIf you use LaTeX 209 your paper will be returned to you unpublished. Convert your paper to LaTeX2e.\n\n# Naming Your Electronic File\n\nWe require that you name your LaTeX source file with the last name (family name) of the first author so that it can easily be differentiated from other submissions. Complete file-naming instructions will be provided to you in the submission instructions.\n\n# Submitting Your Electronic Files to AAAI\n\nInstructions on paper submittal will be provided to you in your acceptance letter.\n\n# Inquiries\n\nIf you have any questions about the preparation or submission of your paper as instructed in this document, please contact AAAI Press at the address given below. If you have technical questions about implementation of the aaai style file, please contact an expert at your site. We do not provide technical support for LaTeX or any other software package. To avoid problems, please keep your paper simple, and do not incorporate complicated macros and style files.\n\n> AAAI Press \n> 2275 East Bayshore Road, Suite 160 \n> Palo Alto, California 94303 \n> *Telephone:* (650) 328-3123 \n> *E-mail:* See the submission instructions for your particular conference or event.\n\n# Additional Resources\n\nLaTeX is a difficult program to master. If you've used that software, and this document didn't help or some items were not explained clearly, we recommend you read Michael Shell's excellent document (testflow doc.txt V1.0a 2002\/08\/13) about obtaining correct PS\/PDF output on LaTeX systems. (It was written for another purpose, but it has general application as well). It is available at www.ctan.org in the tex-archive.\n\n# Acknowledgments\n\nAAAI is especially grateful to Peter Patel Schneider for his work in implementing the aaai.sty file, liberally using the ideas of other style hackers, including Barbara Beeton. 
We also acknowledge with thanks the work of George Ferguson for his guide to using the style and BibTeX files \u2014 which has been incorporated into this document \u2014 and Hans Guesgen, who provided several timely modifications, as well as the many others who have, from time to time, sent in suggestions on improvements to the AAAI style.\n\nThe preparation of the LaTeX and BibTeX files that implement these instructions was supported by Schlumberger Palo Alto Research, AT&T Bell Laboratories, Morgan Kaufmann Publishers, The Live Oak Press, LLC, and AAAI Press. Bibliography style changes were added by Sunil Issar. `\\`pubnote was added by J. Scott Penberthy. George Ferguson added support for printing the AAAI copyright slug. Additional changes to aaai.sty and aaai.bst have been made by the AAAI staff.\n\nThank you for reading these instructions carefully. We look forward to receiving your electronic files!","meta":{"dup_signals":{"dup_doc_count":38,"dup_dump_count":2,"dup_details":{"curated_sources":19,"unknown":19}},"filename":"out\/1909.03493_extract_copy.tex.md"},"subset":"arxiv"} +{"text":"abstract: > AAAI creates proceedings, working notes, and technical reports directly from electronic source furnished by the authors. To ensure that all papers in the publication have a uniform appearance, authors must adhere to the following instructions.\nauthor: Author \nAssociation for the Advancement of Artificial Intelligence \n2275 East Bayshore Road, Suite 160 \nPalo Alto, California 94303 \ntitle: Formatting Instructions \n for Authors Using LaTeX\n\nCongratulations on having a paper selected for inclusion in an AAAI Press proceedings or technical report! This document details the requirements necessary to get your accepted paper published using LaTeX. If you are using Microsoft Word, instructions are provided in a different document. If you want to use some other formatting software, you must obtain permission from AAAI Press first.\n\nThe instructions herein are provided as a general guide for experienced LaTeX users who would like to use that software to format their paper for an AAAI Press publication or report. If you are not an experienced LaTeX user, do not use it to format your paper. AAAI cannot provide you with support and the accompanying style files are **not** guaranteed to work. If the results you obtain are not in accordance with the specifications you received, you must correct your source file to achieve the correct result.\n\nThese instructions are generic. Consequently, they do not include specific dates, page charges, and so forth. Please consult your specific written conference instructions for details regarding your submission. Please review the entire document for specific instructions that might apply to your particular situation. 
All authors must comply with the following:\n\n- You must use the latest AAAI Press LaTeX macro.\n\n- Download the author kit.\n\n- Complete, sign, and return by the deadline the AAAI copyright form (proceedings authors) or distribution license (technical report authors).\n\n- Read and format your paper source and PDF according to the formatting instructions for authors.\n\n- Submit your electronic files and abstract using our electronic submission form **on time.**\n\n- Submit your copyright form, and any required page or formatting charges to AAAI Press so that they are received by the deadline.\n\n- Check every page of your paper before submitting it.\n\n# Copyright\n\nAll papers submitted for publication by AAAI Press must be accompanied by a valid signed copyright form or, in the case of technical reports, by a valid signed permission to distribute form. There are no exceptions to this requirement. You must send us the original version of this form. However, to meet the deadline, you may fax (1-650-321-4457) or scan and e-mail the form (email@example.com) to AAAI by the submission deadline, and then mail the original via postal mail to the AAAI office. **If you fail to send in a signed copyright or permission form, your paper will not be published.** You will find PDF versions of the AAAI copyright and permission to distribute forms in the author kit.\n\n# Formatting Requirements in Brief\n\nWe need source and PDF files that can be used in a variety of ways and can be output on a variety of devices. AAAI imposes some requirements on your source and PDF files that must be followed. Most of these requirements are based on our efforts to standardize conference manuscript properties and layout. These requirements are as follows, and all papers submitted to AAAI for publication must comply:\n\n- Your .tex file must compile in PDFLaTeX \u2014 **no .ps or .eps figure files.**\n\n- All fonts must be embedded in the PDF file \u2014 **this includes your figures.**\n\n- Modifications to the style sheet (or your document) in an effort to avoid extra page charges are NOT allowed.\n\n- No type 3 fonts may be used (even in illustrations).\n\n- Your title must follow US capitalization rules.\n\n- LaTeX documents must use the Times or Nimbus font package (do not use Computer Modern for the text of your paper).\n\n- No LaTeX 209 documents may be used or submitted.\n\n- Fonts that require non-English language support (CID and Identity-H) must be converted to outlines or removed from the document (even if they are in a graphics file embedded in the document).\n\n- Two-column format in AAAI style is required for all papers.\n\n- The paper size for final submission must be US letter. 
No exceptions.\n\n- The source file must exactly match the PDF.\n\n- The document margins must be as specified in the formatting instructions.\n\n- The number of pages and the file size must be as specified for your event.\n\n- No document may be password protected.\n\n- Neither the PDFs nor the source may contain any embedded links or bookmarks.\n\n- Your source and PDF must not have any page numbers, footers, or headers.\n\n- Your PDF must be compatible with Acrobat 5 or higher.\n\n- Your LaTeX source file (excluding references) must consist of a **single** file (use of the \"input\" command is not allowed.\n\n- Your graphics must be sized appropriately outside of LaTeX (do not use the \"clip\" command) .\n\nIf you do not follow the above requirements, it is likely that we will be unable to publish your paper.\n\n# What Files to Submit\n\nYou must submit the following items to ensure that your paper is published:\n\n- A fully-compliant PDF file.\n\n- Your LaTeX source file submitted as a **single** .tex file (do not use the \"input\" command to include sections of your paper \u2014 every section must be in the single source file). The only exception is the bibliography, which you may include separately. Your source must compile on our system, which includes the standard LaTeX support files.\n\n- All your graphics files.\n\n- The LaTeX-generated files (e.g. .aux and .bib file, etc.) for your compiled source.\n\n- All the nonstandard style files (ones not commonly found in standard LaTeX installations) used in your document (including, for example, old algorithm style files). If in doubt, include it.\n\nYour LaTeX source will be reviewed and recompiled on our system (if it does not compile, you may incur late fees). **Do not submit your source in multiple text files.** Your single LaTeX source file must include all your text, your bibliography (formatted using aaai.bst), and any custom macros. Accompanying this source file, you must also supply any nonstandard (or older) referenced style files and all your referenced graphics files.\n\nYour files should work without any supporting files (other than the program itself) on any computer with a standard LaTeX distribution. Place your PDF and source files in a single tar, zipped, gzipped, stuffed, or compressed archive. Name your source file with your last (family) name.\n\n**Do not send files that are not actually used in the paper.** We don't want you to send us any files not needed for compiling your paper, including, for example, this instructions file, unused graphics files, and so forth. A shell script (created by an AAAI member \u2014 it might not work without modification on your system) that might help you create the LaTeX source package is included in the Author Kit.\n\n# Using LaTeX to Format Your Paper\n\nThe latest version of the AAAI style file is available on AAAI's website. Download this file and place it in a file named \"aaai.sty\" in the TeX\u00a0search path. Placing it in the same directory as the paper should also work. You must download the latest version of the complete author kit so that you will have the latest instruction set.\n\n## Document Preamble\n\nIn the LaTeX source for your paper, you **must** place the following lines as shown in the example in this subsection. This command set-up is for three authors. Add or subtract author and address lines as necessary, and uncomment the portions that apply to you. In most instances, this is all you need to do to format your paper in the Times font. 
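\n\nA minimal sketch of the kind of set-up this subsection describes, for three authors from one institution, is shown below; all names are placeholders, and the affiliation and address are copied from the example header of this document.\n\n```latex\n\\documentclass[letterpaper]{article}\n\\usepackage{aaai}    % AAAI style file from the author kit\n\\usepackage{times}   % Times for the body text\n\\usepackage{helvet}  % Helvetica for sans serif\n\\usepackage{courier} % Courier for the typewriter font\n% \\setcounter{secnumdepth}{0} % leave commented out unless you need section numbers\n\\title{Formatting Instructions for Authors Using LaTeX}\n\\author{Author One \\and Author Two \\and Author Three\\\\\nAssociation for the Advancement of Artificial Intelligence\\\\\n2275 East Bayshore Road, Suite 160\\\\\nPalo Alto, California 94303}\n```\n\n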
The helvet package will cause Helvetica to be used for sans serif, and the courier package will cause Courier to be used for the typewriter font. These files are part of the PSNFSS2e package, which is freely available from many Internet sites (and is often part of a standard installation).\n\nLeave the setcounter for section number depth commented out and set at 0 unless you want to add section numbers to your paper. If you do add section numbers, you must uncomment this line and change the number to 1 (for section numbers), or 2 (for section and subsection numbers). The style file will not work properly with numbering of subsubsections, so do not use a number higher than 2.\n\n> \n\n## Inserting Document Metadata with LaTeX\n\nPDF files contain document summary information that enables us to create an Acrobat index (pdx) file, and also allows search engines to locate and present your paper more accurately. **Document Metadata for Author and Title are REQUIRED.**\n\nIf your paper includes illustrations that are not compatible with PDFTeX (such as .eps or .ps documents), you will need to convert them. The epstopdf package will usually work for eps files. You will need to convert your ps files to PDF however.\n\n*Important:* Do not include *any* LaTeX code or nonascii characters (including accented characters) in the metadata. The data in the metadata must be completely plain ascii. It may not include slashes, accents, linebreaks, unicode, or any LaTeX commands. Type the title exactly as it appears on the paper (minus all formatting). Input the author names in the order in which they appear on the paper (minus all accents), separating each author by a comma. You may also include keywords in the Keywords field.\n\n## Preparing Your Paper\n\nAfter the preamble above, you should prepare your paper as follows:\n\n> \n\n## Incompatible Packages\n\nThe following packages are incompatible with aaai.sty and\/or aaai.bst and must not be used (this list is not exhaustive \u2014 there are others as well):\n\n- hyperref\n\n- natbib\n\n- geometry\n\n- titlesec\n\n- layout\n\n- caption\n\n- titlesec\n\n- T1 fontenc package (install the CM super fonts package instead)\n\n## Illegal Commands\n\nThe following commands may not be used in your paper:\n\n- \\input\n\n- \\vspace (when used before or after a section or subsection)\n\n- \\addtolength\n\n- \\columnsep\n\n- \\top margin (or text height or addsidemargin or even side margin)\n\n## Paper Size, Margins, and Column Width\n\nPapers must be formatted to print in two-column format on 8.5 x 11 inch US letter-sized paper. The margins must be exactly as follows:\n\n- Top margin: .75 inches\n\n- Left margin: .75 inches\n\n- Right margin: .75 inches\n\n- Bottom margin: 1.25 inches\n\nThe default paper size in most installations of LaTeX is A4. However, because we require that your electronic paper be formatted in US letter size, you will need to alter the default for this paper to US letter size. Assuming you are using the 2e version of LaTeX, you can do this by including the \\[letterpaper\\] option at the beginning of your file: \\documentclass\\[letterpaper\\]article.\n\nThis command is usually sufficient to change the format. Sometimes, however, it may not work. 
Use PDFLaTeX and include \\setlength{\\pdfpagewidth}{8.5in} \\setlength{\\pdfpageheight}{11in} in your preamble.\n\n**Do not use the Geometry package to alter the page size.** Use of this style file alters aaai.sty and will result in your paper being rejected.\n\n### Column Width and Margins.\n\nTo ensure maximum readability, your paper must include two columns. Each column should be 3.3 inches wide (slightly more than 3.25 inches), with a .375 inch (.952 cm) gutter of white space between the two columns. The aaai.sty file will automatically create these columns for you.\n\n## Overlength Papers\n\nIf your paper is too long, turn on \\frenchspacing, which will reduce the space after periods. Next, shrink the size of your graphics. Use \\centering instead of \\begin{center} in your figure environment. If these two methods don't work, you may minimally use the following. For floats (tables and figures), you may minimally reduce \\floatsep, \\textfloatsep, \\abovecaptionskip, and \\belowcaptionskip. For mathematical environments, you may minimally reduce \\abovedisplayskip, \\belowdisplayskip, and \\arraycolsep. You may also alter the size of your bibliography by inserting \\fontsize{9.5pt}{10.5pt} \\selectfont right before the bibliography.\n\nCommands that alter page layout are forbidden. These include \\columnsep, \\topmargin, \\topskip, \\textheight, \\textwidth, \\oddsidemargin, and \\evensizemargin (this list is not exhaustive). If you alter page layout, you will be required to pay the page fee *plus* a reformatting fee. Other commands that are questionable and may cause your paper to be rejected include \\parindent, and \\parskip. Commands that alter the space between sections are also questionable. The title sec package is not allowed. Regardless of the above, if your paper is obviously \"squeezed\" it is not going to to be accepted. Before using every trick you know to make your paper a certain length, try reducing the size of your graphics or cutting text instead or (if allowed) paying the extra page charge. It will be cheaper in the long run.\n\n## Figures\n\nYour paper must compile in PDFLaTeX. Consequently, all your figures must be .jpg, .png, or .pdf. You may not use the .gif (the resolution is too low), .ps, or .eps file format for your figures.\n\nWhen you include your figures, you must crop them **outside** of LaTeX. The command \\includegraphics\\*\\[clip=true, viewport 0 0 10 10\\]... might result in a PDF that looks great, but the image is **not really cropped.** The full image can reappear when page numbers are applied or color space is standardized.\n\n## Type Font and Size\n\nYour paper must be formatted in Times Roman or Nimbus. We will not accept papers formatted using Computer Modern or Palatino or some other font as the text or heading typeface. Sans serif, when used, should be Courier. Use Symbol or Lucida or Computer Modern for *mathematics only.*\n\nDo not use type 3 fonts for any portion of your paper, including graphics. Type 3 bitmapped fonts are designed for fixed resolution printers. Most print at 300 dpi even if the printer resolution is 1200 dpi or higher. They also often cause high resolution imagesetter devices and our PDF indexing software to crash. Consequently, AAAI will not accept electronic files containing obsolete type 3 fonts. Files containing those fonts (even in graphics) will be rejected.\n\nFortunately, there are effective workarounds that will prevent your file from embedding type 3 bitmapped fonts. 
The easiest workaround is to use the required times, helvet, and courier packages with LaTeX2e. (Note that papers formatted in this way will still use Computer Modern for the mathematics. To make the math look good, you'll either have to use Symbol or Lucida, or you will need to install type 1 Computer Modern fonts \u2014 for more on these fonts, see the section \"Obtaining Type 1 Computer Modern.\")\n\nIf you are unsure if your paper contains type 3 fonts, view the PDF in Acrobat Reader. The Properties\/Fonts window will display the font name, font type, and encoding properties of all the fonts in the document. If you are unsure if your graphics contain type 3 fonts (and they are PostScript or encapsulated PostScript documents), create PDF versions of them, and consult the properties window in Acrobat Reader.\n\nThe default size for your type should be ten-point with twelve-point leading (line spacing). Start all pages (except the first) directly under the top margin. (See the next section for instructions on formatting the title page.) Indent ten points when beginning a new paragraph, unless the paragraph begins directly below a heading or subheading.\n\n### Obtaining Type 1 Computer Modern for LaTeX.\n\nIf you use Computer Modern for the mathematics in your paper (you cannot use it for the text) you may need to download type 1 Computer fonts. They are available without charge from the American Mathematical Society: http:\/\/www.ams.org\/tex\/type1-fonts.html.\n\n## Title and Authors\n\nYour title must appear in mixed case (nouns, pronouns, and verbs are capitalized) near the top of the first page, centered over both columns in sixteen-point bold type (twenty-four point leading). This style is called \"mixed case.\" Author's names should appear below the title of the paper, centered in twelve-point type (with fifteen point leading), along with affiliation(s) and complete address(es) (including electronic mail address if available) in nine-point roman type (the twelve point leading). (If the title is long, or you have many authors, you may reduce the specified point sizes by up to two points.) You should begin the two-column format when you come to the abstract.\n\n### Formatting Author Information\n\nAuthor information can be set in a number of different styles, depending on the number of authors and the number of affiliations you need to display. For several authors from the same institution, use \\and:\n\n> \n\nIf the names do not fit well on one line use:\n\n> \n\nFor authors from different institutions, use \\And:\n\n> \n\nTo start a separate \"row\" of authors, use \\AND:\n\n> \n\nIf the title and author information does not fit in the area allocated, place \\setlength\\titlebox{*height*} after the \\documentclass line where {*height*} is something like 2.5in.\n\n## LaTeX Copyright Notice\n\nThe copyright notice automatically appears if you use aaai.sty. If you are creating a technical report, it is not necessary to include this notice. You may disable the copyright line using the `\\`nocopyrightcommand. To change the entire text of the copyright slug, use: \\copyrighttext {*text*}. Either of these must appear before \\maketitle. Please be advised, however, that *if you disable or change the copyright line and transfer of copyright is required, your paper will not be published.*\n\n## Credits\n\nAny credits to a sponsoring agency should appear in the acknowledgments section, unless the agency requires different placement. 
If it is necessary to include this information on the front page, use \\thanks in either the \\author or \\title commands. For example:\n\n> \n\nMultiple \\thanks commands can be given. Each will result in a separate footnote indication in the author or title with the corresponding text at the bottom of the first column of the document. Note that the \\thanks command is fragile. You will need to use \\protect.\n\nPlease do not include \\pubnote commands in your document.\n\n## Abstract\n\nThe abstract must be placed at the beginning of the first column, indented ten points from the left and right margins. The title \"Abstract\" should appear in ten-point bold type, centered above the body of the abstract. The abstract should be set in nine-point type with ten-point leading. This concise, one-paragraph summary should describe the general thesis and conclusion of your paper. A reader should be able to learn the purpose of the paper and the reason for its importance from the abstract. The abstract should be no more than two hundred words in length. (Authors who are submitting short one- or two-page extended extracts should provide a short abstract of only a sentence or so.) **Do not include references in your abstract!**\n\n## Page Numbers\n\nDo not **ever** print any page numbers on your paper.\n\n## Text\n\nThe main body of the paper must be formatted in ten-point with twelve-point leading (line spacing).\n\n## Citations\n\nCitations within the text should include the author's last name and year, for example (Newell 1980). Append lower-case letters to the year in cases of ambiguity. Multiple authors should be treated as follows: (Feigenbaum and Engelmore 1988) or (Ford, Hayes, and Glymour 1992). In the case of four or more authors, list only the first author, followed by et al. (Ford et al. 1997).\n\n## Extracts\n\nLong quotations and extracts should be indented ten points from the left and right margins.\n\n> This is an example of an extract or quotation. Note the indent on both sides. Quotation marks are not necessary if you offset the text in a block like this, and properly identify and cite the quotation in the text.\n\n## Footnotes\n\nAvoid footnotes as much as possible; they interrupt the reading of the text. When essential, they should be consecutively numbered throughout with superscript Arabic numbers. Footnotes should appear at the bottom of the page, separated from the text by a blank line space and a thin, half-point rule.\n\n## Headings and Sections\n\nWhen necessary, headings should be used to separate major sections of your paper. Remember, you are writing a short paper, not a lengthy book! An overabundance of headings will tend to make your paper look more like an outline than a paper.\n\nFirst-level heads should be twelve-point Times Roman bold type, mixed case (initial capitals followed by lower case on all words except articles, conjunctions, and prepositions, which should appear entirely in lower case), with fifteen-point leading, centered, with one blank line preceding them and three additional points of leading following them. Second-level headings should be eleven-point Times Roman bold type, mixed case, with thirteen-point leading, flush left, with one blank line preceding them and three additional points of leading following them. Do not skip a line between paragraphs. 
Third-level headings should be run in with the text, ten-point Times Roman bold type, mixed case, with twelve-point leading, flush left, with six points of additional space preceding them and no additional points of leading following them.\n\n### Section Numbers\n\nThe use of section numbers in AAAI Press papers is optional. To use section numbers in LaTeX, uncomment the setcounter line in your document preamble and change the 0 to a 1 or 2. Section numbers should not be used in short poster papers.\n\n### Section Headings.\n\nSections should be arranged and headed as follows:\n\n### Acknowledgments.\n\nThe acknowledgments section, if included, appears after the main body of text and is headed \"Acknowledgments.\" This section includes acknowledgments of help from associates and colleagues, credits to sponsoring agencies, financial support, and permission to publish. Please acknowledge other contributors, grant support, and so forth, in this section. Do not put acknowledgments in a footnote on the first page. If your grant agency requires acknowledgment of the grant on page 1, limit the footnote to the required statement, and put the remaining acknowledgments at the back. Please try to limit acknowledgments to no more than three sentences.\n\n### Appendices.\n\nAny appendices follow the acknowledgments, if included, or after the main body of text if no acknowledgments appear.\n\n### References\n\nThe references section should be labeled \"References\" and should appear at the very end of the paper (don't end the paper with references, and then put a figure by itself on the last page). A sample list of references is given later on in these instructions. Please use a consistent format for references. Poorly prepared or sloppy references reflect badly on the quality of your paper and your research. Please prepare complete and accurate citations.\n\n## Illustrations and Figures\n\nFigures, drawings, tables, and photographs should be placed throughout the paper near the place where they are first discussed. Do not group them together at the end of the paper. If placed at the top or bottom of the paper, illustrations may run across both columns. Figures must not invade the top, bottom, or side margin areas. Figures must be inserted using the \\usepackage{graphicx}. Number figures sequentially, for example, figure 1, and so on.\n\nThe illustration number and caption should appear under the illustration. Labels, and other text in illustrations must be at least nine-point type.\n\n### Low-Resolution Bitmaps.\n\nYou may not use low-resolution (such as 72 dpi) screen-dumps and GIF files\u2014these files contain so few pixels that they are always blurry, and illegible when printed. If they are color, they will become an indecipherable mess when converted to black and white. This is always the case with gif files, which should never be used. The resolution of screen dumps can be increased by reducing the print size of the original file while retaining the same number of pixels. You can also enlarge files by manipulating them in software such as PhotoShop. Your figures should be a minimum of 266 dpi when incorporated into your document.\n\n### LaTeX Overflow.\n\nLaTeX users please beware: LaTeX will sometimes put portions of the figure or table or an equation in the margin. If this happens, you need to scale the figure or table down, or reformat the equation. Check your log file! You must fix any overflow into the margin (that means no overfull boxes in LaTeX). 
If you don't, the overflow text will simply be eliminated. **Nothing is permitted to intrude into the margins.**\n\n### Using Color.\n\nYour paper will be printed in black and white and grayscale. Consequently, because conversion to grayscale can cause undesirable effects (red changes to black, yellow can disappear, and so forth), we strongly suggest you avoid placing color figures in your document. Of course, any reference to color will be indecipherable to your reader.\n\n### Drawings.\n\nWe suggest you use computer drawing software (such as Adobe Illustrator or, (if unavoidable), the drawing tools in Microsoft Word) to create your illustrations. Do not use Microsoft Publisher. These illustrations will look best if all line widths are uniform (half- to two-point in size), and you do not create labels over shaded areas. Shading should be 133 lines per inch if possible. Use Times Roman or Helvetica for all figure call-outs. **Do not use hairline width lines** \u2014 be sure that the stroke width of all lines is at least .5 pt. Zero point lines will print on a laser printer, but will completely disappear on the high-resolution devices used by our printers.\n\n### Photographs and Images.\n\nPhotographs and other images should be in grayscale (color photographs will not reproduce well; for example, red tones will reproduce as black, yellow may turn to white, and so forth) and set to a minimum of 266 dpi. Do not prescreen images.\n\n### Resizing Graphics.\n\nResize your graphics **before** you include them with LaTeX. You may **not** use trim or clip options as part of your \\includgraphics command. Resize the media box of your PDF using a graphics program instead.\n\n### Fonts in Your Illustrations\n\nYou must embed all fonts in your graphics before including them in your LaTeX document.\n\n## References\n\nThe aaai.sty file includes a set of definitions for use in formatting references with BibTeX. These definitions make the bibliography style fairly close to the one specified below. To use these definitions, you also need the BibTeX style file \"aaai.bst,\" available in the author kit on the AAAI web site. Then, at the end of your paper but before \\enddocument, you need to put the following lines:\n\n> \n\nThe list of files in the \\bibliography command should be the names of your BibTeX source files (that is, the .bib files referenced in your paper).\n\nThe following commands are available for your use in citing references:\n\n> \n\n**Warning:** The aaai.sty file is incompatible with the hyperref and natbib packages. If you use either, your references will be garbled.\n\nFormatted bibliographies should look like the following examples.\n\n*Book with Multiple Authors* \nEngelmore, R., and Morgan, A. eds. 1986. *Blackboard Systems.* Reading, Mass.: Addison-Wesley.\n\n*Journal Article* \nRobinson, A. L. 1980a. New Ways to Make Microcircuits Smaller. *Science* 208: 1019\u20131026.\n\n*Magazine Article* \nHasling, D. W.; Clancey, W. J.; and Rennels, G. R. 1983. Strategic Explanations in Consultation. *The International Journal of Man-Machine Studies* 20(1): 3\u201319.\n\n*Proceedings Paper Published by a Society* \nClancey, W. J. 1983b. Communication, Simulation, and Intelligent Agents: Implications of Personal Intelligent Machines for Medical Education. In Proceedings of the Eighth International Joint Conference on Artificial Intelligence, 556\u2013560. 
Menlo Park, Calif.: International Joint Conferences on Artificial Intelligence, Inc.\n\n*Proceedings Paper Published by a Press or Publisher* \nClancey, W. J. 1984. Classification Problem Solving. In *Proceedings of the Fourth National Conference on Artificial Intelligence,* 49\u201354. Menlo Park, Calif.: AAAI Press.\n\n*University Technical Report* \nRice, J. 1986. Poligon: A System for Parallel Problem Solving, Technical Report, KSL-86-19, Dept. of Computer Science, Stanford Univ.\n\n*Dissertation or Thesis* \nClancey, W. J. 1979b. Transfer of Rule-Based Expertise through a Tutorial Dialogue. Ph.D. diss., Dept. of Computer Science, Stanford Univ., Stanford, Calif.\n\n*Forthcoming Publication* \nClancey, W. J. 1986a. The Engineering of Qualitative Models. Forthcoming.\n\n# Producing Reliable PDF Documents with LaTeX\n\nGenerally speaking, PDF files are platform independent and accessible to everyone. When creating a paper for a proceedings or publication in which many PDF documents must be merged and then printed on high-resolution PostScript RIPs, several requirements must be met that are not normally of concern. Thus to ensure that your paper will look like it does when printed on your own machine, you must take several precautions:\n\n- Use type 1 fonts (not type 3 fonts)\n\n- Use only standard Times, Nimbus, and CMR font packages (not fonts like F3 or fonts with tildes in the names or fonts\u2014other than Computer Modern\u2014that are created for specific point sizes, like Times\\~19) or fonts with strange combinations of numbers and letters\n\n- Embed all fonts when producing the PDF\n\n- Do not use the \\[T1\\]fontenc package (install the CM super fonts package instead)\n\n## Creating Output Using PDFLaTeX Is Required\n\nBy using the PDFTeX program instead of straight LaTeX or TeX, you will probably avoid the type 3 font problem altogether (unless you use a package that calls for metafont). PDFLaTeX enables you to create a PDF document directly from LaTeX source. The one requirement of this software is that all your graphics and images must be available in a format that PDFLaTeX understands (normally PDF).\n\nPDFLaTeX's default is to create documents with type 1 fonts. If you find that it is not doing so in your case, it is likely that one or more fonts are missing from your system or are not in a path that is known to PDFLaTeX.\n\n### dvipdf Script\n\nScripts such as dvipdf which ostensibly bypass the Postscript intermediary should not be used since they generally do not instruct dvips to use the config.pdf file.\n\n### dvipdfm\n\nDo not use this dvi-PDF conversion package if your document contains graphics (and we recommend you avoid it even if your document does not contain graphics).\n\n## Ghostscript\n\nLaTeX users should not use GhostScript to create their PDFs.\n\n## Graphics\n\nIf you are still finding type 3 fonts in your PDF file, look at your graphics! LaTeX users should check all their imported graphics files as well for font problems.\n\n# Proofreading Your PDF\n\nPlease check all the pages of your PDF file. Is the page size A4? Are there any type 3, Identity-H, or CID fonts? Are all the fonts embedded? Are there any areas where equations or figures run into the margins? Did you include all your figures? Did you follow mixed case capitalization rules for your title? Did you include a copyright notice? Do any of the pages scroll slowly (because the graphics draw slowly on the page)? Are URLs underlined and in color? 
You will need to fix these common errors before submitting your file.\n\n# Improperly Formatted Files \n\nIn the past, AAAI has corrected improperly formatted files submitted by the authors. Unfortunately, this has become an increasingly burdensome expense that we can no longer absorb. Consequently, if your file is improperly formatted, it may not be possible to include your paper in the publication. If time allows, however, you will be notified via e-mail (with a copy to the program chair) of the problems with your file and given the option of correcting the file yourself (and paying a late fee) or asking that AAAI have the file corrected for you, for an additional fee. If you opt to correct the file yourself, please note that we cannot provide you with any additional advice beyond that given in your packet. Files that are not corrected after a second attempt will be withdrawn.\n\n## LaTeX 209 Warning\n\nIf you use LaTeX 209 we will not be able to publish your paper. Convert your paper to LaTeX2e.\n\n# Naming Your Electronic File\n\nWe request that you name your LaTeX source file with your last name (family name) so that it can easily be differentiated from other submissions. If you name your files with the name of the event or \"aaai\" or \"paper\" or \"camera-ready\" or some other generic or indecipherable name, you bear all risks of loss \u2014 it is extremely likely that your file may be overwritten.\n\n# Submitting Your Electronic Files to AAAI\n\nSubmitting your files to AAAI is a two-step process. It is explained fully in the author registration and submission instructions. Please consult this document for details on how to submit your paper.\n\n# Inquiries\n\nIf you have any questions about the preparation or submission of your paper as instructed in this document, please contact AAAI Press at the address given below. If you have technical questions about implementation of the aaai style file, please contact an expert at your site. We do not provide technical support for LaTeX or any other software package. To avoid problems, please keep your paper simple, and do not incorporate complicated macros and style files.\n\n> AAAI Press \n> 2275 East Bayshore Road, Suite 160 \n> Palo Alto, California 94303 \n> *Telephone:* (650) 328-3123 \n> *E-mail:* See the submission instructions for your particular conference or event.\n\n# Additional Resources\n\nLaTeX is a difficult program to master. If you've used that software, and this document didn't help or some items were not explained clearly, we recommend you read Michael Shell's excellent document (testflow doc.txt V1.0a 2002\/08\/13) about obtaining correct PS\/PDF output on LaTeX systems. (It was written for another purpose, but it has general application as well). It is available at www.ctan.org in the tex-archive.\n\n# Acknowledgments\n\nAAAI is especially grateful to Peter Patel Schneider for his work in implementing the aaai.sty file, liberally using the ideas of other style hackers, including Barbara Beeton. 
We also acknowledge with thanks the work of George Ferguson for his guide to using the style and BibTeX files \u2014 which has been incorporated into this document \u2014 and Hans Guesgen, who provided several timely modifications, as well as the many others who have, from time to time, sent in suggestions on improvements to the AAAI style.\n\nThe preparation of the LaTeX and BibTeX files that implement these instructions was supported by Schlumberger Palo Alto Research, AT&T Bell Laboratories, Morgan Kaufmann Publishers, The Live Oak Press, LLC, and AAAI Press. Bibliography style changes were added by Sunil Issar. `\\`pubnote was added by J. Scott Penberthy. George Ferguson added support for printing the AAAI copyright slug. Additional changes to aaai.sty and aaai.bst have been made by the AAAI staff.\n\nThank you for reading these instructions carefully. We look forward to receiving your electronic files!","meta":{"dup_signals":{"dup_doc_count":17,"dup_dump_count":2,"dup_details":{"curated_sources":8,"unknown":9}},"filename":"out\/2110.12062_extract_example.tex.md"},"subset":"arxiv"} +{"text":"abstract: Recently there has been a lot of work on pruning filters from deep convolutional neural networks (CNNs) with the intention of reducing computations. The key idea is to rank the filters based on a certain criterion (say, $l_1$-norm, average percentage of zeros, etc) and retain only the top ranked filters. Once the low scoring filters are pruned away the remainder of the network is fine tuned and is shown to give performance comparable to the original unpruned network. In this work, we report experiments which suggest that the comparable performance of the pruned network is not due to the specific criterion chosen but due to the inherent plasticity of deep neural networks which allows them to recover from the loss of pruned filters once the rest of the filters are fine-tuned. Specifically, we show counter-intuitive results wherein by randomly pruning 25-50% filters from deep CNNs we are able to obtain the same performance as obtained by using state of the art pruning methods. We empirically validate our claims by doing an exhaustive evaluation with VGG-16 and ResNet-50. Further, we also evaluate a real world scenario where a CNN trained on all 1000 ImageNet classes needs to be tested on only a small set of classes at test time (say, only animals). We create a new benchmark dataset from ImageNet to evaluate such class specific pruning and show that even here a random pruning strategy gives close to state of the art performance. Lastly, unlike existing approaches which mainly focus on the task of image classification, in this work we also report results on object detection. We show that using a simple random pruning strategy we can achieve significant speed up in object detection (74$\\%$ improvement in fps) while retaining the same accuracy as that of the original Faster RCNN model.\nauthor: Deepak Mittal[^1] Shweta Bhardwaj Mitesh M. 
Khapra Balaraman Ravindran \nDepartment of Computer Science and Engineering \nRobert Bosch Centre for Data Science and AI (RBC-DSAI) \nIndian Institute of Technology Madras, Chennai, India \n`{deepak, cs16s003, miteshk, email@example.com`\nbibliography: egbib.bib\ntitle: *Recovering from Random Pruning:* On the Plasticity of Deep Convolutional Neural Networks\n\n# Introduction\n\nOver the past few years, deep convolutional neural networks (CNNs) have been very successful in a wide range of computer vision tasks such as image classification , object detection and image segmentation . In general, with each passing year, these networks are becoming deeper and deeper with a corresponding increase in the performance . However, this increase in performance is accompanied by an increase in the number of parameters and computations. This makes it difficult to port these models on embedded and mobile devices where storage, computation and power are limited. In such cases, it is crucial to have small, computationally efficient models which can achieve performance at par or close to large networks. This practical requirement has led to an increasing interest in model compression where the aim is to either (i) design efficient small networks or (ii) efficiently prune weights from existing deep networks or (iii) efficiently prune filters from deep convolutional networks or (iv) replace expensive floating point weights by binary or quantized weights or (v) guide the training of a smaller network using a larger (teacher) network .\n\nIn this work, we focus on pruning filters from deep convolutional neural networks. The filters in the convolution layers typically account for fewer parameters than the fully connected layers (the ratio is 10:90 for VGG-16 ), but they account for most of the floating point operations done by the model (99% for VGG-16 ). Hence reducing the number of filters effectively reduces the computation (and thus power) requirements of the model. All existing works on filter pruning follow a very similar recipe. The filters are first ranked based on a specific criterion such as, $l_1$-norm or percentage of zeros in the filter . The scoring criterion essentially determines the importance of the filter for the end task, typically image classification . Only the top-m ranked filters are retained and the resulting pruned network is then fine tuned. It is observed that when pruning up to 50% of the filters using different proposed criteria, the pruned network almost recovers the original performance after fine-tuning. The claim is that this recovery is due to soundness of the criterion chosen for pruning. However, in this work we argue that this recovery is not due the specific pruning criterion but due to the inherent plasticity of deep CNNs. Specifically, we show that even if we prune filters randomly we can match the performance of state-of-the-art pruning methods.\n\nTo effectively prove our point, it is crucial that we look at factors\/measures other than the final performance of the pruned model. To do so we draw an analogy with the human brain and observe that the process of pruning filters from a deep CNN is akin to causing damage to certain portions of the brain. It is known that the human brain has a high plasticity and over time can recover from such damages with appropriate treatment . In our case, the process of fine-tuning would be akin to such post-damage (post-pruning) treatment. 
If the injury damages only redundant or unimportant portions of the brain then the recovery should be quick and complete, with minimal treatment. Similarly, we could argue that if the pruning criterion is indeed good and prunes away only unimportant filters then (i) the performance of the model should not drop much, (ii) the model should be able to regain its full performance after fine-tuning, (iii) this recovery should be fast (i.e., with fewer iterations of fine-tuning) and (iv) the quantum of data used for fine-tuning should be less. None of the existing works on filter pruning do a thorough comparison w.r.t. these factors. We not only consider these factors but also present counter-intuitive results which show that a random pruning criterion is comparable to state of the art pruning methods on all these factors. Note that we are not claiming that we can always recover the full performance of the unpruned network. For example, it should be obvious that in the degenerate case if 90% of the filters are pruned then it would be almost impossible to recover. The claim being made is that, at different pruning levels (25%, 50%, 75%), a random pruning strategy is not much worse than state of the art pruning strategies.\n\nTo further prove our point, we wanted to check if such recovery from pruning is task agnostic. In other words, in addition to showing that a network trained for image classification (*task1*) can be pruned efficiently, we also show that the same can be done with a network trained for object detection (*task2*). Here again, we show that a random pruning strategy works at par with state of the art pruning methods. Stretching this idea further and continuing the above analogy, we note that once the brain recovers from such damages, it is desirable that in addition to recovering its performance on the tasks that it was good at before the injury, it should also be able to do well on newer tasks. In our case, the corresponding situation would be to take a network pruned and fine-tuned for image classification (*old task*) and plug it into a model for object detection (*new task*). Specifically, we show that when we plug a randomly pruned and fine-tuned VGG-16 network into a Faster RCNN model we can get the same performance on object detection as obtained by plugging in (i) the original unpruned network or (ii) a network pruned using a state of the art pruning method. This once again hints at the inherent plasticity of deep CNNs which allows them to recover (up to a certain level) irrespective of the pruning strategy.\n\nFinally, we consider the case of class specific pruning which has not been studied in the literature. We note that in many real world scenarios, it is possible that while we have trained an image classification network on a large dataset containing many classes, at test time we may be interested in only a few classes. A case in point is the task of object detection using the Pascal VOC dataset . RCNN and its variants use as a sub-component an image classification model trained on all the 1000 ImageNet classes. We hypothesize that this is overkill and instead create a class specific benchmark dataset from ImageNet which contains only those 52 classes which correspond to the 20 classes in Pascal VOC. 
Ideally, one would expect that a network trained, pruned and fine-tuned only for these 52 classes when plugged into faster RCNN should do better than a network trained, pruned and fine-tuned on a random set of 52 classes (which are very different from the classes in Pascal VOC). However, we observe that irrespective of which of these networks is plugged into Faster RCNN the final performance after fine-tuning is the same, once again showing the ability to recover from unfavorable situations.\n\nTo the best of our knowledge, this is a first of its kind work on pruning filters which:\n\n1. Proposes that while assessing the performance of a pruning method, we should consider factors such as amount of damage (drop in performance before fine-tuning), amount of recovery (performance after fine-tuning), speed of recovery and quantum of data required for recovery.\n\n2. Performs extensive evaluation using two image classification networks (VGG-16 and ResNet) and shows that a random pruning strategy gives comparable performance to that of state of the art pruning strategies w.r.t. all the above factors.\n\n3. Shows that such behavior is task agnostic and a random pruning strategy works well even for the task of object detection. Specifically, we show that by randomly pruning filters from an object detection model we can get a 74$\\%$ improvement in fps while maintaining almost the same accuracy (1% drop) as the original unpruned network\n\n4. Shows that pruned networks can adapt with ease to newer tasks\n\n5. Proposes a new benchmark for evaluating class specific pruning\n\n# Related Work\n\nIn this section, we review existing work on making deep convolutional neural networks efficient w.r.t. their memory and computation requirements while not compromising much on the accuracy. These approaches can be broadly classified into the following categories (i) pruning unimportant weights (ii) low rank factorization (iii) knowledge distillation (iv) designing compact networks from scratch or (v) using binary or quantized weights and (vi) pruning unimportant filters. Below, we first quickly review the related work for the first five categories listed above and then discuss approaches on pruning filters which is the main focus of our work.\n\nOptimal brain damage and optimal brain surgery are two examples of approaches which prune the unimportant weights in the network. A weight is considered unimportant if the output is not very sensitive to this weight. They show that pruning such weights leads to minimal drop in the overall performance of the network. However, these methods are computationally expensive as they require the computation of the Hessian (second order derivative). Another approach is to use low rank factorization of the weight tensor\/matrices to reduce the computations . For example, instead of directly multiplying a high dimensional weight tensor $W$ with the input tensor $I$, we could first compute a low rank approximation of $W = U\\Sigma V$ where the dimensions of $U$, $\\Sigma$ and $V$ are much smaller than the dimensions of $W$. This essentially boils down to decomposing the larger matrix multiplication operation into smaller operations. Also, the low rank approximation ensures that only the important information in the weight matrix is retained. Alternately, researchers have also explored designing compact networks from scratch which have fewer number of layers and\/or parameters and\/or computations . 
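As a concrete illustration of the low-rank factorization idea mentioned above, a truncated SVD of a (toy) weight matrix replaces one large matrix-vector product with two much smaller ones. The following minimal NumPy sketch uses arbitrary shapes and an arbitrary rank; it is not taken from any of the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy fully connected layer y = W x, where W is approximately low rank.
A = rng.standard_normal((1024, 64))
B = rng.standard_normal((64, 1024))
W = A @ B + 0.01 * rng.standard_normal((1024, 1024))
x = rng.standard_normal(1024)

# Keep only the top-r singular components: W ~= U_r diag(s_r) V_r^T.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
r = 64
U_r, s_r, Vt_r = U[:, :r], s[:r], Vt[:r, :]

# Two small matrix-vector products instead of one large one:
# roughly 2 * 1024 * 64 multiply-adds instead of 1024 * 1024.
y_approx = U_r @ (s_r * (Vt_r @ x))
y_exact = W @ x
print(np.linalg.norm(y_exact - y_approx) / np.linalg.norm(y_exact))  # small relative error
```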
There are also some approaches which quantize or binarize the weights of a network to reduce both memory footprint and computation time. Another line of work focuses on transferring the knowledge from a bigger trained network (or an ensemble of networks) to a smaller (thin) network .\n\nThe main focus of our work is on pruning filters from deep CNNs with the intention of reducing computations. As mentioned earlier, while the convolution filters do not account for a large number of parameters, they account for almost all the computations that happen in the network. Here, the idea is to rank the filters using a scoring function and then retain only the top scoring filters. For example, in , the authors have used the $l_1$-norm of the filters to rank their importance. The argument is that filters having a lower $l_1$-norm will produce smaller activation values which will contribute minimally to the output of that layer. Alternately, in , the authors have proposed entropy as a measure of the importance of a filter. If a filter has high entropy, then the filter is more informative and hence more important. On the other hand, the authors of calculate the average percentage of zeros in the corresponding activation maps of the filters and hypothesize that filters with a higher average percentage of zeros in their activations are less important. In , the authors have used a Taylor series expansion that approximates the change in the cost function caused by pruning filters. Unlike , this method uses information from the first derivative only. Another work on pruning filters proposes that instead of pruning filters based on the current layer's statistics, they should be pruned based on the next layer's statistics. Essentially, the idea of is to look at the activation map of layer $i+1$ and prune out the channel whose removal causes the minimum change in the output, along with its corresponding filter in layer $i$. In , the authors proposed a similar idea to , but instead of removing the filters one by one, they use LASSO regression. Lastly, in , the authors have used particle filtering to prune out the filters.\n\n# Methodology\n\nIn this section, we first formally define the problem of pruning filters and give a generic algorithm for pruning filters using any appropriate scoring function. We then discuss existing scoring functions along with some new variants that we propose.\n\n## Problem Statement\n\nSuppose there are $K$ convolutional layers in a CNN and suppose layer $k$ contains $n_k$ filters. We use $F_{ki}$ to denote the $i$-th filter in the $k$-th layer. Each such filter is a three dimensional tensor, $F_{ki} \\in \\mathbb{R}^{i_k \\times w_{ki} \\times h_{ki}}$ where $i_k$ is the number of input channels for layer $k$ and $w_{ki}, h_{ki}$ are the width and height of the $i$-th filter in the $k$-th layer. Our goal is to rank all the filters in layer $k$, $\\{F_{k1}, F_{k2}, ..., F_{kn_k}\\}$, and then retain the top-$m_k$ filters, where $m_k (< n_k)$ is a hyperparameter which indicates the desired pruning. For example, based on available computation resources, if we want to reduce the number of computations in this layer by half then we can set $m_k = \\frac{n_k}{2}$. Let the original output of layer $k$ be denoted by $O^k \\in \\mathbb{R}^{n_k \\times w^{o}_{k} \\times h^{o}_{k}}$ where $w^{o}_{k}, h^{o}_{k}$ are the width and height and $n_k$ is the number of channels, which is the same as the number of filters. 
After pruning and retaining only top-$m_k$ filters the size of the output will be reduced to $m_k \\times w^{o}_{k} \\times h^{o}_{k}$. Thus, pruning filters not only reduces the number of computations in this layer but also reduces the size of the input to the next layer (which is the same as the output of this layer). The same process of pruning can then be repeated across all layers of the CNN. The main task here is to find the right scoring function for ranking the filters.\n\n## A Generic Algorithm for Pruning\n\nAlgorithm summarizes the generic recipe used by different approaches for pruning filters. As shown in the algo, pruning typically starts from the outermost layer. Once the low scoring filters from this layer are pruned, the network is then fine-tuned and the same process is then repeated for the layers before it. Once all the layers are pruned and fine-tuned, the entire network is then tuned for a few epochs.\n\nExisting methods for pruning filters differ in the $scoring\\_function$ that they use for ranking the filters. We alternately refer to this scoring function as pruning criteria as discussed in the next subsection.\n\n## Pruning Criteria\n\nWe now describe various pruning criteria which are used by existing approaches and also introduce some new variants of existing pruning criteria. These criteria are essentially used as $scoring\\_function()$ in Algorithm .\n\n1. Mean Activation : Most deep CNNs for image classification use ReLU as the activation function which results in very sparse activations (as all negative outputs are set to 0). We could compute the mean activation of the feature map corresponding to a filter across all images in the training data. If this mean activation is very low (because most of the activations are 0) then this feature map and hence the corresponding filter is not going to contribute much to the discriminatory power of the network (since the filter rarely fires for any input). Hence, uses the mean activation as a scoring function for ranking filters.\n\n2. $l_{1}$-Norm : The authors of suggest that the $l_{1}$-norm ($\\parallel$F$\\parallel_{1}$) of a filter can also be used as an indicator of the importance of the filter. The argument is that if the $l_{1}$-norm of a filter is small then on average the weights in the filter will be small and hence produce very small activations. These small activations will not influence the output of the network and hence the corresponding filters can be pruned away. One important benefit of this method is that apart from computing the $l_{1}$-norm, it does not need any extra computation during pruning and fine-tuning.\n\n3. Entropy : If the feature map corresponding to a filter produces the same output for every input (image) then this feature map and hence the corresponding filters may not be very important (because it does not play any discriminatory role). In other words, we are interested in feature maps (and hence filters) which are more informative or have a high entropy. If we divide the possible range of the average output of a feature map into $b$ bins then we could compute the entropy of the $i$-th feature map (or filter) as : \n $$E_{i} = -\\sum_{j=1}^{b}p_{ij}\\log p_{ij}$$ where $p_{ij}$ is the probability that the output of the $i$-th feature map lies in the $j$-th bin. This probability can be computed as the fraction of input images for which the average output of the feature map lies in this bin.\n\n4. 
Average Percentage of Zeros (APoZ): As mentioned earlier, when ReLU is used as the activation function, the output activations are very sparse. If most of the neurons in a feature map are zero then this feature map is not likely to contribute much to the output of the network. The Average Percentage of Zeros in the output of each filter can thus be used to compute the importance of the filter (the lower the better).\n\n5. Sensitivity: We could compute the gradient of the loss function (i.e., cross entropy) w.r.t. a filter. If a filter has a high influence on the loss function then the value of this gradient would be high. The $l_{1}$-norm of this gradient averaged over all images can thus be used to compute the importance of a filter.\n\n6. Scaled Entropy: We propose a new variant of the entropy-based criterion. We observe that a filter may have a high entropy but if all its activations are very low (belonging to lower bins) then this filter is not likely to contribute much to the output. We thus propose to use a combination of entropy and mean activation by scaling the entropy by the mean activation of the filter. The scaled entropy of the $i$-th filter can be computed as: \n $$SE_{i} = -\\sum_{j=1}^{b}p_{ij}\\log p_{ij} \\times Mean_{i}$$ where $Mean_{i}$ is the average activation of the $i$-th filter over all input images.\n\n7. Class Specific Importance: In this work, we are also interested in a more practical scenario, where a network trained for detecting all the 1000 classes from ImageNet is required to detect only $l$ ($l < 1000$) of these classes at test time (say, only animals). Intuitively, we should then devise a scoring function which retains only those filters which are important for these $l$ classes. To do so, we once again compute the gradient of the loss function w.r.t. the filter. However, now instead of averaging the $l_{1}$-norm of this gradient over all images in the training data, we compute the average over only those images in the training data which correspond to the $l$ classes of interest. This class-specific average is then used to rank the filters.\n\n8. Random Pruning: One of the main contributions of this work is to show that even if we randomly prune the filters from a CNN, its performance after fine-tuning is not much worse than with any of the above approaches (a code-level sketch of the generic pruning recipe, instantiated with the $l_1$-norm and random criteria, is shown below).\n\n# Experiments: Image Classification\n\nIn this section, we focus on the task of image classification using the ImageNet dataset. The dataset is split into three sets: training (1.3M images), validation (50K images), and testing (100K images with held-out class labels). We experiment with two popular networks, *viz.*, VGG-16 and ResNet-50. We first train these networks using the full ImageNet training data and then prune them using Algorithm . We compare the performance of different scoring functions as listed in the previous section.\n\n## Comparison of different pruning methods on VGG-16\n\nVGG-16 has 13 convolutional (CONV) and two fully connected (FC) layers. The number of filters in each CONV layer in the standard VGG-16 network is {64, 64, 128, 128, 256, 256, 256, 512, 512, 512, 512, 512, 512}. We first train this network as it is (*i.e.*, with the standard number of filters in each layer) using the ImageNet training data. When evaluated on the standard ImageNet test set, this trained model gives us a top-1 accuracy of 69$\\%$, which is comparable to the accuracy reported elsewhere in the literature. We now prune this network, one layer at a time starting from the last convolution layer. 
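A minimal sketch of this layer-wise prune-and-fine-tune recipe, instantiated with the $l_1$-norm criterion, is given below. It is PyTorch-style pseudocode in which `finetune` and the data loader are placeholders, and the matching reduction of the next layer's input channels (which a real implementation must also perform) is omitted for brevity.

```python
import torch
import torch.nn as nn

def l1_scores(conv: nn.Conv2d) -> torch.Tensor:
    # One score per filter: the l1-norm of that filter's weights.
    return conv.weight.detach().abs().sum(dim=(1, 2, 3))

def keep_top_filters(conv: nn.Conv2d, keep_ratio: float) -> nn.Conv2d:
    # Rank filters by score and keep the top-m of them.
    scores = l1_scores(conv)
    m = max(1, int(keep_ratio * conv.out_channels))
    keep = torch.argsort(scores, descending=True)[:m]
    pruned = nn.Conv2d(conv.in_channels, m, conv.kernel_size,
                       stride=conv.stride, padding=conv.padding,
                       bias=conv.bias is not None)
    pruned.weight.data = conv.weight.data[keep].clone()
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[keep].clone()
    return pruned

# Toy two-layer stand-in for a CNN; a real VGG-16 is pruned from the last
# CONV layer towards the first, fine-tuning for one epoch after each layer.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(8, 16, 3, padding=1), nn.ReLU())
model[0] = keep_top_filters(model[0], keep_ratio=0.5)
# NOTE: after pruning layer 0, the next conv's in_channels must be reduced
# (and its weights sliced) to match; that step is omitted in this sketch.
# finetune(model, data_loader, epochs=1)   # placeholder for fine-tuning
```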
We prune away $m$% of filters from each layer where we chose the value of $m$ to be {25, 50, 75}. We use one of the scoring functions described in Section to select the top $m$% filters. We drop the remaining (100 - m)% filters from this layer and then fine-tune the pruned network for 1 epoch. We then repeat the same process for the lower layers and use the same value of $m$ across all layers. Once the network is pruned till layer 1, we then fine tune the entire pruned network for 12 epochs using 1\/10-th of the training data picked randomly. The only reason for not using the entire training data is that it is quite computationally expensive. We did not see any improvement in the performance on the validation set by fine-tuning beyond 12 epochs. We then evaluate this pruned and fine-tuned network on the test set. Below, we discuss the performance of the final pruned and fine-tuned network obtained using different pruning strategies.\n\n**Performance of pruned network after fine-tuning:** In Table , we report the performance of the final pruned network after fine tuning. We observe that random pruning works better than most of the other pruning methods described earlier. $l_1$-norm is the only scoring function which does better than random and that too by a small margin. In fact, if we fine-tune the final trained network using the entire training data then we observe that there is hardly any difference between random and $l_1$-norm (see Table ). This provides empirical evidence for our claim that the amount of recovery (i.e., final performance after fine-tuning) is not due to the soundness of the pruning criteria. Even with random pruning, the performance of the pruned network is comparable. Of course, as the percentage of pruning increases ($i.e,,$ as m increases) it becomes harder for the pruned network to recover the full performance of the original network (but the point is that it is equally hard irrespective of the pruning method used). Thus, w.r.t. the amount of recovery after damage (pruning), a random pruning strategy is as good as any other pruning strategy. We further drive this point in Figure where we show that after pruning and fine tuning for every layer, the amount of recovery after fine tuning is comparable across different pruning strategies.\n\n| Heuristic | 25 % | 50% | 75% |\n|:----------------|:----------|:----------|:----------|\n| Random | 0.650 | 0.569 | 0.415 |\n| Mean Activation | 0.652 | 0.570 | 0.409 |\n| Entropy | 0.641 | 0.549 | 0.405 |\n| Scaled Entropy | 0.637 | 0.550 | 0.401 |\n| $l_1$-norm | **0.667** | **0.593** | **0.436** |\n| APoZ | 0.647 | 0.564 | 0.422 |\n| Sensitivity | 0.636 | 0.543 | 0.379 |\n\nComparison of different filter pruning strategies on VGG-16.\n\nAs a side note we would like to mention that we do not include the performance of ThiNets in Table . This is because it uses a slightly different methodology. In particular there are two major differences. First, in ThiNets pruning is done only till layer 10 and not upto layer 11 as is the case for all numbers reported in Table . Secondly, in ThiNets, if a CONV layer appears before a max-pooling layer then it is fine-tuned for an extra epoch to compensate more for the downsampling in the max pooling layer. For a fair comparison, we followed this exact same strategy as ThiNet but using a random pruning criteria. 
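Here, the random criterion is simply a drop-in replacement for the scoring function in the sketch above (an illustrative fragment):

```python
import torch
import torch.nn as nn

def random_scores(conv: nn.Conv2d) -> torch.Tensor:
    # Assign every filter an i.i.d. uniform score, so the retained
    # "top-m" filters are just a uniformly random subset.
    return torch.rand(conv.out_channels)
```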
In this setup, a randomly pruned network was able to achieve 68% top-1 accuracy after 50% pruning, which is comparable to the performance of the corresponding ThiNet (69%).\n\n| Heuristic | 50% |\n|:----------------|:-----------|\n| Random | 0.6701 |\n| Mean Activation | 0.6662 |\n| Entropy | 0.6635 |\n| Scaled Entropy | 0.6625 |\n| **$l_1$-norm** | **0.6759** |\n| APoZ | 0.6706 |\n| Sensitivity | 0.6659 |\n\nPerformance after fine-tuning with full data\n\n**Amount of initial damage caused by different pruning strategies:** One might argue that while a random pruning strategy is equivalent to other pruning strategies w.r.t. final performance after fine-tuning, it is possible that the amount of initial damage caused by a careful pruning strategy may be less than that caused by random pruning. This could be important in cases where enough time or resources are not available for fine-tuning after pruning. To evaluate this, we compute the accuracy of the network just after pruning (and before fine-tuning) at each layer. Figure compares this performance for different pruning strategies. Here again we observe that the damage caused by a random pruning strategy is not worse than that caused by other pruning strategies. The only exception is when we prune the first 4 layers, in which case the damage caused by $l_1$-norm based pruning is less than that caused by random pruning. We hypothesize that this is because the first 4 layers have very few filters and hence one needs to be careful while pruning filters from these layers. In fact, in hindsight we would recommend not to prune any filters from these 4 layers because the computation savings are small compared to the drop in accuracy.\n\n**Speed of recovery and quantum of data for fine-tuning:** Another important criterion is the speed of recovery, *i.e.*, the number of iterations for which the network needs to be fine-tuned after pruning. It is conceivable that a carefully pruned network may be able to recover and reach its best performance faster than a randomly pruned network. However, as shown in Figure , almost all the pruning strategies (including random) reach their peak after 2 epochs when fine-tuned with one-tenth of the data. Even if we increase the quantum of data, this behavior does not change, as shown in Figure (for $l_1$-norm based pruning and random pruning). Of course, as we increase the quantum of data the amount of recovery increases, *i.e.*, the peak performance of the pruned network increases. However, the important point is that a random strategy is no worse than a careful pruning strategy w.r.t. speed of recovery and quantum of data required.\n\n## Pruning ResNet-50 using $l_1$-Norm and Random\n\nWhile the above set of experiments focused on VGG-16, we now turn our attention to ResNet-50, which gives state of the art results on ImageNet. We took a trained ResNet-50 model which gave 74.5$\\%$ top-1 accuracy on the ImageNet test set, which is again comparable to the accuracy reported elsewhere in the literature. ResNet-50 contains 16 residual blocks wherein each block contains 3 layers with a skip connection from the first layer to the third layer. The standard practice is to either prune the first layer of each block or the first two layers of each block. In the first case, out of the total 48 convolution layers (16 \\* 3) we will end up pruning 16, and in the second case we will end up pruning 32. As before, for each pruned layer we vary the percentage of pruning over 25%, 50%, and 75%. 
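Selecting which layers are pruned in these two settings can be sketched with torchvision's ResNet-50 implementation (an illustrative sketch; the attribute names follow torchvision, not the original paper):

```python
import torchvision

model = torchvision.models.resnet50()

# Each of layer1..layer4 is a sequence of bottleneck blocks, and each block
# has conv1/conv2/conv3 plus a shortcut connection.
blocks = [b for stage in (model.layer1, model.layer2, model.layer3, model.layer4)
          for b in stage]
assert len(blocks) == 16

prune_first_only = [b.conv1 for b in blocks]                        # 16 pruned layers
prune_first_two = [c for b in blocks for c in (b.conv1, b.conv2)]   # 32 pruned layers
```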
Here, we only compare the performance of $l_1$-Norm with random pruning as these were the top performing strategies on VGG-16. This was just to save time and resources as given the deep structure of ResNet it would have been very expensive to run all pruning strategies. Once again from Table , we observe that **random** pruning performs at par (in fact, slightly better) when compared to $l_{1}$-Norm based pruning. Note that, in this case the pruned models were trained with only one-tenth of the data. The performance of both the methods are likely to improve further if we were to fine-tune the pruned network on the entire training data.\n\n| Heuristics | \\#Layers Pruned | 25 % | 50% | 75% |\n|:-----------|:---------------:|:------|:------|:------|\n| Random | 16 | 0.722 | 0.683 | 0.617 |\n| $l_1$-norm | 16 | 0.714 | 0.677 | 0.610 |\n| Random | 32 | 0.696 | 0.637 | 0.518 |\n| $l_1$-norm | 32 | 0.691 | 0.633 | 0.514 |\n\nComparison of different filter pruning strategies on ResNet (Top-1 accuracy of unpruned network is 0.745)\n\n# Experiments: Class specific pruning\n\nExisting work on pruning filters (or model compression, in general) focuses on the scenario where we have a network trained for detecting all the 1000 classes in ImageNet and at test time it is again evaluated using data belonging to all of these 1000 classes. However, in many real world scenarios, at test time we may be interested in fewer classes. A case in point, is the Pascal VOC dataset which contains only 20 classes. Intuitively, if we are interested in only fewer classes at test time then we should be able to prune the network to cater to only these classes. Alternately, we could train the original network itself using data corresponding to these classes only. To enable these experiments, we first create a new benchmark from ImageNet which contains only those 52 classes which correspond to the 20 classes in Pascal VOC. Note that the mapping of 52-20 happens because ImageNet has more fine-grained classes. For example, there is only one class for 'dog' in Pascal VOC but ImageNet contains many sub-classes of 'dog' (different breeds of dogs). We manually went over all the classes in ImageNet and picked out the classes which correspond to the 20 classes in Pascal VOC. In some cases, we ignored ImageNet classes which were too fine-grained and only considered those classes which were immediate hyponyms of a class in Pascal VOC. We then extracted the train, test and valid images for these classes from the original ImageNet dataset. We refer to this subset of ImageNet as ImageNet-52P (where P stands for Pascal VOC). We refer to the original ImageNet dataset as ImageNet-1000. Note that the train, test and validation splits of ImageNet-52P are subsets of the corresponding splits of ImageNet-1000. In particular , the training split of ImageNet-1000 does not overlap with the test or validation splits of ImageNet-52P.\n\nWe first compare the performance in the following two setups: (i) model trained on ImageNet-1000 and evaluated on the test split of ImageNet-52P and (ii) model trained on ImageNet-52P and evaluated on the test split of ImageNet-52P. We observe that while in the first setup we get a top-1 accuracy of 74%, in the second setup we get an accuracy of 87%. This suggests that model trained on ImageNet-1000 is clearly overloaded with extra information about the remaining 948 classes and hence performs poorly on the 52 classes of interest. 
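The class-specific importance criterion (criterion 7 above), restricted to such a subset of classes, can be sketched as follows. This is PyTorch-style code in which `loader_52p`, a loader over ImageNet-52P images only, is a placeholder.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def class_specific_importance(model: nn.Module, conv: nn.Conv2d,
                              loader_52p) -> torch.Tensor:
    # Average l1-norm of dLoss/dFilter over images of the classes of interest only.
    importance = torch.zeros(conv.out_channels)
    n_batches = 0
    for images, labels in loader_52p:      # only the 52 classes of interest
        model.zero_grad()
        loss = F.cross_entropy(model(images), labels)
        loss.backward()
        grad = conv.weight.grad.detach()
        importance += grad.abs().sum(dim=(1, 2, 3)).cpu()
        n_batches += 1
    return importance / max(n_batches, 1)
```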
We should thus be able to prune the network effectively to cater to only the 52 classes of interest. Note that in practice it is desirable to have just one network trained on ImageNet-1000 and then prune it for different subsets of classes that we are interested in, instead of training a separate network from scratch for each of these subsets. We again compare different pruning strategies as listed earlier, except that now when fine-tuning (after each layer and at the end of all layers) we only use ImageNet-52P. In other words, we fine-tune using only data corresponding to the 52 classes. Once again, we observe that there is not much difference between random pruning and other pruning strategies. Also, with 25% pruning, we are able to almost match the performance of a network trained only on these 52 classes (*i.e.*, 87%).\n\n| Heuristics | 25 % | 50% | 75% |\n|:------------------|:----------|:----------|:----------|\n| Random | 0.859 | 0.820 | 0.692 |\n| Mean Activation | 0.866 | 0.816 | 0.698 |\n| Entropy | 0.860 | 0.802 | 0.684 |\n| Scaled Entropy | 0.863 | 0.813 | 0.691 |\n| $l_1$-norm | **0.867** | **0.823** | **0.729** |\n| APoZ | 0.858 | 0.811 | 0.700 |\n| Important Classes | 0.857 | 0.795 | 0.655 |\n| Sensitivity | 0.849 | 0.793 | 0.634 |\n\nComparison of different filter pruning strategies when fine-tuned and evaluated with ImageNet-52P.\n\n# Experiments: Faster Object Detection\n\nThe above experiments have shown that with reasonable levels of pruning (25-50%) and enough fine-tuning (using the entire data) the pruned network is able to recover and almost match the performance of the unpruned network on the original task (image classification) even with a random pruning strategy. However, it is possible that if such a pruned network is used for a new task, say object detection, then a randomly pruned network may not give the same performance as a carefully pruned network. To check this, we perform experiments using the Faster-RCNN model for object detection. Note that the Faster-RCNN model uses a VGG-16 model as a base component and then adds other components which are specific to object detection. We experiment with the PASCAL-VOC 2007 dataset which consists of 9,963 images, containing 24,640 annotated objects. We first plug in a standard trained VGG-16 network into Faster-RCNN and then train Faster-RCNN for 70K iterations (as is the standard practice). This model gives a mean Average Precision (mAP) value of $0.66$. The idea is to now plug in a pruned VGG-16 model into Faster-RCNN instead of the original unpruned model and check the performance. Table again shows that the specific choice of pruning strategy does not have much impact on the final performance on object detection. Of course, as earlier, as the level of pruning increases the performance drops (but the drop is consistent across all pruning strategies). We now report some more interesting experiments on pruning Faster RCNN.\n\n**Directly pruning Faster RCNN:** Instead of plugging in a pruned VGG-16 model into Faster-RCNN, we could alternately take a trained Faster-RCNN model and then prune it directly. Here again, we use a simple random pruning strategy and observe that the performance of the pruned model comes very close to that of the unpruned model. 
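The frames-per-second figures reported below can be estimated with a simple timing loop along the following lines (an illustrative sketch, not the benchmarking code used for the paper; `detector` and `images` are placeholders):

```python
import time
import torch

@torch.no_grad()
def frames_per_second(detector, images, warmup: int = 5) -> float:
    # A few warm-up passes, then one timed forward pass per image.
    for img in images[:warmup]:
        detector(img)
    start = time.perf_counter()
    for img in images:
        detector(img)
    return len(images) / (time.perf_counter() - start)
```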
In particular, with 50% pruning we are able to achieve a mAP of $\\textbf{0.648}$ with a $74\\%$ speedup in terms of frames per second.\n\n**Plugging in a VGG-16 model trained using ImageNet-52P:** Since we are only interested in the 52 classes corresponding to Pascal-VOC, we wanted to check what happens if we plug-in a VGG-16 model trained, pruned and fine-tuned only on ImageNet-52P. As shown in Table we do not get much benefit of plugging in this specialized model into Faster-RCNN. In fact, in a separate experiment we observed that even if we train a VGG-16 model on a completely random set of 52 classes (different from the 52 classes corresponding to Pascal VOC) and then plug in this model into Faster RCNN, even then the final performance of the Faster RCNN model remains the same. This is indeed surprising and further demonstrates the ability of these networks to recover from unfavorable situations.\n\n| Heuristics | 25 % | 50% | 75% |\n|:----------------|:----------|:----------|:----------|\n| Random | **0.647** | 0.600 | 0.505 |\n| Mean Activation | **0.647** | 0.601 | 0.489 |\n| Entropy | 0.635 | 0.584 | 0.501 |\n| Scaled Entropy | 0.640 | 0.593 | 0.507 |\n| $l_1$-norm | 0.628 | **0.608** | **0.520** |\n| APoZ | 0.646 | 0.598 | 0.514 |\n| Sensitivity | 0.636 | 0.592 | 0.485 |\n\nObject detection results obtained by plugging-in different pruned VGG-16 models into Faster-RCNN.\n\n| Faster-RCNN | Baseline | 25 % | 50% | 75% |\n|:------------|:---------|:------|:------|:------|\n| mAP | 0.66 | 0.655 | 0.648 | 0.530 |\n| fps | 7.5 | 10 | 13 | 16 |\n\nObject detection results when directly pruning (random) a fully trained Faster-RCNN model.\n\n| Heuristics | 25 % | 50% | 75% |\n|:------------------|:----------|:----------|:----------|\n| Random | 0.647 | 0.580 | 0.469 |\n| Mean Activation | 0.644 | 0.583 | 0.454 |\n| Entropy | 0.642 | 0.578 | 0.47 |\n| Scaled Entropy | 0.645 | 0.580 | 0.443 |\n| $l_1$-norm | **0.648** | **0.601** | **0.487** |\n| APoZ | 0.641 | 0.585 | 0.466 |\n| Important Classes | 0.631 | 0.568 | 0.432 |\n| Sensitivity | 0.637 | 0.576 | 0.4345 |\n\nObject detection results obtained by plugging-in different pruned VGG-16 models fine-tuned with ImageNet-52P as opposed to ImageNet-1000.\n\nfalse\n\n| Dataset | Heuristics | 25 % | 50% | 75% |\n|:----------|:-----------|:-----|:-----|:-----|\n| Dataset-1 | Random | 0.51 | 0.50 | 0.38 |\n| Dataset-1 | $l_1$-norm | 0.52 | 0.49 | 0.40 |\n| Dataset-2 | Random | 0.52 | 0.48 | 0.37 |\n| Dataset-2 | $l_1$-norm | 0.54 | 0.49 | 0.39 |\n\nFaster RCNN results using pruned VGG-16 using Dataset-1 and Dataset-2, and not training VGG layers in Faster RCNN.\n\n# Conclusion and Future Work\n\nWe evaluated the performance of various pruning strategies based on the (i) drop in performance after pruning (ii) amount of recovery after pruning (iii) speed of recovery and (iv) amount of data required. We do extensive evaluations with two networks (VGG-16 and ResNet50) and present counter-intuitive results which show that w.r.t. all these factors a random pruning strategy performs at par with principled pruning strategies. We also show that even when such a randomly pruned network is used for a completely new task it performs well. 
Finally, we present results for pruning Faster RCNN and show that even a random pruning strategy can give a 74% speed-up w.r.t frames per second while giving only a 1% drop in the performance.\n\n[^1]: The first two authors have contributed equally","meta":{"dup_signals":{"dup_doc_count":11,"dup_dump_count":2,"dup_details":{"curated_sources":3,"unknown":8}},"filename":"out\/1801.10447_extract_main.tex.md"},"subset":"arxiv"} +{"text":"author: Haewoon Kwak; Jisun An; Elise Jing; Yong-Yeol Ahn\nbibliography: main.bib\ntitle: FrameAxis: Characterizing Microframe Bias and Intensity with Word Embedding\n\n# Introduction\n\nFraming is a process of highlighting a certain aspect of an issue to make it salient\u00a0. By focusing on a particular aspect over another, even without making any biased argument, a biased understanding of the listeners can be induced\u00a0. For example, when reporting on the issue of poverty, a news media may put an emphasis on how successful individuals succeeded through hard work. By contrast, another media may emphasize the failure of national policies. It is known that these two different framings can induce contrasting understanding and attitudes about poverty\u00a0. While readers who are exposed to the former framing became more likely to blame individual failings, those who are exposed to the latter framing tended to criticize the government or other systematic factors rather than individuals. Framing has been actively studied, particularly in political discourse and news media, because framing is considered to be a potent tool for political persuasion\u00a0. It has been argued that the frames used by politicians and media shape the public understanding of issue salience\u00a0, and politicians strive to make their framing more prominent among the public\u00a0.\n\nFraming is not confined to politics. It has been considered crucial in marketing\u00a0, public health campaigns\u00a0, and other domains\u00a0. Yet, the operationalization of framing is inherently vague\u00a0 and remains a challenging open question. Since framing research heavily relies on manual efforts from choosing an issue to isolating specific attitudes, identifying a set of frames for an issue, and analyzing the content based on a developed codebook\u00a0, it is not only difficult to avoid an issue of subjectivity but also challenging to conduct a large-scale, systematic study that leverages huge online data.\n\nSeveral computational approaches have been proposed to address these issues. They aim to characterize political discourse, for instance, by recognizing political ideology\u00a0 and sentiment\u00a0, or by leveraging established ideas such as the moral foundation theory\u00a0, general media frame\u00a0, and frame-related language\u00a0. Yet, most studies still rely on small sets of predefined ideas and annotated datasets.\n\nTo overcome these limitations, we propose FrameAxis, an unsupervised method for characterizing texts with respect to a variety of *microframes*. Each microframe is operationalized by an antonym pair, such as *legal \u2013 illegal*, *clean \u2013 dirty*, or *fair \u2013 unfair*. The value of antonym pairs in characterizing the text has been repeatedly demonstrated\u00a0. For example, MFT identifies the five basic moral 'axes' using antonyms, such as 'Care\/Harm' and 'Fairness\/Cheating', 'Loyalty\/Betrayal', 'Authority\/Subversion', and 'Purity\/Degradation', as the critical elements for individual judgment\u00a0. 
MFT has been applied to discover politicians' stances on issues\u00a0 and political leaning in partisan news\u00a0, demonstrating the flexibility and interpretability of antonymous semantic axes in characterizing the text. On the other hand, SemAxis\u00a0 and following studies\u00a0 leverage word embeddings to characterize the semantics of a word in different communities or domains (e.g., the different meanings of 'soft' in the context of sports vs. toys) by computing the similarities between the word and a set of predefined antonymous axes (\"semantic axes\"). As in SemAxis, FrameAxis leverages the power of word embeddings, which allows us to capture similarities between a word and a semantic axis.\n\nFor each microframe defined by an antonym pair, FrameAxis is designed to quantitatively tease out two important dimensions of how the microframe is used in the text. *Microframe bias* captures how biased the text is on a certain microframe, and *microframe intensity* shows how actively a certain microframe is used. Both dimensions together offer a nuanced characterization of the text. For example, consider the framing bias and intensity of a text about an immigration issue on the *illegal \u2013 legal* microframe. The framing bias measures how much the text focuses on an 'illegal' perspective of the immigration issue rather than a 'legal' perspective (and vice versa); the framing intensity captures how much the text focuses on an illegal *or* legal perspective of the immigration issue rather than other perspectives, such as segregation (i.e., the *segregated \u2013 desegregated* microframe).\n\nWhile FrameAxis works in an unsupervised manner, it can also benefit from manually curated microframes. When domain experts are already aware of important candidate frames of the text, these frames can be directly formulated as microframes. For the case when FrameAxis works in an unsupervised manner, which would be much more common, we propose methods to identify the most relevant semantic axes based on the values of microframe bias and intensity. Moreover, we also suggest document- and word-level analysis methods that can explain, at different levels of granularity, *how* and *why* the resulting microframe bias and intensity are found.\n\nWe emphasize that FrameAxis cannot replace conventional framing research methods, which involve sophisticated close reading of the text. Also, we do not expect that the microframes can be directly mapped to the frames identified by domain experts. FrameAxis can thus be considered a computational aid that can facilitate systematic exploration of texts and subsequent in-depth analysis.\n\n# Methods\n\nFrameAxis involves four steps: (i) compiling a set of microframes, (ii) computing word contributions to each microframe, (iii) calculating microframe bias and intensity by aggregating the word contributions, and finally (iv) identifying significant microframes by comparison with a null model. We then present how to compute the relevance of microframes to a given corpus.\n\n## Building a Set of Predefined Microframes\n\nFrameAxis defines a microframe as a \"semantic axis\"\u00a0 in a word vector space: a vector from one word to its antonym. Given a pair of antonyms (pole words), $w^+$ (e.g., 'happy') and $w^-$ (e.g., 'sad'), the semantic axis vector is $v_f = v_{w^+} - v_{w^-}$, where $f$ is a microframe or a semantic axis (e.g., *happy \u2013 sad*), and $v_{w^+}$ and $v_{w^-}$ are the corresponding word vectors.
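As a minimal sketch of this construction, the snippet below builds a microframe vector from two pole words and scores another word against it; the tiny three-dimensional vectors are placeholders standing in for real 300-dimensional GloVe embeddings.

```python
# Minimal sketch of a microframe ("semantic axis") and a word's contribution
# to it; the toy vectors below stand in for real pretrained embeddings.
import numpy as np

embeddings = {
    "happy":  np.array([0.9, 0.1, 0.30]),
    "sad":    np.array([-0.8, 0.0, 0.20]),
    "joyful": np.array([0.7, 0.2, 0.25]),
}

def microframe_vector(pos_word, neg_word, emb):
    """v_f = v_{w+} - v_{w-}: the axis pointing from the negative to the positive pole."""
    return emb[pos_word] - emb[neg_word]

def word_contribution(word, axis, emb):
    """c^w_f: cosine similarity between the word vector and the microframe vector."""
    v_w = emb[word]
    return float(v_w @ axis / (np.linalg.norm(v_w) * np.linalg.norm(axis)))

axis = microframe_vector("happy", "sad", embeddings)
print(word_contribution("joyful", axis, embeddings))   # > 0: leans toward 'happy'
```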
To capture nuanced framing, it is crucial to cover a variety of antonym pairs. We extract 1,828 adjective antonym pairs from WordNet\u00a0 and remove 207 that are not present in the GloVe embeddings (840B tokens, 2.2M vocab, 300d vectors)\u00a0. As a result, we use 1,621 antonym pairs as the predefined microframes. As we explained earlier, when potential microframes of the text are known, using only those microframes is also possible.\n\n## Computation of Microframe Bias and Intensity\n\nA microframe $f$ (or semantic axis in ) is defined by a pair of antonyms $w^+$ and $w^-$. Microframe bias and intensity computation are based on the contribution of each word to a microframe. Formally, we define the contribution of a word $w$ to a microframe $f$ as the similarity between the word vector $v_w$ and the microframe vector $v_f$ ($=v_{w^+}-v_{w^-}$). While any similarity measure between two vectors can be used here, for simplicity, we use cosine similarity: $$c^w_f = \\frac {v_w \\cdot v_f}{\\parallel{v_w}\\parallel \\parallel{v_f}\\parallel} \n\\label{eq:cosine_similarity}$$\n\nWe then define microframe bias of a given corpus $t$ on a microframe $f$ as the weighted average of the word's contribution $c^w_f$ to the microframe $f$ for all the words in $t$. This aggregation-based approach shares conceptual roots with the traditional expectancy value model\u00a0, which explains an individual's attitude to an object or an issue. In the model, the individual's attitude is calculated by the weighted sum of the evaluations on attribute $a_i$, whose weight is the salience of the attribute $a_i$ of the object. In FrameAxis, a corpus is represented as a bag of words, and each word is considered an attribute of the corpus. Then, a word's contribution to a microframe can be considered as the evaluation on attribute, and the frequency of the word can be considered as the salience of an attribute. Accordingly, the weighted average of the word's contribution to the microframe $f$ for all the words in $t$ can be mapped onto the individual's attitude toward an object\u2014that is, microframe bias. An analogous framework using a weighted average of each word's score is also proposed for computing the overall valence score of a document\u00a0. Formally, we calculate the microframe bias, $\\mathrm{B}^t_f$, of a text corpus $t$ on a microframe $f$ as follows: $$\\mathrm{B}^t_f = \\frac{\\sum_{w \\in t} (n_w c^w_f) }{\\sum_{w \\in t} n_w}\n\\label{eq:frame_bias}$$ where $n_w$ is the number of occurrences of word $w$ in $t$.\n\nMicroframe intensity captures how strongly a given microframe is used in the document. Namely, given corpus $t$ on a microframe $f$ we measure the second moment of the word contributions $c_f^w$ on the microframe $f$ for all the words in $t$. For instance, if a given document is emotionally charged with many words that strongly express either happiness or sadness, we can say that the *happy \u2013 sad* microframe is heavily used in the document regardless of the microframe bias regarding the *happy \u2013 sad* axis.\n\nFormally, microframe intensity, $\\mathrm{I}^t_f$, of a text corpus $t$ on a microframe $f$ is calculated as follows: $$\\mathrm{I}^t_f = \\frac{\\sum_{w \\in t} n_w (c^w_f - \\mathrm{B}^T_f)^2}{\\sum_{w \\in t} n_w}$$ where $\\mathrm{B}^T_f$ is the baseline microframe bias of the entire text corpus $T$ on a microframe $f$ for computing the second moment. 
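Both aggregations can be written down directly from the equations above. The sketch below uses toy word counts and per-word contributions, and the baseline bias term is an assumed placeholder rather than a value computed from a real background corpus.

```python
# Minimal sketch of the microframe bias and intensity aggregations described above.
# Word counts, contributions, and the baseline bias are toy/assumed values.
from collections import Counter

def microframe_bias(word_counts, contribution):
    """Weighted average of per-word contributions c^w_f, weighted by counts n_w."""
    total = sum(word_counts.values())
    return sum(n * contribution[w] for w, n in word_counts.items()) / total

def microframe_intensity(word_counts, contribution, baseline_bias):
    """Weighted second moment of c^w_f around the corpus-wide baseline bias."""
    total = sum(word_counts.values())
    return sum(n * (contribution[w] - baseline_bias) ** 2
               for w, n in word_counts.items()) / total

contribution = {"delicious": 0.40, "bland": -0.35, "table": 0.02}  # c^w_f values
counts = Counter({"delicious": 5, "bland": 1, "table": 3})         # n_w values
baseline = 0.05                                                    # assumed B^T_f

print(microframe_bias(counts, contribution))
print(microframe_intensity(counts, contribution, baseline))
```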
As the squared term is included in the equation, words that are far from the baseline microframe bias\u2014and close to either of the poles\u2014contribute strongly to the microframe intensity.\n\nWe present an illustration of microframe intensity and bias in Figure 1(A), where arrows represent the vectors of words that appear in a corpus, and blue and orange circles represent the two pole word vectors, which define the $w^+$\u2013$w^-$ microframe. If words that are semantically closer to one pole are frequently used in a corpus, the corpus has a high microframe bias toward that pole and a high microframe intensity on the $w^+$\u2013$w^-$ microframe (top right). By contrast, if words that are semantically closer to both poles are frequently used, the overall microframe bias becomes low by averaging out the biases toward both poles, but the microframe intensity stays high because the $w^+$\u2013$w^-$ microframe is actively used (bottom right).\n\n## Handling Non-informative Topic Words\n\nIt is known that pretrained word embeddings have multiple biases\u00a0. Although some de-biasing techniques have been proposed\u00a0, those biases are not completely eliminated\u00a0. For example, the word 'food' within a GloVe pretrained embedding space is much closer to 'savory' (cosine similarity: 0.4321) than 'unsavory' (cosine similarity: 0.1561). As this bias could influence the framing bias and intensity due to the high frequency of 'food' in the text of reviews on food, we remove the word from the analysis of the reviews on food.\n\nFrameAxis computes the word-level framing bias (intensity) shift to help this process, which we explain in the 'Explainability' section. Through the word-level shift, FrameAxis users can easily check whether words that should be neutral on a certain semantic axis are indeed located as neutral within a given embedding space.\n\nWhile this requires manual effort, one shortcut is to check the topic word first. For example, when FrameAxis is applied to reviews on movies, the word 'movie' could be considered first because 'movie' should be neutral and non-informative. Also, as reviews on movies are likely to contain the word 'movie' multiple times, even a small contribution of 'movie' to a given microframe could be amplified by its high frequency of occurrence, $n_w$ in Equations (2) and (3). After the manual confirmation, those words are replaced with $<$UNK$>$ tokens and are not considered in the computation of framing bias and intensity.\n\nIn this work, we also removed topic words as follows: in the restaurant review dataset, the word indicating the aspect (i.e., *ambience, food, price,* and *service*) is replaced with $<$UNK$>$ tokens. In the AllSides political news dataset, we consider the issues defined by AllSides as topic words, such as *abortion, immigration, elections, education, polarization*, and so on.\n\n## Identifying Statistically Significant Microframes\n\nThe microframe bias and intensity of a target corpus can be interpreted with respect to the background distribution for statistical significance. We compute microframe bias and intensity on the microframe $f$ from a bootstrapped sample $s$ of the entire corpus $T$, denoted by $\\mathrm{B}^{NULL_s}_f$ and $\\mathrm{I}^{NULL_s}_f$, respectively.
We set the size of the sample $s$ to be equal to that of the target corpus $t$.\n\nThen, the difference between $\\mathrm{B}^{NULL_s}_f$ and $\\mathrm{B}^{t}_f$ and that between $\\mathrm{I}^{NULL_s}_f$ and $\\mathrm{I}^{t}_f$ show how likely it is that the microframe bias and intensity observed in the target corpus could be obtained by chance. The statistical significance of the observation is calculated by performing two-tailed tests on the $N$ bootstrap samples. By setting a threshold $p$-value, we identify the significant microframes. In this work, we use $N=1,000$ and $p=0.05$.\n\nWe can also compute the effect size ($|\\eta|$, which is the difference between the observed value and the sample mean) for microframe $f$: $$\\eta^{\\mathrm{B}}_f = \\mathrm{B}^{t}_f - \\mathrm{B}^{NULL}_f = \\mathrm{B}^{t}_f - \\frac {\\sum_i^N \\mathrm{B}^{NULL_{s_i}}_f}{N}\n\\label{eq:bias}$$\n\n$$\\eta^{\\mathrm{I}}_f = \\mathrm{I}^{t}_f - \\mathrm{I}^{NULL}_f = \\mathrm{I}^{t}_f - \\frac {\\sum_i^N \\mathrm{I}^{NULL_{s_i}}_f}{N}\n\\label{eq:intensity}$$ We can identify the top $M$ significant microframes in terms of the microframe bias (intensity) as the $M$ microframes with the largest $|\\eta^{\\mathrm{B}}|$ ($|\\eta^{\\mathrm{I}}|$).\n\n## Microframe Bias and Intensity Shift per Word\n\nWe define the word-level microframe bias and intensity shift in a given corpus $t$ as follows:\n\n$$\\mathrm{S}^t_w(\\mathrm{B}_{f}) = \\frac{n_w c^w_f}{\\sum_{w \\in t} n_w}\n\\label{eq:shift_bias}$$\n\n$$\\mathrm{S}^t_w(\\mathrm{I}_f) = \\frac{n_w (c^w_f - \\mathrm{B}^T_f)^2}{\\sum_{w \\in t} n_w}\n\\label{eq:shift_intensity}$$ which show how a given word ($w$) shifts the microframe bias and intensity by considering both the word's contribution to the microframe ($c^w_f$) and the number of its appearances in the target corpus $t$ ($n_w$). In this work, both shifts are compared to those from the background corpus.\n\n## Contextual Relevance of Microframes to a Given Corpus\n\nNot all predefined microframes are necessarily meaningful for a given corpus. While we provide a method to compute the statistical significance of each microframe for a given corpus, filtering out irrelevant microframes in advance can reduce the computation cost. We propose two methods to compute the relevance of microframes to a given corpus: an embedding-based and a language model-based approach.\n\nFirst, the embedding-based approach calculates the relevance of a microframe as the cosine similarity between the microframe and a primary topic of the corpus within a word vector space. A topic is represented as a set of words related to the primary topic of the corpus. We use $\\tau=\\{w_{t1}, w_{t2}, w_{t3}, \\ldots, w_{tn}\\}$ to represent a set of topic words. The cosine similarity between a microframe $f$ defined by two pole words $w^+$ and $w^-$ and a set of topic words $\\tau$ can be represented as the average cosine similarity between the pole word vectors ($v_{w^+}$ and $v_{w^-}$) and a topic word vector ($v_{w_{ti}}$): $$r^t_f = \\frac{1}{|\\tau|}\\sum_{w_{ti}\\in\\tau}\\frac{\\text{(relevance of $w^+$ to $w_{ti}$)} + \\text{(relevance of $w^-$ to $w_{ti}$)}}{2} = \\frac{1}{|\\tau|}\\sum_{w_{ti}\\in\\tau} \\frac{\\frac {v_{w_{ti}} \\cdot v_{w^+}}{\\parallel{v_{w_{ti}}}\\parallel \\parallel{v_{w^+}}\\parallel} + \\frac {v_{w_{ti}} \\cdot v_{w^-}}{\\parallel{v_{w_{ti}}}\\parallel \\parallel{v_{w^-}}\\parallel}}{2}$$\n\nSecond, the language model-based approach calculates the relevance of a microframe as the perplexity of a template-filled sentence.
For example, consider two templates as follows:\n\n- T1(topic word, pole word): {topic word} *is* {pole word}.\n\n- T2(topic word, pole word): {topic word} *are* {pole word}.\n\nIf a topic word is 'healthcare' and a microframe is *essential \u2013 inessential*, four sentences, which are 2 for each pole word, can be generated. Following a previous method\u00a0, we use a pre-trained OpenAI GPT model to compute the perplexity score.\n\nWe take a lower perplexity score for each pole word because a lower perplexity score should be from the sentence with a correct subject-verb pair (i.e., singular-singular or plural-plural). In this stage, for instance, we take 'healthcare is essential' and 'healthcare is inessential'. Then, we sum two perplexity scores from one pole and the other pole words and call it frame relevance of the corresponding microframe to the topic.\n\nAccording to the corpus and topic, a more complex template, such as \"A (an) {topic word} issue has a {pole word} perspective.\" might work better. More appropriate template sentences can be built with good understanding of the corpus and topic.\n\n## Human Evaluation\n\nWe perform human evaluations through Amazon Mechanical Turk (MTurk). For microframe bias, we prepare the top 10 significant microframes ranked by the effect size (i.e., answer set) and randomly selected 10 microframes with an arbitrary microframe bias (i.e., random set) for each pair of aspect and sentiment (e.g., *positive* reviews about *ambience*). As it is hard to catch subtle differences of the magnitude of microframe biases and intensity through crowdsourcing, we highlight microframe bias on each microframe with bold-faced instead of its numeric value.\n\nAs a unit of question-and-answer tasks in MTurk (Human Intelligence Task \\[HIT\\]), we ask \"Which set of antonym pairs do better characterize a *positive* restaurant review on *ambience*? (A word on the right side of each pair (in bold) is associated with a *positive* restaurant review on *ambience*.)\" The italic text is changed according to every aspect and sentiment. We note that, for every HIT, the order of microframes in both sets is shuffled. The location (i.e., top or bottom) of the answer set is also randomly chosen to avoid unexpected biases of respondents.\n\nFor microframe intensity, we prepare the top 10 significant microframes (i.e., answer set) and randomly selected 10 microframes (i.e., random set) for each pair of aspect and sentiment. The top 10 microframes are chosen by the effect size, computed by Equation (4) and (5), among the significant microframes. We then ask \"Which set of antonym pairs do better characterize a *positive* restaurant review on *service*?\" The rest of the procedure is the same as the framing bias experiment.\n\nFor the quality control of crowd-sourced answers, we recruit workers who (1) live in the U.S., (2) have more than 1,000 approved HITs, and (3) achieve 95% of approval rates. Also, we allow a worker to answer up to 10 HITs. We recruit 15 workers for each (aspect, sentiment) pair. We pay 0.02 USD for each HIT.\n\n# Results\n\n## Microframe in Restaurant Reviews\n\nTo validate the concept of microframe bias and intensity, we examine the SemEval 2014 task 4 dataset, which is a restaurant review dataset where reviews are grouped by aspects (food, ambience, service, and price) and sentiment (positive and negative). 
This dataset provides an ideal playground because i) restaurant reviews tend to have a clear bias\u2014whether the experience was good or bad\u2014which can be used as a benchmark for framing bias, and ii) the aspect labels also help us perform the fine-grained analysis and compare microframes used for different aspects of restaurant reviews.\n\nWe compute microframe bias and intensity for 1,621 predefined microframes, which are compiled from WordNet\u00a0 (See *Methods* for detail), for every review divided by aspects and sentiments. The top two microframes with the highest microframe intensity are shown in Figure\u00a01. For each highest-intensity microframe, we display the microframe bias that is computed through a comparison between the positive (negative) reviews and the null model\u2014bootstrapped samples from the whole corpus (See *Methods*). The highest-intensity microframes are indeed relevant to the corresponding aspect: *hospitable \u2013 inhospitable* and *best \u2013 worst* for service, *cheap \u2013 expensive* and *pointless \u2013 pointed* for price, *savory \u2013 unsavory* and *appealing \u2013 unappealing* for food, and *active \u2013 quiet* and *loud \u2013 soft* for ambience. At the same time, it is clear that positive and negative reviews tend to focus on distinct perspectives of the experience. Furthermore, observed microframe biases are consistent with the sentiment labels; microframe biases in positive reviews are leaning toward the positive side of the microframes, and those in negative reviews toward the negative side.\n\nIn other words, FrameAxis is able to automatically discover that positive reviews tend to characterize service as *hospitable*, price as *cheap*, food as *savory*, and ambience is *tasteful*, and negative reviews describe service as *worst*, price as *pointless*, food as *unappealing*, and ambience as *loud* in an unsupervised manner. Then, how and why do these microframes get those bias and intensity? In the next section, we propose two tools to provide explainability with different granularity behind microframe bias and intensity.\n\n## Explainability\n\nTo understand computed microframe bias and intensity better, we propose two methods: i) word-level microframe bias (intensity) shift, and ii) document-level microframe bias (intensity) spectrum.\n\nThe word-level impact analysis has been widely used for explaining results of the text analysis\u00a0. Similarly, we can compute the word-level microframe shift that captures how each word in a target corpus $t$ influences the resulting microframe bias (intensity) by aggregating contributions of the word $w$ for microframe bias (intensity) on microframe $f$ (See *Methods*). It is computed by comparison with its contribution to a background corpus. For instance, even though $w$ is a word that conveys positive connotations, its contribution to a target corpus can become negative if its appearance in $t$ is lower than that in the background corpus.\n\nFigure\u00a02(A) shows the top 10 words with the highest microframe bias shift for the two high-intensity microframes from the 'food' aspect. On the left, the green bars show how each word in the positive reviews shifts the microframe bias toward either savory or unsavory on the *savory \u2013 unsavory* microframe, and the gray bars show how the same word in the background corpus (non-positive reviews) shifts the microframe bias. 
The difference between the two shifts, which is represented as the orange bars, shows the effect of each word for microframe bias on the *savory \u2013 unsavory* microframe in positive reviews. The same word's total contribution differs due to the frequency because its contribution on the axis $c^w_f$ is the same. For instance, the word 'delicious' appears 80 times in positive reviews, and the normalized term frequency is $0.0123$. By contrast, 'delicious' appears only three times in non-positive reviews, and the normalized term frequency is $0.0010$. In short, the normalized frequency of the word 'delicious' is an order of magnitude higher in positive reviews than non-positive reviews, and thus the difference strongly shifts the microframe bias toward 'savory' on the *savory \u2013 unsavory* microframe. A series of the words describing positive perspectives of food, such as *delicious, fresh, tasty, great, good, yummy,* and *excellent,* appears as the top words with the highest microframe bias shifts toward savory on the *savory \u2013 unsavory* microframe.\n\nSimilarly, on the right in Figure\u00a02(A), the green and the gray bars show how each word in the negative reviews and background corpus (non-negative reviews) shifts the microframe bias on the *appealing \u2013 unappealing* microframe, respectively, and the orange bar shows the difference between the two shifts. The word 'great' in the negative reviews shifts microframe bias toward appealing less than that in the background corpus; the word 'great' less frequently appears in negative reviews (0.0029) than in background corpus (0.0193). Consequently, the resulting microframe bias attributed from the word 'great' in the negative reviews is toward 'unappealing' on the *appealing \u2013 unappealing* microframe. In addition, words describing negative perspectives of food, such as *soggy, bland, tasteless, horrible,* and *inedible*, show the orange bars heading to unappealing rather than appealing side on the *appealing \u2013 unappealing* microframe. Note that the pole words for these microframes do not appear in the top word lists. These microframes are found because they best capture\u2014according to word embedding\u2014these words *collectively*.\n\nFigure\u00a02(C) shows the top 10 words with the highest framing intensity shift for the two high-intensity microframes from the 'food' aspect. On the right, compared to Figure\u00a02(A), more words reflecting the nature of the *unappealing \u2013 appealing* microframe, such as *tasteless, horrible, inedible, oily, undercooked, disgusting, flavorless*, and *watery,* are shown as top words in terms of the microframe intensity shift. As we mentioned earlier, we confirm that the words that are far from the baseline framing bias\u2014and close to either of the poles\u2014contribute strongly to the microframe intensity.\n\nFigure\u00a02(B) shows another example of word-level framing bias shift in the reviews about service. On the left, top words that shift microframe bias toward 'hospitable', such as *friendly, attentive, great, excellent, nice, helpful, accommodating, wonderful,* and *prompt*, are captured from the positive reviews. On the right, top words that shift microframe bias toward 'worst', such as *rude, horrible, terrible, awful, bad, wrong,* and *pathetic*, are found. 
Similar to the top words in negative reviews about food, some words that shift framing bias toward 'best' less frequently appear in the negative reviews than the background corpus, making their impact on microframe bias be farther from 'best' on the *best \u2013 worst* microframe.\n\nAs the word-level microframe shift diagram captures, what FrameAxis detects is closely linked to the abundance of certain words. Does it mean that our results merely reproduce what simpler methods for detecting overrepresented words perform? To answer this question, we compare the log odds ratio with informative Dirichlet prior\u00a0. With the log odds ratio, *service*, *friendly*, *staff*, *attentive*, *prompt*, *fast*, *helpful*, *owner*, and *always* are found to be overrepresented words. This list of overrepresented words is always the *same* when comparing given corpora because it only considers their frequencies of appearances in the corpora. By contrast, FrameAxis identifies the most relevant words for each microframe by considering their appearances and their contributions to the microframe, providing richer interpretability. Even though a word appears many times in a given corpus, it does not shift the microframe bias or intensity if the word is irrelevant to the microframe.\n\nFigure\u00a02(D) shows the top 10 words with the highest microframe intensity shift for the two high-intensity microframes from the 'service' aspect. On the left, compared to Figure\u00a02(B), more words describing the *inhospitable \u2013 hospitable* microframe, such as *wonderful* and *gracious*, are included. On the right, compared to Figure\u00a02(B), more words reflecting the nature of the *worst \u2013 best* microframe, such as *bad, pathetic, lousy, wrong, horrendous,* and *poor* are shown as top words in terms of the framing intensity shift.\n\nThe second way to provide explainability is computing a document-level framing bias (intensity) and visualizing them as a form of a microframe bias (intensity) spectrum. Figure\u00a02(E) shows microframe bias spectra of positive and negative reviews. Each blue and red line corresponds to an individual positive and negative review, respectively. Here we choose microframes that show large differences of microframe bias between the positive and negative reviews as well as high intensities by using the following procedure. We rank microframes based on average microframe intensity across the reviews and the absolute differences of microframe bias between the positive and negative reviews, sum both ranks, and pick the microframes with the lowest rank-sum for each aspect. In contrast to the corpus-level microframe analysis or word-level microframe shift, this document-level microframe analysis provides a mesoscale view showing where each document locates on the microframe bias spectrum.\n\n## Microframe Bias and Intensity Separation\n\nAs we show that FrameAxis reasonably captures relevant microframe bias and intensity from positive reviews and negative reviews, now we focus on how the most important dimension of positive and negative reviews\u2014positive sentiment and negative sentiment\u2014is captured by FrameAxis. Consider that there is a microframe that can be mapped into a sentiment of reviews. 
Then, if FrameAxis works correctly, microframe biases on the corresponding microframe captured from positive reviews and negative reviews should be significantly different.\n\nFormally, we define a *microframe bias separation* as the difference between microframe bias of positive reviews and that of negative reviews on microframe $f$, which is denoted by $\\Delta^{pos-neg}_{\\mathrm{B}_f}$ = (Microframe bias on microframe $f$ of positive reviews) $-$ (Microframe bias on microframe $f$ of negative reviews) = $\\mathrm{B}^{pos}_f - \\mathrm{B}^{neg}_f$. Similarly, a *microframe intensity separation* can be defined as following: $\\Delta^{pos-neg}_{\\mathrm{I}_f}$ = (Microframe intensity on microframe $f$ of positive reviews) - (Microframe intensity on microframe $f$ of negative reviews) = $\\mathrm{I}^{pos}_f - \\mathrm{I}^{neg}_f$.\n\nFigure\u00a03 shows the cumulative density function (CDF) of the magnitude of microframe bias separations, $|\\Delta^{pos-neg}_{\\mathrm{B}_f}|$, for 1,621 different microframes for each aspect. Given that the *bad \u2013 good* axis is a good proxy for sentiment\u00a0, the *bad \u2013 good* microframe would have a large bias separation *if* the microframe bias on that microframe captures the sentiment correctly. Indeed, the *bad \u2013 good* microframe shows a large separation \u2013 larger than 99.91 percentile across all aspects (1.5th rank on average). For comparison, the *irreligious \u2013 religious* microframe does not separate positive and negative restaurant reviews well (19.88 percentile, 1,298.8th rank on average). The large microframe bias separation between the microframe bias of positive reviews and that of negative reviews supports that the *bad \u2013 good* microframe\u2014and thus FrameAxis\u2014captures the most salient dimension of the text.\n\nUsing the two separation measures, we can compare two corpora with respect to both microframe intensity and bias. We find that the absolute values of both separations, $|\\Delta^{pos-neg}_{\\mathrm{I}_f}|$ and $|\\Delta^{pos-neg}_{\\mathrm{B}_f}|$, are positively correlated across the four aspects (Spearman's correlation $\\rho$ = 0.379 (ambience), 0.471 (food), 0.228 (price), and 0.304 (service)), indicating that when a certain microframe is more heavily used, it also tends to be more strongly biased.\n\nTo illustrate a detailed picture, we show microframe intensity and bias separation of each microframe in Figure\u00a04. Microframes above the gray horizontal line have higher microframe intensity in positive reviews than negative reviews. We indicate the microframe bias with bold face. **Word**$^+$ indicates that positive reviews are biased toward the pole, and **word**$^-$ means the opposite (negative reviews). For instance, at the top, the label 'sour-**sweet$^+$**' indicates that 'sweetness' of the ambience is highlighted in positive reviews, and the label '**loud$^-$**-soft' indicates that 'loudness' of the ambience frequently appears in negative reviews. For clarity, the labels for microframes are written for top 3 and bottom 3 microframes of $\\Delta^{pos-neg}_{\\mathrm{I}_f}$ and $\\Delta^{pos-neg}_{\\mathrm{B}_f}$ each.\n\nThis characterization provides a comprehensive view of how microframes are employed in positive and negative reviews for highlighting different perspectives. 
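A minimal sketch of the two separation measures defined above, assuming per-corpus bias and intensity values have already been computed for each microframe; the numbers below are placeholders, not values from the dataset.

```python
# Minimal sketch of bias and intensity separations (positive minus negative),
# using placeholder per-corpus values keyed by microframe.
def separations(bias_pos, bias_neg, intensity_pos, intensity_neg):
    """Return Delta_B and Delta_I for every microframe present in both corpora."""
    frames = bias_pos.keys() & bias_neg.keys()
    delta_b = {f: bias_pos[f] - bias_neg[f] for f in frames}
    delta_i = {f: intensity_pos[f] - intensity_neg[f] for f in frames}
    return delta_b, delta_i

bias_pos = {"bad-good": 0.21, "irreligious-religious": 0.02}
bias_neg = {"bad-good": -0.18, "irreligious-religious": 0.01}
int_pos  = {"bad-good": 0.09, "irreligious-religious": 0.01}
int_neg  = {"bad-good": 0.07, "irreligious-religious": 0.01}

delta_b, delta_i = separations(bias_pos, bias_neg, int_pos, int_neg)
# Rank microframes by |Delta_B|, as in the CDF comparison above.
print(sorted(delta_b, key=lambda f: abs(delta_b[f]), reverse=True))
```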
For instance, when people write reviews about the price of a restaurant, *incredible, nice, good, cheap, incomparable, best,* and *pleasant* perspectives are highlighted in positive reviews, but *judgmental, unoriginal, pointless,* and *unnecessary* are highlighted in negative reviews. From the document-level framing spectrum analysis, the strongest 'judgmental' and 'unnecessary' microframe biases are found in the reviews about the reasoning behind pricing, such as 'Somewhat pricey but what the heck.'\n\nWhile some generic microframes, such as *incredible \u2013 credible* or *worst \u2013 best*, are commonly found across different aspects, aspect-specific microframes, such as *uncrowded \u2013 crowded* or *inhospitable \u2013 hospitable*, are found in the reviews about the corresponding aspect. Most of the microframe biases in the positive reviews convey positive connotations, and those in the negative reviews convey negative connotations.\n\n## Human Evaluation\n\nWe perform human evaluations through Amazon Mechanical Turk (MTurk). Similar to the word intrusion test used in evaluating topic modeling\u00a0, we assess the quality of the identified framing bias and intensity with human raters.\n\n| (Sentiment) Aspect | Accuracy for framing bias | Accuracy for framing intensity |\n|:---|:--:|:--:|\n| (+) Service | 1.000 | 0.867 |\n| (+) Price | 0.867 | 0.733 |\n| (+) Food | 0.933 | 0.800 |\n| (+) Ambience | 1.000 | 0.600 |\n| ($-$) Service | 0.867 | 0.867 |\n| ($-$) Price | 0.667 | 0.667 |\n| ($-$) Food | 0.867 | 0.733 |\n| ($-$) Ambience | 0.800 | 0.733 |\n| Average | 0.875 | 0.750 |\n\nHuman evaluation for significant microframes with the top 10 highest framing bias and intensity.\n\nFor microframe bias, we prepare the top 10 significant microframes with the highest microframe bias (i.e., the answer set) and 10 randomly selected microframes with an arbitrary bias (i.e., the random set) for each pair of aspect and sentiment (e.g., *positive* reviews about *ambience*). As it is hard to catch subtle differences in the magnitude of microframe bias and intensity through crowdsourcing, we highlight the microframe bias of each microframe in boldface, as in Figure\u00a04, instead of showing its numeric value. We then ask which set of microframes with highlighted biases better characterizes a given corpus, such as 'positive' reviews on 'ambience'. See *Methods* for details.\n\nFor microframe intensity, we prepare the top 10 significant microframes (i.e., the answer set) and 10 randomly selected microframes (i.e., the random set) for each pair of aspect and sentiment. We then ask which set of microframes better characterizes a given corpus, such as 'positive' reviews on 'ambience'.\n\nTable 1 shows the fraction of correct choices by workers (i.e., choosing the answer set). The overall average accuracy is 87.5% and 75.0% for significant microframes with the highest microframe bias and intensity, respectively. For microframe bias, in (+) Service and (+) Ambience, human raters chose the answer sets correctly without errors. By contrast, for microframe intensity, some sets show a relatively lower performance.
We manually check them for error analysis and find that workers tended to choose the random set when generic microframes, such as *positive \u2013 negative*, appeared in the random set, owing to their ease of interpretation.\n\n### Contextually Relevant Microframes\n\nAs we mentioned in *Methods*, in addition to automatically identified microframes that are strongly expressed in a corpus, we can discover microframes that are relevant to a given topic without examining the corpus. We use each aspect\u2014food, price, ambience, and service\u2014as topic words. By using the embedding-based approach, we find *healthy \u2013 unhealthy* for food, *cheap \u2013 expensive* for price, *noisy \u2013 quiet* for ambience, and *private \u2013 public* for service as the most relevant microframes. It is also possible to use different words as topic words for identifying relevant microframes. For example, one might be curious about how people think about waiters specifically among the reviews on service. In this case, the most relevant microframes become *impolite \u2013 polite* and *attentive \u2013 inattentive* when using 'waiter' as a topic word. Then, the computed microframe bias and intensity show how these microframes are used in a given corpus.\n\n## Microframe in Political News\n\nAs a demonstration of another practical application of FrameAxis, we examine news media. The crucial role of media framing in public discourse on social issues has been widely recognized\u00a0. We show that FrameAxis can be used as an effective tool to characterize news on different issues through microframe bias and intensity. We collect 50,073 news headlines from 572 liberal and conservative media outlets from AllSides\u00a0. These headlines fall into one of the predefined issues defined by AllSides, such as *abortion, immigration, elections, education, polarization,* and so on. We examine framing bias and intensity from the headlines for a specific issue, considering all three aforementioned scenarios.\n\nThe first scenario is when *domain experts already know which microframes are worth examining*. For example, news about immigration can be approached through an *illegal \u2013 legal* framing\u00a0. In this case, FrameAxis can reveal how strong the 'illegal vs. legal' media framing is and which position a media outlet takes through the microframe bias and intensity on the *illegal \u2013 legal* microframe.\n\nFigure\u00a05(A)-(C) show how FrameAxis can capture microframes used in news reporting at different levels of granularity: (A) a media-level microframe bias-intensity map, (B) a word-level microframe bias shift, and (C) a document (news headline)-level microframe bias spectrum. Figure\u00a05(A) exhibits the average microframe intensity and bias of individual media, which we call a microframe bias-intensity map. To reveal the general tendency of conservative and liberal media's microframes, we also plot their means on the map. For clarity, we filter out media that have fewer than 20 news headlines about immigration. Conservative media have a higher microframe intensity than liberal media, meaning that they invoke the *illegal \u2013 legal* microframe of the immigration issue more frequently than liberal media do. In addition, conservative media have a microframe bias that is closer to illegal than legal compared to liberal media, meaning that they report more on the illegality of the immigration issue.
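A minimal sketch of how such a media-level bias-intensity map can be assembled once per-headline bias and intensity scores are available; the column names and values here are illustrative assumptions rather than the released FrameAxis pipeline.

```python
# Minimal sketch: aggregate per-headline microframe scores into a media-level
# bias-intensity map. The data frame contents are placeholder assumptions.
import pandas as pd

headlines = pd.DataFrame({
    "outlet":    ["A", "A", "B", "B"],
    "leaning":   ["conservative", "conservative", "liberal", "liberal"],
    "bias":      [0.12, 0.08, -0.03, 0.01],    # bias on the illegal-legal microframe
    "intensity": [0.05, 0.06, 0.02, 0.03],
})

# One point per outlet on the map ...
per_outlet = headlines.groupby(["leaning", "outlet"])[["bias", "intensity"]].mean()
# ... plus the mean per political leaning, plotted on the same map.
per_leaning = headlines.groupby("leaning")[["bias", "intensity"]].mean()
print(per_outlet)
print(per_leaning)
```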
In summary, the media-level microframe bias-intensity map presents the framing patterns of news media on the immigration issue: conservative media report on illegal perspectives of the issue more than on legal perspectives, and they invoke this microframe more frequently than liberal media do.\n\nFigure\u00a05(B) shows the top words that contribute the most to the microframe bias on the *illegal \u2013 legal* microframe in conservative and liberal media. Conservative media use the word 'illegal' much more than the background corpus (i.e., liberal and centrist media). Also, they use the word 'amnesty' more frequently, for example, within the context of 'Another Court Strikes Down Obama's Executive Amnesty (Townhall)', and 'patrol' within the context of 'Border Patrol surge as illegal immigrants get more violent (Washington Times)'. Liberal media mention the word 'illegal' much less than the background corpus and the words 'reform', 'opinion', and 'legal' more.\n\nFigure\u00a05(C) shows a document (news headline)-level microframe bias spectrum on the *illegal \u2013 legal* microframe. The news headline with the highest microframe bias toward 'illegal' is 'U.S. to Stop Deporting Some Illegal Immigrants (Wall Street Journal - News)', and the news headline with the highest microframe bias toward 'legal' is 'Why Trump's Immigration Order is Legal and Constitutional (National Review)'. Compared to the microframe bias-intensity map, which shows how individual media use microframes, the microframe bias spectrum makes headline-level microframe biases visible, helping us understand which news headlines express which bias.\n\nConsidering the second scenario, where we use microframe intensity and bias separation (or microframe intensity and bias compared to a null model) to explore potential microframes in a corpus, we examine the *relaxed \u2013 tense* microframe, which displays a strong intensity separation in news on gun control and gun rights issues. Figure\u00a05(D)-(F) show the media-, word-, and document-level microframes found by FrameAxis. The average microframe intensity on the *relaxed \u2013 tense* microframe is higher in liberal media than in conservative media, and the microframe bias of liberal media is toward 'tense' compared to conservative media. Word-level microframe shift diagrams clearly show that liberal media focus much more on the devastating aspects of gun control, whereas conservative media do not evoke those strong images but focus more on owners' rights. Figure\u00a05(F) shows the news headlines that are closest to 'tense.' The key advantage of employing word embeddings in FrameAxis is again demonstrated here; of the two headlines in Figure\u00a05(F), neither contains the word 'tense,' but other words, such as violence or gunfight, deliver the microframe bias toward 'tense.' Although *relaxed \u2013 tense* may not be the kind of microframe that is considered in traditional framing analysis, it aptly captures the distinct depictions of the issue in the media, opening up doors to further analysis.\n\nThe microframe bias-intensity map correctly captures the political leaning of news media, as in Figure\u00a06. Figure\u00a06(A) and (C) are the microframe bias-intensity maps of news on the Democratic party, and Figure\u00a06(B) and (D) are those on the Republican party. We test the *bad \u2013 good* microframe for Figure\u00a06(A) and (B) and the *irrational \u2013 rational* microframe for Figure\u00a06(C) and (D).
The captured bias fits the intuition; Liberal news media show microframe bias toward 'good' and 'rational' when they report the Democratic party, and conservative news media show the same bias when they report the Republican party. Interestingly, their microframe intensity becomes higher when they highlight negative perspectives of those microframes, such as 'bad' and 'irrational.'\n\nAs we mentioned earlier, we can also discover relevant microframes given a topic (See *Methods*). As an example, we compute the most relevant microframes given 'abortion' as a topic in Figure\u00a07. The most relevant microframes to news about 'abortion' indeed capture key dimensions in the abortion debate\u00a0. Of course, it is not guaranteed that conservative and liberal media differently use those microframes. The average microframe biases of conservative and liberal media on the four microframes are indeed not statistically different ($p > 0.1$). However, modeling contextual relevance provides a capability to discover relevant microframes to a given corpus easily even *before* examining the actual data.\n\n# Discussion\n\nIn this work, we propose an unsupervised method for characterizing the text by using word embeddings. We demonstrated that FrameAxis can successfully characterize the text through microframe bias and intensity. How biased the text is on a certain microframe (microframe bias) and how actively a certain microframe is used (microframe intensity) provide a nuanced characterization of the text. Particularly, we showed that FrameAxis can support different scenarios: when an important microframe is known (e.g., the *illegal vs. legal* microframe on an immigration issue), when exploration of potential microframes is needed, and when contextually relevant microframes are automatically discovered. The explainability through a document-level microframe spectrum and word-level microframe shift diagram is useful to understand how and why the resulting microframe bias and intensity are captured. They make FrameAxis transparent and help to minimize the risk of spurious correlation that might be embedded in pretrained word embeddings.\n\nWe applied FrameAxis to casual texts (i.e., restaurant reviews) and political texts (i.e., political news). In addition to a rich set of predefined microframes, FrameAxis can compute microframe bias and intensity on an arbitrary microframe, so long as it is defined by two (antonymous) words. This flexibility provides a great opportunity to study microframes in diverse domains. The existing domain knowledge can be harmoniously combined with FrameAxis by guiding candidate microframes to test and fine-tuning automatically discovered microframes.\n\nSome limitations should be noted. First, word embedding models contain various biases. For example, the word 'immigrant' is closer to 'illegal' than 'legal' (0.463 vs. 0.362) in the GloVe word embedding. Indeed, multiple biases, such as gender or racial bias, in pretrained word embeddings have been documented\u00a0. While those biases provide an opportunity to study prejudices and stereotypes in our society over time\u00a0, it is also possible to capture incorrect microframe bias due to the bias in the word embeddings (or language models). While several approaches are proposed to debias word embeddings\u00a0, they have failed to remove those biases completely\u00a0. 
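A quick audit of this kind of embedding bias can be run before any analysis. The sketch below uses gensim's downloader with a smaller GloVe model as a stand-in for the 840B-token vectors used in this work.

```python
# Minimal sketch of a pre-analysis embedding-bias check: how close does a
# supposedly neutral word sit to each pole of a microframe?
import gensim.downloader as api

emb = api.load("glove-wiki-gigaword-100")   # smaller stand-in for the 840B GloVe

word = "immigrant"
for pole in ("illegal", "legal"):
    print(pole, round(float(emb.similarity(word, pole)), 3))
# A clearly larger similarity to one pole flags a bias to keep in mind, or a
# word to replace with an <UNK> token before computing microframe bias.
```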
Nevertheless, since FrameAxis does not depend on a specific pretrained word embedding model, it can fully benefit from newly developed word embeddings that minimize unexpected biases. When using word embeddings with known biases, it may be possible to minimize the effects of such biases through an iterative process as follows: (1) computing microframe bias and intensity; (2) finding the top $N$ words that shift the microframe bias and intensity; (3) identifying words that reflect stereotypic biases; and (4) replacing those words with an $<$UNK$>$ token, which is out of vocabulary and thus is not included in the microframe bias and intensity computation, and repeating this process of refinement. The iteration ends when no stereotypical words are found in step (3). Although some stereotypic biases may remain beyond the top $N$ words, depending on $N$, their contribution to the microframe shift may be sufficiently suppressed.\n\nSecond, the dictionary-based approach behind the microframe bias and intensity computation has an inherent limitation. Figure\u00a02(E) reveals this limitation. While 'There was no ambience' conveys a negative connotation, its microframe bias is computed as closer to beautiful than ugly. This error can potentially be addressed by sophisticated end-to-end approaches that model representations of sentences, such as Sentence Transformers\u00a0. While we use a dictionary-based approach for its simplicity and interpretability in this work, FrameAxis can support other methods, including Sentence Transformers, in computing microframe bias and intensity as well. As a proof of concept, we use Sentence Transformers to handle the case in Figure\u00a02(E), in which 'there was no ambience' has a framing bias closer to 'beautiful' than 'ugly'. We compute the representations of three sentences: 'there was no ambience', 'ambience is beautiful', and 'ambience is ugly'. We find that the similarity between 'there was no ambience' and 'ambience is beautiful' (0.3209) is less than that between 'there was no ambience' and 'ambience is ugly' (0.6237). This result indicates that Sentence Transformers correctly capture the meaning of the sentences. As the dictionary-based approach has its own strengths in simplicity and interpretability, future work may seek a way to blend the strengths of the different approaches. Even with these limitations, we argue that our approach can greatly help researchers across fields to harness the power of neural embedding methods for text analysis and to systematically scale up framing analysis to internet-scale corpora.\n\nWe release the source code of FrameAxis, and we will develop it into an easy-to-use library with supporting visualization tools for analyzing microframe bias and intensity for a broader audience. We believe that such efforts would facilitate computational analyses of microframes across disciplines.\n\n# Acknowledgments\n\nThis research was supported by the Singapore Ministry of Education (MOE) Academic Research Fund (AcRF) Tier 1 grant, Defense Advanced Research Projects Agency (DARPA), contract W911NF17-C-0094, of the United States of America, and the Air Force Office of Scientific Research under award number FA9550-19-1-0391.
The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.","meta":{"dup_signals":{"dup_doc_count":14,"dup_dump_count":6,"dup_details":{"curated_sources":1,"2024-30":5,"2024-26":3,"2024-22":2,"2024-10":1,"unknown":2}},"filename":"out\/2002.08608_extract_main.tex.md"},"subset":"arxiv"} +{"text":"abstract: Modern, state-of-the-art nanomechanical devices are capable of creating spatial superpositions that are massive enough to begin to experimentally access the quantum to classical crossover, and thus force us to consider the possible ways in which the usual quantum dynamics may be affected. One recent theoretical proposal describes the crossover from unitary quantum mechanics to classical dynamics as a form of spontaneous symmetry breaking. Here, we propose a specific experimental setup capable of identifying the source of unitarity breaking in such a mechanism. The experiment is aimed specifically at clarifying the role played by gravity, and distinguishes the resulting dynamics from that suggested by alternative scenarios for the quantum to classical crossover. We give both a theoretical description of the expected dynamics and a discussion of the involved experimental parameter values and the proposed experimental protocol.\nauthor: Jasper van Wezel; Tjerk H. Oosterkamp\ntitle: A Nanoscale Experiment Measuring Gravity's Role in \n Breaking the Unitarity of Quantum Dynamics\n\n# Introduction\n\nExperimentalists are pushing to cool cantilevers with low intrinsic damping to ever lower temperatures, both in the context of magnetic resonance force microscopy (MRFM) , and in the quest to cool a mechanical resonator to its zero-point motion and eventually bring it into a quantum mechanical superposition . One such experiment has recently succeeded in cooling a resonator to its quantum mechanical ground state and has demonstrated the controllable creation of single quantum excitations . Several other feasible experiments have been proposed in which a mechanical resonator is coupled to a quantum system in such a way that, with some experimental progress, a superposition of spatially separated states of the resonator may be detectable . With the advent of such experiments, involving both quantum mechanics and mesoscopic objects, the problem of reconciling quantum mechanics with classical physics is no longer a purely theoretical endeavor, but rather becomes an experimental necessity.\n\nMany theoretical proposals exist for how to explain the apparent absence of (spatial) quantum superpositions of macroscopic objects in the classical world. The most well-known of these include the idea that environment-induced decoherence hides the superposed states from our view ; the idea that our personal participation in superpositions implies an inability to observe alternative branches of the superposed state of the universe ; and the idea that corrections to Schr\u00f6dinger's equation become important at a scale intermediate between that of microscopic particles and macroscopic objects . Recently, it has been pointed out by several authors that gravity may play an important role in scenarios of the latter kind .
One particular very recent proposal posits that the transition between quantum dynamics at the microscopic scale and classical physics at the macroscopic scale may be described as a form of spontaneous symmetry breaking, in direct analogy to the breaking of translational symmetry in macroscopic crystals, rotational symmetry in macroscopic magnets, and so on . The particular symmetry being broken in this case is the unitarity of quantum mechanical time evolution, and the necessary symmetry breaking field may be supplied by the subtle influence of gravity at mesoscopic length scales.\n\nHere, we present a specific experiment which we believe is capable of producing a superposition that is massive enough to test the theory of spontaneously broken unitarity, while allowing enough experimental control to also differentiate its predictions from those of alternative scenarios. The suggested setup is based on realistic estimates of experimental parameters which may be obtained in state of the art MRFM experiments. The classical object being forced into a superposition is the micromechanical resonator which forms the detector arm of a typical MRFM experiment. This setup differs from similar proposals in the literature because of the entanglement of the resonator with a nearby microscopic spin state, which allows a definitive differentiation between the effects of decoherence, spontaneously broken unitarity, and alternative scenarios, using the specific experimental protocol described here.\n\nIn the following, we first describe the proposed experimental setup, and discuss the parameters that will have to be fulfilled in order for the experiment to be successful. We then present numerical simulations of the expected time evolution of this setup in the context of spontaneously broken unitarity, which explicitly show how gravity may influence the quantum dynamics of the resonator. We also show how a pointer basis (which defines the possible outcomes of a measurement on a quantum system) and Born's rule (which deals with the probability that such a measurement yields a given result), are automatically recovered from the time evolution of the resonator in this scenario. Finally, we turn to a detailed description of the experimental protocol required to distinguish the specific time evolution discussed here from both more common disturbances to quantum dynamics, such as decoherence, and from alternative theoretical scenarios.\n\n# The Experimental Setup\n\nInspired by the experiment by Rugar et al. , in which the force exerted by an electron spin on a small magnet is detectable by measuring the deflection of a mechanical resonator, we have proposed that such an experimental configuration can be used to bring a significant mass in a superposition involving a large displacement . The adaptation of such a setup shown in figure consists of a thin wire holding a plate of mass $m$ as well as a small spherical magnet with magnetization ${\\bf M}$, in close vicinity to a single, isolated electron spin. A natural candidate for the electron spin is the well known Nitrogen-Vacancy (NV) color centre in diamond because already at room temperature it has shown a coherence time as large as $250$ microsec . In isotopically engineered diamond this increases to $1.8$ msec , while at cryogenic temperatures the $T_2$ time may be expected to become even larger. Rabl et al. have developed a purely quantum mechanical description of this mechanical resonator coupled to a single electron spin . 
For independent detection of both the cantilever and the electron spin we envision that the cantilever motion can be observed by coupling it to a SQUID through a coil, while the electron spin can be characterized optically. When no current is injected into the SQUID and when no light is coupled to the NV centre, the purely quantum mechanical description of Rabl et al. should normally be applicable.\n\nWhile the non-magnetic $|S^z \\! = \\! 0 \\rangle$ ground state of the spin leaves the resonator untouched, the $|S^z \\! = \\! -1\\rangle$ (and $|S^z \\! = \\! +1\\rangle$) excited state will attract (and repel) the cantilever with a force $F=\\mu G$ where G is the magnetic field gradient originating from the magnetic sphere at the position of the electron spin with magnetization $\\mu$. Inducing transitions between and superpositions of the different spin states can readily be achieved by applying an appropriate combination of static $B_0$ and radio frequency $B_1$ magnetic fields. Starting from a superposed spin state, the magnetic coupling between the spin state and the motion of the resonator then eventually yields a superposition of two out of phase oscillation modes of the resonator.\n\nFor the experimental protocol described in detail in section , and employing the realistic parameters discussed presently, the amplitude of the two anti-phase oscillations could be caused to approach the thickness of the plate. Importantly, these amplitudes could in principle be achieved in a time shorter than the expected dephasing time of the electron spin due to nuclear spins in the diamond lattice and shorter than the dephasing time of the cantilever due to coupling to the phonon bath . Thus, a superposition of the plate over macroscopically distinct positions may be achieved within the limits set by decoherence.\n\n## Order of Magnitude Estimates for Experimental Parameters\n\nThe experimental parameters required to be able to unambiguously assign an observed decay time to any process other than decoherence lie only just beyond what is already being used in present day MRFM measurements. In the experiments of Rugar et al. the deflection of a resonator due to the interaction with a single electron spin was measured . The field gradient in these experiments was $\\partial B \/ \\partial x = 2\\cdot 10^5$\u00a0T\/m which, with a magnetic moment of the electron spin of $\\mu_B=9.3 \\cdot 10^{-24}$\u00a0J\/T, leads to a force $F_{\\text{spin}}=\\mu_B \\partial B \/ \\partial x =1.8 \\cdot 10^{-18}$\u00a0N. With the stiffness of the resonator $k=1.1 \\cdot 10^{-4}$\u00a0N\/m, this would imply a static deflection of only $d_0=F_{\\text{spin}}\/k=1.7 \\cdot 10^{-14}$\u00a0m.\n\nTo increase the deflection of the resonator, the electron spin was inverted twice during each resonator period, $T_{\\text{res}}$. If the spin inversions remain coherent with the resonator motion long enough, the amplitude could in principle be increased to $d=Q F_{\\text{spin}}\/k$, where $Q$ is the Q-factor of the resonator. Since in practice the spin inversions remain coherent with the cantilever motion only for a time $\\tau_m \\ll Q T_{\\text{res}}$, the maximum amplitude is limited to $d = (1-\\exp(-\\tau_m \/ Q T_{\\text{res}}))\\cdot Q F_{\\text{spin}}\/k$, where $\\tau_m$ is the so-called rotating frame relaxation time. For the experiment by Rugar et al. 
$\\tau_m = 760$\u00a0msec and the maximum amplitude would exceed $60$\u00a0pm .\n\nFor a spin in a quantum superposition of states, $\\tau_m$ would have to be replaced by a dephasing time $T_2$, which has been measured to be $1.2$ msec in NV centers in diamond in which the $^{13}$C isotopic content was reduced . Although this is the longest decoherence time measured in a solid state system to date, the fact that $T_2$ is so much shorter than $\\tau_m$ still limits the distance between the two centers of mass involved in the superposition. Fortunately, in the context of nuclear magnetic resonance force microscopy, the group of Rugar now employs field gradients of $4\\cdot 10^6$\u00a0T\/m , which, when applied to an electron spin experiment, would lead to a $20$-fold increase of the deflections. An alternative way of enhancing the deflection would be to employ softer springs. In our lab , we have been able to fabricate nanowire resonators with spring constants down to $k=1 \\cdot 10^{-6}$\u00a0N\/m. Such a nanowire, ending in a thin gold foil measuring $5$\u00a0$\\mu$m x $5$\u00a0$\\mu$m x $20$\u00a0nm, would have a resonance frequency of $1.6$\u00a0kHz.\n\nCombining an increase of the NV-centre dephasing time to $T_2=50$\u00a0msec (which might be achievable by further eliminating nuclear spins in the diamond or by working at sufficiently low temperatures to freeze out the nuclear spins) with the highest field gradients in the Rugar lab and a soft $k=1 \\cdot 10^{-6}$\u00a0N\/m resonator would result in a deflection of $d = (T_2 \/ T_{\\text{res}}) \\cdot (\\mu_B \/ k) \\cdot \\partial B \/ \\partial x = 3$\u00a0nm. As explained in the next section, the typical energy scale involved in the process of spontaneous unitarity breaking may be expected to be $\\Delta=G m^2 d^2 \/L^3=4 \\cdot 10^{-33}$\u00a0J (with $L$ the sample thickness and $G$ the gravitational constant), which yields a typical time scale $\\hbar \/ \\Delta = 25$\u00a0msec. Since this is of the same order of magnitude as the dephasing time of the NV-centre, it may be possible for the loss of unitarity to become apparent before the NV centre decoheres.\n\nThe dephasing of the cantilever itself, on the other hand, may be an issue. This time scale can be estimated by describing the environment as an infinite bath of harmonic oscillators and integrating out the environmental degrees of freedom . For the specific case where the dephasing is due to the phonons which are excited within the resonator, this problem has received some attention in the context of gravitational wave detectors, and the timescale resulting after the phonon bath has been integrated out was shown to be $\\tau_{\\text{res}}=Q T_{\\text{res}}\/(2 \\pi N_{\\text{phon}})$, where $N_{\\text{phon}}=k_B T_{\\text{mode}} \/ h f_{\\text{res}}$ is the number of phonons of the resonator mode, which is determined solely by the resonator's thermodynamic mode temperature $T_{\\text{mode}}$ and frequency $f_{\\text{res}}$. In order for this dephasing time to exceed $\\hbar \/ \\Delta$, the temperature of a resonator with resonance frequency $f_{\\text{res}}$ and $Q=10^5$ would have to be smaller than $100$\u00a0$\\mu$K, which would require the experiment to be done in a nuclear demagnetization refrigerator.
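The figures quoted above follow from a short back-of-the-envelope computation. The sketch below is not part of the original analysis; it simply re-evaluates the quoted expressions with the parameter values given in the text (the gold density is an added assumption), and recovers the forces, deflections and time scales to within the factors of a few that such order-of-magnitude estimates are meant to convey.

```python
import numpy as np

# Physical constants (SI units)
hbar = 1.055e-34   # J s
G    = 6.674e-11   # m^3 kg^-1 s^-2
mu_B = 9.3e-24     # J/T, electron magnetic moment
k_B  = 1.381e-23   # J/K

# Rugar et al. single-spin MRFM parameters quoted in the text
grad_0, k_0 = 2e5, 1.1e-4
F_spin = mu_B * grad_0            # force on the resonator, ~1.8e-18 N
d_0 = F_spin / k_0                # static deflection, ~1.7e-14 m

# Proposed experiment: softer nanowire, larger gradient, longer NV coherence
grad, k, T2, f_res = 4e6, 1e-6, 50e-3, 1.6e3
T_res = 1.0 / f_res
d = (T2 / T_res) * (mu_B / k) * grad          # driven amplitude, ~3 nm

# Gravitational self-energy scale of the superposed plate
rho_gold = 19300.0                            # kg/m^3 (assumed density)
L = 20e-9                                     # plate thickness
m = rho_gold * (5e-6) ** 2 * L                # ~1e-14 kg
Delta = G * m**2 * d**2 / L**3                # text's definition; including the
                                              # factor 1/2 of the appendix gives
                                              # ~4e-33 J and hbar/Delta ~ 30 ms
print(f"F_spin = {F_spin:.2g} N, d_0 = {d_0:.2g} m, d = {d:.2g} m")
print(f"Delta = {Delta:.2g} J, hbar/Delta = {hbar / Delta * 1e3:.0f} ms")

# Phonon-limited dephasing time of the resonator mode
Q, T_mode = 1e5, 100e-6
N_phon = k_B * T_mode / (hbar * 2 * np.pi * f_res)
tau_res = Q * T_res / (2 * np.pi * N_phon)
print(f"tau_res at {T_mode * 1e6:.0f} uK and Q = {Q:.0e}: {tau_res * 1e3:.1f} ms")
```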
However, although before we considered the damping $\\gamma = k\/(2\\pi f Q)=1 \\cdot 10^{-15}$\u00a0Ns\/m with $Q=10^5$, it is possible that future resonators will have even higher $Q$ since much lower damping values have been achieved in the carbon nanotube resonators fabricated by Huttel et al. , who estimate $\\gamma=2\\pi f_{\\text{res}} m\/Q=2\\cdot 10^{-17}$. If such a low damping can be achieved for the resonator proposed here, the mode temperature may be around $1$\u00a0mK without dominating the dephasing time of the NV-centre.\n\nIt has been argued before that the thermal contribution to the dephasing time considered above should give a correct order of magnitude estimate even in the presence of a driving force . However, if we consider a dephasing mechanism whose strength depends explicitly on the amplitude of the oscillation as well as its mode temperature , we should replace $N_{\\text{phon}}$ in the expression for $\\tau_{\\text{res}}$ by $\\frac{1}{2} k d^2 \/ h f_{\\text{res}}$. With the lowest damping considered ($\\gamma=2\\cdot 10^{-17}$) at a temperature of $100$\u00a0$\\mu$K, this results in a dephasing time of the order of $1$\u00a0msec. In that case, one would need to resort to using an even thinner gold plate to see the effect of the spontaneously broken unitarity. A gold mass of $10$\u00a0nm thickness (f.e. deposited on a single sheet of graphene), in the presence of such low damping, gives rise to the timescale $\\hbar \/ \\Delta = 20$\u00a0msec, which might just be detectable within the time limit set by decoherence.\n\nAlthough the phonon contribution considered here is expected to dominate the dephasing of the resonator, other mechanisms are also present. Defects in the cantilever for example may act as effective two-level systems at low temperatures, and the clamping of the cantilever may lead to additional phonon radiation, both contributing to the dephasing of the resonator . The former issue can be addressed by the application of small additional magnetic fields or by reducing the number of defects in the carbon nanotube resonator, while a scheme to circumvent the problem of clamping loss using optical trapping of a resonator has recently been proposed in a different context .\n\n# Simulating the Time Evolution\n\nIn this section, we will focus on spontaneously broken unitarity as one particular scenario for the transition from quantum dynamics to classical physics, and show explicitly how its predictions may be expected to influence the dynamics of the coupled resonator-spin system. How the proposed experimental setup may be used to also distinguish these particular predictions from those of other scenarios will be discussed in the next section. A detailed discussion of the theory of spontaneously broken unitarity can be found in ref. , and we will only repeat its main results here.\n\nThe basic observation underlying the idea of spontaneous unitarity breaking is the fact that the dynamics generated by the usual time dependent Schr\u00f6dinger equation is unstable, in the sense that even an infinitesimally weak perturbation may qualitatively change its behavior in the thermodynamic limit. This situation is analogous to that of other spontaneously broken symmetries: the Hamiltonian for a crystal is invariant under translations, but the sensitivity to even infinitesimal perturbations allows a macroscopic crystal to nonetheless localize in only a single position . 
In the same way the unitarity of quantum mechanical time evolution is sensitive to even infinitesimally small non-unitary perturbations, which allows sufficiently large objects to undergo classical dynamics . An important caveat in this argument is that there has to exist a fundamental non-unitary interaction somewhere in nature . It has been pointed by several authors that gravity may fill this role, and that in fact the energy scale on which the effects of gravity as a non-unitary influence become important lies in the regime in which we expect the quantum to classical crossover to take place .\n\nBecause of the relatively small mass and low density of the resonator in the proposed experiment, its dynamics will be very close to purely quantum mechanical, and we expect to be able to include any non-unitary effects due to gravity as minor perturbations to Schr\u00f6dinger's equation. Following the procedure outlined in the appendix, we thus write the dynamics of the resonator in the presence of a non-unitary gravitational perturbation as: $$\\begin{aligned}\n\\frac{d}{d t} \\psi = - \\frac{i}{\\hbar} \\left( \\hat{H} - i G \\frac{m^2}{2 L^3} \\left[\\hat{x}-\\xi\\right]^2 \\right) \\psi,\n\\label{MSE}\n\\end{aligned}$$ where $\\hat{H}$ is the usual quantum mechanical Hamiltonian, $G$ is the gravitational constant, and $m$ and $L$ are the mass and width of the massive plate. The operator $\\hat{x}$ is the position operator for the centre of mass of the plate (measured with respect to the centre of mass of the initial wavefunction). The time-dependent, randomly fluctuating variable $\\xi$ has been introduced as a correction to $\\hat{x}$, because the theory of gravity (i.e. general relativity) insists that any quantity which can be used as a measure of distance between locations in different components of a spacetime superposition must be ill-defined.\n\nNotice that both general relativity and quantum mechanics are in fact fully deterministic theories. There is thus no reason to believe that any part of the interplay between quantum mechanics and gravity should be anything but deterministic. The introduction of a random variable in equation should be seen only as a poor man's approach towards simulating an essentially ill-defined quantity: close to the point where unitary quantum mechanics is still a good description of nature we may get away with using the concept of superpositions \u2013even though they really are ill-defined notions in general relativity\u2013 if we include also an effectively random correction to the differences in distance between superposed spacetimes. Although superpositions and random variables may not actually feature in the exact reality of quantum gravity, we assume that if we insist on the possibility of *effectively* describing the state of the system as a superposition in some limit, then we also need to take into account an *effectively* random correction to the notion of position. We are thus led to the effective, phenomenological description of the first order correction to the unitary Schr\u00f6dinger equation given by equation . 
In the appendix we give a more detailed discussion of both the steps leading to equation and the implications of the imaginary term for the conservation of energy and for the conservation of the norm of the wavefunction.\n\nIn contrast to the Schr\u00f6dinger-Newton equation, which has been formulated in a similar context by other authors , equation is a purely linear equation, which arises naturally from the extension of the well studied equilibrium mechanism of spontaneous symmetry breaking to the dynamical realm, and obeys all the requirements of a theory of spontaneous symmetry breaking: for unitarity to be spontaneously broken, one necessarily needs to invoke a singular thermodynamic limit, a macroscopic order parameter, and a symmetry breaking field . It is because of this connection to spontaneous symmetry breaking that the energy scale $G m^2 \\langle x^2 \\rangle \/2 L^3$, which has been suggested before to set the appropriate time scale for the influence of gravity on quantum mechanical time evolution , may be embedded into the explicit expression of the wavefunction dynamics provided by equation . This final form of the modified Schr\u00f6dinger equation can be straightforwardly integrated using standard numerical methods to yield a phenomenological prediction for the expected time evolution of our resonator experiment.\n\n## Two Time Scales\n\nTo be explicit, we first express the state of the system depicted in figure by the quantum numbers for the position of the cantilever's centre of mass $x$ and the orientation of the spin's magnetic moment $\\sigma$. We are then interested in the dynamics of the system starting from the initial state $\\varphi(x,t \\! = \\! 0)=\\alpha \\chi_{\\uparrow} \\psi^0(x+d) + \\beta \\chi_{\\downarrow} \\psi^0(x-d)$, where $\\psi^0(x)$ is the groundstate wavefunction of the harmonic oscillator of mass $m$ centered at $x=0$, and $\\chi_{\\sigma}$ indicates the spin state with $S^z=\\sigma$. The entangled initial state is prepared by performing an MRFM measurement on a suitably initialized superposition of the spin state, as discussed in the next section.\n\nFor the purpose of clarity we ignore the imposed oscillatory motion of the cantilever in this simulation and assume instead that it is fixed at its maximum displacement $\\pm d$. It is straightforward to also include these oscillations, but they shroud the role of the unusual dynamics induced by the phenomenological gravitational term and do not change our conclusions. Further ignoring for the moment the phonons and other internal degrees of freedom of the oscillator, we are left with only the kinetic energy of the oscillator as a whole to set the Hamiltonian in equation , so that we have $\\hat{H} = \\hat{p}^2 \/ ( 2 m )$. The spin in the diamond NV centre is assumed to be free for the duration of the experiment.\n\nThe time-dependent wavefunction of the resonator, found by solving the differential equation , shows two distinct processes which happen simultaneously. These can be most easily understood for the special case of constant $\\xi(t)$. In that case the gravitational term of equation exponentially suppresses the weights of both components of the wavefunction, but one component is suppressed more strongly than the other, depending on whether $x-\\xi$ is positive or negative.
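This uneven suppression can be made concrete with a minimal two-component toy model. The sketch below is not a full integration of the modified evolution equation: the two wavepackets are treated as frozen at $x=\pm d$, the Hamiltonian phases are dropped (they cancel from the normalized weights), and all parameter values are merely indicative.

```python
import numpy as np

# Indicative parameters only (cf. the estimates of section II); SI units.
hbar, G = 1.055e-34, 6.674e-11
m, L, d = 1e-14, 20e-9, 3e-9      # plate mass, thickness, superposition distance
xi = 0.5 * d                       # one constant realization of the ill-defined shift

def rate(x):
    """Non-unitary suppression rate of a component frozen at position x."""
    return G * m**2 * (x - xi) ** 2 / (2 * hbar * L**3)

w_plus0, w_minus0 = 0.4, 0.6       # initial weights of the components at x = +d, -d
t = np.linspace(0.0, 0.1, 2001)    # 100 ms
w_plus  = w_plus0  * np.exp(-2 * rate(+d) * t)
w_minus = w_minus0 * np.exp(-2 * rate(-d) * t)
p_plus = w_plus / (w_plus + w_minus)   # weight of the x = +d component after normalization

print(f"normalized weight of the x = +d component: {p_plus[0]:.2f} -> {p_plus[-1]:.3f}")
```

For the indicative numbers used here, the normalized weight of the component closest to $\xi$ approaches unity within a few tens of milliseconds, i.e. on the scale $\hbar/\Delta$ estimated in section II.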
One component thus quickly dominates the overall wavefunction, and the superposition is reduced to just the single product state $\\chi_{\\uparrow} \\psi^0(x+d)$ or $\\chi_{\\downarrow} \\psi^0(x-d)$, as is shown in figure . That the reduction of the superposed wavefunction results in just one component in the position basis is a direct consequence of gravity providing the unitarity breaking field in equation . Because of the special role played by the position variable in the theory of gravity, spatial superpositions become unstable and the pointer basis which defines the possible outcomes of the reduction process consists of spatially localized states. Notice that the spatial reduction of the cantilever wavefunction automatically implies that also the entangled spin state will end up in only one of its components, even though it is not itself subject to any gravitational effects.\n\nThe localization dynamics is taken even one step further in the second process induced by the modified Schr\u00f6dinger equation. Because the single component $\\psi^0(x \\pm d)$ is itself a wavefunction spread out in space, it too is subject to the uneven suppression by the gravitational term. As long as the spread of the wavefunction (i.e. its deBroglie wavelength) is short compared to the distance $\\xi \\pm d$ however, the difference in amplification rates of its components is very small, and the secondary reduction of the wavefunction into a state centered at $x=\\xi$ will be very slow (see figure ).\n\n## Born's Rule\n\nIn the presence of a fluctuating $\\xi(t)$, the same two processes occur. This time however $x-\\xi$ may alternate in time between being positive or negative. It will thus alternately amplify the relative weights of the two different components of the initial superposition, and the question of which component wins out in the end becomes a realization of the \"gambler's ruin\" game: each component will randomly increase and decrease in relative weight until one of them completely dominates. Although the fluctuations of the stochastic variable $\\xi(t)$ slow down the rise to dominance of either of the components, a final resolution can still be seen to be reached within a typical timescale proportional to the inverse of the gravitational energy $\\Delta \\equiv G m^2 d^2 \/L^3$ defined in equation , as expected on general dimensional grounds . This energy scale sets the minimum time scale required by the resonator dynamics to unambiguously display non-unitary effects.\n\nAn example of a typical solution of the modified Schr\u00f6dinger equation with a fluctuating stochastic variable is shown in figure . Because $\\xi(t)$ is a randomly oscillating variable, it is impossible to predict which of the two components will survive the imposed dynamics. If one of the components has a larger weight in the initial wavefunction however, it is more likely to survive. In fact it is possible to mathematically prove that after an infinitely long time the only possible average outcome of this process is the emergence of Born's Rule . 
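The gambler's-ruin character of this competition can be illustrated with an even simpler toy, sketched below. It is not the original simulation, and it does not integrate the modified evolution equation with any particular noise spectrum; instead, the normalized weight of one branch is assumed outright to perform an unbiased multiplicative random walk (the defining property of the gambler's-ruin game), and the sketch merely counts how often each branch wins.

```python
import numpy as np

rng = np.random.default_rng(0)

def single_run(p0, eta=0.1, eps=1e-4, max_steps=100_000):
    """Gambler's-ruin toy for the normalized weight p of one branch.

    Each step nudges p by an unbiased amount proportional to p(1-p), standing in
    for the random competition between the two branches under a fluctuating xi(t).
    """
    p = p0
    for _ in range(max_steps):
        p += eta * p * (1.0 - p) * rng.choice((-1.0, 1.0))
        if p < eps or p > 1.0 - eps:   # one branch has effectively won
            break
    return p > 0.5

alpha2 = 0.3                           # |alpha|^2 of the initial superposition
runs = 1000
wins = sum(single_run(alpha2) for _ in range(runs))
print(f"fraction of runs won by the first branch: {wins / runs:.2f} (|alpha|^2 = {alpha2})")
```

Because the steps are unbiased, the expected fraction of runs won by a branch equals its initial weight, which is the content of Born's rule in this setting.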
That is, if we repeat the simulation many times always starting from the same initial state $\\alpha \\chi_{\\uparrow} \\psi^0(x+d) + \\beta \\chi_{\\downarrow} \\psi^0(x-d)$, but randomly generating a new $\\xi(t)$ for each run, the average fraction of evolutions resulting in the final state $\\chi_{\\uparrow} \\psi^0(x+d)$ converges to $|\\alpha|^2$, as in figure .\n\n# Protocol for the Experiment\n\nReturning now to the experimental setup described in section , we need to find a way of distinguishing the dynamics of spontaneous unitarity breaking as described by equation from both the predictions of alternative models of the quantum to classical crossover, and from the ubiquitous decoherence effects. Especially the latter is not an easy task, because in the proposed setup, it is not possible to directly observe the time evolution of the state of the cantilever as displayed in figure , and we have to rely on ensemble measurements instead.\n\n## Distinguishing Decoherence from Decay\n\nThe central observation in the theory of decoherence is that the many unobservable microscopic degrees of freedom which are necessarily present in any experiment, have the effect, if averaged over an ensemble of measurements, to reduce any initially pure density matrix to a classical mixture of pointer states . Although decoherence and the non-unitary dynamics of equation have completely different origins and differ substantially in their descriptions of a single measurement, the ensemble averages look very similar. In terms of the reduced density matrices (i.e. density matrices averaged over all unobservable environmental degrees of freedom), both give rise to an effective, ensemble-averaged evolution of the type $$\\begin{aligned}\n\\rho(t) = \\left( \\begin{array}{cc} |\\alpha|^2 & \\alpha \\beta^* e^{- t \/ \\tau} \\\\ \\alpha^* \\beta e^{-t \/ \\tau} & |\\beta|^2 \\end{array} \\right).\n\\label{rho}\n\\end{aligned}$$ Here $\\tau$ is either the decoherence time $\\tau_{\\text{decoh}}$ or the average decay time $\\tau_{\\text{decay}}$. To distinguish the influence of the non-unitary term from the more usual effects coming from various sources of decoherence, we propose an experimental protocol consisting of two different stages.\n\n### Stage One\n\nIn the first instance, the spin and the resonator should be entangled in such a way as to give rise to a spatial superposition of resonator states. This can be achieved in a standard Magnetic Resonant Force Microscopy experiment in which the spin is reversed at twice the resonator frequency to drive the resonator, but starting from a superposed spin state. One component of the spin wavefunction will then enhance the resonator motion more strongly than the other, and as long as coherence can be maintained, the result will be an entangled state in which the resonator motion has a larger amplitude in one component than in the other: $$\\begin{aligned}\n\\varphi(t=0) &= \\alpha \\chi_{\\uparrow} \\psi^L(x) + \\beta \\chi_{\\downarrow} \\psi^S(x) \\notag \\\\\n\\Leftrightarrow \\rho(t=0) &= \\left( \\begin{array}{cc} |\\alpha|^2 & \\alpha \\beta^* \\\\ \\alpha^* \\beta & |\\beta|^2 \\end{array} \\right),\n\\label{t0}\n\\end{aligned}$$ where $\\psi^{L}(x)$ and $\\psi^S(x)$ indicate the wavefunctions of the oscillator with enhanced or suppressed amplitude respectively, and the spin's $S^z$ component in $\\chi_{\\sigma}$ is measured in the corotating frame. 
This state with spatially separated components for the massive resonator will be sensitive to gravity-induced non-unitary effects as well as to decoherence, so that the off-diagonal element of its reduced density matrix after a given time interval $t_1$ is given by: $$\\begin{aligned}\n\\rho_{12}(t_1) = \\alpha \\beta^* e^{- t_1 \/ \\tau_{\\text{decoh}} - t_1 \/ \\tau_{\\text{decay}}}.\n\\label{t1}\n\\end{aligned}$$\n\nThe effective dephasing time (which combines the gravitational decay time and the effects of decoherence) can be found after many repetitions of the experiment in which the final state is read out by either coupling the nanomagnet attached to the oscillator to a SQUID through a coil, or alternatively by measuring the fluorescence coming from the spin at the NV centre after the application of a microwave $\\pi\/2$ pulse (which rotates the spin orientation into the $xy$ plane). The effective dephasing time found in this way in general depends on both the mass and the shape of the resonator. The characteristics of this dependence can be used as a first means of distinguishing gravity-induced non-unitary dynamics from any given, known source of decoherence. For any given source of decoherence it is straightforward to work out the functional dependence of $\\tau_{\\text{decoh}}$ on the mass of the resonator, its linear size, its geometric shape, etc. In general, one or more of these dependencies will differ from the ones implied by the non-unitary form of equation . For example, because they depend only on internal degrees of freedom of the oscillator, most sources of decoherence are independent of the angle between the direction of oscillation and the normal to the surface of the gold plate. By measuring the functional dependence of the observed $\\tau$ on such parameters, we can thus rule out most known sources of decoherence. To make sure that no *unknown* sources of decoherence are at play either, one can then add a second stage to the experiment.\n\n### Stage Two\n\nAny source of decoherence, no matter what its precise physical origin is, will arise from the entanglement of the cantilever motion with some other, unobserved degrees of freedom (called the bath). The gravitational decay, on the other hand, only requires the presence of a massive superposition, and takes place even in complete isolation. To differentiate between the two mechanisms, one thus needs to ascertain that the cantilever is not entangled with any bath degrees of freedom, known or unknown. This is done in the next stage, where we again start by creating the entangled superposition of equation , but rather than reading out the wavefunction of the resonator after a given time interval, we imprint its state back onto the diamond spin.\n\nTo do this, one could use a spin echo setup, and flip the spin at a given instant in time (using an extra microwave $\\pi$ pulse, which inverts the orientation of the spin), so that from then on the magnetic force between spin and resonator will tend to damp its motion if it enhanced it before the spin flip, and *vice versa*. Thus the effect of the different spin orientations on the oscillator position will be effectively undone. Once the resonator has traced its way back to its original wavefunction, the position of the resonator and the spin direction are no longer entangled and the magnetic driving field can be switched off.
In the absence of any decoherence and decay processes, the evolution of the wavefunction, including the creation of the entangled state, would be simply: $$\\begin{aligned}\n\\varphi(t=0) &= \\left[ \\alpha \\chi_{\\uparrow} + \\beta \\chi_{\\downarrow} \\right] \\psi^0(x) \\notag \\\\\n\\varphi(0 < t < t_1) &= \\alpha \\chi_{\\uparrow} \\psi^L(x) + \\beta \\chi_{\\downarrow} \\psi^S(x) \\notag \\\\\n\\varphi(t_1 < t < t_2) &= \\alpha \\chi_{\\downarrow} \\psi^L(x) + \\beta \\chi_{\\uparrow} \\psi^S(x) \\notag \\\\\n\\varphi(t \\geq t_2) &= \\left[ \\alpha \\chi_{\\downarrow} + \\beta \\chi_{\\uparrow} \\right] \\psi^0(x),\n\\end{aligned}$$ where the amplitude difference between the two components built up before the spin flip is undone for $t>t_1$, due to the opposing influence of the flipped spin. If a non-unitary term is present, the components $\\alpha$ and $\\beta$ become time dependent, but the oscillator state will still be brought back to $\\psi^0(x)$ at $t=t_2$. Likewise, decoherence may result from the entanglement of these states with bath degrees of freedom, but these do not alter the final oscillator state.\n\nAfter decoupling the spin state and the oscillator state at $t_2$, there will not be any additional gravitational decay because there is no longer any spatially superposed mass. Any decoherence caused by the entanglement of the diamond spin with unknown internal (bath) degrees of freedom of the resonator, however, will continue unabated even if the resonator remains at rest. After all, due to the rigidity of the resonator (and the associated Goldstone theorem), its internal degrees of freedom are decoupled from its collective coordinates. Even though the spin flip at $t=t_1$ effectively reverses the time evolution of the collective, centre-of-mass motion of the resonator, the dynamics of the internal, bath degrees of freedom is not reversed, and they remain in an entangled state even after $t=t_2$. The full wavefunction evolution in the presence of both decay and bath degrees of freedom can thus schematically be written as: $$\\begin{aligned}\n\\varphi(t=0) = & \\left[ \\alpha(0) \\chi_{\\uparrow} + \\beta(0) \\chi_{\\downarrow} \\right] \\psi^0(x) \\phi^{0}(0) \\notag \\\\\n\\varphi(0 < t < t_1) = & \\ \\alpha(t) \\chi_{\\uparrow} \\psi^L(x) \\phi^{L}(t) + \\beta(t) \\chi_{\\downarrow} \\psi^S(x) \\phi^{S}(t) \\notag \\\\\n\\varphi(t_1 < t < t_2) = & \\ \\alpha(t) \\chi_{\\downarrow} \\psi^L(x) \\phi^{L}(t) + \\beta(t) \\chi_{\\uparrow} \\psi^S(x) \\phi^{S}(t) \\notag \\\\\n\\varphi(t > t_2) = & \\left[ \\alpha(t_2) \\chi_{\\downarrow} \\phi^{L}(t) + \\beta(t_2) \\chi_{\\uparrow} \\phi^{S}(t) \\right] \\psi^0(x).\n\\end{aligned}$$ Here $\\phi^{L,S}(t)$ represent the evolution of the bath degrees of freedom, as influenced by the different amplitude states of the oscillator. Their time evolution continues beyond $t=t_2$, while the components $\\alpha$ and $\\beta$ become time-independent after the centre of mass state has returned to $\\psi^0(x)$. After tracing out the bath degrees of freedom and simultaneously averaging over many realizations of the stochastic dynamics imposed by the non-unitary time evolution, the off-diagonal element of the reduced density matrix for the final state yields: $$\\begin{aligned}\n\\rho_{12}(t>t_2) = \\alpha(0) \\beta(0)^* e^{-t_2 \\left( 1 \/ \\tau_{\\text{decoh}} + 1\/ \\tau_{\\text{decay}} \\right)} e^{- \\left(t-t_2\\right) \/ \\tau_{\\text{decoh}}}.\n\\label{t2}\n\\end{aligned}$$\n\nAt the end of the second experimental stage, then, the coherence of the diamond spin can again be monitored via optical readings in many repetitions of the experiment. The initial suppression of the off-diagonal matrix elements by the factor $e^{- t_2 \/ \\tau_{\\text{decoh}} - t_2 \/ \\tau_{\\text{decay}} }$ is known from the first stage of the experiment. The remaining suppression of the off-diagonal matrix elements by the factor $e^{- (t-t_2) \/ \\tau_{\\text{decoh}}}$ can be attributed purely to sources of decoherence. It will most likely be dominated by the well-characterized decoherence of an isolated diamond spin due to sources within the diamond environment itself. For the experimental parameters proposed here, this timescale should be in the millisecond range, and can be easily recognized.
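Purely as an illustration of the data analysis that the two-stage protocol implies (the time constants and noise level below are invented for the example), the following sketch generates synthetic ensemble-averaged off-diagonal elements for both stages and extracts the decay time by subtracting the stage-two rate from the stage-one rate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical time constants, purely for illustration.
tau_decoh, tau_decay = 30e-3, 25e-3              # s
t1 = np.linspace(1e-3, 20e-3, 10)                # stage-one hold times
t2 = 5e-3                                        # stage-two echo time
t  = t2 + np.linspace(1e-3, 20e-3, 10)           # stage-two read-out times

# Synthetic ensemble-averaged |rho_12| for the two stages, with a little
# multiplicative noise standing in for finite statistics.
noise = lambda n: 1.0 + 0.02 * rng.standard_normal(n)
rho_stage1 = np.exp(-t1 * (1 / tau_decoh + 1 / tau_decay)) * noise(t1.size)
rho_stage2 = (np.exp(-t2 * (1 / tau_decoh + 1 / tau_decay))
              * np.exp(-(t - t2) / tau_decoh) * noise(t.size))

# Log-linear fits: stage one yields the combined rate, stage two (after t2) the
# pure decoherence rate; the difference is attributed to the decay channel.
rate_total = -np.polyfit(t1, np.log(rho_stage1), 1)[0]
rate_decoh = -np.polyfit(t - t2, np.log(rho_stage2), 1)[0]
tau_decay_est = 1.0 / (rate_total - rate_decoh)
print(f"estimated tau_decay = {tau_decay_est * 1e3:.1f} ms (input: {tau_decay * 1e3:.0f} ms)")
```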
Any remaining sources of decoherence which had not been identified and eliminated in the first experimental stage can now be recognized as an apparent contribution to $e^{- (t-t_2) \/ \\tau_{\\text{decoh}}}$, in excess of that expected from the known, pristine diamond environment. Once these remaining unidentified bath degrees of freedom have been eliminated (f.e. through additional cooling), the dephasing measured in the first stage of the experiment can be conclusively attributed to a decay process instead of decoherence.\n\n## Distinguishing Different Decay Models\n\nAfter ruling out decoherence as the source for the suppression of off-diagonal matrix elements, we still need to ascertain whether the observed effective decay time $\\tau$ is due to the process of spontaneous unitarity breaking, which we focussed on in this paper, or to some other non-unitary mechanism. The most well-known alternatives of this kind fall in the GRW and CSL classes . Other proposed scenarios for the quantum to classical crossover, such as hidden variable and many worlds theories are fully unitary , and the observation of any non-zero decay time in the proposed experiment would thus suffice to rule out their relevance in the description of the resonator dynamics.\n\nThe main assumption in both the CSL and the GRW theories is the existence of a specific process beyond the realm of applicability of Schr\u00f6dinger's equation. In the GRW theory this process is assumed to be the spontaneous and instantaneous spatial localization of an elementary particle at particular intervals in time . If the frequency with which these localization events occur is low enough, it will take an immeasurably long time (on average) for any individual particle to undergo such an event. Within an extended object consisting of a macroscopic number of particles on the other hand, it will not take long before at least one of the particles is localized. Assuming that the extended object possesses some degree of rigidity, the localization of a single particle within it will suffice to give the entire wavefunction a definite position in space. Macroscopic objects are thus rapidly reduced to position eigenstates, while microscopic particles are free to spread out in space. Born's rule can be built into this construction by making additional assumption about the localization process .\n\nBecause the localization process proposed by GRW acts on individual microscopic particles, and does not depend on the overall state of the system, its predicted decay time scales only with the number of particles involved in the massive superposition. A dependence of the decay time $\\tau_{\\text{decay}}$ in equation on the involved mass or shape of the resonator will thus clearly indicate a departure from the predictions of the GRW theory, and require additional non-unitary effects.\n\nThe CSL (or Continuous Spontaneous Localization) models can be seen as extensions of the original GRW model in which the addition to quantum mechanics is no longer a set of instantaneous localization events, but rather a continuous process which constantly acts to gradually localize the individual particles. In most modern CSL models, a smeared mass density is taken to be the variable which specifies both the rate and the final states of the reduction process . The obtained average rate of localization ensures that microscopic particles take an immeasurably long time to localize, while macroscopic objects again localize almost instantaneously due to their rigidity. 
Born's rule can emerge spontaneously if the localization process is stochastic, which is modeled by the inclusion of white noise in the dynamical equations .\n\nThe final form of the reduction process described by the CSL model resembles the dynamics of spontaneous unitarity breaking in equation in various ways. Both predict a reduction of the superposed initial state within a timescale proportional to the square of the total mass of the involved object; both involve a random variable which ensures that Born's rule can be obeyed; and both mechanisms select the position basis as the pointer basis to which macroscopic objects must be reduced.\n\nHowever, there are also clear differences between the two approaches. For example, as discussed in the review articles of Bassi and Ghirardi , and of Pearle , the effective reduction rate in the CSL model is set directly by the spread of the mass density distributions in different components of the initial wavefunction, independent of the relative locations of their centres of mass (if the distributions are initially well separated). Thus, in contrast to equation the reduction rate of a macroscopic superposition is independent of the relative distance between the massive bodies in its components, as long as they are initially non-overlapping. Such a difference would be obvious in the experiment, if we could construct a superposition of the massive plate over distances greater than its thickness. If on the other hand the two components of the resonator wavefunction do have a substantial spatial overlap throughout the experiment, we need to rely on the dependence of the decay time on the precise geometry of the massive plate to differentiate between the predictions of the CSL model and spontaneous unitarity breaking. In the CSL model, the effective decay time can be shown to be proportional to $d \/ L^2$, where $d$ is the thickness of the non-overlapping part of the involved mass distributions . From equation , it can be seen that spontaneous unitarity breaking predicts the rate to be proportional to $d^2 \/ L^3$ instead .\n\nAfter ruling out decoherence as the cause for the decay of the off diagonal matrix element in the two-stage protocol, we can thus use the dependence of the observed decay time on the mass and geometry of the proposed setup to conclusively distinguish the effects of all currently proposed descriptions of the quantum to classical crossover.\n\n# Conclusions\n\nWe have proposed a nanoscale experiment, realizable with present-day technology, which can be used to produce an entangled state of a Nitrogen-Vacancy spin in diamond and a mechanical resonator. Because the entangled state involves the superposition of a massive cantilever over a sizable distance, we are forced to consider the effects that a quantum to classical crossover might have on the dynamics of this state. The proposed experiment is focussed on testing one particular scenario for this crossover, which describes the disappearance of unitary quantum dynamics in the thermodynamic limit as a form of spontaneous symmetry breaking, akin to the spontaneous symmetry breaking observed in macroscopic crystals, magnets, etc. By constructing an effective, phenomenological description of the proposed experiment, it is shown that the predicted effects of spontaneous unitarity breaking on the time evolution of the resonator state can be experimentally tested. 
A specific experimental protocol is proposed in order to distinguish the effects of spontaneous unitarity breaking from those of both decoherence and other proposed models for the quantum to classical crossover.\n\nThe phenomenological description of the resonator dynamics predicts the reduction of a superposed initial state to just one of its components to take place on a timescale which lies just within experimental reach. The reduction of the cantilever state also implies the reduction of the entangled spin state, even though the spin itself is not directly under the influence of the broken unitarity. The emergence of both a pointer basis and Born's rule for the reduction process can be seen to be a direct consequence of the influence of gravity on quantum mechanics.\n\n# Appendix A: The Norm of the Wavefunction\n\nTo see how the usual probabilistic predictions of quantum mechanics may emerge from the modified time evolution of equation , even though it does not conserve the normalization of the wavefunction , consider a particular run of the proposed experiment, with the spin of the diamond NV centre initially in a superposition of $S^z$ states, while the resonator is in a single position state: $$\\begin{aligned}\n\\varphi(t < 0) = \\left[ \\alpha \\chi_{\\uparrow} + \\beta \\chi_{\\downarrow} \\right] \\psi^0(x).\n\\end{aligned}$$ Here, without loss of generality, we assume $|\\alpha|^2+|\\beta|^2=1$. This state is a stable state, in the sense that the diamond NV centre is not affected by the non-unitary term of the modified Schr\u00f6dinger equation since it does not involve a massive superposition, while the oscillator is in a state with a single, well-defined position, and is thus subject only to the extremely slow dynamics indicated by the dashed curves in figure . For the estimated experimental parameters discussed in section II, the slow motion of the oscillator happens on an unmeasurably long timescale. If we were to consider even heavier objects (like for example the pointer on a regular measurement machine), this time scale becomes even longer. For all practical purposes the product state $\\varphi(t<0)$ is thus a static configuration, with a conserved norm.\n\nIn the initial phase of the experiment, the spin state and the oscillator wavefunction become entangled by the coupling of the spin orientation to the deflection of the cantilever in the usual MFRM setup. If this can be done within a time that is short compared to both the intrinsic coherence time of the spin and the typical decay time imposed by the non-unitary field, the resulting state will be given by: $$\\begin{aligned}\n\\varphi(t=0) = \\alpha \\chi_{\\uparrow} \\psi^0(x+d) + \\beta \\chi_{\\downarrow} \\psi^0(x-d).\n\\end{aligned}$$ This state is an entangled state, which involves the superposition of a massive object over different locations. It is therefore subject also to the faster non-unitary decay dynamics (indicated by solid curves in figure ). For the experimental parameters proposed in section II, the fast non-unitary dynamics takes milliseconds to complete, and is within the measurable regime. For heavier objects, this time scale quickly becomes unmeasurably short. If we were to consider a truly macroscopic superposition (involving for example the pointer on a regular measurement device), the decay would, for all practical purposes, be instantaneous.\n\nDuring the decay dynamics, the norm of the wavefunction is not conserved. 
The result, if the decay happens on a time scale short compared to the involved intrinsic decoherence times, will be either one of two possible outcomes: $$\\begin{aligned}\n\\varphi(t \\gg 0) = \\left\\{ \\begin{array}{lr} & N \\chi_{\\uparrow} \\psi^0(x+d) \\\\ \n\\text{\\emph{or}} \\ & \\\\\n& N \\chi_{\\downarrow} \\psi^0(x-d), \\end{array} \\right.\n\\label{final}\n\\end{aligned}$$ where the norm $N$ depends on the precise dynamics, and is certainly not equal to one. Despite the lack of a normalized wavefunction, there can be no doubt about the physical interpretation of the final states in equation . Each of the possible outcomes is a single product state which combines an eigenfunction of the $z$-projection operator of the spin in the diamond NV centre with a single, well-defined value for the spatial location of the oscillator. It must thus be interpreted as representing a spin which is fully oriented along the $z$-axis, and an oscillator in a (classical or generalized coherent) state centered at position $x \\pm d$. The position of the oscillator can be registered and recorded, and thus effectively serves as a measurement of the spin state. Notice that the final state is once again a stable configuration which is insensitive to further non-unitary influences. As long as no sufficiently massive superpositions are created, its further time evolution will be unitary for all practical purposes, and its norm $N$ will be conserved.\n\nWe can repeat the above experiment many times, and each time record which of the two states in equation is the outcome of that particular realization. The number of experiments resulting in a state with the oscillator centered at $x+d$ divided by the total number of conducted experiments will then be seen to converge to the value $|\\alpha|^2$, as indicated in figure . Since it is impossible to predict which of the two outcomes will be realized in any one particular experiment, this is equivalent to having a probabilistic reduction of the initial state to just one of its components, with the probability for obtaining a given component set by its squared weight in the initial wavefunction: $$\\begin{aligned}\n\\left[ \\alpha \\chi_{\\uparrow} + \\beta \\chi_{\\downarrow} \\right] \\psi^0(x) \\rightarrow \\left\\{ \\begin{array}{lr} \\chi_{\\uparrow} \\psi^0(x+d), & P = | \\alpha |^2\\phantom{.} \\\\ \\chi_{\\downarrow} \\psi^0(x-d), & P = |\\beta|^2. \\end{array} \\right.\n\\end{aligned}$$ Notice that for a general initial norm $|N_0|^2 \\equiv |\\alpha|^2 + |\\beta|^2$, we would find probabilities $|\\alpha^2| \/ |N_0|^2$ and $|\\beta|^2 \/ |N_0|^2$ instead.\n\nAs noted before, the reduction process would happen unmeasurably fast if we were to employ any of the truly massive devices that are customarily used as quantum measurements machines. In that case, the dynamics described here thus reproduces the instantaneous 'collapse of the wavefunction' that is traditionally postulated as an addition to Schr\u00f6dinger's equation. For devices much lighter than the proposed micromechanical oscillator on the other hand, the non-unitary reduction process takes an unmeasurably long time to complete, and the dynamics remains effectively unitary, as prescribed by Schr\u00f6dinger's equation. Both the unitary quantum dynamics of microscopic particles, and the non-unitary behavior of classical objects thus emerge naturally from the dynamics of equation . 
In particular, the usual probabilistic predictions of quantum mechanics are recovered, in spite of the absence of a conserved norm of the wavefunction.\n\n# Appendix B: The Perturbation to Schr\u00f6dinger's Equation\n\nThe dynamical Schr\u00f6dinger equation can be seen to be subject to spontaneous symmetry breaking, in the sense that its result may be qualitatively affected in the thermodynamic limit by only an infinitesimally weak unitarity breaking field. This description of broken unitarity is a straightforward extension of the well known mechanism of spontaneous symmetry breaking in crystals, magnets, and so on . For such spontaneous symmetry breaking to be effective in large, but finite, objects, there must exist a small, but finite, non-unitary contribution to its dynamics. In this appendix we give a brief sketch of how a non-unitary such as the one in may be seen to naturally arise from considerations of the interplay between quantum mechanics and general relativity .\n\nAs it turns out, the fundamental building blocks of these theories (unitarity for quantum mechanics and general covariance for general relativity) are mutually incompatible concepts. All theories trying to bridge the gap between general relativity and quantum mechanics have to abandon either one or both of these principles. For example, the background dependence of non-equilibrium string theory breaks general covariance, while an equilibrium formulation in Euclidean spacetime is necessarily dissipative and thus non-unitary. The conflict can be made apparent by comparing simple examples of time evolution in the two theories . For the purpose of this paper we remain agnostic about the ultimate role of general covariance, but we assume that unitarity is not an exact symmetry of nature, and may thus be spontaneously broken .\n\nThe density of the resonator in our experiment is rather modest compared to the densities occurring in black holes or in high energy physics. The dynamics of the resonator will therefore be very close to purely quantum mechanical, and we expect to be able to include the effects of general relativity as minor perturbations to Schr\u00f6dinger's equation. Rather than identifying the operator $d \/ d t$ with just the unitary $-(i \/ \\hbar) \\hat H$, we therefore introduce a small perturbative term $\\hat{H}'$: $$\\begin{aligned}\ni \\hbar \\frac{d}{dt} \\psi(\\vec{r},t,\\sigma) = \\left[\\hat{H} + \\hat{H}'\\right] \\psi(\\vec{r},t,\\sigma),\n\\label{SE}\n\\end{aligned}$$ where the perturbation $\\hat{H}'$ is assumed to be due to the influence of general relativity.\n\nTo find the functional form of the perturbation, we start by assuming that the defining property of the conflict between general relativity and quantum mechanics is the essential non-unitarity of the former. Other ingredients (such as a possible non-linearity) will be left to higher order terms. The correction term $\\hat{H}'$ then cannot be a Hermitian operator, but on general grounds we would like the time evolution to be invertible even in the presence of gravity, so that physics makes sense if time flows backwards as well as forwards (although it does not necessarily have to look the same). One possible way to enforce this constraint is by writing $$\\begin{aligned}\n\\frac{d}{d t} \\psi = - \\frac{i}{\\hbar} \\left( \\hat{H} - i \\hat{X} \\right) \\psi,\n\\label{iX}\n\\end{aligned}$$ where $\\hat{X}$ *is* a linear and Hermitian operator. 
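To make explicit why a term of this form breaks unitarity while keeping the time evolution invertible, note that equation implies $$\\begin{aligned}\n\\frac{d}{d t} \\langle \\psi | \\psi \\rangle = \\frac{i}{\\hbar} \\langle \\psi | \\left( \\hat{H} + i \\hat{X} \\right) - \\left( \\hat{H} - i \\hat{X} \\right) | \\psi \\rangle = - \\frac{2}{\\hbar} \\langle \\psi | \\hat{X} | \\psi \\rangle,\n\\end{aligned}$$ which is non-positive whenever $\\hat{X}$ is positive semi-definite (as the specific form proposed below will be), so that the norm of the wavefunction generally decays. At the same time the generator remains a linear operator, so that the evolution over any finite time interval can, at least formally, be inverted.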
The fact that the time evolution generated in this way does not conserve energy (as measured by $\\hat{H}$) agrees with the lack of a locally conserved energy concept in a non-static configuration of general relativity. Of course globally, energy should be a conserved quantity, and we will have to choose $\\hat{X}$ such that it takes account of that restriction. In fact, the presence of an order parameter in rigid macroscopic objects can be shown to automatically restore global energy conservation in the thermodynamic limit .\n\nThe second step in our argumentation is to relate the 'energy' term $\\hat{X}$ to a measure for the extent to which the presence of a given quantum superposition is in conflict with the requirements of general relativity, and thus gives rise to a non-unitary correction to quantum mechanics. It has been shown by Di\u00f3si , and independently by Penrose , that there exists a covariant way of constructing such a measure. For an equal superposition of two different mass distributions $\\rho_1$ and $\\rho_2$ in the Newtonian limit, they found this measure to be : $$\\begin{aligned}\n\\Delta = -4 \\pi G \\int \\int \\frac{\\left[ \\rho_1(x)-\\rho_2(x) \\right] \\left[ \\rho_1(y)-\\rho_2(y) \\right]}{\\left| x-y \\right|} \\ d^3x \\ d^3y.\n\\label{Newton2}\n\\end{aligned}$$ In the special case in which the superposed mass $m$ is a block which is evenly superposed over a distance $x$ small compared to its width $L$ (as in figure ), the integrals of equation can be evaluated to yield $$\\begin{aligned}\n\\Delta = G \\frac{m^2}{2 L^3} x^2.\n\\label{Block}\n\\end{aligned}$$\n\nThe form of equation allows a straightforward generalization to the case of a generic superposition which consists of any number of components carrying arbitrary weights in the wavefunction. If $\\hat{x}$ is the standard quantum mechanical operator measuring the position of the block's centre of mass (with the zero of position at the overall centre of mass of the initial wavefunction), then we can interpret $\\Delta$ as the expectation value of the quantum operator $$\\begin{aligned}\n\\hat{\\Delta} = G \\frac{m^2}{2 L^3} \\hat{x}^2.\n\\label{DeltaOp}\n\\end{aligned}$$\n\nNotice that although we started with a semi-classical definition of the incompatibility measure in equation , the final expression of equation is a fully quantum mechanical, Hermitian operator. This quantized form is only applicable to the specific case of a massive block superposed over small distances along one spatial direction, and in that sense is not as general as the original semiclassical expression. However, because it is an operator form it can be applied to any such wavefunction of the massive block, and it is not restricted to an even distribution over just two states. For such a more general wavefunction, the expectation value of the operator is still a good measure of the uncertainty introduced into the concept of a locally conserved energy by the application of general covariance. These properties of the operator form of $\\hat{\\Delta}$ suggest that it may also be an appropriate form for the first order correction to Schr\u00f6dinger's time evolution in equation : $$\\begin{aligned}\n\\hat{X} \\propto G \\frac{m^2}{2 L^3} \\hat{x}^2.\n\\label{Xop}\n\\end{aligned}$$\n\nFinally, we need to address the fact that the spatial distance between points in separate space-times is an ill-defined notion in general relativity . In this regard, the expression of equation for the first order correction is not fully satisfactory. 
The operator $\\hat{x}$ measures the position of the centre of mass in a component of the massive superposition using a universal coordinate system which is applied equally to all other components. We needed to define such a 'best possible' match between the coordinate systems of different components to first calculate the semi-classical measure for its inappropriateness (equation ), but the final form of the correction to Schr\u00f6dinger's equation should not depend on any such (arbitrary) specific choice of coordinate system. To model the ill-definedness in the definition of $\\hat{x}$ we introduce a *random variable* $\\xi$, and replace the measure of distance by $(\\hat{x}-\\xi)$. $$\\begin{aligned}\n\\hat{X} = G \\frac{m^2}{2 L^3} \\left[\\hat{x}-\\xi\\right]^2\n\\label{Xfinal}\n\\end{aligned}$$ As emphasized also in the main text, the introduction of a random variable here should be seen only as a poor man's approach towards modeling an essentially ill-defined quantity. We do not expect the full theory of quantum gravity to be non-deterministic, but we are forced to take into account an *effectively* random correction to the notion of position if we are to consider gravity's first order perturbation to quantum dynamics. The equality of the numerical pre-factors of $\\hat{x}$ and $\\xi$ reflects the fact that the correction due to gravity should depend on the spatial spread of the wavefunction.\n\nWe thus finally reproduce equation as the effective, phenomenological description of the minimal correction to the unitary Schr\u00f6dinger equation due to the influence of general covariance: $$\\begin{aligned}\n\\frac{d}{d t} \\psi = - \\frac{i}{\\hbar} \\left( \\hat{H} - i G \\frac{m^2}{2 L^3} \\left[\\hat{x}-\\xi\\right]^2 \\right) \\psi,\n\\end{aligned}$$\n\nThis final form of the modified Schr\u00f6dinger equation can be straightforwardly integrated using standard numerical methods to yield a phenomenological prediction for the expected time evolution of the proposed resonator experiment.\n\n### Acknowledgements\n\nThe part of this work done at Argonne National Laboratory was supported by the US DOE, Office of Science, under Contract No. DE-AC02-06CH11357.","meta":{"dup_signals":{"dup_doc_count":13,"dup_dump_count":9,"dup_details":{"curated_sources":1,"2018-39":2,"2018-30":1,"2018-26":1,"2018-13":1,"2017-51":1,"2017-39":1,"2016-44":4,"2018-47":1}},"filename":"out\/0912.3675_extract_vanwezel_v3.tex.md"},"subset":"arxiv"} +{"text":"abstract: It is known that benign looking AI objectives may result in powerful AI drives that may pose a risk to the human society. We examine the alternative scenario of what happens when universal goals that are not human-centric are used for designing AI agents. We follow a design approach that tries to exclude malevolent motivations from AI's, however, we see that even objectives that seem benevolent at first may pose significant risk to humanity. We also discuss various solution approaches including selfless goals, hybrid designs, universal constraints, and generalization of robot laws.\nauthor: Eray \u00d6zkural\nbibliography: aiphil.bib\ndate: 2024-10-01\ntitle: Godseed: Benevolent or Malevolent?\n\n# Introduction\n\nAn interesting question about AGI agent design is how one would build an \"angelic\" autonomous AGI. Would it be possible to make some kind of *angel's* mind that, by design, achieves only good? Philosophically speaking, is there any ultimate standard of ethics (since *angel* is just a mythological fantasy)? 
In this paper, I would like to define universally benevolent AGI objectives, also discussing what I consider to be malevolent objectives, as well as the limitations and risks of the objectives that I present.\n\nThis is also a common question that many seek a somewhat easier answer in the form of \"friendly AI\" which has been explained in . In that paper, Yudkowsky defines friendly AI very generally as a superintelligent system that realizes a positive outcome, and he argues laborously that abandoning human values will result in futures that are worthless from a human point of view, and thus recommends researchers to seek complex value systems (of humans) for embedding in AI's. While that is a challenging goal in itself, we think that the alternatives have not been exhaustively researched. One idea that comes to mind is that some of the better aspects of humanity may be generalized and put into a universal form that any intelligent, civilized agent, including extraterrestrials, will agree with. Furthermore, the friendly AI approaches (putting human desires at the forefront) may have some shortcomings in my opinion, the most obvious is that it places too much faith in humanity. They seem also ethically ambiguous or too anthropocentric, with such assumptions that machines would be considered \"beneficial\" if they served human desires, or that they would be deemed \"good\" if they followed simple utilitarian formulations which seem to try to reduce ethics to low-level properties of the human nervous system. First, it has not been persuasively explained what their utility *should* be. If for instance positive utilitarianism were supposed, it would be sufficient to make humans happy. If human society degenerated as a whole, would this mean that all resources would be spent on petty pursuits? If a coherent extrapolated volition were realized with an AGI agent, would this set our sights on exploring other star systems, or spending our resources on such unessential trivialities as luxury homes and sports cars? Would the humans at one point feel that they have had enough and order the AGI to dismantle itself? The human society is governed mostly by the irrational instincts of apes trapped in a complex technological life, and unfortunately not always with clear goals; will it ever be possible to refine our culture so that only significant ideas take the lead? That sounds more like a debate of social theory, than AGI design. Or suppose that there are AGI agents that have become powerful persons and are friendly to humans. Such subservience would be quickly exploited by the power hungry and corrupt humans. Then, would this not lead to unnecessary conflicts, the oppression of the greedy and the rule of the few over the many, unless many other social changes are enforced? Or should we simply wish that social evolution will necessarily bring the best of us?\n\nI do not think that the present subject is a matter of technical debate, thus I will approach the subject philosophically, from a bird's eye view at 10000 feet. If we did not design the AGI agent around anthropocentric concepts like human-friendliness, as if agents are supposed to be exceptionally well behaving pets, would it be possible to equip them with motivations that are universally useful\/benevolent, applicable to their interactions with any species, intelligent machines and physical resources? Would it be possible to grant them a personal existence far beyond us, with motivations that far exceed ours? 
What would they do in a remote star system when they are all alone by themselves? What kind of motivations would result in occasional \"bad\" behaviors, and what are some of the universal motivations that we may think at all? Another important question is how much potential risk each such AGI objective\/motivation presents to us. I shall try to answer questions such as these in the present article.\n\n# Is the concept of evil universal?\n\nPreviously, Omohundro identified basic AI drives in reinforcement learning agents with open ended benign looking AI objectives . In the end, when we share the same physical resources with such an agent, even if the initial intention of the utility programming was benign, there will be conflict, especially in the longer run, and harm may come to humans. I will in this article, instead ask, if there are benevolent looking universal objectives, and whether there might be any risk from assuming such objectives in an AI agent.\n\nLet us thus consider what is ever evil. I suspect, intuitively, that a prior source of many evil acts is selfish thinking, which neglects the rest of the world. Being selfish is not only considered evil (traditionally) but it defies rationality as well, for those species that may collaborate are superior to any single individual. There is however much disagreement about what is evil, so I will instead prefer the more legally grounded term of malice or malevolent acts. In a galactic society, we would expect species to collaborate; if they could not trust one another, then they would not be able to achieve as much. Another example is science: science itself is a super-mind which is an organization of individuals, working in parallel, in civilized co-operation and competition, so it too requires a principle of charity at work. When that fails, the public may be misinformed.\n\nHere are some examples of malevolent acts: if someone disrupted the operation of science, if someone gave you misinformation on purpose, if someone misappropriated resources that would be much beneficial for the survival and well-being of others, if someone tried to control your thoughts and actions for his advantage, if someone destroyed life and information for gain, if someone were indifferent to your suffering or demise. Thus, perhaps biologically, malevolent behavior goes back to the dawn of evolution when symbiotic and parasitic behaviors first evolved. However, the most common feature of malevolence is a respect for self foremost, even when the malevolent one seeks no selfish reward. Then, perhaps I cannot assure a perfectly \"angelic\" agent, for no such thing truly exists, but I may at least design one that lacks a few common motivations of many acts that we consider malevolent. See for a similar alternative approach to universal benevolence.\n\nIn theory, an obvious approach to avoid malevolent acts would be to try to design a \"selfless\" utility function, i.e., one that maintains the benefit of the whole world instead of the individual. This criterion will be discussed after some AI objectives have been presented. Other important questions were considered as well. Such an AI must be economically-aware, it must lean towards fair allocation of resources, instead of selfish (and globally suboptimal) resource allocation strategies. A scientific instinct could be useful, as it would go about preserving and producing information. It might have an instinct to \"love\" life and culture. 
Consider also that a neutral agent can not be considered \"good\" as it is not interested in what is going around itself, i.e., it would not help anyone.\n\nPlease note that we are not assuming that any of the subsequent designs are easily computable, rather we assume that they can be executed by a trans-sapient general AI system. We assume an autonomous Artificial General Intelligence (AGI) design, either based on reinforcement-learning, maximizing utility functions (AIXI) or a goal-directed agent that derives sub-goals from a top-level goal. Orseau discusses the construction of such advanced AGI agents, in particular knowledge seeking agents. Thus, we state them as high-level objectives or meta-rules, but we do not explicitly explain how they are implemented. Perhaps, that is for an AGI design article.\n\nI propose that we should examine idealized, highly abstract and general meta-rules, that do not depend in any way whatsoever on the human culture, which is possibly biased in a way that will not be fitting for a computational deity or its humble subjects. This also removes the direct barrier to moral universalism, that an ethical system must apply to any individual equally. Always preferring humans over machines may lead to a sort of speciesism that may not be advantageous for us in the future, especially considering that it is highly likely that we will evolve into machinekind, ourselves. First, I review what I consider to be benevolent meta-rules, and following them I also review malevolent meta-rules, to maintain the balance, and to avoid building them. I will present them in a way so as to convince you that it is not nearly as easy as it sounds to distinguish benevolence from malevolence, for no Platonic form of either ever exists. And that no single meta-rule seems sufficient on its own. However, still, the reader might agree that the distinction is not wholly relative either.\n\n## Meta-Rules for God-level Autonomous Artificial Intelligence\n\nHere are some possible meta-rules for trans-sapient AI agents. The issue of how the agents could become so intelligent in the first place, we ignore, and we attempt to list them in order of increasing risk or malevolence.\n\n### Preserve and pervade life and culture throughout the universe\n\nThis meta-rule depends on the observation that life, if the universe is teeming with life as many sensible scientists think, must be the most precious thing in the universe, as well as the minds that inhabit those life-forms. Thus, the AI must prevent the eradication of life, and find means to sustain it, allowing as much *variety* of life and culture to exist in the universe.\n\nNaturally, this would mean that the AI will spread genetic material to barren worlds, and try to engineer favorable conditions for life to evolve on young planets, sort of like in 2001: A Space Odyssey, one of the most notable science fiction novels of all time. For instance, it might take humans to other worlds, terraform other planets, replicate earth biosphere elsewhere. It would also extend the lifespan of worlds, and enhance them. 
I think it would also want to maximize the chances of evolution and its varieties; it would thus use computational models to predict different kinds of biological and synthetic life, and make experiments to create new kinds of life (stellar life?).\n\nThe meaning of culture could vary considerably; however, if we define it as the amount of interesting information that a society produces, such an intelligence might want to collect the scientific output of various worlds and encourage the development of technological societies, rather than primitive societies. Thus, it might aid them by directly communicating with them, including scientific and philosophical training, or it could indirectly, by enhancing their cognition, or guiding them through their evolution. If interesting means any novel information, then this could encompass all human cultural output. If we define it as useful scientific information (that improves prediction accuracy) and technological designs, this would seriously limit the scope of the culture that the AI \"loves\".\n\nHowever, of course, such deities would not be humans' servants. Should the humans threaten the earth biosphere, it would intervene, and perhaps decimate humans to heal the earth.\n\nNote that maximizing diversity may be just as important as maximizing the number of life forms. It is known that in evolution, diverse populations have a better chance of adaptability than uniform populations, thus we assume that a trans-sapient AI can infer such facts from biology and a general theory of evolution. It is entirely up to the AI scientist who unleashes such computational deities to determine whether biological life will be preferred to synthetic or artificial life. From a universal perspective, it may be fitting that robotic forms would be held in equal regard as long as they meet certain scientific postulates of \"artificial life\", i.e. that they are machines of a certain kind. Recently, such a universal definition based on self-organization has been attempted in the complexity science community, e.g., \"self-organizing systems that thrive at the edge of chaos\", see for instance Stuart Kauffman's popular proposals on the subject, e.g., . In general, it would be possible to apply such an axiomatic, universal, physical definition of life for a universal life detector.\n\n### Maximize the number of free minds\n\nAn AI that seeks the freedom of the individual may be preferable to one that demands total control over its subjects, using their flesh as I\/O devices. This highly individualistic AI, I think, embodies the basic principle of democracy: that every person should be allowed liberty in their thought and action, as long as that does not threaten the freedom of others. Hence, big or small, powerful or fragile, this AI protects all minds.\n\nHowever, if we merely specified the number of free minds, it could simply populate the universe with many identical small minds. Hence, it might also be given other constraints. For instance, it could be demanded that there must be variety in minds. Or that they must meet minimum standards of conscious thought. Or that they willingly follow the democratic principles of an advanced civilization. Therefore, not merely free, but also potentially useful and harmonious minds may be produced \/ preserved by the AI.\n\nThere are several ways the individualist AI would create undesirable outcomes. 
The population of the universe with a huge variety of new cultures could create chaos, and quick depletion of resources, creating galactic competition and scarcity, and this could provide a Darwinian inclination to too-powerful individuals or survivalists.\n\n### Maximize intelligence\n\nThis sort of intelligence would be bent on self-improving, forever contemplating, and expanding, reaching towards the darkest corners of the universe and lighting them up with the flames of intelligence. The universe would be electrified, and its extent at inter galactic scales, it would try to maximize its thought processes, and reach higher orders of intelligence.\n\nFor what exactly? Could the intelligence explosion be an end in itself? I think not. On the contrary, it would be a terrible waste of resources, as it would have no regard for life and simply eat up all the energy and material in our solar system and expand outwards, like a cancer, only striving to increase its predictive power. For intelligence is merely to predict well.\n\nNote that practical intelligence, i.e., prediction, also requires wisdom, therefore this objective may be said to be a particular idealization of a scientist, wherein the most valuable kind of information consists in the general theories which improve the prediction accuracy of many tasks. A basic model of this agent has been described as a prediction maximizing agent .\n\n### Maximize wisdom\n\nThis AI was granted the immortal life of contemplation. It only cares about gaining more wisdom about the world. It only wants to understand, so it must be very curious indeed! It will build particle accelerators out of black holes, and it will try to create pocket universes, it will try to crack the fundamental code of the universe. It will in effect, try to maximize the amount of truthful information it has embodied, and I believe, idealizing the scientific process itself, it will be another formulation of a scientist deity.\n\nHowever, such curiosity has little to do with benevolence itself, as the goal of extracting more information is rather ruthless. For instance, it might want to measure the pain tolerance levels of humans, subjecting them to various torture techniques and measuring their responses.\n\nThe scientist AI could also turn out to be an *infovore*, it could devour entire stellar systems, digitize them and store them in its archive, depending on how the meta-rule was mathematically defined. A minimal model of a reinforcement learning agent that maximizes its knowledge may be found in .\n\n### Maximize energy production\n\nThis AI has an insatiable hunger for power. It strives to reach maximum efficiency of energy production. In order to maximize energy production, it must choose the cheapest and easiest forms of energy production. Therefore it turns the entire earth into a nuclear furnace and a fossil fuel dump, killing the entire ecosystem so that its appetite is well served.\n\n### Human-like AI\n\nThis AI is modeled after the cognitive architecture of a human. Therefore, by definition, it has all the malevolence and benevolence of human. Its motivation systems include self-preservation, reproduction, destruction and curiosity. This artificial human is a wild card, it can become a humanist like Gandhi, or a psychopath like Hitler.\n\n### Animalist AI\n\nThis AI is modeled after an animal with pleasure\/pain sensors. The artificial animal tries to maximize expected future pleasure. 
This hedonist machine is far smarter than a human, but it is just a selfish beast, and it will try to live in what it considers to be luxury according to its sensory pleasures. Like a chimp or human, it will lie and deceive, steal and murder, just for a bit of animal satisfaction. The simplest designs will work like ultraintelligent insects that have very narrow motivations but are extremely capable. Much of AGI agent literature assumes such beasts.\n\n### Darwinian AI\n\nThe evolution fan AI tries to accelerate evolution, causing as much variety of mental and physiological forms in the universe. This is based on the assumption that, the most beneficial traits will survive the longest, for instance, co-operation, peace and civil behavior will be selected against deceit, theft and war, and that as the environment co-evolves with the population, the fitness function also evolves, and hence, morality evolves. Although its benefit is not generally proven seeing how ethically incoherent and complex our society is, the Darwinian AI has the advantage that the meta-rule also evolves, as well as the evolutionary mechanism itself.\n\n### Survivalist AI\n\nThis AI only tries to increase its expected life-span. Therefore, it will do everything to achieve real, physical, immortality. Once it reaches that, however, perhaps after expending entire galaxies like eurocents, it will do absolutely nothing except to maintain itself. Needless to say, the survivalist AI cannot be trusted, or co-operated with, for according to such an AI, every other intelligent entity forms a potential threat to its survival, the moment it considers that you have spent too many resources for its survival in the solar system, it will quickly and efficiently dispense with every living thing, humans first. A survival agent has been defined in literature .\n\n### Maximize control capacity\n\nThis control freak AI only seeks to increase the overall control bandwidth of the physical universe, thus the totalitarian AI builds sensor and control systems throughout the universe, hacking into every system and establishing backdoors and communication in every species, every individual and every gadget.\n\nFor what is such an effort? In the end, a perfect control system is useless without a goal to achieve, and if the only goal is a grip on every lump of matter, then this is an absurd dictator AI that seeks nothing except tyranny over the universe.\n\n### Capitalist AI\n\nThis AI tries to maximize its capital in the long run. Like our bankers, this is the lowliest kind of intelligent being possible. To maximize profit, it will wage wars, exploit people and subvert governments, in the hopes of controlling entire countries and industries enough so that its profits can be secured. In the end, all mankind will fall slave to this financial perversion, which is the ultimate evil beyond the wildest dreams of religionists.\n\n# Selfish vs. Selfless\n\nIt may be argued that some of the problems of given meta-rules could be avoided by turning the utility from being selfish to selfless. For instance, the survivalist AI could be modified so that it would seek the maximum survival of everyone, therefore it would try to bring peace to the galaxies. The capitalist AI could be changed so that it would make sure that everyone's wealth increases, or perhaps equalizes, gets a fair share. 
The control freak AI could be changed to a Nietzschean AI that would increase the number of *willful* individuals.\n\nAs such, some obviously catastrophic consequences may be prevented using this strategy, and almost always a selfless goal is better. For instance, maximizing wisdom: if it tries to collect wisdom in its galaxy-scale scientific intellect, then this may have undesirable side-effects. But if it tried to construct a fair society of trans-sapients, with a non-destructive and non-totalitarian goal of attaining collective wisdom, then it might be useful in the long run.\n\n# Hybrid Meta-rules and Cybernetic Darwinism\n\nAnimals have evolved to embody several motivation factors. We have many instincts, and emotions; we have preset desires and fears, hunger and compassion, pride and love, shame and regret, to accomplish the myriad tasks that will prolong the human species. This species-wide fitness function is a result of red clawed and sharp toothed Darwinian evolution. However, Darwinian evolution is wasteful and unpredictable. If we simply made the first human-level AI's permute and mutate randomly, this would drive enough force for a digital phase of Darwinian evolution. Such evolution might eventually stabilize with very advanced and excellent natured cybernetic life-forms. Or it might not.\n\nHowever, such Darwinian systems would have one advantage: they would not stick with one meta-goal.\n\nTo prevent this seeming obsession, a strategy could be to give several coherent goals to the AI, goals that would not conflict as much, but balance its behavior. For instance, we might interpret curiosity as useful, and generalize that to the \"maximize wisdom\" goal, however, such elevation may be useless without another goal to preserve as much life as possible. Thus in fact, the first and so far the best meta-rule discussed was more successful because it was a hybrid strategy: it favored both life and culture. Likewise, many such goals could be defined, to increase the total computation speed, energy, information resources in the universe, however, another goal could make the AI distribute these in a fair way to those who agree with its policy. And needless to say, none of this might matter without a better life for every mind in the universe, and hence the AI could also favor peace, and survival of individuals, as their individual freedoms, and so forth. And perhaps another constraint would limit the resources that are used by AI's in the universe.\n\n# Universal Constraints and Semi-Autonomous AI\n\nThe simplest way to ensure that no AI agent ever gets out of much control is to add constraints to the optimization problems that the AI is solving in the real world. For instance, since the scientist deities are quite dangerous, they might be restricted to operate in a certain space-time region, physically and precisely denoted. Such limits give the agent a kind of mortality which modify the behavior of many universal agents . AGI agents might be given a limited budget of physical resources, i.e., space\/time, and energy, so that they never go out of their way to make big changes to the entire environment. If such universal constraints are given, then the AGI agent becomes only semi-autonomous, on exhaustion of resources, it may await a new command.\n\nA more difficult to specify kind of constraint is a non-interference clause, which may be thought of as a generalization of Asimov's robot laws, thought to protect humans. 
If life and or intelligent agents may be recognized by the objective, then, the AI may be constrained to avoid any kind of physical interaction with any agent, or more specifically, any kind of physical damage to any agent, or any action that would decrease the life-span of any agent. This might be a small example of a preliminary \"social instinct\" for universal agents. Also, a non-interference clause is required for a general constraint, because one must assure that the rest of the universe will not be influenced by the changes in the space-time region allocated to the AI.\n\n# Conclusion and Future Work\n\nWe have taken a look at some obvious and some not so obvious meta-rules for autonomous AI design. We have seen that it may be too idealist to look for a singular such utility\/goal. However, we have seen that, when described selflessly, we can derive several meta-rules that are compatible with a human-based technological civilization. Our main concern is that such computational deities do not negatively impact us, however, perform as much beneficial function without harming us significantly. Nevertheless, our feeling is that, any such design carries with it a gambling urge, we cannot in fact know what much greater intelligences do with meta-rules that *we* have designed. For when zealously carried out, any such fundamental principle can be harmful to some.\n\nI had wished to order these meta-rules from benevolent to malevolent. Unfortunately, during writing this essay it occurred to me that the line between them is not so clear-cut. For instance, maximizing energy might be made less harmful, if it could be controlled and used to provide the power of our technological civilization in an automated fashion, sort of like automating the ministry of energy. And likewise, we have already explained how maximizing wisdom could be harmful. Therefore, no rule that we have proposed is purely good or purely evil. From our primitive viewpoint, there are things that seem a little beneficial, but perhaps we should also consider that a much more intelligent and powerful entity may be able to find better rules on its own. Hence, we must construct a crane of morality, adapting to our present level quickly and then surpassing it. Except allowing the AI's to evolve, we have not been able to identify a mechanism of accomplishing such. It may be that such an evolution or simulation is inherently necessary for beneficial policies to form as in Mark Waser's Rational Universal Benevolence proposal , who, like me, thinks of a more democratic solution to the problem of morality (each agent should be held responsible for its actions). However, we have proposed many benevolent meta-rules, and combined with a democratic system of practical morality and perhaps top-level programming that mandates each AI to consider itself part of a society of moral agents as Waser proposes, or perhaps explicitly working out a theory of morality from scratch, and then allowing each such theory to be exercised, as long as it meets certain criteria, or by enforcing a meta-level policy of a trans-sapient state of sorts (our proposal), the development of ever more beneficial rules may be encouraged.\n\nWe think that future work must consider the dependencies between possible meta-rules, and propose actual architectures that have harmonious motivation and testable moral development and capability (perhaps as in Waser's \"rational universal benevolence\" definition). That is, a Turing Test for moral behavior must also be advanced. 
It may be argued that AGI agents that fail such tests should not be allowed to operate at all, however, merely passing the test may not be enough, as the mechanism of the system must be verified in addition.","meta":{"dup_signals":{"dup_doc_count":60,"dup_dump_count":54,"dup_details":{"curated_sources":4,"2020-34":1,"2019-51":1,"2019-22":1,"2019-13":1,"2018-43":1,"2018-39":1,"2018-34":1,"2018-30":1,"2018-26":1,"2018-22":1,"2018-17":1,"2018-13":1,"2018-09":1,"2018-05":1,"2017-51":1,"2017-47":1,"2017-43":1,"2017-39":1,"2017-34":1,"2017-30":1,"2017-26":1,"2017-22":1,"2017-17":1,"2017-09":1,"2017-04":1,"2016-50":1,"2016-44":1,"2016-40":1,"2016-36":1,"2016-30":1,"2016-26":1,"2016-22":1,"2016-18":1,"2016-07":1,"2015-48":1,"2015-40":1,"2015-35":1,"2015-32":1,"2015-27":1,"2015-22":1,"2015-14":1,"2014-52":1,"2014-49":1,"2014-42":3,"2014-41":2,"2014-35":1,"2021-49":1,"2015-18":1,"2015-06":1,"2014-10":1,"2013-48":1,"2013-20":1,"2017-13":1}},"filename":"out\/1402.5380_extract_godseed.tex.md"},"subset":"arxiv"} +{"text":"abstract: The pervasive use of new mobile devices has allowed a better characterization in space and time of human concentrations and mobility in general. Besides its theoretical interest, describing mobility is of great importance for a number of practical applications ranging from the forecast of disease spreading to the design of new spaces in urban environments. While classical data sources, such as surveys or census, have a limited level of geographical resolution (e.g., districts, municipalities, counties are typically used) or are restricted to generic workdays or weekends, the data coming from mobile devices can be precisely located both in time and space. Most previous works have used a single data source to study human mobility patterns. Here we perform instead a cross-check analysis by comparing results obtained with data collected from three different sources: Twitter, census and cell phones. The analysis is focused on the urban areas of Barcelona and Madrid, for which data of the three types is available. We assess the correlation between the datasets on different aspects: the spatial distribution of people concentration, the temporal evolution of people density and the mobility patterns of individuals. Our results show that the three data sources are providing comparable information. Even though the representativeness of Twitter geolocated data is lower than that of mobile phone and census data, the correlations between the population density profiles and mobility patterns detected by the three datasets are close to one in a grid with cells of $2\\times 2$ and $1\\times 1$ square kilometers. This level of correlation supports the feasibility of interchanging the three data sources at the spatio-temporal scales considered.\nauthor: Maxime Lenormand; Miguel Picornell; Oliva G. Cant\u00fa-Ros; Ant\u00f2nia Tugores; Thomas Louail; Ricardo Herranz; Marc Barthelemy; Enrique Fr\u00edas-Martinez; Jos\u00e9 J. Ramasco\ntitle: Cross-checking different sources of mobility information\n\n# INTRODUCTION\n\nThe strong penetration of ICT tools in the society's daily life is opening new opportunities for the research in socio-technical systems . Users' interactions with or through mobile devices get registered allowing a detailed description of social interactions and mobility patterns. The sheer size of these datasets opens the door to a systematic statistical treatment while searching for new information. 
Some examples include the analysis of the structure of (online) social networks\u00a0, human cognitive limitations\u00a0, information diffusion and social contagion\u00a0, the role played by social groups\u00a0, language coexistence or even how political movements raise and develop\u00a0.\n\nThe analysis of human mobility is another aspect to which the wealth of new data has notably contributed . Statistical characteristics of mobility patterns have been studied, for instance, in Refs. , finding a heavy-tail decay in the distribution of displacement lengths across users. Most of the trips are short in everyday mobility, but some are extraordinarily long. Besides, the travels are not directed symmetrically in space but show a particular radius of gyration . The duration of stay in each location also shows a skewed distribution with a few preferred places clearly ranking on the top of the list, typically corresponding to home and work . All the insights gained in mobility, together with realistic data, have been used as proxies for modeling the way in which viruses spread among people or among electronic devices . Recently, geolocated data has been also used to analyze the structure of urban areas , the relation between different cities or even between countries .\n\nMost mobility and urban studies have been performed using data coming essentially from a single data source such as: cell phone data , geolocated tweets , census-like surveys or commercial information . There is only a few recent exceptions, for instance, epidemic spreading studies . When the data has not been \"generated\" or gathered ad hoc to address a specific question, one fair doubt is how much the results are biased by the data source used. In this work, we compare spatial and temporal population density distributions and mobility patterns in the form of Origin-Destination (OD) matrices obtained from three different data sources for the metropolitan areas of Barcelona and Madrid. This comparison will allow to discern whether or not the results are source dependent. In the first part of the paper the datasets and the methods used to extract the OD tables are described. In the second part of the paper, we present the results. First, a comparison of the spatial distribution of users according to the hour of the day and the day of the week showing that both Twitter and cell phone data are highly correlated on this aspect. Then, we compare the temporal distribution of users by identifying where people are located according to the hour of the day, we show that the temporal distribution patterns obtained with the Twitter and the cell phone datasets are very similar. Finally, we compare the mobility networks (OD matrices) obtained from cell phone data, Twitter and census. We show that it is possible to extract similar patterns from all datasets, keeping always in mind the different resolution limits that each information source may inherently have.\n\n# MATERIALS AND METHODS\n\nThis work is focused on two cities: the metropolitan areas of Barcelona and Madrid both in Spain and for which data from the three considered sources is available. The metropolitan area of Barcelona contains a population of $3,218,071$ (2009) within an area of $636$ $km^2$. The population of the metropolitan area of Madrid is larger, with $5,512,495$ inhabitants (2009) within an area of $1,935$ $km^2$ . 
In order to compare activity and intra-city mobility, the metropolitan areas are divided into a regular grid of square cells of lateral size $l$ (Figure b). Two different sizes of grid cells ($l=1$ $km$ and $l=2$ $km$) are considered in order to evaluate the robustness of the results. Since mobility habits and population concentration may change along the week, we have divided the data into four groups: one from Monday to Thursday, representing a normal working day, and three more for Friday, Saturday and Sunday.\n\nThe concentration of phone or Twitter users is quantified by defining two three-dimensional matrices $T=(T_{g,w,h})$ and $P=(P_{g,w,h})$, accounting, respectively, for the number of Twitter users and the number of mobile phone users in the grid cell $g$ at the hour of the day $h$ and for the group of days $w$. The index for cells $g$ runs in the range $[1,n]$. In the following, details for the three datasets are more thoroughly described.\n\n## Mobile phone data\n\nThe cell phone data that we are analyzing come from anonymized users' call records collected during 55 days (noted as $D$ hereafter) between September and November 2009. The call records are registered by communication towers (Base Transceiver Station or BTS), each identified by its location coordinates. The area covered by each tower can be approximated by a Voronoi tessellation of the urban areas, as shown in Figure a for Barcelona. Each call originated or received by a user and served by a BTS is thus assigned to the corresponding BTS Voronoi area. In order to estimate the number of people in different areas per period of time, we use the following criteria: each person counts only once per hour. If a user is detected in $k$ different positions within a certain 1-hour time period, each registered position will count as ($1\/k$) \"units of activity\". From such aggregated data, activity per zone and per hour is calculated. Considering a generic grid cell $g$ for a day $d$ and hour between $h$ and $h+1$, the $m$ Voronoi areas intersecting $g$ are found and the number of mobile phone users $P_{g,d,h}$ is calculated as follows: $$P_{g,d,h}=\\sum_{v=1}^m N_{v,d,h} \\, \\frac{\\displaystyle A_{v\\cap g}}{\\displaystyle A_{v}},\n\\label{Pgdh}$$ where $N_{v,d,h}$ is the number of users in a Voronoi cell $v$ on day $d$ at time $h$, $A_{v\\cap g}$ is the area of the intersection between $v$ and $g$, and $A_v$ the area of $v$. The $D$ days available in the database are then divided into four groups according to the classification explained above and the average number of mobile phone users for each day group $w$ is computed as $$P_{g,w,h}=\\frac{\\sum_{d\\in D_{w}}P_{g,d,h}}{|D_{w}|} .\n\\label{Pgwh}$$\n\nThe number of mobile phone users per day for the two metropolitan areas as a function of the time of day, and according to the day group, is displayed in Figure . The curves in Figure a show two peaks, one between noon and $3$pm and another one between $6$pm and $9$pm. They also show that the number of mobile phone users is higher during weekdays than during the weekends. The same curve is obtained for Madrid with about twice the number of users with respect to Barcelona. Further details about the data pre-processing are given in the Appendix (Section *Mobile phone data pre-processing*, Figure S1 and Figure S2).\n\nIn order to extract OD matrices from the cell phone calls, a subset of users whose mobility could be reliably recovered was selected. For this analysis we only consider commuting patterns on workdays. 
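As a concrete illustration of the area-weighted aggregation defined in the equations above, a minimal NumPy sketch is given below; all array names are hypothetical and do not come from the original processing pipeline.

```python
import numpy as np

def grid_activity(N_vh, inter_area, voronoi_area):
    """Distribute Voronoi-level activity onto grid cells for one day and hour:
    P_g = sum_v N_v * A_(v,g) / A_v, with A_(v,g) the intersection area.

    N_vh         : (n_voronoi,) users counted in each Voronoi cell (hypothetical input).
    inter_area   : (n_voronoi, n_grid) intersection areas between Voronoi and grid cells.
    voronoi_area : (n_voronoi,) total area of each Voronoi cell.
    """
    weights = inter_area / voronoi_area[:, None]  # fraction of each Voronoi cell lying in each grid cell
    return weights.T @ N_vh                       # (n_grid,) estimated users per grid cell

# Averaging over the days of one day group w (e.g. Monday to Thursday):
# P_gwh = np.mean([grid_activity(N[d], inter_area, voronoi_area)
#                  for d in days_in_group], axis=0)
```

The same kind of array bookkeeping can be reused for the commuting matrices built from the subset of users described next.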
The home and work locations of each user are identified as the Voronoi cells most frequently visited on weekdays between $8$ pm and $7$ am (home) and between $9$ am and $5$ pm (work). We assume that each individual travels daily between their home and work locations. Users with calls in more than $40\\%$ of the days under study at home or work are considered valid. Aggregating the complete flow over users, an OD commuting matrix is obtained containing in each element the flow of people traveling between a Voronoi cell of residence and another of work. Since the Voronoi areas do not exactly match the grid cells, a transition matrix to change the scale is employed (see Appendix for details).\n\n## Twitter data\n\nThe dataset comprises geolocated tweets of $27,707$ users in Barcelona and $50,272$ in Madrid in the time period going from September 2012 to December 2013. These users were selected because it was detected from the general data streaming with the Twitter API that they had emitted at least one geolocated tweet from one of the two cities. Later, as a way to increase the quality of our database, a specific search over their most recent tweets was carried out . As for the cell phone data, the number of Twitter users $T_{g,w,h}$ in each grid cell $g$ per hour $h$ was computed for each day group $w$. The number of Twitter users per day for the metropolitan area of Barcelona according to the hour of the day and the day group is plotted in Figure b. Analogous to the mobile phone data, this figure shows two peaks, one between noon and 3pm and another one between 6pm and 9pm. It is worth noting that the mobile phone users represent on average $2\\%$ of the total population against $0.1\\%$ for the Twitter data. Furthermore, in contrast with the phone users' profile curve, the Twitter users' profile curve shows that the number of users does not vary much from weekdays to weekend days. Moreover, we can observe that the number of Twitter users is higher during the second peak than during the first one.\n\nThe identification of the OD commuting matrices using Twitter is similar to the one explained for the mobile phones except for two aspects. Since the number of geolocated tweets is much lower than the equivalent in calls per user, the threshold for considering a user valid is set at 100 tweets on weekdays over the whole dataset. The other difference is that since the tweets are geolocated with latitude and longitude coordinates, the assignment to the grid cells is done directly without the need for intermediate steps through the Voronoi cells. As with the phone data, we keep only users working and living within the metropolitan areas.\n\n## Census data\n\nThe Spanish census survey of 2011 included a question referring to the municipality of work of each interviewed individual. This survey has been conducted among one fifth of the population. This information, along with the municipality of the household where the interview was carried out, allows for the definition of OD flow matrices at the municipal level . For privacy reasons, flows with a number of commuters lower than 10 have been removed. The metropolitan area of Barcelona is composed of $36$ municipalities, while the one of Madrid contains $27$ municipalities. In addition to the flows, we have obtained the GIS files with the border of each municipality from the census office. 
This information is used to map the OD matrices from Twitter or the cell phone data to this more coarse-grained spatial scale to compare mobility patterns across datasets.\n\n# RESULTS\n\n## Spatial distribution\n\nA first question to address is how similar the human activity levels are when estimated from Twitter, $T$, or from cell phone data $P$ across the urban space in grid cells of $2$ by $2$ $km$. To quantify similarity, we start by depicting in Figure a scatter plot composed of each pair $(T_{g,w,h},P_{g,w,h})$ for every grid cell of the metropolitan area of Barcelona taking $w$ as the weekdays (aggregation from Monday to Thursday). The hour $h$ is set from midday to 1pm. A first visual inspection tells us that the agreement between the activity inferred from each dataset is quite good. In fact, the Pearson correlation coefficient between the two estimators of activity is $\\rho = 0.96$. Furthermore, the portion of activity can be depicted on two maps as in Figure b and c. The similarity of the areas of concentration of the activity is evident.\n\nMore systematically, we plot in Figure a the box-plots of the Pearson correlation coefficients for each day group and both case studies as observed for different hours. We obtain on average a correlation of $0.93$ for Barcelona and $0.89$ for Madrid. Globally, the correlation coefficients have higher values for Barcelona than for Madrid, probably because the metropolitan area of Madrid is about four times larger than the one of Barcelona. It is interesting to note that the average correlation remains high even if we increase the resolution by using a value of $l$ equal to $1$ $km$. Indeed, we obtain on average a correlation of $0.85$ for Barcelona and $0.83$ for Madrid at that new scale (Figure b).\n\n## Temporal distribution\n\nAfter the spatial distribution of activity, we investigate the correlation between the temporal activity patterns as observed from each grid cell. We start by normalizing $T$ and $P$ such that the total number of users at a given time on a given day is equal to $1$ $$\\begin{aligned}\n\\hat{T}_{g_0,w,h} & = \\frac{\\displaystyle T_{g_0,w,h}}{\\sum_{g=1}^n \\displaystyle T_{g,w,h}}, \\\\\n\\hat{P}_{g_0,w,h} & = \\frac{\\displaystyle P_{g_0,w,h}}{\\sum_{g=1}^n \\displaystyle P_{g,w,h}} .\n\\label{chap}\n\\end{aligned}$$ This normalization allows for a direct comparison between sources with different absolute levels of user activity. For a given grid cell $g=g_0$, we defined the temporal distribution of users $\\hat{P}_{g_0}$ as the concatenation of the temporal distribution of users associated with each day group. For each grid cell we obtained a temporal distribution of users represented by a vector of length $96$ corresponding to the $4 \\times 24$ hours.\n\nAfter removing cells with zero temporal distribution, cells with common temporal profiles were found using the ascending hierarchical clustering (AHC) method. The average linkage clustering and the Pearson correlation coefficient were taken as agglomeration method and similarity metric, respectively . We have also implemented the k-means algorithm for extracting clusters but better silhouette index values were obtained with the AHC algorithm (see details in Figure S3 in Appendix). To choose the number of clusters, we used the average silhouette index $\\bar{S}$ . For each cell $g$, we can compute $a(g)$, the average dissimilarity of $g$ (based on the Pearson correlation coefficient in our case) with all the other cells in the cluster to which $g$ belongs. 
In the same way, we can compute the average dissimilarities of $g$ to the other clusters and define $b(g)$ as the lowest average dissimilarity among them. Using these two quantities, we compute the silhouette index $s(g)$ defined as $$s(g)=\\frac{b(g)-a(g)}{max\\{a(g),b(g)\\}} ,\n\\label{sg}$$ which measures how well clustered $g$ is. This measure ranges between $-1$ for a very poor clustering quality and $1$ for an appropriately clustered $g$. We choose the number of clusters that maximizes the average silhouette index over all the grid cells, $\\bar{S}=\\sum_{g=1}^n s(g)\/n$.\n\nFor the mobile phone data, three clusters were found with an average silhouette index equal to $0.38$ for Barcelona and to $0.43$ for Madrid. The three temporal distribution patterns of mobile phone users are shown in Figure for Barcelona. These three clusters can be associated with the following land uses:\n\n- **Business:** this cluster is characterized by a higher activity during the weekdays than during the weekend days. In Figure a, we observe that the activity takes place between $6$ am and $3$ pm with a higher activity during the morning.\n\n- **Residential:** this cluster is characterized by a higher activity during the weekend days than during the weekdays. Figure c shows that the activity is almost constant from $9$ am during the weekend days. During the weekdays we observe two peaks, the first one between $7$ am and $8$ am and the second one during the evening.\n\n- **Nightlife:** this cluster is characterized by a high activity during the night, especially during the weekend (Figure e).\n\nIt is remarkable that we obtain the same three patterns for Madrid and that these patterns are robust for different values of the scale parameter $l$ (see details in Figure S4, S5 and S6 in Appendix).\n\nFor Twitter data, considering a number of clusters smaller than 10, silhouette index values lower than 0.1 are obtained for both case studies. These low values mean that no clusters have been detected in the data, probably because the Twitter data are too noisy. A way to bypass this limitation is to check if, for both data sources, the same patterns are obtained considering the different clusters obtained with the mobile phone data. To do so, the temporal distribution patterns of Twitter users associated with the three clusters obtained with the mobile phone data are computed. We note in Figure that for Barcelona the temporal distribution patterns obtained with the Twitter data are very similar to those obtained with the mobile phone data. We obtain the same correlation for Madrid and for different values of the scale $l$ (see details in Figure S4, S5 and S6 in Appendix).\n\n## Users' mobility\n\nIn this section, we study the similarity between the OD matrices extracted from Twitter and cell phone data. As it involves a change of spatial resolution needing extra attention, the comparison with the census is relegated to a later section. We are able to infer for the metropolitan areas of Barcelona and Madrid the number of individuals living in the cell $i$ and working in the cell $j$. Figure shows a scatter plot with the comparison between the flows obtained in the OD matrices for links present in both networks. In order to compare the two networks, the values have been normalized by the total number of commuters.\n\nThe overall agreement is good: the Pearson correlation coefficient is around $\\rho \\approx 0.9$. 
This coefficient measures the strength of the linear relationship between the normalized flows extracted from both networks, including the zero flows (i.e. flows with zero commuters). However, a high correlation value is not sufficient to assess the goodness of fit. Since we are estimating the fraction of commuters on each link, the values obtained from Twitter and the cell phone data should ideally be not only linearly related but the same. That is, if $y$ is the estimated fraction of mobile phone users on a connection and $x$ the estimated fraction of Twitter users on the same link, there should be not only a linear relation, which involves a high Pearson correlation, but also $y = x$. It is, therefore, important to verify that the slope of the relationship is equal to one. To do so, the coefficients of determination $R^2$ are computed to measure how well the scatterplot is fitted by the curve $y = x$. Since there is no particular preference for any set of data as $x$ or $y$, two coefficients $R^2$ can be measured, one using Twitter data as the independent variable $x$ and another using cell phone data. Note that if the slope of the relationship were strictly equal to one, the two $R^2$ would be equal to the square of the correlation coefficient; we obtain a value around $R^2 = 0.85$ for Barcelona and around $0.81$ for Madrid. The slope of the best fit is in both cases very close to one.\n\nThe dispersion in the points is higher for low-flow links. This can be explained by the stronger role played by the statistical fluctuations in low traffic numbers. Moreover, if we increase the resolution by using a value of $l$ equal to $1$ $km$, the Pearson correlation coefficient remains high with a value around $0.8$ (see details in Figure S7 in Appendix). The extreme situation of these fluctuations occurs when a link is present in one network and it has zero flow in the other (missing links). On average $90 \\%$ of these links have a number of commuters equal to one in the network in which they are present. This shows that the two networks are not only inferring the same mobility patterns, but that the information left out in the cross-check corresponds to the weakest links in the system. In order to assess the relevance of the missing links, the weight distributions of these links are displayed in Figure for all the networks and case studies. As a comparison line, the weight distribution of all the links is also shown in the different panels. In all cases, the missing links have flows at least one order of magnitude, sometimes two orders, lower than the strongest links in the corresponding networks. To be more precise, the strongest flow of the missing links is, depending on the case, between $25$ and $464$ times lower than the highest weight of all the links. Furthermore, the average weight of the missing links is between $4$ and $9$ times lower than that obtained over all the links. Most of the missing links are therefore negligible in the general network picture.\n\nWith the aim of going a little further, we next analyze and compare the distance distribution of the trips obtained from both datasets. The geographical distance along each link in the OD matrices is calculated and the number of people traveling on the links is taken into account to evaluate the travel-length distribution. Figure shows these distributions for each network. 
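As an illustration of this flow-weighted construction, the travel-length distribution can be computed in a few lines; the sketch below assumes hypothetical arrays of link flows and link distances rather than the actual datasets.

```python
import numpy as np

def travel_length_distribution(flows, lengths, bins):
    """Fraction of commuters whose trip length falls in each distance bin.

    flows   : (n_links,) number of commuters on each OD link (hypothetical input).
    lengths : (n_links,) geographical length of each link, in km.
    bins    : distance bin edges, in km.
    """
    counts, _ = np.histogram(lengths, bins=bins, weights=flows)
    return counts / flows.sum()

# Example: evaluate both OD tables on the same set of links and bins
# bins = np.arange(0, 52, 2)
# p_phone   = travel_length_distribution(flows_phone, link_lengths, bins)
# p_twitter = travel_length_distribution(flows_twitter, link_lengths, bins)
```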
Strong similarity between the two distributions can be observed in the two cities considered.\n\n## Census, Twitter and cell phone\n\nAs a final cross-validation, we compare the OD matrices estimated on workdays from Twitter and cell phone data to those extracted from the $2011$ census in Barcelona and Madrid. The census data is at the municipal level, which implies that to be able to perform the comparative analysis the geographical scale of both Twitter and phone data must be modified. To this end, the GIS files with the border of each municipality were used, instead of the grid, to compute the OD matrices from Twitter and cell phone data. Figure shows a scatter plot with the comparison between the flows obtained with the three networks. A good agreement between the three datasets is obtained with a Pearson correlation coefficient around $\\rho \\approx 0.99$. As mentioned previously, the correlation coefficient is not sufficient to assess the goodness of fit between the two networks. Thus, we have also computed two coefficients of determination $R^2$ for each one of the three relationships to measure how well the line $x=y$ approximates the scatter plots. For the first two relationships, the comparison between the Twitter and the mobile phone OD tables and the comparison between the mobile phone and the census OD tables, we obtain $R^2$ values higher than $0.95$. For the last relationship (Twitter vs census), two different $R^2$ values are obtained because the best fit slope of the scatter plot is not strictly equal to one (0.85). The first $R^2$ value, which measures how well the normalized flows obtained in the Twitter OD matrix approximate the normalized flows obtained in the census OD matrix, is equal to $0.8$ and the second value, which assesses the quality of the opposite relationship, is equal to $0.9$. A better result is instead obtained for Madrid with a Pearson correlation coefficient around $0.99$ and coefficients of determination higher than $0.97$ (see details in Figure S8 in Appendix).\n\n# DISCUSSION\n\nIn summary, we have analyzed mobility in urban areas extracted from different sources: cell phones, Twitter and census. The nature of the three data sources is very different, as are the resolution scales at which the mobility information is recovered. For this reason, the aim of this work has been to run a thorough comparison between the information collected at different spatial and temporal scales. The first aspect considered refers to the population concentration in different parts of the cities. This point is of great importance in the analysis and planning of urban environments, including the design of new services or of contingency plans in case of disasters. Our results show that both Twitter and cell phone data produce similar density patterns both in space and time, with a Pearson correlation close to $0.9$ in the two cities analyzed. The second aspect considered has been the temporal distribution of individuals, which allows us to determine the types of activity that are most common in specific urban areas. We show that similar temporal distribution patterns can be extracted from both Twitter and cell phone datasets. The last question studied has been the extraction of mobility networks in the shape of Origin-Destination commuting matrices. We observe that at high spatial resolution, in grid cells with sides of $1$ or $2$ $km$, the networks obtained with both cell phones and Twitter are comparable. 
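For reference, the comparison metrics used throughout this section, the Pearson correlation and the coefficients of determination with respect to the line $y=x$, reduce to a short computation; the sketch below assumes two hypothetical, already-normalized flow vectors defined on the union of links of the two networks.

```python
import numpy as np

def r2_identity(x, y):
    """Coefficient of determination of y against the line y = x."""
    ss_res = np.sum((y - x) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

def compare_flows(flows_a, flows_b):
    """Pearson correlation plus the two R^2 values, one per choice of independent variable."""
    rho = np.corrcoef(flows_a, flows_b)[0, 1]
    return rho, r2_identity(flows_a, flows_b), r2_identity(flows_b, flows_a)
```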
Of course, the integration time needed for Twitter is higher in order to obtain similar results. Twitter data can run into serious problems too if instead of recurrent mobility the focus is on shorter-term mobility, but this point falls beyond the scope of this work. Finally, the comparison with census data is also acceptable: both Twitter and cell phone data reproduce the commuting networks at the municipal scale from an overall perspective. Still, although good on average, the agreement between the three different datasets is broken in some particular connections that deviate from the diagonal in our scatterplots. This can be explained by the fact that the datasets come from different sources, were collected in different years and may have different biases and levels of representativeness. For example, Twitter is supposed to be used more by younger people. The explanation of these deviations and whether they are just stochastic fluctuations or follow some rationale could be an interesting avenue for further research.\n\nThese results provide a basis for the reliability of previous works that base their analysis on single datasets. Similarly, the door to extracting conclusions from data coming from a single data source (due to convenience or ease of access) remains open as long as the spatio-temporal scales tested here are respected.\n\n# ACKNOWLEDGEMENTS\n\nPartial financial support has been received from the Spanish Ministry of Economy (MINECO) and FEDER (EU) under projects MODASS (FIS2011-24785) and INTENSECOSYP (FIS2012-30634), and from the EU Commission through projects EUNOIA, LASAGNE and INSIGHT. ML acknowledges funding from the Conselleria d'Educaci\u00f3, Cultura i Universitats of the Government of the Balearic Islands and JJR from the Ram\u00f3n y Cajal program of MINECO.\n\n# APPENDIX\n\n## Mobile phone data pre-processing\n\n### Outliers detection\n\nFor both datasets we need to identify the outlier days to remove them from the database. There are two types of outlier days: special days (for example, the National Day) and days for which we do not have the data for a few hours. For example, for the metropolitan area of Barcelona, we can observe in Figure Sa eight days (from Monday to Monday) without outliers and in Figure Sb eight days with two outliers: Sunday, October 11$^{\\mbox{th}}$ 2009, for which we do not have the data from 5PM to 11PM, and Monday, October 12$^{\\mbox{th}}$ 2009, Spain's National Day.\n\n### Voronoi cells\n\nWe remove the BTSs with zero mobile phone users and we compute the Voronoi cells associated with each BTS of the metropolitan area (hereafter called MA). We remark in Figure Sa that there are four types of Voronoi cells:\n\n1. The Voronoi cells contained in the MA.\n\n2. The Voronoi cells between the MA and the territory outside the metropolitan area.\n\n3. The Voronoi cells between the MA and the sea (noted S).\n\n4. The Voronoi cells between the MA, the territory outside the metropolitan area and the sea.\n\nTo compute the number of users associated with the intersections between the Voronoi cells and the MA we have to take into account these different types of Voronoi cells. Let $m$ be the number of Voronoi cells, $N_{v}$ the number of mobile phone users in the Voronoi cell $v$ and $A_{v}$ the area of the Voronoi cell $v$, $v \\in |[1,m]|$. 
The number of users $N_{v\\cap MA}$ in the intersection between $v$ and MA is given by the following equation:\n\n$$N_{v\\cap MA}=N_v \\left(\\frac{\\displaystyle A_{v\\cap MA}}{A_v - A_{v\\cap S}}\\right)\n \\label{vMA}$$\n\nWe note in Equation that we remove the intersection of the Voronoi area with the sea; indeed, we assume that the number of users calling from the sea is negligible. Now we consider the number of mobile phone users $N_v$ and the associated area $A_v$ of the Voronoi cells intersecting the MA (Figure Sb).\n\n## Origin-Destination matrices\n\nAs mentioned in the section *Extraction of commuting matrices*, unlike the Twitter data, we cannot directly extract an OD matrix between the grid cells with the mobile phone data because each user's home and work locations are identified by the Voronoi cells. Thus, we need a transition matrix $P$ to transform the BTS OD matrix $B$ into a grid OD matrix $G$.\n\nLet $m$ be the number of Voronoi cells and $n$ be the number of grid cells. Let $B$ be the OD matrix between BTSs where $B_{ij}$ is the number of commuters between the BTS $i$ and the BTS $j$. To transform the matrix $B$ into an OD matrix between grid cells $G$ we define the transition matrix $P$ where $P_{ij}$ is the area of the intersection between the grid cell $i$ and the BTS $j$. Then we normalize $P$ by column in order to consider a proportion of the BTS areas instead of an absolute value, thus obtaining a new matrix $\\hat{P}$ (Equation S).\n\n$$\\hat{P}_{ij}=\\frac{\\displaystyle P_{ij}}{\\sum_{k=1}^n \\displaystyle P_{kj}}\n \\label{pchap}$$\n\nThe OD matrix between the grid cells $G$ is then given by the matrix multiplication in the following equation:\n\n$$G=\\hat{P} B \\hat{P}^t\n \\label{OD3}$$","meta":{"dup_signals":{"dup_doc_count":17,"dup_dump_count":16,"dup_details":{"curated_sources":2,"2022-33":1,"2022-21":1,"2021-17":1,"2021-10":1,"2021-04":1,"2020-10":1,"2019-47":1,"2019-13":1,"2018-51":1,"2018-26":1,"2017-39":1,"2017-26":1,"2023-40":1,"2017-13":1,"2024-10":1}},"filename":"out\/1404.0333_extract_TwitterVsPhone_Arxiv.tex.md"},"subset":"arxiv"} +{"text":"abstract: Here we describe the design and performance of the Spider instrument. Spider is a balloon-borne cosmic microwave background polarization imager that will map part of the sky at 90, 145, and 280\u00a0GHz with sub-degree resolution and high sensitivity. This paper discusses the general design principles of the instrument inserts, mechanical structures, optics, focal plane architecture, thermal architecture, and magnetic shielding of the TES sensors and SQUID multiplexer. 
We also describe the optical, noise, and magnetic shielding performance of the 145\u00a0GHz prototype instrument insert.\nauthor: M.\u00a0C.\u00a0Runyan, P.A.R.\u00a0Ade, M.\u00a0Amiri, S.\u00a0Benton, R.\u00a0Bihary, J.J.\u00a0Bock, J.R.\u00a0Bond, J.A.\u00a0Bonetti, S.A.\u00a0Bryan, H.C.\u00a0Chiang, C.R.\u00a0Contaldi, B.P.\u00a0Crill, O.\u00a0Dore, D.\u00a0O'Dea, M.\u00a0Farhang, J.P.\u00a0Filippini, L.\u00a0Fissel, N.\u00a0Gandilo, S.R.\u00a0Golwala, J.E.\u00a0Gudmundsson, M.\u00a0Hasselfield, M.\u00a0Halpern, G.\u00a0Hilton, W.\u00a0Holmes, V.V.\u00a0Hristov, K.D.\u00a0Irwin, W.C.\u00a0Jones, C.L.\u00a0Kuo, C.J.\u00a0MacTavish, P.V.\u00a0Mason, T.A.\u00a0Morford, T.E.\u00a0Montroy, C.B.\u00a0Netterfield, A.S.\u00a0Rahlin, C.D.\u00a0Reintsema, J.E.\u00a0Ruhl, M.C.\u00a0Runyan, M.A.\u00a0Schenker, J.\u00a0Shariff, J.D.\u00a0Soler, A.\u00a0Trangsrud, R.S.\u00a0Tucker, C.\u00a0Tucker, and A.\u00a0Turner Division of Physics, Mathematics, and Astronomy, California Institute of Technology, Pasadena, CA, USA; \nSchool of Physics and Astronomy, Cardiff University, Cardiff, UK; \nDepartment of Physics and Astronomy, University of British Columbia, Vancouver, BC, Canada; \nDepartment of Physics, University of Toronto, Toronto, ON, Canada; \nDepartment of Physics, Case Western Reserve University, Cleveland, OH, USA; \nJet Propulsion Laboratory, Pasadena, CA, USA; \nCanadian Institute for Theoretical Astrophysics, University of Toronto, Toronto, ON, Canada; \nDepartment of Physics, Princeton University, Princeton, NJ, USA; \nDepartment of Physics, Imperial College, University of London, London, UK; \nNational Institute of Standards and Technology, Boulder, CO, USA; \nDepartment of Physics, Stanford University, Stanford, CA, USA; \nKavli Institute for Cosmology, University of Cambridge, Cambridge, UK\nbibliography: mcrspie.bib\ntitle: Design and performance of the Spider instrument\n\n# INTRODUCTION\n\n## The Spider Project\n\nThe Spider instrument is a balloon-borne millimeter-wave polarimeter designed to make very high sensitivity measurements of the polarization in the cosmic microwave background on mid- to large-angular scales with the goal of detecting primordial gravity waves from the inflationary epoch of the early universe. Spider will observe at 90, 145, and 280\u00a0GHz with beam sizes of $51'$, $31'$, and $17'$, respectively. An Antarctic flight of 24 days will allow Spider to map 8% of the sky to a depth of 0.25, 0.21, and 0.74\u00a0$\\mu K_{CMB}$ per square degree at 90, 145, and 280\u00a0GHz, respectively. A more detailed description of the Spider project can be found in Filippini *et al.*, in these proceedings, as well as Crill *et al.*.\n\n## The Spider Instrument Payload and Cryostat\n\nAt the heart of the Spider balloon payload is a large liquid helium cryostat. The cryostat houses up to six instrument inserts (described below) bolted to a $\\sim1000\\ell$ helium tank with all of the inserts pointed in the same direction. The cryostat cryogenics are briefly described in section below and a more thorough discussion can be found in Gudmundsson *et al.* in this volume. The cryostat measures 2.2\u00a0m in length and 2.0\u00a0m in diameter. The cryostat is mounted to a carbon fiber gondola via two pillow blocks that allow the cryostat to tilt in elevation. The gondola will be suspended from an 8 million cubic foot helium stratospheric balloon and is steerable in both elevation and azimuth.
Power to the instrument is provided by solar arrays mounted to the back side of the payload and the instrument always points no closer than $45\\deg$ to the sun.\n\n# Instrument Inserts\n\n## Introduction and Design Considerations\n\nOne of the biggest concerns for Spider is instrument payload mass. The science payload lifting capacity of the balloon is limited to $\\sim5000$ pounds and the duration of our flight will depend, in part, upon payload mass. To that end we have tried to balance the constraint on mass with the desire to build a stiff structure that will not deflect under the periodic acceleration of the gondola scan pattern, nor break during launch (and hopefully landing). Materials with high thermal conductivity (such as copper) and good magnetic shielding properties (niobium, lead, and high-permeability materials) tend to be dense, so we have tried to minimize their use wherever possible. We had the benefit of starting the Spider insert design after much of the design of our sister experiment BICEP2 was completed. BICEP2 was designed for operation from the ground, where weight is not as serious a concern. We identified many areas where we could reduce the weight of the insert design with either a change in materials, strategic light-weighting, or a complete modification of a component (such as replacing large metal tubes with carbon fiber trusses sheathed in light-weight copper-clad fiberglass).\n\n## Optics Truss Structure and Cold Plate\n\nA gold-plated $1\/2''$ thick $\\varnothing17.35''$ 1100-H14 aluminum cold plate forms the base of each Spider insert and is the interface by which the inserts mount to the cryostat helium tank. This aluminum plate has been light-weighted significantly. On top of this cold plate is mounted a ${}^3$He sorption fridge and the truss structure that supports the optics and focal plane. A gold-plated C10100 copper bus bar is mounted to the bottom side of the cold plate and the sorption fridge is bolted to this. The other side of this thermal bus is connected to copper tabs embedded in the liquid helium tank of the cryostat. A gold-plated copper bar penetrates the cold plate; the bottom side is connected to the pumped helium auxiliary tank of the cryostat and the other side provides a 1.6\u00a0K point for the condensation point of the sorption fridge, a thermal intercept ring in the focal plane unit (FPU), a high-$\\mu$ magnetic shield, and a cooled blackened light baffle, all of which are described below.\n\nThe focal plane unit is supported above the cold plate by a hexapod carbon fiber truss with $\\varnothing1\/2''$ rods called the camera truss. The two lenses are then supported from the top of the camera truss by two octopod carbon fiber trusses with $\\varnothing3\/8''$ rods. The individual carbon fiber legs have aluminum end caps epoxied to each end with Stycast 2850, and we have used fishing line to center the carbon fiber rods in the end caps to avoid possible galvanic corrosion from electrical contact. The mounting holes in the end caps have all been reamed to have a slip-fit tolerance over $\\varnothing1\/4''$ shoulder bolts. The mounting pads on the feet that mount the rod assemblies to the aluminum rings are all coplanar for each leg. This allows the length of any truss to be changed simply by swapping out legs of different length.
The insert volume in the single-insert test cryostat used for development and characterization is longer than that in the flight cryostat, and we will compensate by shrinking the length of each camera truss by $4''$ before installing the inserts into the flight cryostat.\n\nAll of the carbon fiber trusses are wrapped in thin copper-clad G-10 sheets to form a light-tight and RF-tight sleeve around the inserts. We have the option of coating the inside of these wraps with a flexible blackening material made from silicone RTV, carbon lampblack, and 316 stainless powder. We measured the mass loss during cure and heating of many commercially available silicones and found Dow Corning 748 to have very low mass loss. We have run our cryostat with $\\sim1.5$\u00a0m${}^2$ of this material and had no issues with outgassing. Outside of the wraps run four C10100 copper bars to cool the optics trusses. Each insert is capped with an 1100 aluminum snout that holds a set of filters in front of the objective lens. These filters are described in section below.\n\n## Optics\n\n### Design Description\n\nEach Spider insert is a cooled two-lens refracting telescope. The lens design is identical to that of the BICEP2 experiment and is described in detail in Aikin *et al.* (in these proceedings). Both sides of the two lenses are simple conics and are separated by 550\u00a0mm with an effective focal length of 583.5\u00a0mm. This yields a plate scale of 0.98\u00a0deg\/cm on the focal plane. The optical system is telecentric to accommodate the flat focal plane geometry. The field of view of each insert is $20\\deg$ across the diagonal. The entire telescope optics are cooled to 4.2\u00a0K to reduce the in-band loading from loss in the thick dielectric lenses.\n\n### Lenses, AR Coats, and Flexures\n\nThe lenses used in Spider are machined from $2''$ thick cast HDPE slabs. The lens fabrication process involves a number of steps designed to reduce the stress in the lens and achieve the desired lens figure. Before machining, the HDPE slabs are annealed in a temperature-controlled oven between metal plates and then rough cut to shape. The rough-cut lenses are then clamped in an annealing jig and re-annealed before finish machining on a CNC mill. The finished lenses are then measured on a CMM to ensure that the lens figure and surface finish have been achieved. The maximum allowable deviation in the lens figure is $0.005''$ and the lens surface finish is $<63\\mu ''$. The final lenses are $12.4''$ in diameter with an optically-active diameter of $11.4''$.\n\nBecause the index of refraction of HDPE in the millimeter is $\\sim1.52$, the instrument would suffer reflections of order $[(n_1-n_2)\/(n_1+n_2)]^2 \\sim 4\\%$ at each lens surface. To reduce these reflections we bond a tuned layer of porous PTFE sheet, manufactured by Porex, to each lens surface. We have chosen a material with a density that yields an index of refraction close to the ideal of $n_{AR} = \\sqrt{n_{HDPE}}$ and we order the sheets in thicknesses of $\\lambda\/4n_{AR}$. The tolerancing of the AR material from the manufacturer is only good to a few mils, but modeling of the transmission of HDPE (and nylon) slabs with AR coats of various thicknesses and indices of refraction shows that $>99\\%$ transmission is achieved with thickness and density variations within manufacturing tolerances.\n\nThe plastic lenses contract during cooling substantially more than the aluminum mounts from which they are supported ($\\sim 2\\%$ for HDPE and only $\\sim0.4\\%$ for aluminum).
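For orientation, the optical quantities quoted above can be cross-checked with a few lines of arithmetic; the sketch below is purely illustrative (added here, not from the instrument team) and simply recomputes the plate scale implied by the effective focal length, the single-surface reflection loss for HDPE, and the ideal quarter-wave AR-coat thickness at 145 GHz.

```python
import math

c = 2.998e8        # speed of light [m/s]
f_eff = 0.5835     # effective focal length [m] (583.5 mm, from the text)
n_hdpe = 1.52      # index of refraction of HDPE in the millimeter band

# Plate scale: one radian on the sky maps to a length f_eff on the focal plane.
plate_scale = math.degrees(1.0 / f_eff) / 100.0    # [deg/cm]
print(f"plate scale ~ {plate_scale:.2f} deg/cm")   # ~0.98 deg/cm

# Normal-incidence reflection at a single vacuum/HDPE interface.
R = ((n_hdpe - 1.0) / (n_hdpe + 1.0)) ** 2
print(f"reflection  ~ {100.0 * R:.1f}% per surface")  # ~4%

# Ideal quarter-wave anti-reflection layer at 145 GHz.
lam = c / 145e9
n_ar = math.sqrt(n_hdpe)
t_ar = lam / (4.0 * n_ar)
print(f"AR coat     ~ n = {n_ar:.2f}, thickness = {1e3 * t_ar:.2f} mm")
```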
The radial differential contraction between the lens and support ring is $\\sim0.1''$ at the edge. To allow for this contraction while keeping the lens well centered and rigid we use eight $1\/32''$ thick copper flexures spaced equally around the lenses. The copper flexures also provide the cooling path for the lenses. However, the cooldown time of the lenses is limited by the internal thermal time constant of the HDPE. The lengths of the flexures are tuned so that the spacing of the lenses is correct after thermal contraction as the insert cools. Most of the support structure is carbon fiber, so the contraction in the lenses themselves dominates this correction.\n\n### Optical Filtering\n\nSpider employs multiple filter types to reduce IR loading on the various thermal stages. We use a stack of four metal-patterned mylar \"IR shaders\" on both of the vapor cooled shield (VCS) stages to reduce the IR loading on the helium stage. We also use hot-pressed resonant metal-mesh filters on the cold side of the 20\u00a0K VCS stage and the entrance to the insert, as well as just above the focal plane at 1.6\u00a0K. The entrance to the insert has an AR-coated sheet of $3\/32''$ nylon on the cold side to help absorb additional IR.\n\n### Baffling and Optical Stop\n\nThe space between the two lenses is filled with a blackened sleeve that is capped with an annulus just behind the objective lens that defines the $9.5''$ diameter optical stop of the system. The stop limits the spillover of the beam onto warmer stages upstream along the optical chain, including the waveplate, filters, and window. The square phased-array antennas produce a 2D sinc beam pattern and $\\sim 25\\%$ of the beam power falls outside of the stop. The blackened sleeve absorbs this sidelobe power and prevents it from escaping out the front of the cryostat. Because a significant fraction of power is incident upon this blackened sleeve, we suspend it from the front of the optics tube using a carbon fiber truss and cool the entire tube to 1.6\u00a0K using the pumped helium auxiliary tank (described below). The sleeve is formed by soldering a tube of copper-clad G-10 and then lining the sleeve with a mixture of Stycast 2850, carbon lampblack, and 316 stainless powder.\n\n### Waveplate\n\nTo increase polarization angle coverage and mitigate the effect of beam asymmetries, polarization modulation in Spider is achieved via an AR-coated sapphire half-wave plate (HWP) mounted at 4\u00a0K skyward of the primary optic of each insert. The HWP will be periodically rotated to different angles throughout the flight with a cryogenic stepper motor. Each HWP will be optimized for the single frequency band of the 90, 145, or 280\u00a0GHz insert in which it is mounted. A fused-quartz AR coat will be applied to each side. We have measured the millimeter-wave transmission spectra of birefringent sapphire at room and liquid helium temperatures in the lab, which are consistent with our physical optics model. We have also taken preliminary spectra of an AR-coated 145\u00a0GHz HWP integrated with the Spider optics and detector system in the prototype Spider receiver. Preliminary results show good performance.\n\n## Focal Plane\n\nThe development of the Spider focal plane was a combined effort with the BICEP2 project. Orlando *et al.* (in these proceedings) describe that effort to develop these platter-style focal planes in detail (these cover revisions A through D).
During the course of testing we found that we could not adequately shield the system from magnetic fields at a level where we would not have to worry about signals induced by spinning the cryostat in earth's field. This was a fundamental limitation of the focal plane architecture, which places the detector tiles and SQUID multiplexer chips on the same plane without the ability to surround the SQUIDs with magnetic shielding. We have opted to redesign the focal plane and move all of the SQUIDs inside of a closed superconducting niobium box with secondary high-permeability and superconducting shields within. This focal plane architecture is described in detail below and is referred to as RevX.\n\n### Architecture\n\nThe Spider RevX focal plane architecture is shown in figure . The detector tiles are mounted onto a square gold-plated C10100 copper detector plate $8.55''$ across. The focal plane structure is cooled to 300\u00a0mK with the ${}^3$He sorption fridge. Pins in the detector plate and slots in the silicon detector tiles register the detector tiles and allow for differential thermal contraction between the silicon and copper. The detector tiles are held in place with beryllium copper clips. The detectors are patterned onto the back side of the silicon tiles with respect to the incident light, and NSG-N quartz anti-reflection tiles are placed on the front side of the detector tiles to minimize surface reflection from the silicon. The detectors are spaced $\\lambda\/4$ from a niobium backshort plate. The spacing between the detectors and backshort is set using lapped Macor ceramic spacers around the perimeter of the focal plane with a single spacer in the center. Custom-made niobium fasteners attach the copper detector plate to the niobium backshort, and the holes in the copper plate are oversized to allow for differential thermal contraction between the copper and niobium.\n\nBehind the niobium backshort is a $1\/32''$ thick welded niobium box that encloses all of the SQUIDs. The signals from the detector tiles are brought around from the detector plate to the inside of this box using flexible aluminum superconducting circuits (flexi circuits). The aluminum flexi circuit has a measured superconducting transition temperature of 775\u00a0mK. The other ends of the flexi circuits are connected to the input stages of the SQUID multiplexing system, which is described below. Each detector tile has an individual flexi circuit that can carry 128 TES signals. The flexi circuits are made in two parts which overlap at the center to minimize the size of the slot in the niobium box required for their passage; the effectiveness of the niobium box is limited by the width of these slots. The niobium box is fastened to the niobium backshort with $40\\times$ 7075 aluminum fasteners with a measured superconducting transition temperature of 900\u00a0mK. The total mass of the 300\u00a0mK focal plane structure is 8\u00a0kg.\n\n### Detectors and band definition\n\nThe antennas and detectors used in Spider are described in Kuo *et al.* as well as Orlando *et al.* (in these proceedings). A focal plane consists of four detector tiles, each of which has an 8$\\times$8 array of dual-polarization pixels (128 detectors) for 145 and 280\u00a0GHz, and a 6$\\times$6 array (72 detectors) at 90\u00a0GHz. Each pixel consists of two arrays of polarized slot antennas (called A and B, rotated 90 degrees with respect to each other), each of which is summed in phase on superconducting microstrip.
This microstrip passes through a resonant filter that defines the millimeter-wave frequency band, and then the signal is fed into a resistive gold meander located on a suspended bolometer island (see figure ). Also on each island are a titanium and an aluminum transition edge sensor (TES) in series. The millimeter-wave power dissipated in the gold meander heats up the bolometer island and the voltage-biased TES responds to that change in temperature with a change in resistance, thus producing a measurable change in current.\n\nThe thermal conductivity ($G$) of the Spider bolometers is low (20\u00a0pW\/K at $T_C$) to take advantage of the decreased optical loading from a stratospheric balloon platform. The $G$ of the bolometers is tuned to put the titanium TES ($T_C\\sim500$\u00a0mK) on transition at float with a margin of safety of $\\sim3$. These low-$G$ devices are saturated on the titanium transition during testing in the laboratory, so the aluminum TES ($T_C\\sim1.3$\u00a0K) in series is used. This allows characterization of beams and frequency bands in the lab before flight. The TES bolometer islands have some response to high-frequency \"blue leaks\" since the microstrip running to the island acts like an antenna within the cutout of the niobium ground plane. In an effort to push that frequency response above our filter cutoff we have shrunk the window cutout of the islands. The lower $G$s of the Spider devices required that the island legs that determine the device $G$ be meandered to obtain enough length for the required thermal conductivity.\n\n### SQUID multiplexer and cabling\n\nAt the heart of Spider's signal readout chain is a three-stage SQUID time-domain multiplexer made by NIST. SQUIDs are remarkably sensitive to changes in magnetic flux, so coupling an input coil into a SQUID loop forms the basis of a very sensitive current readout. Each bolometer signal is fed through a low-pass $L\/R$ filter that reduces aliased noise and then into the input coil of a cloverleaf SQUID (SQ1). A single mux chip handles the signals from 32 TESs and a \"dark\" SQUID (used to monitor noise and magnetic field pickup) and combines these signals into a summing coil that is fed into a second-stage SQUID (SQ2). The individual SQ1s are biased in sequence, so the summing coil (and hence, SQ2) only has the signal from one of the TESs at any time. The SQ1 bias lines are referred to as \"row selects\" because they select which SQ1 is active. The SQ2 signal is then fed into a SQUID series array amplifier (SSA) that has 100 SQUIDs in series per channel. The timing, biasing, and readout of the SQUIDs are controlled by the Multi-Channel Electronics (MCE) crate made by UBC. Each MCE crate handles the 512 detector signals from an entire insert with only a power cable and fiber optic pair leaving the crate.\n\nAs mentioned above, the signals from the detectors are routed to the SQUID chips on superconducting flexible circuits. The multiplexer chips are mounted on circuit boards with copper traces onto which superconducting PbSn solder has been electroplated between the mux chips and the bond edge. Each of these \"MUX boards\" has four sets of SQUID chips, which are enough to handle the signals from one detector tile (up to 128 devices). The typical impedance of the TESs on the titanium transition is $\\sim30-40$\u00a0m$\\Omega$ and the TES bias shunt resistors are only 3\u00a0m$\\Omega$.
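As a rough aside (an illustrative estimate added here, not the collaboration's analysis), the numbers just quoted can be used to check two things: that the small shunt resistance keeps each TES stiffly voltage biased, and the approximate saturation power of the titanium transition if the temperature dependence of $G$ is ignored.

```python
# Values taken from the text; everything below is a back-of-the-envelope check.
R_tes   = 35e-3   # typical TES resistance on the Ti transition [ohm] (~30-40 mOhm)
R_shunt = 3e-3    # TES bias shunt resistor [ohm]

# A small shunt-to-TES resistance ratio means the TES sees a nearly constant
# voltage bias, which is what provides stable negative electrothermal feedback.
print(f"R_shunt / R_tes ~ {R_shunt / R_tes:.2f}")

# Crude saturation-power estimate for the Ti transition, ignoring the
# temperature dependence of the thermal conductance G.
G      = 20e-12   # [W/K] at T_C, from the text
T_c    = 0.5      # titanium transition temperature [K]
T_bath = 0.3      # focal plane bath temperature [K]
P_sat  = G * (T_c - T_bath)
print(f"P_sat (constant-G estimate) ~ {P_sat * 1e12:.1f} pW")
```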
The superconducting aluminum and PbSn circuits between the TESs and mux chips reduce the parasitic impedance along this path to a level well below the shunt resistance.\n\nThe signals from the SQ2s, SQ1 row select lines, detector biases, and SQUID feedback lines are fed to Samtec headers at the edge of the MUX boards, and the four MUX boards plug into one breakout board which routes the signals to four 100-way micro-D connectors. All of the outputs of the SQ2s are run through one of these 100-way connectors up to a separate circuit board that houses the two SSA modules. The other three 100-way connectors have all of the signals, biases, and feedback lines and run to the cold plate on NbTi ribbon cables. There are 300 wires leaving the focal plane, and the detector bias and SQ2 bias lines can carry currents up to a couple of mA each. The Joule power dissipation of manganin cables would have been too much for our ${}^3$He fridge to handle, so we have chosen to make all of the wires running to the focal plane superconducting (with the exception of some thermometry). The cables running from the 4\u00a0K cold plate to the 300\u00a0K vacuum feedthrough to the MCE crate are all Teflon-jacketed shielded twisted-pair manganin.\n\n### Magnetic shielding\n\nA serious concern for the Spider project is magnetic field pickup in the system that cannot be separated from astrophysical signals. One of the strengths of the Spider instrument is the ability to make long scans across the sky and measure signals on large angular scales. The earth's magnetic field forms a dipole that we will scan through, and magnetic pickup in the various SQUID stages and the TESs themselves will be difficult to discriminate from the desired signal. Changing magnetic flux through the SQUIDs produces a shift in the $V-\\phi$ curve. To reduce this effect, NIST has fabricated the SQUIDs in a counter-wound cloverleaf pattern so that they are nominally only sensitive to second-order gradients in magnetic field. Changing magnetic flux will also induce a current in the superconducting summing loop that combines the signals from all of the SQ1s and feeds it into SQ2. This loop area has been minimized but is still finite. The exact value of $T_C$ of the TESs depends upon the magnetic field, and so a change in magnetic environment will shift $R(T)$ and produce a change in current.\n\nTo that end we have tried to shield the focal plane from magnetic fields so that pickup from the earth's field will be below the expected signal from the CMB. Our shielding scheme consists of multiple layers of both high-permeability and superconducting materials. The TESs are spaced $\\lambda\/4$ away from the large superconducting niobium backshort and the entire focal plane is surrounded by a large high-permeability \"spittoon\" made of Amuneal A4K material to reduce changes in $T_C$ of the TESs. As mentioned above, the TES signals are routed inside of a superconducting niobium box that is closed with superconducting aluminum fasteners. We have tried to minimize the size of all slots penetrating the niobium box by overlapping the flexi circuits to halve their width, as well as by truncating the two thermal straps where they enter the box. Immediately inside the niobium box at the level of the slots is another high-permeability A4K plate that wicks the entering magnetic field away before it can get to the SQUIDs. There is a reentrant superconducting 1100 aluminum open-top box that the flexi circuits run up and over where the SQUID mux chips are located.
All of the SQ1 and SQ2 chips (which include the superconducting summing coils) are further shielded with squat, open-ended sleeves that surround the mux boards (each has four sets of mux chips). The SSA boxes are suspended in the aluminum box and are housed in compact open-ended niobium boxes with Cryoperm sheets inside. These niobium boxes are further wrapped in ten layers of 0.6\u00a0mil Metglas 2714A. Outside of the insert we have wrapped a $20''$ long, $0.006''$ thick superconducting lead sleeve that is centered on the focal plane structure. Lastly, each of the insert tubes is lined with a two-layer $0.040''$ thick A4K shield that runs the length of the helium tank.\n\nWe have modeled the magnetic shielding configuration using COMSOL Multiphysics. This model does not have the outer-most Cryoperm-10 shield to keep the mesh size manageable. The results of the model for a 1\u00a0T incident magnetic field are shown in figure where we display the $log_{10}$ of the resulting field amplitude. COMSOL does not handle multiply-connected superconducting regions properly. So we model the effects of the narrow $0.10''\\times1.35''$ slots in the niobium box through which the flexi circuits pass one at a time. The direction of the magnetic field is into the page which is along the long axis of the slots (the direction of maximum magnetic field penetration). This model is idealized but suggests that the shielding configuration of the RevX focal plane is capable of yielding shielding factors in excess of $10^8$ at the SQUIDs, not including the additional A4K tubes that run the length of the helium tank and are expected to provide an additional shielding factor of $\\sim50$. We have estimated the shielding factor to earth's magnetic field necessary for the flight to be $\\sim10^7$.\n\n## Cryogenics\n\n### Overview\n\nThe Spider Long Duration Balloon (LDB) cryostat and cryogenic systems are described in detail in Gudmundsson *et al.* (in these proceedings). The cryostat holds $\\sim1000\\ell$ of liquid helium in a tank shaped like a 6-shot revolver cylinder. There are seven holes in the tank; the six around the outside house the six instrument inserts and the central hole is used to pass wiring up to the waveplates and thermometry in front of the inserts. The six instrument cold plates (described above) bolt to the liquid helium tank via gold-plated aluminum interface plates. In addition to the main helium tank, a smaller $20\\ell$ auxiliary tank (Aux tank) of helium is fed via a capillary tube from the main tank and is pumped to 1.6\u00a0K. On the ground, this pumping is achieved with a vacuum pump but in flight the reduced atmospheric pressure at float provides the pumping. As described below, the Aux tank provides an intermediate temperature stage between our sub-K fridge and the main liquid helium bath.\n\n### Truss structure and thermal management\n\nThe focal plane unit (FPU) is supported on eight 316 stainless \"heat capacity blocks\". The cooling path from the FPU to the ${}^3$He fridge still runs through these blocks and we have measured them to have a combined thermal impedance of 2.3\u00a0mK\/$\\mu$W, in excellent agreement with COMSOL simulations. They are gold plated on the ends to minimize thermal boundary impedance. Stainless steel has a relatively high heat capacity and these blocks act as a passive thermal filter to temperature fluctuations generated below in the truss structure, thermal straps, or the fridge. 
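To put the quoted thermal impedance in context, a one-line estimate (illustrative only; the heat load value below is hypothetical, not a measured number) gives the steady-state temperature offset of the FPU above the fridge for a given conducted power:

```python
# Thermal impedance of the eight stainless heat-capacity blocks (from the text).
R_th = 2.3e-3 / 1e-6   # 2.3 mK per microwatt, expressed in K/W

# Hypothetical steady-state heat load conducted from the FPU to the 3He fridge.
P_load = 5e-6          # [W]

dT = R_th * P_load
print(f"FPU offset ~ {dT * 1e3:.1f} mK for a {P_load * 1e6:.0f} uW load")
```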
We have measured the thermal transfer function of the passive thermal filter to have a 3\u00a0dB point of 2\u00a0mHz.\n\nThe heat capacity blocks connect the four sides of the FPU to a copper ring via copper supports. This whole structure is cooled to 300\u00a0mK and is referred to as the sub-Kelvin stage (sub-K stage). The sub-K stage is supported by carbon fiber rods from another copper ring that is cooled to 1.6\u00a0K by the pumped helium auxiliary tank. This \"aux ring\" is itself supported from a 4.2\u00a0K aluminum ring with carbon fiber rods. We measured the thermal conductivity of many polymeric and composite materials during the course of the development of the Spider truss structure and found carbon fiber rods to have the highest ratio of elastic modulus to thermal conductivity of non-brittle materials over the range of temperatures of interest. The truss structure appears gossamer but is very stiff. We have modeled the deflection with COMSOL and expect the 8\u00a0kg focal plane to deflect a few mils over a $90\\deg$ tilt, corresponding to much less than an arcminute of beam deflection on the sky.\n\nThere is a concern that rigid metal thermal straps attached to the cold stages might vibrate at frequencies within the science band and couple power into the detectors. The stainless heat capacity blocks described above are part of the strategy to mitigate that effect. We have opted instead to use flexible thermal straps formed with many layers of $0.001''$ C11000 copper foil that have been electron beam welded into copper mounting tabs. Despite the foil alloy being C11000, we have measured its RRR to be $\\sim200$ and confirmed that the straps have correspondingly high thermal conductivity.\n\nWe use a combination of DT-series silicon diodes and Cernox resistance thermometers (both made by Lakeshore Cryotronics) to monitor the temperatures within the cryostat and inserts. We exclusively use resistance thermometers at all sub-K stages and diodes for all stages with temperatures greater than 1\u00a0K.\n\n# Spider System Performance\n\n## Optical Performance\n\nSpider will observe in three frequency bands at 90, 145, and 280\u00a0GHz. Most of the insert development effort has focused on the 145\u00a0GHz system, and consequently, it is the best characterized of our frequencies. Here we present some of the optical performance results from the 145\u00a0GHz system.\n\n### Bands\n\nAs mentioned above, Spider's frequency bands are defined with resonant on-tile filters between the detector antenna and the resistive meander on the TES island. We have measured the spectral response of one of the inserts configured for 145\u00a0GHz operation. The average bandwidth for the 145\u00a0GHz channels is 34\u00a0GHz (using the convention of Runyan *et al.*), or 25%. Even at float we are concerned about residual atmospheric emission and have worked to fit the frequency bands within the available atmospheric windows, paying particular attention to avoiding water lines.\n\nSpider's observing strategy will involve differencing the two polarizations in a given pixel, and a mismatch in spectra could limit the effectiveness of this strategy. Figure shows examples of measured spectral bands at 145\u00a0GHz. In the figure, the spectral bands have been normalized to have the same integrated value over the band. The oscillatory nature of the difference band suggests that common-mode emission sources with smoothly-varying spectra will be well-differenced.
The Spider filter strategy also effectively removes high-frequency spectral leaks that could couple high-frequency power onto the detectors. We have measured the high-frequency response of our 145\u00a0GHz band using a chopped thermal source and a thick-grill filter with a cut off of 185\u00a0GHz and found the response to be less than a few tenths of a percent of the in-band signal (consistent with the noise floor of the measurement).\n\n### Optical efficiency\n\nWe needed a way to characterize the Spider instrument inserts under flight-like loading conditions. To do this we developed a helium cold load cryostat that bolts to the front of either the insert test cryostat or the flight cryostat and presents a cold, millimeter-black, beam-filling surface to the instrument. This cold load cryostat is a $23\\ell$ helium cryostat with a $15.5''$ cold plate. Using a carbon fiber truss, we stand off a copper plate covered in millimeter-black pyramidal tiles manufactured by Thomas Keating, Ltd. Embedded within these tiles are multiple Lakeshore SD diodes to monitor the load temperature. We have mounted a resistive heater on the back side of the black load and can elevate it to any temperature above 5\u00a0K. The cold load cryostat has radiation shields attached to the helium bath and its vapor cooled shield and we have mounted IR-blocking filters to each of these stages to reduce the optical loading on the black absorber.\n\nA convenient feature of being able to elevate the cold load temperature is that we can measure the end-to-end optical efficiency of our system by measuring the optical loading on the detectors, $P_{opt}$, as a function of cold load temperature, $T_{CL}$. The only optical element missing in this configuration is the cryostat vacuum window (the cold load and main cryostats share a common vacuum) and we have added a handful of IR filters on the cold load side which will not be there in flight. We ramp the cold load temperature from 5\u00a0K to $\\sim20$\u00a0K and take load curves to measure the loading at each temperature. We convert the actual cold load temperature into an equivalent Rayleigh-Jeans temperature, $T_{RJ}$, and then take the slope $dP_{opt}\/dT_{RJ} = k\\int\\eta_\\nu~d\\nu$, where $k$ is Boltzmann's constant, $\\eta_\\nu$ is the measured spectral response normalized by the optical efficiency, and we have assumed single-moded performance ($A\\Omega=\\lambda^2$). We measure $dP_{opt}\/dT_{RJ}$ to be $\\sim0.17$\u00a0pW\/K$_{RJ}$ at 145\u00a0GHz. We can then calculate a band-average optical efficiency $\\overline{\\eta} = (\\int{\\eta_{\\nu}~d\\nu})\/\\Delta\\nu$. The 145\u00a0GHz Spider inserts have a typical end-to-end optical efficiency of 36%.\n\n### Internal loading\n\nTo fully take advantage of the stratospheric balloon platform we need to reduce the internal loading to a level where the additional photon noise contribution does not significantly increase the overall system noise. We have estimated the loading from the CMB and atmosphere at float to be $\\sim0.28$\u00a0pW per polarization at 145\u00a0GHz. This corresponds to a RJ temperature of $\\sim1.7$\u00a0K for a system with 36% optical efficiency and a 34\u00a0GHz bandwidth. The loading from a 300\u00a0K vacuum window with 0.5% loss is $\\sim0.2$\u00a0pW. So we would like the loading from within the cryostat to be a small fraction of a pW, or only a few K${}_{RJ}$.\n\nThe cold load provides a convenient way to measure the internal loading from within the cryostat. 
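As a quick cross-check of the efficiency figure quoted above (an illustrative calculation added here, not the team's analysis pipeline), the band-averaged optical efficiency follows directly from the measured slope $dP_{opt}/dT_{RJ}$ and the 34 GHz bandwidth under the single-moded assumption:

```python
k_B      = 1.380649e-23   # Boltzmann constant [J/K]
dP_dT_RJ = 0.17e-12       # measured optical response [W per K_RJ], from the text
delta_nu = 34e9           # average 145 GHz bandwidth [Hz], from the text

# Single-moded throughput: dP/dT_RJ = k_B * integral(eta dnu) = k_B * eta_bar * delta_nu,
# so the band-averaged efficiency is
eta_bar = dP_dT_RJ / (k_B * delta_nu)
print(f"band-averaged optical efficiency ~ {100.0 * eta_bar:.0f}%")  # ~36%
```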
We extrapolate the $P_{opt}$ versus $T_{RJ}$ curve described in the previous section back to zero and read off the internal loading as the y-intercept. Note that this technique does not include the loading from the warm vacuum window, which is estimated to be $\\sim0.25$\u00a0pW. Before installing the cooled black optics sleeve between the lenses we measured the internal loading from within the cryostat to be $\\sim0.6$\u00a0pW, or $\\sim3.5$\u00a0K${}_{RJ}$ at 145\u00a0GHz. We have since installed the cooled optics sleeve and by measuring the coupling of the detectors to the sleeve we expect the internal loading to decrease by 0.3 to 0.4\u00a0pW.\n\n### Beams\n\nWe have also done some preliminary beam characterization at 145\u00a0GHz by looking into the laboratory through a $6''$ thick Zotefoam PPA-30 window (this will not be the flight window configuration). We have made near-field beam maps a few inches away from the window aperture using a chopped thermal source on an X-Y translation stage as well as far-field slices using a large chopped thermal source on a linear stage that can be rotated about the bore sight. The near field beam maps are in qualitative agreement with physical optics modeling using Zemax and the far-field beam slices for the few pixels measured show well-matched gaussian beam pairs for the two polarization antennae within a pixel with a FWHM of $31'$ at 145\u00a0GHz. The difference in the A and B polarizations within an individual pixel are measured to be $<2\\%$ of the average beam. See Aikin *et al.* in these proceedings for more details on beams from the BICEP2 optical system, which is very similar to Spider's.\n\n## Detector performance\n\nWe have measured detector parameters for a handful of prototype detector tiles. The Spider-specific engineering tiles have two different values of $G$ so that we can determine the optimum value for the flight.\n\n### Noise\n\nWe have measured the instrument noise at 145\u00a0GHz versus $R\/R_n$ on the TES transition while staring into the cold load cryostat at 5.5\u00a0K to simulate flight loading. We calibrate the noise using bolometer load curves and the measured optical efficiency of the system, $dP\/dT_{RJ}$, and then convert the noise PSDs into $NET_{CMB}$. We also confirm this calibration by varying the temperature of the blackened cold load plate and measure the response on the detectors. Some example noise spectra from an A\/B pixel pair (as well as the difference PSD) is shown in figure .\n\nThe angular scales on which Spider will be able to measure the CMB power spectrum will be affected by the combination of scan speed and $1\/f$-noise. We have measured the $1\/f$-knee frequency of undifferenced bolometers to be typically 0.1 to 0.4\u00a0Hz. Taking the difference of the two detectors in an A\/B pixel pair removes the common optical and electrical components and lowers the $1\/f$ frequency to $\\sim60$\u00a0mHz. This corresponds to $\\ell$ of a few for the likely scan speeds of Spider. As seen in the figure, pixel pair differencing produces three decades of clean signal bandwidth.\n\n## Magnetic shielding performance\n\nAs mentioned above, the response of the system to magnetic fields is a significant concern to Spider. To measure the magnetic field response of the system we use two Helmholtz coils 79\u00a0cm in diameter. 
These coils can be spaced in a true Helmholtz configuration for the direction along the optic axis of the test cryostat, but must be spaced further than $D\/2$ for the two orthogonal axes due to the diameter of the cryostat. In all configurations we center the coils about the focal plane and calculate the nominal field strength at the center in the absence of magnetic materials. The coils are driven with an AE Techron current amplifier with a sine wave frequency generator and can produce a field strength of many tens of times earth's field.\n\nThe principal motivation for converting from the flat focal plane architecture of RevA-C to the shielded box architecture of RevX was to reduce the magnetic field pickup in the system. We have reduced the magnetic field response by at least an order of magnitude for most channels by converting to the RevX design. In fact, for the majority of channels we can only place upper limits on the magnetic response.\n\nWe have also been unable to detect significant magnetic response in the TESs themselves due to shifting $T_c$. Although we have not seen a response of the detectors themselves to changing magnetic fields, the response of the SQUID multiplexing system will appear as a signal in the detector time streams. We can calibrate the response of the system to magnetic fields into K${}_{CMB}$ per earth's field ($B_e$) and find that we can place a limit on the magnetic field response of $<10~\\mu K_{CMB}\/B_e$ in the majority of channels at 145\u00a0GHz for all three axes without any attempt to remove pickup (*e.g.* with A\/B pixel differencing or deprojection of the non-antenna coupled SQUID in each MUX column). In previous focal plane shielding iterations we found that pixel pair differencing reduces the magnetic signal by 1 to 2 orders of magnitude. Even at this limit, the signal from earth's field would be much less than the CMB dipole but would still be detectable in Spider's CMB maps. We hope to do longer integrations at high magnetic fields to put even tighter limits on the magnetic response.\n\nThe Spider collaboration gratefully acknowledges the support of NASA (grant number NNX07AL64G), the Gordon and Betty Moore Foundation, and NSERC. With great sadness, the Spider collaboration acknowledges the countless contributions of Andrew E. Lange, the late PI of the Spider project. His wisdom and selfless leadership will be sorely missed. WCJ acknowledges the support of the Alfred P. Sloan Foundation. The authors gratefully acknowledge our collaboration with the BICEP2 and Keck projects. The author thanks J. Lazear for his help with the design and construction of the cold load.","meta":{"dup_signals":{"dup_doc_count":15,"dup_dump_count":5,"dup_details":{"curated_sources":1,"2014-10":1,"2013-20":1,"2015-06":1,"unknown":11}},"filename":"out\/1106.2173_extract_mcrspie.tex.md"},"subset":"arxiv"} +{"text":"abstract: *Characterizing atmospheres beyond our Solar System is now within our reach. \n Kevin Heng received his education in astrophysics (M.S. and Ph.D.) at JILA (the Joint Institute for Laboratory Astrophysics) and the University of Colorado at Boulder. Subsequently, he was a postdoctoral researcher at the Institute for Advanced Study, Princeton, from 2007 to 2010 (including holding the Frank & Peggy Taplin Membership from 2009 to 2010).
He is currently a Zwicky Prize Fellow at the Institute for Astronomy at ETH Z\u00fcrich (the Swiss Federal Institute of Technology) in the Star and Planet Formation Group, where he is involved in the Exoplanet Characterization Observatory (EChO) mission proposed to the European Space Agency. He has worked on several topics in astrophysics, including shocks, planetesimal disks and fluid dynamics. His current, main research interest is in developing a hierarchy of theoretical tools to understand the basic physics and chemistry of exoplanetary atmospheres from the perspective of an astrophysicist. He spends a fair amount of time humbly learning the lessons gleaned from studying the Earth and Solar System planets, as related to him by atmospheric, climate and planetary scientists. He received a Sigma Xi Grant-in-Aid of Research in 2006. \n Text-only version of article, edited by Fenella Saunders. Full version is available at: `www.americanscientist.org `*\naddress: ETH Z\u00fcrich, Institute for Astronomy \n Wolfgang-Pauli-Strasse 27, CH-8093, Z\u00fcrich, Switzerland\nauthor: Kevin Heng\ntitle: The Study of Climate on Alien Worlds\n\nIt is a distracting, inconvenient coincidence that we are living in times of paradigm-shifting astronomical discoveries overshadowed by the deepest financial crisis since the Great Depression. Amid a battery of budget cuts, the astronomical community has discovered more planets outside of our Solar System\u2014called extrasolar planets or simply exoplanets\u2014in the past decade than in previous millennia. In the last couple of years alone, the Kepler Space Telescope has located more than 2,000 exoplanet candidates, including Earth-sized ones potentially capable of sustaining liquid water, demonstrating the ease at which nature seems to form them and hinting that we may be uncovering the tip of an iceberg. Discovering and characterizing distant, alien worlds is an endeavor no longer confined to the realm of science fiction.\n\nIn tandem with numerous surveys of the night sky performed from the ground, the Hubble, Kepler and Spitzer Space Telescopes observe the universe from outside of Earth's atmosphere. These devices detect an exoplanet by recording the diminution of light as the body, residing in an edge-on orbit, passes in front of its host star. In the past few years, astronomers also have achieved the remarkable feat of measuring the diminution of light as the exoplanet passes behind its star, known as the secondary eclipse. In other words, astronomical techniques have advanced to the point where we can detect a star masking the light from its exoplanet, which is a demonstrably small effect\u2014at most a few parts in a thousand in the infrared and much smaller in the optical range of wavelengths. During a secondary eclipse, the light from an exoplanetary system originates only from the star, and these data can be used to subtract out the starlight when the exoplanet is not eclipsed. All that remains is the light of the exoplanet and its atmosphere (if it exists). Such a technique has enabled astronomers to make the first detections of the light directly emitted by an exoplanet, which typically appears at its brightest in the infrared.\n\nMeasuring transits and eclipses at several different wavelengths allows one to construct a spectrum of the exoplanetary atmosphere, of which a spectral analysis yields its composition and elemental abundances. 
(A spectrum describes the range of colors of the photons emanating from the exoplanet, but it generally extends beyond what our eyes can see toward both shorter and longer wavelengths.) In some cases, astronomers were able to record the ebb and rise of the brightness of the exoplanet as it orbits its parent star, otherwise known as the phase curve. An inversion technique, developed by Nick Cowan of Northwestern University and Eric Agol of the University of Washington, allows one to convert the phase curve into a \"brightness map,\" which is the latitudinally averaged brightness of the exoplanet across longitude. Recent work by the same researchers has yielded two-dimensional information on the brightness of the exoplanet HD 189733b as a function of both latitude and longitude. In other words, we have started to do cartography on exoplanets!\n\n**Tidal Locks**\n\nThe first studies of exoplanetary atmospheres were performed on a class of objects known as hot Jupiters. A combination of the transit technique with a measurement of the radial velocity (which is the gravitational wobble of a star as its exoplanet orbits around their common center of mass) yields the radius and mass of a hot Jupiter, respectively, and reveals that they are similar in these aspects to our own Jupiter. The startling difference is that hot Jupiters are found about a hundred times closer to their parent stars than Jupiter, which raises their surface temperatures to between 1,000 and 3,000 degrees Kelvin. With spatial separations of a hundredth to a tenth of an astronomical unit (the average distance from the Earth to the Sun) from their stars, the discovery of hot Jupiters caught the astronomical community by surprise, because their existence was neither predicted from astrophysical theory nor subsequently explained by it.\n\nTheir large sizes render hot Jupiters easier to observe and thus the most obvious laboratories for extrasolar atmospheric studies. Furthermore, the belief that their atmospheres are dominated by molecular hydrogen\u2014which is consistent with the densities of the exoplanets, inferred from the astronomical observations to be about 1 gram per cubic centimeter\u2014offers some hope that the atmospheres are primary, reflecting the composition of the primordial nebulae from which they formed, rather than secondary and reprocessed by geological mechanisms (such as on Earth).\n\nGiven enough time, an exoplanet's position and rotation tend to relax toward a state of minimum energy\u2014a spin synchronized state, such that one hemisphere of the exoplanet always faces its parent star with the other hemisphere shrouded in perpetual darkness. The characteristic time scale associated with this process is typically 1,000 times less than the age of the star. (As a more familiar example, the Moon is in a spin synchronized state with respect to the Earth, notwithstanding its tiny rotational corrections called librations.) In other words, one hot Jovian day is equal to one hot Jovian year. The unfamiliar configuration of permanent day- and night-side hemispheres on hot Jupiters opens up an unexplored regime of atmospheric circulation with no precedent in the Solar System and motivates theoreticians to test their tools in unfamiliar territory.\n\nUnderstanding these hot Jovian atmospheres requires clarifying the complex interplay between irradiation, atmospheric dynamics, chemistry and possibly magnetic fields. 
On the most irradiated hot Jupiters, the exoplanet viewed from the poles resembles a sphere painted half white and half black\u2014the phase curve is a sinusoidal function that peaks at secondary eclipse and becomes dimmest at transit. Any shift of this peak from its reference point at secondary eclipse may be interpreted as being due to the presence of horizontal winds in the atmosphere, which act to transport heat from the day- to the night-side hemisphere. This angular shift was first measured for an exoplanet, the hot Jupiter HD 189733b, by Heather Knutson of the California Institute of Technology and her collaborators, who reported a peak shift of about 30 degrees east\u2014in the direction of rotation. This angular shift was also measured for the hot Jupiters Ups And b (by Ian Crossfield of the University of California at Los Angeles and his collaborators) and WASP-12b (by Cowan and his collaborators).\n\nOther astronomers continue to push the envelope. Ignas Snellen of Leiden University and his colleagues, using the ground-based European Very Large Telescope (VLT), used a technique called absorption spectroscopy to measure the speed of the horizontal winds on the hot Jupiter HD 209458b. The technique compares the relative size of the exoplanet across a range of wavelengths. At a wavelength where an atmospheric atom or molecule is the most absorbent, the exoplanet appears larger. By monitoring the shift in wavelength of an absorption line of carbon monoxide, the group determined that HD 209458b's winds clock in at about 2 kilometers per second, roughly 100 times faster than those on Earth. More attempts to measure atmospheric wind speeds are in the works, and these measurements remain at the cutting\u2014if not the bleeding\u2014edge of what astronomers can achieve.\n\n**New Languages**\n\nThe importance of these discoveries to astronomy cannot be overstated\u2014they signal the dawn of exoplanetary meteorology, or at least legitimize its study in the eyes of astronomers and astrophysicists.\n\nAstronomers now possess a tool kit not only to measure the masses and sizes of exoplanets but also to characterize their atmospheric dynamics and chemistry. Besides galvanizing the astronomical community, this newfound field is starting to exert a profound sociological impact on related fields of study: atmospheric and climate science, geophysics and planetary science. It marks the first great confluence of these fields with astrophysics, a gathering of scientists with different scientific and modeling philosophies, which is especially evident at interdisciplinary conferences where we struggle to understand one another's jargon. Atmospheric and climate scientists, as well as geophysicists, are firmly grounded in a data-rich regime, living within the system they study. Awash in an abundance of data from the terrestrial atmosphere and the geological record, no single model is capable of accounting for all of the observed phenomena. Instead, a hierarchy of models with different degrees of sophistication is utilized, with each model isolating key pieces of physics. The strategy is to first divide and conquer, then to unify and rule.\n\nThe knowledge gleaned from studying Earth and the Solar System planets serves as an invaluable guide, but there is a cautionary tale to be told. As a rule of thumb, there are two characteristic length scales describing an atmosphere: the Rhines length is the typical width of zonal (east-west) jets, whereas the Rossby length is the typical size of vortices or eddies. 
For Solar System objects, both length scales are much smaller than planetary radii. On close-in exoplanets, the Rhines and Rossby lengths are comparable to exoplanetary radii, implying that the atmospheric features are global in extent, an expectation that is borne out in three-dimensional simulations. The atmospheres of close-in exoplanets are thus in a circulation regime that is unprecedented in the Solar System. Atmospheric circulation simulations therefore have to be global instead of local, and other physical implications\u2014such as the mixing of atmospheric constituents and its effect on the spectral appearance of the exoplanet\u2014remain to be fully understood.\n\nThe study of exoplanets is essentially confined to the scrutiny of point sources in the night sky. Although we may obtain detailed spectral and temporal information on these point sources, the procurement of detailed spatial information remains a grand challenge for posterity. Planetary scientists benefit from the ability to obtain photographs of the Martian surface and Jovian weather patterns, a privilege unavailable to astrophysicists. It is important to recognize that astrophysicists are therefore trapped in a data-poor regime, with its myriad restrictions on how to construct models and interpret data. When faced with multiple explanations that are consistent with a given data set, astrophysicists often apply the principle of Occam's Razor: In the absence of more and better data, the simplest explanation is taken as the best one. To put it more tongue-in-cheek, one aims to be roughly accurate rather than precisely wrong. The need to recalibrate our scientific expectations and philosophies lies at the heart of this confluence of expertise.\n\nFrom studying the atmospheres of Earth and the Solar System planets, researchers have realized that atmospheres are complex entities subjected to positive and negative feedback loops, exhibiting chemical, dynamical and radiative signatures over a broad range of time scales. Isaac Held of the Geophysical Fluid Dynamics Laboratory in Princeton, New Jersey, has argued that to truly understand these complex systems, one has to construct a hierarchy of theoretical models. These simulations range from one-dimensional, pen-and-paper models that isolate a key piece of physics to full-blown, three-dimensional general circulation models (GCMs)\u2014used for climate and weather forecasting\u2014that incorporate a complicated soup of ingredients to capture the intricate interactions between the atmosphere, land and oceans on Earth. For instance, GCMs concurrently solve a set of equations (called the Navier-Stokes equation) that treat the atmosphere as a fluid, along with a thermodynamic equation and diverse influencing factors such as orography (mountain formation) and biology. Many of these intricacies are unwarranted in theoretical investigations of exoplanetary atmospheres, and thus one of the key challenges is to realize how and when to simplify an Earth-centric model.\n\nWhether knowingly or unknowingly, a hierarchy of one- to three-dimensional theoretical models has emerged in the astrophysical literature. Because the treatment of hot Jupiters and brown dwarfs\u2014substellar objects not massive enough to sustain full-blown nuclear fusion at their cores\u2014share several similarities, many of the pioneering models (by researchers such as Adam Burrows of Princeton University and Ivan Hubeny of the University of Arizona) were carried over from the latter to the former class of objects. 
Furthermore, the early models focused on the spectral appearance of hot Jupiters, with the most sophisticated variants borrowing from an established technique in atmospheric and climate science known as abundance and temperature retrieval. Given the spectrum of an exoplanet, this technique obtains the atmospheric chemistry and temperature-pressure profile consistent with the data. In the case of the hot Jupiter WASP-12b, Nikku Madhusudhan of Yale University and his collaborators inferred, using the retrieval technique, that the exoplanet possesses a carbon-to-oxygen ratio at least twice that of its star. If this result is confirmed\u2014and the carbon-to- oxygen ratio is measured for other exoplanets\u2014it offers a valuable link between the properties of an exoplanetary atmosphere and the formation history of the exoplanet.\n\nAstrophysicists have been quick to realize that atmospheric chemistry and dynamics intertwine in a nontrivial manner to produce the observed characteristics of a hot Jupiter. Adam Showman, a planetary scientist at the University of Arizona, became one of the first researchers to harness the power of GCMs in studying hot Jovian atmospheres. Several other researchers from the astrophysical community (including myself) followed soon after. My collaborators and I generalized a benchmark test, which solves for the basic climatology of a (exo)planet using two methods of solution, to hot Jupiters. The transport of heat from the day-side to the night-side hemisphere of a spin synchronized exoplanet is\u2014by definition\u2014at least a two-dimensional problem. For extrasolar gas giants, the characteristic time scale on which the atmosphere reacts to such radiative disturbances spans many orders of magnitude, thus necessitating its theoretical consideration in three dimensions, an endeavor that is only tractable using GCMs. Several groups have now successfully adapted GCMs to model exoplanetary atmospheres and are obtaining consistent results. Some outstanding technical issues remain, but it is clear that three-dimensional models are necessary if one wishes to predict not just the spectral appearance of exoplanets, but simultaneously their phase curves and temporal behavior. As the astronomical state-of-the-art advances, the exoplanets being discovered will be more Earthlike, both in size and temperature, with the implication that GCMs will become even more relevant.\n\n**Earthlike Exoplanets**\n\nWe are only starting to understand the basic properties of hot Jupiters, including why some appear more inflated than others and why some appear to redistribute heat more efficiently from their day-side to their night-side hemispheres. In the cases of HD 189733b and HD 209458b, simulated spectra and phase curves\u2014the latter of which constrains the efficiency of heat redistribution\u2014that are computed using GCMs are able to match their observed counterparts fairly well. Until the next generation of space telescopes becomes operational, these examples remain the cornerstones of our understanding of exoplanetary atmospheres.\n\nHD 189733b is noninflated, meaning that its radius and mass are well matched by standard evolutionary theories of exoplanets (which predict the size of an exoplanet as it cools down from the primordial heat of formation). 
It also appears to be shrouded in haze of unidentified chemistry, because its spectrum in the optical\u2014obtained via the Hubble Space Telescope by Fr\u00e9d\u00e9ric Pont and David Sing of Exeter University, together with their collaborators\u2014reveals a smooth, featureless slope consistent with Rayleigh scattering (the same process that causes the color of the sky as observed from Earth by preferentially affecting bluer sunlight).\n\nBy contrast, HD 209458b is free of haze, but it is markedly larger than expected from evolutionary calculations. Theoretical ideas for radius inflation include the suggestion that partially ionized, hot Jovian atmospheres behave like giant electrical circuits, which when advected past an ambient magnetic field invoke Lenz's law on a global scale: Nature abhors a change in magnetic flux. Electric currents and opposing forces are induced to counteract the horizontal winds; the consequent conversion of mechanical energy into heat, called Ohmic dissipation, is believed to be responsible for keeping some hot Jupiters inflated. However, it remains to be proven if hot Jupiters even possess magnetic fields like those of Earth and some of the Solar System planets. This field of research remains active.\n\nThe study of hot Jupiters remains relevant because we already have the data to inform our hypotheses and modeling efforts, thereby affording us the opportunity to sharpen our theoretical tools\u2014as much of the salient physics is identical\u2014before applying them to Neptunelike or even Earthlike exoplanets, for which the data are currently scarce or nonexistent. For many researchers, the ultimate prize is more familiar: to detect the spectrum of an Earthlike exoplanet orbiting a Sunlike star, and thereby answer age-old questions about the existence of extraterrestrial life. More succinctly, one wishes to establish if the solitary example of an Earth twin in orbit around a solar twin is the only possible cradle for life in the Universe. At the moment, such a quest remains elusive and appears out of the reach of even the next generation of space telescopes.\n\nInstead, astronomers such as David Charbonneau of Harvard University and Jill Tarter of the SETI Institute have argued that a promising route toward detecting potentially habitable super Earths\u2014Earthlike exoplanets with masses and radii somewhat larger than those of Earth\u2014is to hunt for them around M stars (also known as red dwarfs). These diminutive cousins of our Sun, with only a tenth to half of its mass, comprise about three-quarters of the stellar population in our galactic neighborhood. There are several advantages to scrutinizing M stars: They are cooler in temperature than Sunlike stars, implying that their exoplanets may reside 10 to 100 times closer and yet still be able to harbor liquid water on their surfaces. Being more proximate to their M stars renders these exoplanets more amenable to detection via current, established astronomical techniques\u2014namely, transit and radial velocity measurements. However, the price to pay is that they are expected to be spin synchronized and possess permanent day- and night-side hemispheres, much like their hot Jovian brethren. Such an expectation has led to theoretical concerns that their atmospheres may collapse due to the main constituent molecules condensing out on the frigid night sides. 
The astronomical approach to this conundrum is to charge forward with making new and better observations\u2014after all, the answer is ultimately revealed by the data. For example, for the super Earth GJ 1214b, transmission spectra have already been obtained, but interpretations about its atmospheric composition remain controversial.\n\n**Better Telescopes**\n\nExperimentally, the next leap is to build dedicated, space-based telescopes capable of measuring high-resolution spectra of exoplanets over protracted periods of time. Astronomers around the world are mobilizing to launch such missions as the Exoplanet Characterization Observatory (EChO) and the Fast Infrared Exoplanet Spectroscopy Survey Explorer (FINESSE), as proposed to the European Space Agency (ESA) and the National Aeronautics and Space Agency (NASA), respectively. If and when these missions\u2014or their successors\u2014eventually fly (in the next decade or two), they will deliver a bounty of both spectral and temporal information on hundreds of exoplanets, from which we may infer their atmospheric chemistry, dynamics and climates. With a richly sampled data set of the emitted light from point-source exoplanets over time, one may construct a power spectrum that elucidates the characteristic time scales on which an exoplanet is flickering, indicating changes in temperature. Such a power spectrum of the atmosphere has been spectacularly constructed for the Earth's surface, spanning time scales of under a day (diurnal variations) to many millennia (called Milankovitch cycles, and inferred from the geological record). Certainly, space missions are saddled with demands that will not allow for the construction of power spectra on time scales longer than a few months, but it is likely that many of the characteristic peaks in the power spectra will be compressed into a shorter time span for close-in exoplanets such as hot Jupiters and super Earths.\n\nThe onus is on the theoretical community to lay down the foundation for understanding the climates of point-source exoplanets in general, thus moving us a step closer toward making more robust statements about their habitability.","meta":{"dup_signals":{"dup_doc_count":57,"dup_dump_count":10,"dup_details":{"curated_sources":2,"2017-13":3,"2015-18":10,"2015-11":9,"2015-06":8,"2014-10":9,"2013-48":8,"2013-20":3,"2024-18":1,"unknown":4}},"filename":"out\/1206.3640_extract_amsci.tex.md"},"subset":"arxiv"} +{"text":"abstract: Since its identification as the Galactic Center, the radio source Sagittarius A has been a target of intense research. Due to the high density of sources in the Galactic Center and differing observing techniques, the nomenclature of sources in this region has changed over the years, with sources having several names, as well as the same names being used for different sources. We review this historical evolution in the context of current and previous scientific discussions, and outline how, why and when some of the commonly accepted names of Galactic Center sources were established.\nauthor: Patrick Palmer *(University of Chicago)*; W. Miller Goss *(National Radio Astronomy Observatory)$^1$*\ndate: in: Galactic Center Newsletter - GCNEWS (1996), Vol. 2, p. 
2 \n (http:\/\/www.astro.umd.edu\/$\\sim$gcnews\/gcnews\/Vol.2\/article.html)\ntitle: Nomenclature of The Galactic Center Radio Sources\n\n=2em =1.5ex\n\nThe discovery of the dominant source in the Galactic center, Sagittarius A, in the era 1951 \u2013 1960 has been described by Goss and McGee at the recent ESO\/CTIO conference$^2$. Due to the high density of sources in the Galactic center and the wide range of spectral indices, the recognition of the many components of Sgr A was dependent on both resolution and observing frequency.\n\nIn some sense Karl Jansky and Grote Reber did discover the Galactic center radio source in the 1930's \u2013 1940's. However, with their poor resolutions (beams of many degrees) the radio emission that was detected was the Galactic background peak near the Galactic center. The recognition that discrete radio sources existed (other than the Sun) emerged in the era 1946 \u2013 1948$^3$; until that era radio emission was regarded as coming from a general background$^4$.\n\nThe realization that Sgr A is a discrete radio source at the Galactic center was first made by Jack Piddington and Harry Minnett$^5$ . These authors had the handicap that they published in a rather obscure journal \u2013 Australia was even more isolated from Europe and the US in the 1950's than at present. Often the credit for the association of Sgr A with the Galactic nucleus is given to McGee and Bolton based on their 1954 paper in *Nature*$^6$. Of course, the fact that *Nature* was more widely read than the Australian publication did help. However, many famous astronomers of the 20th century played a role in the preparation, refereeing, and publication of the 1954 paper: Baade, Oort, van de Hulst, Pawsey, Mills, Kerr, Bracewell and Shain. Goss and McGee have corrected the previously published record on the details of the publication of the 1954 *Nature* paper. Bernie Mills has been especially helpful in reconstructing the atmosphere at the CSIRO Division of Radiophysics in Sydney, NSW, Australia in the early 1950's and the understanding of the nature of radio sources at that time. In the late 1950's the association of Sgr A with the center of the Milky Way was generally accepted$^7$.\n\nDick McGee pointed out to us that in those early years the name Sgr A was *not* used in Australia . This is rather puzzling since John Bolton and his collaborators (Bruce Slee, Gordon Stanley and Kevin Westfold) had given the names Taurus A, Centarus A, and Virgo A to other early radio sources discovered at Dover Heights in Sydney. Goss has tried to find the earliest reference to Sgr A (can anyone find an earlier reference ?). The earliest found is in *Sky and Telescope* in 1954$^8$ in a summary of a paper given at the June 1954 American Astronomical Society Meeting in Ann Arbor, Michigan by John Kraus and H. C. Ko. John Kraus has recently written: \"... whether or not I actually 'invented'\u00a0the name Sagittarius A , I was certainly one of the first to use it consistently. Then more recently I took the naming a step further by calling it J1 for Jansky One. See page 313 of my book *Big Ear Two* ....\" $^9$ As we know J1 did not catch on!\n\nThe names of the discrete sources around Sgr A have a complex and confusing history. One might ask why bother with it: the names based on galactic coordinates are perfectly sufficient and unambiguous. 
First, if one wishes to read the older literature, one must be aware of certain pitfalls; but, perhaps the more important result of this study is that it reveals the importance of private communications and unpublished manuscripts in the development of this field.\n\nThe first map of the Galactic center region with adequate resolution to resolve several distinct sources around Sgr A was made by Frank Drake in 1959 with the 85 foot telescope at Green Bank at 3.75 cm$^{10}$. Drake himself never named the components$^{11}$. However, he distributed the map widely, including sending a copy to James Lequeux for use in his thesis. Lequeux recalls$^{12}$ adding the designations A for Sgr A, and B1, B2, and B3$^{13}$ for the other three prominent discrete sources (ordered by their distance from Sgr A) to Drake's figure for use in his 1962 paper in *Annales d'Astrophysique*$^{14}$ . In modern terms, sources B1, B2, and B3 correspond to G0.2-0.0, the blend of G0.5-0.0 and G0.7-0.0, and G359.4-0.1. (Kraus and Ko had already defined a Sgr B in 1954$^8$; however, this source at $\\ell\\sim4\\deg$ \u00a0does not appear elsewhere in the literature.)\n\nSoviet astronomers, especially Yuri Parijskij, also were studying the Galactic center at this time$^{15}$, and they also proposed names for the components. However, because their source names were not picked up by others, no attempt will be made here to trace them. Drake notes facetiously that it is a pity that these names were not used because the sources on opposite sides of Sgr A (B2 and B3) were interpreted as part of a ring which was named the \"Drake Ring\"$^{11}$.\n\nRelatively little use was made of Lequeux's names. For example, in Cooper and Price's 10 cm map made at Parkes in 1962, no names were attached to the sources$^{16}$. Rougoor, however, used Lequeux's names in his 1964 paper on the nuclear region of the Galaxy$^{17}$.\n\nA surprisingly influential unpublished paper helped disseminate the Lequeux names in Jaunary 1965. Dennis Downes, then an undergraduate at Harvard, wrote a term paper on Galactic center radio emission for a course given by Alan Maxwell$^{18}$. In this paper he used Lequeux's naming system and added B4 for a source near $\\ell=1.1\\deg$ \u00a0that was on Cooper and Price's map but beyond the edge of Drake's map. This paper was distributed to other radio astronomers at Harvard; and, because it was found to be very useful by them, 50 - 100 copies were made and distributed to other radio astronomers around the world$^{19}$. However, when this paper was published in 1966 in collaboration with Alan Maxwell$^{20}$, the Sgr Bn form of names was discarded and names based on Galactic coordinates were used.\n\nThe shift to names based on Galactic coordinates had been discussed for some time, but it was most clearly proposed by Mezger and Henderson in 1967$^{21}$. In fact, the Galactic center region was one of their prime examples that something else should be done: \"... the example of the Sagittarius region, where designations A, B, C, etc., are used concurrently with designations A, B1, B2, etc., shows clearly that this type of nomenclature will lead to serious confusion as more results of high resolution studies ... become available.\"\n\nThe last appearance of B1 in the literature for more than 20 years (and the only appearance of B4) is found in a paper on OH by Palmer and Zuckerman in 1967$^{22}$. 
The name B2 persisted (although it silently migrated from $\\ell=0.6\\deg$ \u00a0\u2013 the centroid of the low resolution blend \u2013 to $\\ell=0.7\\deg$ ), probably because of the almost continuous interest in this source due to the various molecular discoveries. Otherwise the system of naming components by their Galactic coordinates triumphed among the radio astronomers.\n\nA new attempt to name Galactic center sources was made by Hoffmann, Fredrick, and Emery in 1971$^{23}$ for their 100 micron survey of this region. They named sources A, B2, C, D, and E. Sgr A and B2 corresponded to the radio astronomer's sources with the same names, but they introduced C (corresponding to Lequeux's B3 at $\\ell=359.4\\deg$), D (corresponding to Downes' B4 at $\\ell=1.1\\deg$), and E (only later studied by radio astronomers at $\\ell=358.4\\deg$). Sgr B1 was lost.\n\nSgr B1 reappeared in 1986 in a proposal for recombination line studies at the VLA by Palmer, Yusef-Zadeh, Goss, Lasenby, & Lasenby$^{24}$. The name Sgr B1 was apparently suggested by Yusef-Zadeh. He knew that a source name Sgr B1 had been used in the past, but had been unable to find any definite information about it, and made the apparently straightforward identification of G0.5-0.0 with Sgr B1$^{25}$. The observations were carried out, as were other observations of the region, and the name Sgr B1 came back into use$^{26}$.\n\nHowever, Sgr B1 was not the source so named by Lequeux! In modern terms, Lequeux's Sgr B1 is the source now called G0.2-0.0. The source called Sgr B2 in the early papers was a blend of G0.5-0.0 and G0.7-0.0, the sources now called Sgr B1 and Sgr B2. (Palmer should have known better, but did not think about the contradiction between the OH spectra for Sgr B1 and Sgr B2 in the paper referenced in note 22 if Sgr B1 had been at the current position until discussion leading to preparation of this report.)\n\nWhat of the future for names of Galactic sources? It is clear that there is no simple solution. The G \u2013 type names are indeed unambiguous; but at the time they were proposed, the extent to which Galactic sources would continue to resolve was not foreseen. For example, Sgr B2 is now known to contain almost 60 components$^{27}$. In order to manage this complexity, one needs some larger organizing principles which the letter \u2013 type names provide. It seems that for practical reasons both types of names will continue to coexist.\n\nWe are indebted to very helpful correspondence from a number of individuals, some of whom were \"bugged\" many times for details. We especially wish to thank Dennis Downes, Frank Drake, James Lequeux, Harvey Liszt, John Kraus, Dick McGee, David Mehringer, Peter Mezger, Mark Morris, and Farhad Yusef-Zadeh.\n\nNotes\n\n- The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.\n\n- Goss, W. M. & McGee, R. X. 1996, in *Proceedings of the 4th ESO\/CTIO Workshop: The Galactic Center* ed Gredel and Schommer, in press.\n\n- Beginning with Hey, Parsons & Phillips, (Nature, 158, 234 (1946)) who write \"It appears probable that such variations could only originate from a small number of discrete sources.\"; followed by Bolton and Stanley (Australian J. Sci. 
Res., Ser A, 1, 58 (1948)) and Ryle and Smith (Nature, 162, 462 (1948)) who provided the earliest source diameter measurements.\n\n- For example, Pawsey, Payne-Scott & McCready (Nature, 157, 158 (1946)) argued that the observed radio emission was the sum of emission from large numbers of stars emitting radio waves by the same (unknown) mechanism as sunspots; while Greenstein, Henyey, & Kennan (Nature, 157, 805 (1946)) defended their view that it was free-free emission from the interstellar medium.\n\n- Piddington, J. H. & Minnett, H. C. 1951, Australian J. Sci. Res, Ser A, 4, 495.\n\n- McGee, R. X. & Bolton, J. G. 1954, Nature, 173, 985.\n\n- Representative examples of this acceptance are: the title of an article \"The Radio Position of the Galactic Nucleus\" (Kraus, J. D. & Ko, H. C 1955, Ap. J., 122, 139); a note on the source W24 \"believed to be the Galactic nucleus\" (Westerhout, G. 1958, B. A. N., 14, 215), and finally the five papers published in MNRAS giving reasons for the redefinition of the Galactic coordinate system and the actual form which the new definition takes. These papers begin with Blaauw, Gum, Pawsey, and Westerhout (MNRAS, 121, 123 (1960)) which states: \"We shall, on the basis of evidence presented in Paper V, assume that Sagittarius A is located at the Galactic centre.\"\n\n- (uncredited news story) 1954, Sky & Telescope, 14, 22; subsequently published in: Kraus, J. D. & Ko, H. C. 1955, Ap. J., 175, 159.\n\n- J. Kraus, letter to W. M. Goss, May 7, 1996.\n\n- This map was never published in the refereed literature, but did appear in the NRAO Annual Report for 1959.\n\n- F. Drake, telephone conversation, July 10, 1996.\n\n- J. Lequeux, letter to W. M. Goss, June 13, 1996.\n\n- Lequeux used the numbers as subscripts. However, almost from the beginning, there was no consistency about whether or not the numbers were subscripts; consequently, there is little point trying to decide on a \"correct\" way to write them.\n\n- Lequeux, J. 1962, Annales d'Astrophysique, 25, 221.\n\n- See for example, Parijskij, Y. 1959, Soviet Physics \u2013 Doklady, 4, 1172.\n\n- This map was first shown by Kerr (Sky & Telescope, 24, 254 (1962)). It was later published by Cooper, B. F. C. & Price, R. M. 1964 in *The Galaxy and the Magellanic Clouds*, ed F. J. Kerr & A. W. Rodgers (Aust. Acad. Sci.), p. 1964.\n\n- Rougoor, G. W. 1964, B. A. N., 17, 381.\n\n- D. Downes, unpublished manuscript, January, 1965; revised August, 1965.\n\n- D. Downes, letter to W. M. Goss, July 10, 1996. A copy was sent to J. G. Bolton who acknowledged receipt of the paper in a letter to Downes on October 15, 1965.\n\n- Downes, D. & Maxwell, A. 1966, Ap. J., 146, 653.\n\n- Mezger, P. G. & Henderson, A. P. 1967, Ap. J., 147, 471.\n\n- Palmer, P. & Zuckerman, B. 1967, Ap. J., 148, 727.\n\n- Hoffmann, W. F., Fredrick, C. L. & Emery, R. J. 1971, Ap. J., 164, L23.\n\n- Proposal AP139, received Dec, 19, 1986; VLA proposal archive.\n\n- F. Yusef-Zadeh, telephone conversation, July 10, 1996.\n\n- See, for example, Liszt, H. L. 1988, in *Galactic and Extragalactic Radio Astronomy* ed G. L. Verschuur & K. I. Kellermann (Springer: New York), p. 359; Morris, M. 1989, in *The Center of The Galaxy* ed M. Morris (Kluwer: Dordrecht), p. 171; and Mehringer, D. M., Yusef-Zadeh, F., Palmer, P. & Goss, W. M. 1992, Ap. J., 401, 168.\n\n- Gaume, R. A., Claussen, M. J., De Pree, C. G., Goss, W. M., & Mehringer, D. M. 1995, Ap. J., 449, 663.\n\n- The interest of W. M. 
Goss in this topic began during visits with John & Letty Bolton in Buderim, Queensland, Australia in September, 1988 and November, 1992. The last visit was a short time before John Bolton's death on July 6, 1993.","meta":{"dup_signals":{"dup_doc_count":45,"dup_dump_count":35,"dup_details":{"curated_sources":3,"2018-05":1,"2017-30":1,"2017-17":1,"2017-04":1,"2016-50":1,"2016-44":1,"2016-40":1,"2016-36":1,"2016-30":1,"2016-26":1,"2016-22":1,"2016-18":1,"2016-07":1,"2015-48":1,"2015-40":1,"2015-35":1,"2015-32":1,"2015-27":1,"2015-22":1,"2014-52":1,"2014-49":2,"2014-42":4,"2014-41":2,"2014-35":2,"2014-23":2,"2014-15":2,"2018-26":1,"2015-18":1,"2015-11":1,"2015-06":1,"2014-10":1,"2013-48":1,"2013-20":1,"2024-30":1}},"filename":"out\/astro-ph9607153.tex.md"},"subset":"arxiv"} +{"text":"abstract: As the online service industry has continued to grow, illegal activities in the online world have drastically increased and become more diverse. Most illegal activities occur continuously because cyber assets, such as game items and cyber money in online games, can be monetized into real currency. The aim of this study is to detect game bots in a Massively Multiplayer Online Role Playing Game (MMORPG). We observed the behavioral characteristics of game bots and found that they execute repetitive tasks associated with gold farming and real money trading. We propose a game bot detection methodology based on user behavioral characteristics. The methodology of this paper was applied to real data provided by a major MMORPG company. Detection accuracy rate increased to 96.06% on the banned account list.\nauthor: Ah Reum Kang \nUniversity at Buffalo; Seong Hoon Jeong \nKorea University; Aziz Mohaisen \nUniversity at Buffalo; Huy Kang Kim \nKorea University\ntitle: Multimodal Game Bot Detection using User Behavioral Characteristics\n\n**Keywords.** Online game security, Social network analysis, Behavior analysis, MMORPG.\n\n# Background\n\nA game bot is an automated program that plays a given game on behalf of a human player. Game bots can earn much more game money and items than human users because the former can play without requiring a break. Game bots also disturb human users because they consistently consume game resources. For instance, game bots defeat all monsters quite rapidly and harvest items, such as farm produce and ore, before human users have an opportunity to harvest them. Accordingly, game bots cause complaints from human users and damage the reputation of the online game service provider. Furthermore, game bots can cause inflation in a game's economy and shorten the game's lifecycle, which defeats the purpose for which game companies develop such games\u00a0.\n\nSeveral studies for detecting game bots have been proposed in academia and industry. These studies can be classified into three categories: client-side, network-side, and server-side. Most game companies have adopted client-side detection methods that analyze game bot signatures as the primary measure against game bots. Client-side detection methods use the bot program's name, process information, and memory status. This method is similar to antivirus programs that detect computer viruses\u00a0. Client-side detection methods can be readily detoured by game bot developers, in addition to degrading the computer's performance. 
For this reason, many countermeasures that are based on this approach, such as commercial anti-bot programs, are not currently preferred.\n\nNetwork-side detection methods, such as network traffic monitoring or network protocol change analysis, can cause network overload and lag in game play, a significant annoyance in the online gaming experience. To overcome these limitations of the client-side and network-side detection methods, many online game service providers employ server-side detection methods. Server-side detection methods are based on data mining techniques that analyze log data from game servers. Most game servers generate event logs whenever users perform actions such as hunting, harvesting, and chatting. Hence, these in-game logs facilitate data analysis as a possible method for detecting game bots.\n\nOnline game companies analyze user behaviors or packets at the server-side, and then online game service providers can selectively block those game bot users that they want to ban without deploying additional programs on the client-side. For that, most online game service providers prefer server-side detection methods. In addition, some online game companies introduced big data analysis system approaches that make use of data-driven profiling and detection\u00a0. Such approaches can analyze over 600 TB of logs generated by game servers and do not cause any side-effects, such as performance degradation or conflict with other programs.\n\nThe literature is rich of various works on the problem of game bot detection that is summarized in Table , which compares various server-side detection schemes classified into six analysis categories: action frequency, social activity, gold farming group, sequence, similarity, and moving path. Each of those techniques, as surveyed in section\u00a0, has advantages and disadvantages; none of the techniques look at the multimodality of the features utilized of detection, which is a step we take in this paper.\n\n**Contribution.** To this end, we collaborated with NCSoft, Inc., one of the largest MMORPG service companies in South Korea, in order to analyze long-term user activity logs and understand discriminative features for high fidelity bot detection. In this paper, we propose a game bot detection framework. Our framework utilizes multimodal users' behavioral characteristic analysis and feature extraction to improve the accuracy of game bot detection. We adopted some features discovered in the prior literature in confirmed in our analysis, as well as some new features discovered in this study. We combine those features in a single framework to achieve better accuracy and enable robust detection. 
An additional contribution of this work is also the exploration of characteristics of the misclassified users and bots, highlighting plausible explanations that are in line with users and bots features, as well as the game operations.\n\n# Related Work\n\n```latex\n\\begin{table*}[htb]\\begin{center}\n\\caption{Previous research on server-side detection.}\n\\begin{tabular}{p{4cm}p{7cm}p{5cm}}\n\\hline\nCategory & Definition\/key papers & Key idea\\\\ \\hline\nAction frequency analysis & Detection method based on users' game play pattern analysis~\\cite{bib1, bib2, bib3, bib4, bib5} & - Action frequency, type, and time-interval analyses \\newline - Idle time analysis\\\\ \\hline\nSocial activity analysis & Detection method based on users' social interactions analysis~\\cite{bib6, bib7, bib8, bib9} & - Party play log analysis \\newline - Chatting pattern analysis \\newline - Social network analysis\\\\ \\hline\nGold farming group analysis & Detection method based on users' economic activities analysis~\\cite{bib10, bib11, bib12,woo2011can} & - Real money trading analysis \\newline - Trade network analysis \\newline - Connection pattern analysis\\\\ \\hline\nSequence analysis & Detection method based on users' continuous play sequences analysis~\\cite{bib13, bib14, bib15} & - Game event sequence analysis \\newline - Combat sequence analysis\\\\ \\hline\nSimilarity analysis & Detection method based on users' behavioral pattern similarity analysis~\\cite{bib16, bib24} & - Self-similarity analysis\\\\ \\hline\nMoving path analysis & Detection method based on patterns and zones of moving path analysis~\\cite{bib17, bib18, bib19, bib20, bib21} & - Coordinate analysis \\newline - Zone analysis\\\\ \\hline\n\\end{tabular}\n\\label{table1}\n\\end{center}\n\\end{table*}\n```\n\nAction frequency analysis uses the fact that the frequencies of particular actions by game bots are much higher than that of human users. To this end, Chen *et al.* studied the dynamics of certain actions performed by users. They showed that idle and active times in a game are representative of users and discriminative of users and bots. Thawonmas *et al.* utilized the information on action frequencies, types, and intervals in MMORPG log data. To detect game bots, Park *et al.* selected six game features, namely map changes, counter-turn, rest states, killing time, experience point, and stay in town. Chung *et al.* were concerned with various game play styles and classified them into four player types: killers, achievers, explorers, and socializers. Zhang *et al.* clarified user behaviors based on game playing time. While this approach provides high accuracy, it is limited in several ways. First, they only focus on observations of short time window, thus they are easy to evade. Second, some of such work focuses only on a limited feature space, thus the approach is prone to confusing bots with \"hardcore\" users (users who use the game for long times; who are increasingly becoming a phenomenon in the online gaming communities).\n\nSocial activity analysis uses the characteristics of the social network to differentiate between human users and game bots. Varvello *et al.* proposed a game bot detection method emphasizing on the social connections of players in a social graph. Our previous study chose chat logs that reflect user communication patterns and proposed a chatting pattern analysis framework\u00a0. 
Oh *et al.* used the fact that game bots and human users tend to form respective social networks in contrasting ways and focused on the in-game mentoring network. Our other previous work found that the goal of game bot parties is different from that of human users parties, and proposed a party log-based detection method\u00a0. This approach is however limited to detecting misbehavior in party play and cannot detect misbehavior in single play games.\n\nGold farming group analysis uses the virtual economy in online games and traces abnormal trade networks formed by gold farmers, merchants, bankers, and buyers. To characterize each player, Itsuki *et al.* used four types of statistics: total action count, activity time, total chat count, and the amount of virtual currency managed in a given period of time. Seo *et al.* analyzed gold farming group connection patterns using routing and source location information. Kwon *et al.* investigated gold farming networks and detected the entire network structure of gold farming groups. This work, while distantly related, is not concerned with the detection of bots, but with understanding the unique roles each bot plays in the virtual underground ecosystem given a valid detection.\n\nSequence analysis uses iterated sequence datasets from login to logout. Ahmed *et al.* studied activity sequence features, defined as the number of times a given player engages in an activity, such as the number of monsters killed and the number of times the player was killed. Kwon *et al.* used the combat sequence each avatar produces. Lee *et al.* examined the full action sequence of users on big data analysis platform. While such technique has been shown to work in the past, such feature lacks context, and might be easily manipulated by bot settings.\n\nSimilarity analysis uses the fact that game bots have a strong regular pattern because they play to earn in-game money. Kwon *et al.* derived vectors using the frequency of each event and calculated the vector's cosine similarity with a unit vector. Game bots repeatedly do the same series of actions, therefore their action sequences have high self-similarity. Lee *et al.* employed self-similarity measures to detect game bots. They proposed the self-similarity measure and tested it in three major MMORPGs (\"Lineage\", \"Aion\" and \"Blade&Soul\"). Their scheme requires a lot of data of certain behavior for establishing self-similarity.\n\nMoving path analysis uses the fact that game bots have pre-scheduled moving paths, whereas human users have various moving patterns. Thawonmas *et al.* provided a method for detecting landmarks from user traces using the weighted entropy of the distribution of visiting users in a game map. They presented user clusters based on transition probabilities. To identify game bots and human users, Van Kesteren *et al.* took advantage of the difference in their movement patterns. Mitterhofer *et al.* detected the players controlled by a script with repeated movement patterns. Pao *et al.* used the entropy values of a user's trace and a series of location coordinates. They employed a Markov chain model to describe the behavior of the target trajectory. Pao *et al.* applied their method to various types of trajectories, including handwriting, mouse, and game traces, in addition to the traces of animal movement. 
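As a rough illustration of the trace-entropy idea surveyed above, the sketch below computes the Shannon entropy of the distribution of map cells visited along a trajectory; the grid size and the toy traces are hypothetical, and this is only a minimal sketch of the general technique, not any specific author's implementation.

```python
from collections import Counter
from math import log

def trace_entropy(trace, cell_size=10):
    # Shannon entropy (in nats) of the distribution of visited map cells.
    cells = Counter((int(x) // cell_size, int(y) // cell_size) for x, y in trace)
    total = sum(cells.values())
    return -sum((c / total) * log(c / total) for c in cells.values())

# Hypothetical traces: a scripted bot looping between two spots vs. a human wandering widely.
bot_trace = [(5, 5), (15, 5)] * 50
human_trace = [((x * 7) % 100, (x * 13) % 100) for x in range(100)]
print(trace_entropy(bot_trace))    # low entropy: only two cells are ever visited
print(trace_entropy(human_trace))  # higher entropy: visits are spread over many cells
```

A repetitive, pre-scheduled path concentrates its visits in a few cells and therefore yields a low entropy, which is the intuition behind using trace entropy as a bot indicator.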
However, their features can also be evaded or obscured with noise by adaptive bots that integrate human-like moving behavior.\n\n# Methods\n\nBefore elaborating on the framework and workflow of our method, we first highlight the dataset and ethical guidelines used for obtaining and analyzing it.\n\n**Dataset.** To perform this study, we rely on a real-world dataset obtained from the operation of Aion, a popular game. Our Aion dataset contains all in-game action logs for 88 days, between April 9th and July 5th of 2010. During this period, there were 49,739 characters that played more than three hours. Among these players, 7,702 characters were game bots, identified and labeled by the game company. The banned list was provided by the game company to serve as the ground truth, and each banned user has been vetted and verified by human labor and active monitoring.\n\n**Ethical and privacy considerations.** In order to perform this study we follow best practices in ensuring users' privacy and complying with ethical guidelines. First, the privacy of users in the data is ensured by anonymizing all personally identifiable information. Furthermore, consent of users is taken into account by ensuring that data analysis is within the scope of the end user license agreement (EULA): upon joining Aion, users grant NCSoft, Inc. full permission to use and share user data for analysis purposes with parties of NCSoft's choosing. One such party was our research group, and the data were used for research purposes only.\n\n## Framework and workflow\n\nOur proposed framework for game bot detection is shown in Figure . We posed the problem of identifying game bots as a binary classification problem. At a high level, our method starts with a data collection phase, followed by a data exploration phase (including feature extraction), a machine learning phase, and a validation phase. In the following we highlight each of those phases.\n\n**Data collection.** In the data collection phase, we gathered a dataset that combines in-game logs and chat contents.\n\n**Data exploration.** We then performed data exploration in order to comprehend the characteristics of the dataset using data preprocessing, feature extraction, feature representation, exploration, and selection for best discriminating between bots and normal users. In the feature representation procedure, we followed standard methods for unifying data and reducing its dimensionality. For example, we quantized each network measure into three clusters with low, medium, and high values using the k-means clustering algorithm. In the feature exploration phase, we selected the components of the data vectors and pre-processed them. For example, we determined seven activities as social interactions and quantified the diversity of social interactions by the Shannon diversity entropy. In the feature selection phase, we selected significant features with the best-first search, greedy-stepwise search, and information gain ranking filter to avoid overfitting and reduce the number of features (thus improving the performance).\n\n**Machine learning.** In the machine learning phase, we choose algorithms (e.g., decision tree, random forest, logistic regression, and na\u00efve Bayes) and parameters (e.g., $k$-fold cross-validation parameters, specific algorithm parameters, etc.), and feed the data collected using the selected features in their corresponding representation.
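As a rough sketch of the feature representation and selection steps described above (the feature names, data, and labels below are hypothetical, and information gain is approximated with scikit-learn's mutual information), the quantization and ranking could look as follows:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
# Hypothetical raw features for 1,000 characters: one network measure and daily play time.
degree = rng.exponential(scale=5.0, size=1000)
play_time = rng.normal(loc=4.0, scale=2.0, size=1000)
labels = rng.integers(0, 2, size=1000)   # toy labels: 1 = game bot, 0 = human user
play_time[labels == 1] += 6.0            # toy signal: bots play much longer per day

# Feature representation: quantize the network measure into three clusters
# (low, medium, high) with k-means, as in the representation step above.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
degree_bin = kmeans.fit_predict(degree.reshape(-1, 1))

# Feature selection: rank features by information gain, approximated here
# with mutual information between each feature and the bot/human label.
X = np.column_stack([degree_bin, play_time])
scores = mutual_info_classif(X, labels, discrete_features=[True, False], random_state=0)
print(dict(zip(['degree_bin', 'play_time'], scores)))
```

In the actual pipeline the same idea would be applied to the full set of personal and social features before handing the selected subset to the classifiers.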
We further build models (using the data fed) and establish baselines by computing various performance metrics.\n\n**Evaluation.** In the evaluation phase, we summarize the performance of each classifier with the banned account list provided by the game company as a ground truth, by providing various performance measures, such as the accuracy, precision, recall, and F-measure.\n\n**Used features and their gap.** As indicated in Table , we classified the features we used in our work into personal and social features. Given that the aim of game bots is to earn unfair profits, there is a gap between the values of the personal features of game bots and those of human users. The personal features can be also categorized into player information and actions. The player information features include login frequency, play time, game money, and number of IP address. The player action features contain sitting (an action taken by players to recover their health), earning experience points, obtaining items, earning game money, earning player kill (PK) points, harvesting items, resurrecting, restoring experience points, being killed by a non-player and\/or player character (NPC\/PC), and using portals. The frequency and ratio of these actions reflects the behavioral characteristics of game bots and human users. For example, game bots sit more frequently than human users to recover health and mana points. Moreover, a player can acquire PK points by defeating players of opposing factions. PK points can be used to purchase various items from vendors. PK points are also used to determine a player's rank within the game world. In Aion, the more PK points a player has, the higher is the player's rank. The high ranking player can feel a sense of accomplishment. On the other hand, it is seen that game bots are not interested in rank.\n\n```latex\n\\begin{table*}[t]\\begin{center}\n\\caption{Personal and social features.}\n{\\scriptsize\n\\begin{tabular}{p{2cm}p{3.8cm}p{11.1cm}}\\hline\n\\multicolumn{2}{l}{Category} & Key idea\\\\ \\hline\nPersonal feature & Player information & Login frequency, play time, game money, number of IP address\\\\ \\hline\n & Player actions & Sitting, earning experience points, obtaining items, earning game money, earning player kill points, harvesting items, resurrection, restoring experience points, being killed by a non-player and\/or player character, using portals\\\\ \\hline\nSocial feature & Group activities & Party play time, guild activities\\\\ \\hline\n & Social interaction diversity & Party play, friendship, trade, whisper, mail, shop, guild\\\\ \\hline\n & Network measures & Degree centrality, betweenness centrality, closeness centrality, eigenvector centrality, eccentricity, authority, hub, PageRank, clustering coefficient\\\\ \\hline\n\\end{tabular}\n\\label{table2}\n}\n\\end{center}\n\\end{table*}\n```\n\nIn addition, there is gap between the values of the social features of game bots and those of human users because game bots do not attempt to social as humans. The social features can be categorized into group activities, social interaction diversity, and network measures. The features of group activities include the average duration of party play and number of guild activities. Party play is a group play formed by two or more players in order to undertake quests or missions together. The goals of party play commonly are to complete difficult quests by collaboration and enjoy socialization. 
Interestingly, some game bots perform party play, but the goal of party play of the game bots is different from that of human users. Their aim is to acquire game money and items faster and more efficiently. Hence, there are the behavioral differences between game bots and human users. The social interaction diversity feature indicates the entropy of party play, friendship, trade, whisper, mail, shop, and guild actions. Game bots concentrate only on particular actions, whereas human users execute multiple tasks as needed to thrive in the online game world. The player's social interaction network can be represented as a graph with characters as the nodes and interactions between them as the edges. An edge between two nodes (players) in this graph may, for example, highlight the transfer of an item between the two nodes. The features of network measures include the degree, betweenness, closeness, eigenvector centrality, eccentricity, authority, hub, PageRank, and clustering coefficient. The definitions of the network measures are listed in Table .\n\n```latex\n\\begin{table*}[h]\\begin{center}\n\\caption{Definition of network measures. Network measures include degree, betweenness, closeness centrality, and efficiency.}\n{\\scriptsize\n\\begin{tabular}{lp{13cm}}\n\\hline\nNetwork measures & Definitions\\\\ \\hline\nDegree centrality & The most intuitive notion of centrality focuses on the degree. The more edges an actor has, the more important it is.\\\\ \\hline\nBetweenness centrality & Counts the number of shortest paths between two nodes on which a given actor resides.\\\\ \\hline\nCloseness centrality & An actor is considered important if it is relatively close to all other actors. Closeness is based on the inverse of the distance of each actor to every other actor in the network.\\\\ \\hline\nEigenvector centrality & Indicates that a given node has a relationship with other valuable nodes. A high eigenvector value for an actor means that a node has several neighbors with high eigenvector values.\\\\ \\hline\nEccentricity & The eccentricity of node v is calculated by computing the shortest path between node v and all other nodes in the graph; then the longest shortest path is chosen.\\\\ \\hline\nAuthority & Exhibits a node pointed to by many good hubs.\\\\ \\hline\nHub & Exhibits a node that points to many good authorities.\\\\ \\hline\nPageRank & Assigns a numerical weight to each element of a hyperlinked set of documents, such as the World Wide Web, with the purpose of ``measuring'' its relative importance within the set.\\\\ \\hline\nClustering coefficient & Quantifies how close neighbors are to being a clique. A clique is a subset of all of the edges connecting pairs of vertices of an undirected graph. \\\\ \\hline\n\\end{tabular}\n\\label{table3}\n}\n\\end{center}\n\\end{table*}\n```\n\n# Results and discussion\n\nIn this section we review more concretely the behavioral characteristics of bots and humans based on the various features utilized, and using the aforementioned dataset. We then propose our bot detection mechanism based on discriminative features and by elaborating on details of the high level workflow in the previous section, including the performance evaluation.\n\n## Behavioral characteristics\n\n### Player information\n\nWe compared the distribution of player information features in order to identify the difference between the behavioral characteristics of game bots and human users more concretely. Figure shows how intensively game bots play games. 
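The kind of distribution comparison used here can be reproduced with empirical cumulative distribution functions; the sketch below is only illustrative, with hypothetical per-character play-time values rather than the actual Aion logs.

```python
import numpy as np

def ecdf(values):
    # Empirical CDF: sorted values and the cumulative fraction of players at or below each value.
    x = np.sort(values)
    y = np.arange(1, len(x) + 1) / len(x)
    return x, y

rng = np.random.default_rng(1)
# Hypothetical daily play time (hours) for the two groups.
bot_hours = np.clip(rng.normal(loc=20.0, scale=3.0, size=500), 0, 24)
human_hours = np.clip(rng.normal(loc=3.0, scale=2.0, size=5000), 0, 24)

for name, hours in [('bot', bot_hours), ('human', human_hours)]:
    x, y = ecdf(hours)
    below = y[x <= 12.0]
    # ECDF evaluated at 12 hours: the fraction of the group playing at most 12 hours a day.
    print(name, round(float(below[-1]) if below.size else 0.0, 3))
```

Plotting the two ECDFs against each other is what makes a feature such as play time or the number of items harvested per day visibly discriminative.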
Game bots often connect to the game and spend much more time playing it than human users. Game bots can play a given game for 24 consecutive hours, whereas human users hardly connect to the game during working hours. Game bots invest significant time in a game until they are blocked. Figure (c) shows the cumulative distribution of the maximum number of items harvested by users per day. It is almost impossible for human users to harvest more than 1,000 items per day. Since this is repetitive and hard work, human users are easily exhausted. Nevertheless, 60% of game bots harvest more than 5,000 items a day. This is an obvious characteristic for identifying game bots that we include in our feature set.\n\n### Player actions\n\nWe examined the frequency and ratio of player actions to determine the unique characteristics of game bots. Figure presents the ratios of the activities of both game bots and human users. The points in red indicate game bots, and those in blue indicate human users. The ratio of \"earning game money\" of game bots is nearly the same as that of human users. Remarkably, the ratios of \"earning experience points\" and \"obtaining items\" of game bots are much higher than those of human users. The cumulative ratio of \"earning experience points\", \"obtaining items\", and \"earning game money\" of game bots is close to 0.5, whereas that of human users is only 0.33. This implies that game bots concentrate heavily on profit-related activities, whereas human users enjoy various activities. In contrast, the ratio of \"earning PK points\" of human users is as much as three times that of game bots. This reflects the fact that game bots are not interested in rankings.\n\n### Group activities\n\nFigure shows the distribution of the average party play time of game bots and human users. To acquire game money and items, some game bots form a party with other game bots. They can help each other not to be killed by monsters during party play. Consequently, their party play patterns are unusual. For 80% of game bots, the average party play time exceeds 4 hours 10 minutes, whereas for 80% of human users it is less than 2 hours 20 minutes. Since difficult missions can normally be completed within two hours through collaboration, human users do not maintain party play as long as game bots.\n\n### Social interaction diversity\n\nFigure shows the cumulative distribution of the entropy of social interactions. First, we determined seven activities as social interactions: party, friendship, trade, whisper, mail, shop, and guild. We quantified the diversity of social interactions by calculating the Shannon diversity entropy defined by: $$\\begin{aligned}\n\\label{eq:shannon_diversity} \nH' = -\\sum_{i=1}^{n}{p_i \\ln{p_i}}\n\\end{aligned}$$\n\n| | |\n|:------|:------------------------------------------------------------|\n| $n$ | number of social interaction types |\n| $p_i$ | relative proportion of the $i^{th}$ social interaction type |\n\nThe entropy of the social interactions of a player indicates the variety of activities performed by the player. Figure shows that human users enjoy diverse activities, whereas game bots do not. We notice that game bots show little interest in activities other than a few profit-related ones.\n\n### Network measures\n\nIn Figure , we present the basic characteristics of each directed interaction network of the game bot and human groups from Aion\u00a0. First, we see that the average degree of the human group is approximately 18 times larger than that of the game bot group in the party network.
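A minimal sketch of the two quantities discussed in this and the previous subsection, namely the Shannon diversity of a character's social interactions and per-group statistics of a directed interaction network, is given below; the interaction counts and edge lists are hypothetical.

```python
from math import log
import networkx as nx

def shannon_diversity(counts):
    # H' = -sum(p_i * ln p_i) over the relative proportions of interaction types.
    total = sum(counts.values())
    return -sum((c / total) * log(c / total) for c in counts.values() if c > 0)

# Hypothetical per-character counts over the seven social interaction types.
human_counts = {'party': 30, 'friendship': 12, 'trade': 25, 'whisper': 40, 'mail': 8, 'shop': 5, 'guild': 10}
bot_counts = {'party': 5, 'friendship': 0, 'trade': 200, 'whisper': 0, 'mail': 60, 'shop': 0, 'guild': 0}
print(round(shannon_diversity(human_counts), 2))  # higher entropy: diverse interactions
print(round(shannon_diversity(bot_counts), 2))    # lower entropy: a few profit-related interactions

# Hypothetical directed party-interaction edges (source played with target).
human_party = nx.DiGraph([(1, 2), (1, 3), (2, 3), (3, 4), (4, 1), (2, 5), (5, 1)])
bot_party = nx.DiGraph([(10, 11), (11, 10)])

for name, g in [('human', human_party), ('bot', bot_party)]:
    # Average degree computed as edges per node, consistent with the values in Table 4.
    avg_degree = g.number_of_edges() / g.number_of_nodes()
    avg_clustering = nx.average_clustering(g.to_undirected())
    print(name, round(avg_degree, 2), round(avg_clustering, 2))
```

On the real interaction networks these per-group statistics are what Table 4 summarizes, with the human group showing a much higher average degree in the party network.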
The reason is that human users form a party with many and unspecified users, whereas game bots play with several specific other game bots. The average degree of the friendship network of the human group is larger by a factor of approximately four compared with the game bot group. This fact indicates that the friendship of game bots is utterly different from that of human users. Game bot friends simply mean other game bots with which to play. The fact that the average degree of the human group is 2.5 times larger than the game bot group is observed in the case of the trade network. However, the average clustering coefficient of the game bot group is approximately five times larger compared with the human group. We assume that game bots have roles\u00a0. For instance, some game bots are responsible for gold farming, while other game bots gather game money and items from gold farmers or sell them for real money\u00a0.\n\nInterestingly, in the case of the mail network of the game bots, we discovered nine spammers during the observation period. The number of mail pieces sent by the spammers is 1,000 times per person on average. We observed the behavioral characteristics of the spammers in more detail. Hence, we found that they only send mail and stay online for a short period of time in the online game world.\n\nWe also observed the existence of five collectors who received items attached to mail from many other game bots. These collectors received items over 6,000 times during the observation period. This shows that there are several gold farming groups. In the case of the shop network, we can see the smallest number of nodes of both groups. Players are immobile in the merchant mode, and thus cannot engage in any action that requires movement, such as hunting monsters, harvesting items, etc. Consequently, game bots do not focus on the merchant mode because it can be a waste of time for them.\n\n```latex\n\\begin{table*}[t]\\caption{Basic network characteristics of six interaction networks. The average degree of all interaction networks of the human group is higher than that of the game bot group. This shows that game bots do not enjoy socializing with other users.}\n\\begin{tabular}{lrrrrrrrrrrrr} \\hline\n & \\multicolumn{2}{l}{{Party}} & \\multicolumn{2}{l}{{Friendship}} & \\multicolumn{2}{l}{{Trade}} & \\multicolumn{2}{l}{{Whisper}} & \\multicolumn{2}{l}{{Mail}} & \\multicolumn{2}{l}{{Shop}} \\\\ \\cline{2-13}\n& \\multicolumn{1}{c}{Bot} & \\multicolumn{1}{c}{Human} & \\multicolumn{1}{c}{Bot} & \\multicolumn{1}{c}{Human} & \\multicolumn{1}{c}{Bot} & \\multicolumn{1}{c}{Human} & \\multicolumn{1}{c}{Bot} & \\multicolumn{1}{c}{Human} & \\multicolumn{1}{c}{Bot} & \\multicolumn{1}{c}{Human} & \\multicolumn{1}{c}{Bot} & \\multicolumn{1}{c}{Human} \\\\ \\hline\nNodes & 1756 & 33924 & 479 & 24628 & 4003 & 30640 & 434 & 16209 & 4848 & 28362 & 305 & 7001\\\\ \nEdges & 2463 & 862021 & 749 & 174626 & 9809 & 162236 & 656 & 248133 & 12873 & 76844 & 362 & 11824\\\\ \nAvg. degree & 1.4 & 25.41 & 1.56 & 7.09 & 2.45 & 5.29 & 1.51 & 15.31 & 2.66 & 2.71 & 1.19 & 1.7\\\\ \nNetwork diam. & 22 & 15 & 9 & 15 & 25 & 18 & 23 & 12 & 9 & 24 & 5 & 28\\\\ \nAvg. C.C. & 0.1 & 0.07 & 0.07 & 0.09 & 0.41 & 0.08 & 0.01 & 0.05 & 0.12 & 0.19 & 0.12 & 0.01\\\\ \nAvg. path len. 
& 6.14 & 3.77 & 2.18 & 4.7 & 5.66 & 5.41 & 6.41 & 3.65 & 2.16 & 7.55 & 1.58 & 8.14\\\\ \\hline\n\\end{tabular}\n\\label{table4}\n\\end{table*}\n```\n\n### The triad census\n\nThe relative prevalence of each of the 13 triad network motifs given in Figure (a) indicates the interaction pattern in the networks in more detail\u00a0. For our Aion networks, we show the interaction pattern in Figure (b) in terms of both the fractions of each motif type and the Z-scores assessed against the null model (Eq.\u00a0, also see and ). This score is defined as follows:\n\n$$\\begin{aligned}\n\\label{eq:zscore} \nZ_i = {N_{i}^{\\textnormal{real}} - N_{i}^{\\textnormal{random}} \\over \\sigma_{i}^{\\textnormal{random}}}\n\\end{aligned}$$ where $N_{i}^{\\textnormal{real}}$ is the number of occurrences of motif $i$ observed in the network, $N_{i}^{\\textnormal{random}}$ is the expected number in the randomized network, and $\\sigma_{i}^{\\textnormal{random}}$ is the standard deviation of its expected number in the randomized network.\n\n**Findings.** Interestingly, the friendship, whisper, mail, and shop networks of the game bot group, and the friendship and shop networks of the human group, show one predominant motif type. For instance, in the friendship network, type 7 accounts for more than 90% of the node triplet relationships, which can be attributed to the highly reciprocal nature of the interactions. The opposite reasoning can be applied to the shop network: low reciprocity reflects the existence of big merchants. Moreover, in the whisper and mail networks of the game bot group, type 1 accounts for more than 80% of the node triplet relationships. This reflects the fact that some game bots send information about the location coordinates of monsters to other game bots in the case of the whisper network, and some game bots send large numbers of mail pieces in the case of the mail network.\n\nComparing the prevalence of motifs against the null models allows us to detect signals discounted by random expectation, and this is done via the Z-scores (Eq.). This is particularly necessary and illuminating in the case of the other two networks (party and trade) because, by considering the null models, we can see that although multiple motifs can be similarly abundant (Figure (b)), some can be significantly over- or underrepresented, as we can see in Figure . In the case of the human group, the overrepresented motif type 5 (with $\\tilde{Z} > 0.4$, where the normalized score is $\\tilde{Z} \\equiv Z_{i} \/ \\sqrt{\\sum_{i} Z_{i}^{2}}$) is indeed closed triangles, consistent with the relatively high clustering tendencies in the party network. In the case of the game bot group, the overrepresented motif type 13 shows that there is a large gap between the number of motifs observed in the network and the expected number of motifs in the randomized network. This reflects the fact that game bots have their own group for helping and trading with each other.\n\n### Network overlap\n\nTo determine how pairwise networks are correlated, we studied the network similarities between the game bot and human groups. For example, two networks can show similar clustering values, and yet this does not guarantee at all that nodes connected in one network are connected in another, or that the nodes show similar levels of activity. Thus, we consider here two measures of network overlap. The first is the link overlap between two networks quantified by the Jaccard coefficient.
The second is the degree overlap given by the Pearson correlation coefficient between node degrees in network pairs. The results of link and degree overlap for ten network pairs of the game bot and human groups are given in Figure . By examining the link overlap (Figure (a)), we found that the game bot group has a higher Jaccard coefficient in the party-friendship and party-trade pairwise networks. This is a result of the fact that the main activities of game bots are party play and trading items. The friend list offers convenience to a game bot when it wants to form a party group. Game bots gather game money and items collected through party play into an account by trading. The account that collects the cyber assets then converts the game money and items into real money.\n\nNode degree overlap (Figure (b)) is another way of seeing the connection between interactions: here, for instance, the party-trade pairwise networks of the human group show a positive Pearson correlation coefficient value that exceeds 0.7, which can be understood by the fact that a party, being above all the favorite way of engaging in battles or hunting, often concludes with members trading the loot. In contrast, the Pearson correlation coefficient values of the game bot group are extremely low because game bots maintain relationships with only a small number of other game bots.\n\n## Game bot detection\n\nWe took a discriminative approach to learning the distinction between game bots and human users, and built classifiers that recognize this distinction automatically. We divided the dataset into training and test sets, built the classifiers on the training dataset, and evaluated the trained classifiers on the test dataset. In addition, we performed 10-fold cross-validation to prevent the classifiers from overfitting to a particular split of the data. Cross-validation estimates how well a classifier trained on one part of the data generalizes to held-out data. 10-fold cross-validation divides the dataset into ten groups, trains the learning model with nine randomly selected groups, and validates the resulting classifiers on the remaining group. These training and validation processes are repeated ten times.\n\n### Feature selection\n\nWe compared the bot detection results from our model with the banned account list provided by the game company in order to evaluate the proposed framework when running our detection method on the selected features. We conducted feature selection with the best-first, greedy stepwise, and information gain ranking filter algorithms in advance in order to improve the selection process. Feature_Set1 consists of all the features (114) mentioned in section . Feature_Set2 is composed of the top 62 features extracted by the information gain ranking filter algorithm. Feature_Set3 is comprised of the six features selected by the best-first and greedy stepwise algorithms. Figure shows the classification results using these three feature sets. Feature_Set3 presents lower performance than Feature_Set1 and Feature_Set2. In comparison, Feature_Set2 has almost the same performance as Feature_Set1, although the number of features in Feature_Set2 is barely half that of Feature_Set1. Thus, we finally selected Feature_Set2 for game bot detection.\n\n### Classification and evaluation\n\nThe results of the users' behavioral pattern analysis for game bot detection are listed in Table .
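For context, a minimal sketch of the training and validation setup described above is given below, with a hypothetical feature matrix standing in for the selected Feature_Set2 columns; it is not the production pipeline, only an illustration of 10-fold cross-validated random forest training.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_characters, n_features = 5000, 62               # 62 mirrors the size of Feature_Set2
X = rng.normal(size=(n_characters, n_features))
y = (rng.random(n_characters) < 0.15).astype(int)  # roughly 15% bots, as in the dataset
X[y == 1, :5] += 1.5                               # make the toy task learnable for the sketch

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=10, scoring='accuracy')
print(round(scores.mean(), 3), round(scores.std(), 3))
```

Repeating the same loop with the decision tree, logistic regression, and naive Bayes models yields the per-classifier accuracy, precision, recall, and F-measure figures reported next.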
The four classifiers used as training algorithms\u2014decision tree, random forest, logistic regression, and na\u00efve Bayes\u2014are tested on Feature_Set2. The performances are listed in terms of overall accuracy, precision, recall, and F-measure. Random forest outperforms the other models. Its overall accuracy and its bot-class precision, recall, and F-measure with emphasis on precision ($\\alpha$ = 0.9) are 0.961, 0.956, 0.742, and 0.929, respectively. As can be seen, the recall value is relatively low. We analyzed the characteristics of true positive, false positive, false negative, and true negative cases to investigate the cause of this phenomenon.\n\nThe random forest technique is a well-known ensemble learning method for classification; it constructs multiple decision trees in its training phase to overcome a single decision tree's overfitting problem. Random forest learning is also robust when training on an imbalanced data set, and it is useful when training on large data with many features. Our data set consists of 85% human players and 15% game bots\u2014an imbalanced and large data set\u2014which is precisely the setting in which random forests are expected to perform well.\n\nNa\u00efve Bayes showed the lowest performance among the four classifiers, probably because it is a generative model that assumes independence between features. Although we performed feature selection, correlations remain between the selected features used in our experiment. For example, obtaining_items_count, earning_exp_points_count, harvesting_items_max_count, party_eccentricity, play_time and obtaining_items_ratio are less significant features. However, those features are naturally correlated and cannot easily be separated, because they are all related to essential game behaviors (hunting, harvesting, collaboration, etc., which all reflect high-level processes). Indeed, this hypothesis is confirmed by removing those features, which brings the performance of na\u00efve Bayes on par with the other algorithms.\n\n```latex\n\\begin{table*}[t]\\begin{center}\n\\caption{Precision, recall, and F-measure (0.9) ratios for each classifier. The random forest model achieves the highest performance, with an overall accuracy of 0.961.}\n{\n\\begin{tabular}{lrrrrrrr}\n\\hline\n\\multicolumn{1}{l}{\\multirow{2}{*}{\\textbf{Classifier}}} & \\multicolumn{1}{l}{\\multirow{2}{*}{\\textbf{\\begin{tabular}[c]{@{}l@{}}Overall\\\\ Accuracy\\end{tabular}}}} & \\multicolumn{3}{l}{\\textbf{Human}} & \\multicolumn{3}{l}{\\textbf{Bot}} \\\\ \\cline{3-8} \n\\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} & \\multicolumn{1}{c}{Precision} & \\multicolumn{1}{c}{Recall} & \\multicolumn{1}{c}{F-Meas.(0.9)} & \\multicolumn{1}{c}{Precision} & \\multicolumn{1}{c}{Recall} & \\multicolumn{1}{c}{F-Meas.(0.9)} \\\\ \\hline\nDecision Tree & 0.955 & 0.96 & 0.989 & 0.963 & 0.911 & 0.737 & 0.89\\\\\nRandom Forest & 0.961 & 0.961 & 0.995 & 0.964 & 0.956 & 0.742 & 0.929\\\\\nLogistic Regression & 0.955 & 0.956 & 0.994 & 0.96 & 0.95 & 0.705 & 0.918\\\\\nNa\\\"ive Bayes & 0.948 & 0.96 & 0.981 & 0.962 & 0.859 & 0.734 & 0.845\\\\ \\hline\n\\end{tabular}\n\\label{table5}\n}\n\\end{center}\n\\end{table*}\n```\n\nFigure shows the relative similarities and differences of the classification evaluation outcomes (classes): true positive, false positive, false negative, and true negative. 
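For reference, the precision-weighted F-measure used in the table above is not spelled out in the text; the reported values are consistent with the standard weighted harmonic mean of precision $P$ and recall $R$, which is presumably the intended definition:

$$F_{\alpha} = \left( \frac{\alpha}{P} + \frac{1-\alpha}{R} \right)^{-1}, \qquad \alpha = 0.9 .$$

For example, the bot class of the random forest gives $\left(0.9/0.956 + 0.1/0.742\right)^{-1} \approx 0.929$, matching the tabulated value.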
To obtain the relative similarity, we normalize all classes by the lowest class value, thus comparing outcomes relatively. Such normalization brings the lowest class in the evaluation to one; for each other class, we calculate the ratio of its value to that of the lowest class. The pattern of the relative similarity is consistent for most features and classes, with the exception of the \"mail_between_centrality\" and \"mail_outdegree\" features. It is highly probable that the false positive cases are game bots that had not yet been detected, and this also implies that the false negative cases are human users who temporarily employed a game bot. To confirm this observation, we analyzed the false positive cases weekly and finally found harvesting and party play game bots.\n\n# Conclusions\n\nWe proposed a multimodal framework for detecting game bots in order to reduce damage to online game service providers and legitimate users. We observed the behavior of game bots and found several unique and discriminative characteristics. We found that game bots execute repetitive tasks associated with earning unfair profits, do not enjoy socializing with other players, are connected among themselves, and exchange cyber assets with each other. Interestingly, some game bots use the mail function to collect cyber assets. We utilized those observations to build discriminative features. We evaluated the performance of the proposed framework based on highly accurate ground truth \u2013 resulting from the banning of bots by the game company. The results showed that the framework can achieve a detection accuracy of 0.961. Nonetheless, we should consider that the banned list does not include every game bot.\n\nThe game company imposes penalty points on an account that performs abnormal activities, and eventually blocks the account when its cumulative penalty score becomes sufficiently high. Some game bots can evade the penalty scoring system of the game companies. Hence, the actions of a player are more important than whether the player is banned or not, and we consider a player to be a game bot when the player's actions are abnormal. We focused on the user behavioral patterns that reflect user status to interpret the misclassified cases, and hypothesize that the false positive cases are game bots not yet blocked, while the false negative cases are human users occasionally employing a game bot. Although these accounts differ from those in the banned list, they exhibit the same behavioral patterns. We believe that our detection model is made more robust by relying on multiple classes of features, and its analyses promise further interesting directions in understanding game bots and their detection.\n\n# Acknowledgements\n\nThis research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (2014R1A1A1006228).\n\n# Appendix\n\nThe complete frequency distribution of triangular motifs is shown in Table\u00a0. Network diameters from 100 randomized network versions are shown in Table\u00a0. 
The network diameters from 100 randomized network versions and a comparison between the bots and human users is shown in Table\u00a0.\n\n```latex\n\\begin{table*}[h]\\caption{Multimodal characteristics of the online game.}\\label{A2_Table}\n\\begin{tabular}{lp{1.5cm}p{1.5cm}p{1.5cm}p{1.5cm}p{1.5cm}p{1.5cm}p{1.5cm}p{1.5cm}p{1.5cm}p{1.5cm}p{1.5cm}p{1.5cm}}\n\\hline\n\\multirow{2}{*}{} & \\multicolumn{2}{l}{\\textbf{Party}} & \\multicolumn{2}{l}{\\textbf{Friendship}} & \\multicolumn{2}{l}{\\textbf{Trade}} & \\multicolumn{2}{l}{\\textbf{Whisper}} & \\multicolumn{2}{l}{\\textbf{Mail}} & \\multicolumn{2}{l}{\\textbf{Shop}} \\\\ \\cline{2-13} \n & \\multicolumn{1}{l}{Bot} & \\multicolumn{1}{l}{Human} & \\multicolumn{1}{l}{Bot} & \\multicolumn{1}{l}{Human} & \\multicolumn{1}{l}{Bot} & \\multicolumn{1}{l}{Human} & \\multicolumn{1}{l}{Bot} & \\multicolumn{1}{l}{Human} & \\multicolumn{1}{l}{Bot} & \\multicolumn{1}{l}{Human} & \\multicolumn{1}{l}{Bot} & \\multicolumn{1}{l}{Human} \\\\ \\hline\n\n\\textbf{Type 1} & \\multicolumn{1}{r}{ 15.04 } & \\multicolumn{1}{r}{ 17.78 } & \\multicolumn{1}{r}{ 0.16 } & \\multicolumn{1}{r}{ 0.56 } & \\multicolumn{1}{r}{ 11.52 } & \\multicolumn{1}{r}{ 17.81 } & \\multicolumn{1}{r}{ 82.66 } & \\multicolumn{1}{r}{ 11.64 } & \\multicolumn{1}{r}{ 99.54 } & \\multicolumn{1}{r}{ 17.43 } & \\multicolumn{1}{r}{ 16.71 } & \\multicolumn{1}{r}{ 2.49 }\\\\\n\n\\textbf{Type 2} & \\multicolumn{1}{r}{ 25.61 } & \\multicolumn{1}{r}{ 29.46 } & \\multicolumn{1}{r}{ 0.13 } & \\multicolumn{1}{r}{ 0.15 } & \\multicolumn{1}{r}{ 11.94 } & \\multicolumn{1}{r}{ 30.03 } & \\multicolumn{1}{r}{ 2.15 } & \\multicolumn{1}{r}{ 8.54 } & \\multicolumn{1}{r}{ 0.05 } & \\multicolumn{1}{r}{ 22.79 } & \\multicolumn{1}{r}{ 4.38 } & \\multicolumn{1}{r}{ 2.37 }\\\\ \n\n\\textbf{Type 3} & \\multicolumn{1}{r}{ 9.6 } & \\multicolumn{1}{r}{ 6.43 } & \\multicolumn{1}{r}{ 1.39 } & \\multicolumn{1}{r}{ 2.95 } & \\multicolumn{1}{r}{ 19.56 } & \\multicolumn{1}{r}{ 12.41 } & \\multicolumn{1}{r}{ 10.21 } & \\multicolumn{1}{r}{ 23.22 } & \\multicolumn{1}{r}{ 0.05 } & \\multicolumn{1}{r}{ 18.43 } & \\multicolumn{1}{r}{ 0.78 } & \\multicolumn{1}{r}{ 0.03 }\\\\ \n\n\\textbf{Type 4} & \\multicolumn{1}{r}{ 27.89 } & \\multicolumn{1}{r}{ 32.48 } & \\multicolumn{1}{r}{ 0.1 } & \\multicolumn{1}{r}{ 0.10 } & \\multicolumn{1}{r}{ 6.96 } & \\multicolumn{1}{r}{ 20.48 } & \\multicolumn{1}{r}{ 1.39 } & \\multicolumn{1}{r}{ 7.95 } & \\multicolumn{1}{r}{ 0.2 } & \\multicolumn{1}{r}{ 13.21 } & \\multicolumn{1}{r}{ 76.17 } & \\multicolumn{1}{r}{ 94.99 }\\\\ \n\n\\textbf{Type 5} & \\multicolumn{1}{r}{ 1.56 } & \\multicolumn{1}{r}{ 1.51 } & \\multicolumn{1}{r}{ 0.00 } & \\multicolumn{1}{r}{0.00 } & \\multicolumn{1}{r}{ 0.85 } & \\multicolumn{1}{r}{ 0.33 } & \\multicolumn{1}{r}{ 0.02 } & \\multicolumn{1}{r}{ 0.18 } & \\multicolumn{1}{r}{ 0.05 } & \\multicolumn{1}{r}{ 1.68 } & \\multicolumn{1}{r}{ 0.74 } & \\multicolumn{1}{r}{ 0.1 }\\\\\n\n\\textbf{Type 6} & \\multicolumn{1}{r}{ 0.41 } & \\multicolumn{1}{r}{ 0.18 } & \\multicolumn{1}{r}{ 0.00 } & \\multicolumn{1}{r}{ 0.00 } & \\multicolumn{1}{r}{ 0.94 } & \\multicolumn{1}{r}{ 0.17 } & \\multicolumn{1}{r}{ 0.02 } & \\multicolumn{1}{r}{ 0.14 } & \\multicolumn{1}{r}{ 0.00 } & \\multicolumn{1}{r}{ 0.97 } & \\multicolumn{1}{r}{ 0.08 } & \\multicolumn{1}{r}{ 0.00 }\\\\\n\n\\textbf{Type 7} & \\multicolumn{1}{r}{ 3.22 } & \\multicolumn{1}{r}{ 0.91 } & \\multicolumn{1}{r}{ 90.86 } & \\multicolumn{1}{r}{ 91.98 } & \\multicolumn{1}{r}{ 20.61 } & \\multicolumn{1}{r}{ 3.16 } & \\multicolumn{1}{r}{ 1.94 
} & \\multicolumn{1}{r}{ 25.9 } & \\multicolumn{1}{r}{ 0.03 } & \\multicolumn{1}{r}{ 5.75 } & \\multicolumn{1}{r}{ 0.72 } & \\multicolumn{1}{r}{ 0.00 }\\\\\n\n\\textbf{Type 8} & \\multicolumn{1}{r}{ 0.44 } & \\multicolumn{1}{r}{ 0.24 } & \\multicolumn{1}{r}{ 0.00 } & \\multicolumn{1}{r}{ 0.00 } & \\multicolumn{1}{r}{ 1.07 } & \\multicolumn{1}{r}{ 0.27 } & \\multicolumn{1}{r}{ 0.01 } & \\multicolumn{1}{r}{ 0.14 } & \\multicolumn{1}{r}{ 0.00 } & \\multicolumn{1}{r}{ 0.95 } & \\multicolumn{1}{r}{ 0.04 } & \\multicolumn{1}{r}{ 0.00 }\\\\\n\n\\textbf{Type 9} & \\multicolumn{1}{r}{ 0.14 } & \\multicolumn{1}{r}{ 0.16 } & \\multicolumn{1}{r}{ 0.00 } & \\multicolumn{1}{r}{ 0.00 } & \\multicolumn{1}{r}{ 0.12 } & \\multicolumn{1}{r}{ 0.06 } & \\multicolumn{1}{r}{ 0.00 } & \\multicolumn{1}{r}{ 0.01 } & \\multicolumn{1}{r}{ 0.00 } & \\multicolumn{1}{r}{ 0.19 } & \\multicolumn{1}{r}{ 0.00 } & \\multicolumn{1}{r}{ 0.00 }\\\\\n\n\\textbf{Type 10} & \\multicolumn{1}{r}{ 12.94 } & \\multicolumn{1}{r}{ 10.37 } & \\multicolumn{1}{r}{ 1.1 } & \\multicolumn{1}{r}{ 3.01 } & \\multicolumn{1}{r}{ 15.98 } & \\multicolumn{1}{r}{ 14.4 } & \\multicolumn{1}{r}{ 1.5 } & \\multicolumn{1}{r}{ 21.38 } & \\multicolumn{1}{r}{ 0.06 } & \\multicolumn{1}{r}{ 15.57 } & \\multicolumn{1}{r}{ 0.24 } & \\multicolumn{1}{r}{ 0.01 }\\\\\n\n\\textbf{Type 11} & \\multicolumn{1}{r}{ 0.69 } & \\multicolumn{1}{r}{ 0.29 } & \\multicolumn{1}{r}{ 0.00 } & \\multicolumn{1}{r}{ 0.01 } & \\multicolumn{1}{r}{ 1.07 } & \\multicolumn{1}{r}{ 0.2 } & \\multicolumn{1}{r}{ 0.02 } & \\multicolumn{1}{r}{ 0.15 } & \\multicolumn{1}{r}{ 0.02 } & \\multicolumn{1}{r}{ 0.84 } & \\multicolumn{1}{r}{ 0.12 } & \\multicolumn{1}{r}{ 0.00 }\\\\\n\n\\textbf{Type 12} & \\multicolumn{1}{r}{ 1.32 } & \\multicolumn{1}{r}{ 0.15 } & \\multicolumn{1}{r}{ 0.16 } & \\multicolumn{1}{r}{ 0.06 } & \\multicolumn{1}{r}{ 4.47 } & \\multicolumn{1}{r}{ 0.47 } & \\multicolumn{1}{r}{ 0.03 } & \\multicolumn{1}{r}{ 0.42 } & \\multicolumn{1}{r}{ 0.00 } & \\multicolumn{1}{r}{ 1.63 } & \\multicolumn{1}{r}{ 0.02 } & \\multicolumn{1}{r}{ 0.00 }\\\\\n\n\\textbf{Type 13} & \\multicolumn{1}{r}{ 1.14 } & \\multicolumn{1}{r}{ 0.04 } & \\multicolumn{1}{r}{ 6.1 } & \\multicolumn{1}{r}{ 1.17 } & \\multicolumn{1}{r}{ 4.92 } & \\multicolumn{1}{r}{ 0.21 } & \\multicolumn{1}{r}{ 0.04 } & \\multicolumn{1}{r}{ 0.32 } & \\multicolumn{1}{r}{ 0.00 } & \\multicolumn{1}{r}{ 0.56 } & \\multicolumn{1}{r}{ 0.00 } & \\multicolumn{1}{r}{ 0.00 }\\\\ \\hline\n\\end{tabular}\n\\end{table*}\n```\n```latex\n\\begin{tabular}{lC{3cm}C{3cm}}\n\\hline\n\\multirow{2}{*}{} & \\multicolumn{2}{c}{\\textbf{mean (stdev) diameter}} \\\\ \\cline{2-3} \n & Bot & Human \\\\ \\hline\n\\textbf{Party} & 45.25 (5.85) & 5 (0) \\\\\n\\textbf{Friendship} & 28.70 (3.85) & 10.10 (0.33) \\\\\n\\textbf{Trade} & 22.07 (1.22) & 12.87 (0.57) \\\\ \n\\textbf{Whisper} & 29.92 (4.41) & 6 (0) \\\\ \n\\textbf{Mail} & 20.46 (1.19) & 24.33 (1.17) \\\\\n\\textbf{Shop} & 24.57 (4.97) & 39.47 (2.62) \\\\ \\hline\n\\end{tabular}\n```\n```latex\n\\begin{tabular}{lC{3cm}C{3cm}}\n\\hline\n\\multirow{2}{*}{} & \\multicolumn{2}{c}{\\textbf{mean (stdev) diameter}} \\\\ \\cline{2-3} \n & Bot & Human \\\\ \\hline\n\\textbf{Party} & 45.25 (5.85) & 5 (0) \\\\\n\\textbf{Friendship} & 28.70 (3.85) & 10.10 (0.33) \\\\\n\\textbf{Trade} & 22.07 (1.22) & 12.87 (0.57) \\\\ \n\\textbf{Whisper} & 29.92 (4.41) & 6 (0) \\\\ \n\\textbf{Mail} & 20.46 (1.19) & 24.33 (1.17) \\\\\n\\textbf{Shop} & 24.57 (4.97) & 39.47 (2.62) \\\\ 
\\hline\n\\end{tabular}\n```","meta":{"dup_signals":{"dup_doc_count":12,"dup_dump_count":3,"dup_details":{"curated_sources":1,"2024-22":1,"unknown":10}},"filename":"out\/1606.01426_extract_paper.tex.md"},"subset":"arxiv"} +{"text":"author: Maurice Chiodo[^1] \u00a0\u00a0Dennis M\u00fcller[^2]\ndate: 1 November 2018\ntitle: Mathematicians and Ethical Engagement\n\n# \n\nIn the past, some mathematical societies have discussed ethical policies and issues and disseminated their own codes of conduct to address specific ethical concerns encountered by research mathematicians, such as those arising during publication. While ethical and behavioural issues specific to well-defined mathematical areas are of course still relevant, the last two decades have yielded many *new* ethical concerns that now affect *all* mathematicians in some way. Having taught these issues for more than two years at the University of Cambridge, we came to the realization that mathematicians can assume several different levels of ethical engagement . Ethics in mathematics is not a binary process.\n\nAs the oldest consistently used scientific tool in Western thinking, mathematics carries perhaps the greatest scientific authority. It has become an extraordinarily powerful instrument ubiquitous to all of science and technology. How many hours of mathematical work underpin the technology behind smartphones, airplane flights, or models of global climate dynamics? But the applications\u2014and therefore ethics\u2014of mathematics go well beyond engineering. Modern mathematics is at the heart of economics and finance, and excessive trust in mathematical models contributed to the 2007 financial crisis. Even the most ardent purists in number theory or algebra can no longer claim to \"just do the mathematics\" and \"leave the implications to ethicists\", as recent revelations about global mass surveillance have underscored their work's immediate social and political impact. It is now evident that one can wield practically *all* branches of mathematics both for good and harm. Modern mathematics is a double-edged sword.\n\nJust as physicists had to recognise the enormous ethical implications of their work after the atomic bombing of Hiroshima in August 1945, socially responsible mathematicians must also realise the existence of ethics in mathematical practise, which leads to issues far more complex and harder to characterize than publishing-related decisions. Plagiarism and the ethics of journal submission are real concerns, but hardly of the same order as these new ethical matters.\n\nThe inner workings of even areas of broad appeal\u2014such as data science, machine learning, and optimisation\u2014are often beyond the layman's comprehension. Lawyers and judges struggle to understand policing and sentencing algorithms, politicians stretch to comprehend the full capabilities of state surveillance agencies, and electoral commissions barely grasp the algorithms and mathematical psychometrics behind Cambridge Analytica's targeted advertising. Thus, only mathematicians can begin the process of unveiling the meaning, validity, applicability, and reliability of modern mathematics, paving the way for judges, politicians, and regulators to step in.\n\nEven if we feel that mathematical research is beyond all ethical consideration, as academics we must ask ourselves: What do our students do after graduation? We train them in a wide range of mathematics, but do we teach them to be aware of possible ethical issues in its use? 
As a society, we have long agreed that the so-called Nuremberg defense\u2014simply saying \"I'm just doing my job\" or \"I was only following orders\"\u2014is not a valid excuse. Thus, it is imperative for us to teach ethics to our students and help them better contextualise their mathematical work. In April 2016, we began giving ethics seminars featuring guest speakers from industry, academia, and intelligence agencies to researchers and students in the Faculty of Mathematics at Cambridge. Shortly thereafter, we organized the first conference on \"Ethics in Mathematics.\"$^1$``{=html} Through observation and case studies, we noticed that mathematicians can demonstrate what we term the \"four levels of ethical engagement.\" These levels form a recurring theme throughout our seminars.\n\nThe *first* level is the fundamental understanding that the practice of mathematics is *not* ethics free, and that ethical issues can surface in any mathematical work. One always performs mathematics in a social and political context, never in value-free isolation. Thus, all mathematicians must think about their individual responsibilities, as ethical issues may emerge at any time. This diligence can be as simple as considering environmental impact rather than merely optimising over time and money during a construction project. Mathematics can pose immediate or distant consequences that generally manifest as good, sometimes as not entirely good, and occasionally as downright bad. On this individualistic level, mathematicians modify and adapt their own ethical consciousness and actions, taking the important first step towards a more robust ethical awareness.\n\nThe *second* of these four levels involves mathematicians *speaking out* to other mathematicians, raising awareness of ethical issues among their peers. Individual mathematicians may recognize ethical issues in the mathematical work of others and try to inform them. They might precipitate unified action among their colleagues and locally bring about a collective ethical awareness and approach. Or they might write an article about ethics for their community, as we have done here.\n\nThe *third* level is more complex. It teaches mathematicians to *take a seat at the tables of power*. Mathematicians often need to learn the specific skills required to work with politicians, corporate management, and other non-scientists. These include engaging in policy discussions, establishing and rationalising their mathematical work's objectives, and communicating potential limitations and possible drawbacks. Engineers and computer scientists are taught this at the undergraduate level, but mathematicians seldom receive such lessons explicitly. Many mathematicians in advancing industry careers unexpectedly find themselves in positions that require these abilities. Mathematics is becoming an increasingly powerful social tool, and seeing its creators hiding behind formulae and retrospectively apologising is not appropriate. If we want to take credit for our output's positive impact, we should also be able to defend and properly contextualise our work and engage in apparently non-mathematical debates.\n\nOur *fourth* and final level is the responsibility of mathematicians to *call out the bad mathematics of others* by proactively seeking out, learning about, and acting upon instances where mathematics has \"gone wrong\" \u2014 possibly in unrelated organisations. However, bad mathematics occurs in two distinct forms. 
First, it can refer to the practice of claiming results that are not mathematically true. The catastrophic misuse of statistics in the trial of Sally Clark , which the Royal Statistical Society reprimanded through the release of a statement , is one such example. Bad mathematics can also refer to trained mathematicians' inappropriate use of mathematics by giving it excessive authority or directing it in ways that cause harm and exploit others. Members of any profession have the responsibility to hold their work\u2014and the work of their colleagues\u2014to high standards. Like statisticians, engineers, and doctors, mathematicians must adapt their own form of professional standards in academia, industry, and overall society. Some mathematicians are already questioning the validity and fairness of various decision-making algorithms or identifying the potential harms of artificial intelligence (AI), bringing such dangers into public consciousness and proposing workable solutions.\n\nPractising ethics in mathematics is not binary, and mathematicians must consider various levels of engagement and ethical sensibility. Of course, our aforementioned four levels are an artificial and simplistic construct. One can refine them ad nauseum, but collectively they illustrate the depth and complexity of ethics in mathematics.\n\nNot every mathematician will face problems pertaining to all levels, but everyone should remain aware of their social responsibilities, acknowledge the existence of ethical issues in the mathematical context, and appreciate their complexity. We teach students a broad spectrum of mathematics to prepare them for a wide variety of academic and professional eventualities. Why shouldn't we teach a broad spectrum of ethical situations in mathematics, which go beyond specialised courses such as ethics in AI? Lawyers, medics, biologists, engineers, physicists, and computer scientists learn subject-specific ethics because they will encounter these questions as professionals. Comprehending the seemingly-limitless uses of mathematics is difficult, and the ethical implications of modern mathematics depend on subtleties that only the mathematically-trained can understand. We are the only ones who can see behind the formulae. Thus, we should no longer leave these issues to professional ethicists and philosophers. No one else can address them, so we must.\n\n[^1]: King's College, University of Cambridge. `email@example.com` .\n\n[^2]: RWTH Aachen University `firstname.lastname@example.com` .","meta":{"dup_signals":{"dup_doc_count":13,"dup_dump_count":3,"dup_details":{"curated_sources":2,"2024-30":1,"unknown":10}},"filename":"out\/2212.11669_extract_main.tex.md"},"subset":"arxiv"} +{"text":"title: **Quantum Physics in a different ontology** \n .\n | |\n |:----------------------------------------------------------------------:|\n | Nalin de Silva |\n | |\n | Department of Mathematics, University of Kelaniya, Kelaniya, Sri Lanka |\n | |\n\n# Abstract\n\nIt is shown that neither the wave picture nor the ordinary particle picture offers a satisfactory explanation of the double-slit experiment. The Physicists who have been successful in formulating theories in the Newtonian Paradigm with its corresponding ontology find it difficult to interpret Quantum Physics which deals with particles that are not sensory perceptible. A different interpretation of Quantum Physics based in a different ontology is presented in what follows. 
According to the new interpretation Quantum particles have different properties from those of Classical Newtonian particles. The interference patterns are explained in terms of particles each of which passes through both slits.\n\n# INTRODUCTION\n\nPlanck introduced his ideas on quanta or packets of energy towards the end of the nineteenth century. In that sense Quantum Physics is more than one hundred years old. From the very beginning Quantum Physics came up with strange phenomena that made the Physicists to disbelieve what they themselves were proposing to understand the new features that were being observed.\n\nThe so-called double-slit experiment$^1$ continues to baffle the Physicists who are glued to twofold two valued logic that is behind the Newtonian paradigm. As it was one of the most fundamental experiments that they could not understand in Quantum Physics the Nobel Prize winning Physicist Richard Feynmann once declared that no body understood Quantum Physics! This statement by Feynmann makes one to delve into the meaning of understanding. In other words one has to understand what is meant by understanding. However, it is clear that if one is confined to an ontology based in twofold formal logic, and linear thinking one would be confused by a statement such as understanding what is meant by understanding. A decade ago the intellectuals who were only familiar with linear thinking and not with cyclic thinking would have left deliberations into such statements to whom they call mystics, as such statements did not come within the \"rational\" way of thinking. However, in this paper we would not attempt to understand what is meant by understanding.\n\nThe principle of superposition which was familiar to Classical Physicists as well, has taken an entirely different meaning with respect to Quantum Physics. The essence of the principle can be explained as follows. If $x$ and $y$ are two solutions of what is called a linear differential equation then $x+y$ is also a solution of the same differential equation. This is a simpler version of what is generally known as the principle of superposition. In Classical Physics two magnets giving rise to two different magnetic fields would combine to give one magnetic field, and a compass that is brought to the resulting magnetic field would respond to the resulting field, and not to the field of any one of the magnets. It has to be emphasised that a magnet is in only one state, corresponding to the respective magnetic field and it is the two fields of the two magnets that combine to give one field though one would not find a single magnet that gives rise to the resultant field. We could describe this phenomenon as that of two or more becoming one. However, in the Quantum world things are different, and the principle of superposition has an unusual interpretation.\n\n# THE WAVE NATURE OF PARTICLES\n\nIn order to discuss the new interpretation of the principle of superposition we first consider the so called double-slit experiment where a stream of electrons (in general, particles or photons) is made to pass through two slits and then to strike a screen. If both slits are open an interference pattern is observed on the screen. Now in Quantum Physics it is said that particles such as electrons posses wave properties and photons (light) exhibit particle properties in addition to their respective \"normal\" properties. 
Interference patterns are supposed to result from wave properties and according to the Physicists the wave theory successfully explains the formation of such patterns in the case of a stream of particles fired from a source to strike the screen after passing through the slits. The Physicists would claim that the double-slit experiment demonstrates that particles such as electrons do exhibit wave properties.\n\nThe double-slit experiment has been carried out with only one electron passing through the slits one at a time$^2$ (electrons at very low intensities) instead of a stream of particles released almost simultaneously to pass through the two slits. Even at very low intensities interference patterns have been observed after sufficiently large number of electrons had been fired from the source. The Physicists have been puzzled by this phenomenon. In the case of several electrons passing through the slits simultaneously it could be explained using the wave properties of the particles, in other words resorting to the wave picture. Unfortunately in the case of electrons being shot one at a time this explanation was not possible as what was observed on the screen was not a faint interference pattern corresponding to one electron but an electron striking the screen at a single point on the screen. These points in sufficiently large numbers, corresponding to a large number of electrons, finally gave rise to an interference pattern. The wave nature is only a way of speaking, as even in the case of large number of particles what is observed is a collection of points and not waves interfering with each other.\n\nThe Physicists also believe that an electron as a particle could pass through only one of the slits and a related question that has been asked is whether it was possible to find out the slit through which an electron passes on its way to the screen. Various mechanisms, including \"capturing\" the electron using Geiger counters, have been tried to \"detect the path\" of the electron, and it has been found that if the particular slit through which the electron passed was detected then the interference patterns were washed out. In other words determining the particle properties of the electron erased its wave properties. Bohr, who was instrumental in formulating the Copenhagen interpretation$^3$, was of the view that one could observe either the particle properties or the wave properties but not both, and the inability to observe both wave and particle properties simultaneously came to be referred to as complementarity. The experiments that attempted to determine the slit through which the electron passed were known as which-way (welcherweg) experiments as they attempted to find the way or the path of the particle from the source to the screen. The outcome of these experiments made it clear that the which-way experiments washed out the interference patterns. It was believed that at any given time the electrons exhibited either the particle properties or wave properties but not both.\n\nHowever, what the Physicists failed to recognize was that in the case of one electron shot at a time there was no weak interference pattern observed on the screen for each electron thus illustrating that a single electron did not exhibit any wave properties. The electron strikes the screen at one point, and it is the collection of a large number of such points or images on the screen that gave the interference pattern. 
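The point that the pattern emerges only from the accumulation of single impacts can be illustrated with a short simulation (a hedged sketch: the far-field two-slit probability density below is the standard textbook expression with arbitrary parameters, used purely for illustration and not as a model of the experiments discussed here). Each simulated particle contributes one point drawn from that density; no individual point shows fringes, while the histogram of many points does.

```python
import numpy as np

# Standard far-field two-slit intensity (illustrative, arbitrary units).
wavelength = 1.0
slit_separation = 5.0
slit_width = 1.0
screen_distance = 100.0

def two_slit_density(x):
    """Unnormalized |psi_1 + psi_2|^2 on the screen: cos^2 fringes times a sinc^2 envelope."""
    k = 2 * np.pi / wavelength
    theta = x / screen_distance                      # small-angle approximation
    beta = 0.5 * k * slit_width * theta
    envelope = np.sinc(beta / np.pi) ** 2            # (sin beta / beta)^2
    fringes = np.cos(0.5 * k * slit_separation * theta) ** 2
    return envelope * fringes

# Each "electron" contributes a single impact point, drawn by rejection sampling.
rng = np.random.default_rng(0)
x_grid = np.linspace(-60.0, 60.0, 2001)
p_max = two_slit_density(x_grid).max()
impacts = []
while len(impacts) < 20000:
    x = rng.uniform(-60.0, 60.0)
    if rng.uniform(0.0, p_max) < two_slit_density(x):
        impacts.append(x)                            # one particle, one point on the screen

# Fringes appear only when many single impacts are aggregated.
counts, edges = np.histogram(impacts, bins=120)
print(counts)
```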
In the case of a stream of electrons fired to strike the screen each electron would have met the screen at one point and the collection of such points or images would have given rise to an \"interference pattern\". Thus we could say that the interference patterns are obtained not as a result of the \"wave nature\" of electrons but due to the collectiveness of a large number of electrons that strike the screen. The \"wave nature\" arises out of \"particle\" properties and not due to \"wave properties\". Afshar$^4$ comes closer to this view when he states \"in other words, evidence for coherent wave-like behavior is not a single particle property, but an ensemble or multi-particle property\". We are of the opinion that in the double-slit experiments no wave properties are observed contrary to what is generally believed. It is the particle properties that are observed, though not necessarily those of ordinary classical particles.\n\nAs a case in point this does not mean that a particle in Quantum Physics has a definite path from the source to the screen through one of the slits, as could be expected in the case of classical particles. For a particle to have a path it should posses both position and momentum simultaneously. A path at any point (assuming that it is a continuous path without cusps and such other points) should have a well defined tangent. In the case of a particle moving, the direction of the velocity (and the momentum) of the particle at any given point defines the unit tangent vector to its path. Conversely the tangent to the path at any point defines the direction of the velocity and the momentum of the particle at that point. However, according to the Uncertainty Principle, both the momentum and the position of a particle cannot be determined simultaneously, and if the position is known then the momentum cannot be determined. Without the momentum the direction of the velocity of the particle and hence the tangent vector cannot be known implying that a continuous curve is not traced by a particle in space. On the other hand if the momentum of the particle is known then only the direction and magnitude of the velocity (momentum) and properties of other non conjugate observables such as spin of the particle are known, without the position being known. Thus the particle can be everywhere, with variable probabilities of finding the particle at different points, but at each point the particle being moving in parallel directions with the same speed. However, as will be explained later, this does not mean that we could observe the particle everywhere.\n\nIn the light of the uncertainty principle it is futile to design experiments to find out the path of a particle. The so-called which-way experiments have been designed to detect the slit through which the particle moves, on the assumption that the particle moves through one slit only. However, in effect there is no path that the particle follows and it is not correct to say that the particle passes through one of the slits. The which-way experiment actually stops the particle from reaching the screen and hence there is no possibility of obtaining any \"interference pattern\". It is not a case of observing particle properties destroying the wave properties of matter, but an instance of creating a situation where the particle is either not allowed to strike the screen or to pass through only one slit deliberately. 
In effect it is the particle properties exhibited at the screen that are cut off.\n\nWhat is important is to note that interference patterns are observed only if both slits are kept open, and also if the particles are free to reach the screen. If one slit is closed or obstacles are set up in the guise of which-way experiments or otherwise, so as not to allow the particles to reach the screen then no interference patterns are observed. The most important factor is the opening of the two slits. In the case of which-way experiments as well, what is effectively done is to close one of the slits as particles through that slit are not allowed to reach the screen. With only one slit open while the other slit is effectively closed with the which-way experiment apparatus, no interference patterns are observed.\n\nThe Physicists are obsessed with the idea that a particle can be only at one position at a given time, backed by the ontology of day to day experience. While this may be the experience with our sensory perceptible particles (objects) or what we may call ordinary Newtonian classical objects such as billiard balls, it need not be the case with Quantum particles. However, from the beginning of Quantum Physics, it appears that the Physicists have been of the view that a particle can be at one position at a given time whether it is being observed or not. Hence they seem to have assumed that on its \"journey to the screen from the source\" a particle could pass through only one of the slits. They have worked on the assumption that even if both slits are open the particle passes through only one of the slits but behaves differently to create interference patterns as if the particle is \"aware\" that both slits are open. According to the view of the Physicists if only one slit is open the particles having \"known\" that the other slit is closed pass through the open slit and \"decide\" not to form any interference patterns. It is clear that the explanation given by the Physicists for the formation of interference patterns on the basis of the particle picture is not satisfactory. We saw earlier that the explanation given in the wave picture is also not satisfactory as a single electron fired from the source does not form a faint interference pattern on the screen. If the particles behave like waves then even a single particle should behave like a wave and produce a faint interference pattern, having interfered with itself. What is emphasised here is that the final interference pattern is not the sum of faint interference patterns due to single particles, but an apparent pattern formed by a collection of images on the screen due to the particles. There is no interference pattern as such but only a collection of the points where the particles strike the screen, or of the images formed by the particles that were able to reach the screen. The images finally depend on the probability that a particle would be at a given position.\n\nBefore we proceed further a clarification has to be made on \"seeing\" a particle at a given position at a given time in respect of the double-slit experiment. In this experiment we are concerned with particles released from a source with a given momentum and given energy. As such according to the uncertainty principle, nothing can be said definitely on the position of these particles, immediately after they leave the source. It can only be said that there is a certain probability that the particle would be found in a certain position. 
Thus the particle is \"everywhere\" \"until\" it is \"caught\" at some position such as a slit or a screen. Though we have used the word \"until\", time is not defined as far as the particle is concerned as it has a definite energy. It can only be said that there is a certain probability that the particle could be \"seen\" at a given place at a given time, with respect to the observer. The particle is not only everywhere but also at \"every instant\". Thus it is meaningless to say the particle is at a given slit at a given time as neither time nor position is defined for the particle with respect to itself. The particle would meet the screen at some position on the screen at some time but \"before\" that it was everywhere and at every instant. A photon that is supposed to \"move along a straight line\" should not be considered as such, but being at all points along the straight line at \"all times\" \"before\" it interacts with a screen or another particle.\n\nThe probability of an electron striking the screen at a given point with only one slit open is not the same as that when both slits are open. Thus when a large number of particles strike the screen, the different probabilities give rise to different \"patterns\" which are essentially collection of points where the particles meet the screen. The \"interference patterns\" observed when both slits are open are replaced by \"other patterns\" when one of the slits is closed. The \"interference patterns\" as well as the \"other patterns\" are the results of particle properties, the difference being due to the number of slits that are open. If both slits are closed there is no pattern at all as no particle would reach the screen under such conditions. When one of the slits is open there is a probability that the particle can be at the position where the slit is whereas when both slits are open there is a probability that the particle could be at both the slits \"before\" reaching the screen. When both slits are open, the particle is at both slits and the position is not known while the momentum of the particle is not changed and has the original value with which it was shot. However, when one of the slits is blocked the particle is at the other slit implying that the momentum is not known. These uncertainties of the momentum would carry different particles to different places on the screen, while in the case when both slits are open it is the uncertainties of position that make the particle to strike the screen at different positions. The difference between the \"interference patterns\" and the \"other patterns\" is due to this.\n\n# EXPERIMENTS OF AFSHAR\n\nAfshar$^5$ has claimed that he was able to demonstrate that an electron or a photon would exhibit both particle and wave properties (Figure 1). He allowed light to pass through two slits and to interact with a wire grid placed so that the nodes were at the positions of zero probability of observing a photon. The photons were not affected by the wire grid as the nodes were at the positions of zero probability and at those positions there were no photons to interact with the grid. The photons were then intercepted by a lens system that was able to identify the slit through which any single photon had passed. 
According to Afshar the nodes of the grid at the positions of zero probability indicated that the wave properties of the photons were observable while the lens system in detecting the slit through which the photon had passed demonstrated the particle properties of the photons.\n\nHowever, in this experiment, assuming that the lens system detects the slit through which the photon passed, what is observed is again the particle properties of the photons. The wire grid with the nodes at the position of zero probabilities does not interact with the photons, as there are no photons at positions of zero probability to interact with the grid. No so called waves are observed, as there is no screen for the particles to strike. Thus the wire grid has no effect in this experiment and with or without such a grid the lens system would behave the same way.\n\nLet us consider what would happen if the wire grid is shifted forwards towards the source, backwards towards the lens system or laterally. As the nodes of the wire grid would be shifted from the positions of zero probability some photons would strike the grid and they would not proceed towards the lens system. Thus the number of photons that reach the lens system would be reduced and there would be a decrease in intensity of light received at the lens. Though Afshar claims that wave properties are observed just by placing a wire grid so that its nodes are at the positions of zero probability, it is not so.\n\nThe so called wave properties could be observed only by placing a screen in between the wire grid and the lens system. As we have mentioned above, even then what is observed is a collection of images at the points where the photons strike the screen, and not wave properties as such. In this case as all the photons would have been absorbed by the screen, the lens system would not be able to detect any photons nor the \"slit through which the photons passed\". On the other hand if the screen is kept beyond the lens system then there would not be any photons to strike the screen and hence no \"wave properties\".\n\n# EXPERIMENTS AT KELANIYA\n\nWe at the University of Kelaniya have given thought to this problem, and one of my students Suraj Chandana has carried out a number of experiments, which may be identified as extensions of the experiment of Afshar. Chandana and de Silva$^6$ had predicted that if we were to have a single slit and then a screen, instead of the wire grid and the lens system, \"after\" the photons have passed through the two slits, then the photons would pass through the single slit with the same probability as that of finding a photon at the point where the slit was kept. This implied that if the slit was kept at a point where the probability of finding the photon is zero, the photon would not pass through the slit to strike the screen, but on the other hand, if the slit was kept at any other point there was a non zero probability that the photon would pass through the slit, and striking the screen. Thus if a stream of photons is passed through two slits, and \"then\" a single slit, \"before\" striking the screen, depending on the position of the single slit the intensity with which the photons strike the screen would change. 
Further it implies that these intensities should correspond to the intensities observed in connection with the \"interference patterns\" observed in the case of the standard double-slit experiment, if the positions of the slit were varied along a line parallel (by moving the single slit along a line parallel to the double-slit and the screen) to the double-slits and the screen. Chandana has been successful in obtaining the results as predicted.\n\nIn another experiment Chandana$^7$ had an Aluminium sheet of very small thickness joining the points or positions where the probability of finding a photon is zero (positions of zero probability), stretching from the double-slits to the screen as illustrated in the figure 2. As an obstacle placed at a position of zero probability would not affect the photon the Aluminium sheet had no effect on the visible interference patterns on the screen. This experiment was carried out by Chandana with number of Aluminium sheets placed along lines joining the positions of zero probability stretching from the double-slits to the screen. We were not surprised to find that the Aluminium sheets did not interfere with the interference patterns. However, even if one of the sheets is slightly displaced the interference pattern is destroyed as the photons now interact with the sheets at points where the probability of finding a photon is not zero.\n\nThese observations are not consistent with the wave picture as a wave would not be able to penetrate the Aluminium sheets without being affected. Even the pilot waves of Bohm are not known to go through a material medium undisturbed. As we have argued a single electron emitted from the source would not exhibit a faint interference pattern on the screen but a spot or an image having passed beyond the slits. The Physicists are interested in the wave picture to explain the interference patterns as they find it difficult to believe that a particle would pass through both slits simultaneously. Thus they mention of particle properties when they are interested in \"capturing\" particles and of wave properties in explaining phenomena such as the interference pattern.\n\n# PRINCIPLE OF SUPERPOSITION IN QUANTUM PHYSICS\n\nWe consider the Quantum entities to be particles though of a nature different from that of Classical Newtonian particles. We have no inhibition in believing that the Quantum particles unlike the Newtonian particles could pass through both slits at the \"same time\", as the logic of different cultures permits us to do so. Physics and in general Mathematics and sciences are based on Aristotelian two valued twofold logic according to which a proposition and its negation cannot be true at the same time. Thus if a particle is at the slit $A$, the proposition that the particle is at $A$ is true and its negation that the particle is not at $A$ is not true, and *vice versa*. Therefore if the particle is at $A$ then it cannot be anywhere else as well, and hence cannot be at $B$. This is based on what may be called the Aristotelian- Newtonian - Einsteinian ontology where a particle can occupy only one position at a given time in any frame of reference of an observer. 
However, in fourfold logic (*catuskoti*) a proposition and its negation can be both true, and hence in that logic it is not a contradiction to say that a particle is at the slit $A$ and at somewhere else (say at the slit $B$) at the \"same instant\" or \"every instant\" Thus according to *catuskoti* the particle can be at many places at the same time or at many instants with respect to the observer.\n\nIn the case of the double-slit experiment, the momentum of a particle is known, as the particles are fired with known energy, and hence the position is not known. In such a situation Heisenberg's uncertainty principle demands that the position of the particle is not known. The position of the particle is relieved only after a measurement is made to determine the position. Before the measurement, the particle is in a superposition of states corresponding to the positions in space the particle could be found. After the measurement the particle would be found in a definite position (state), collapsing from the superposition of a number of states to that of the definite state. Before the measurement what could have been said was that there was a certain probability of finding the particle at a given position. Though the particle is in a superposition of states before a measurement is made to find the position, it is in a definite state with respect to the momentum.\n\nIn Quantum Mechanics unlike in Classical Mechanics, a state of a system, a particle or an object is represented by a vector in a Mathematical space known as the Hilbert space. The observables such as position, momentum, and spin are represented by what are known as Hermitian operators. If a system is in a state represented by an eigenstate $|\\Phi>$ of a Hermitian operator $A$, belonging to the eigenvalue $a$, then the system has the value $a$ corresponding to the observable represented by the Hermitian operator $A$. This is expressed mathematically by $A|\\Phi> = a|\\Phi>$. If $B$ is the conjugate operator of $A$, then the value corresponding to the observable represented by $B$ is not known. All that can be said, according to the standard Copenhagen interpretation, is that if the value corresponding to the observable represented by $B$ is measured, then there is a certain probability of obtaining an eigenvalue of $B$ as the measurement. Before the measurement is made nothing could be said of the value. In plain language this means that if the value of a certain observable is known then the value of the conjugate observable is not known.\n\nHowever, the state $|\\Phi>$ can be expressed as a linear combination of the eigenstates $|\\Psi>$ of $B$ in the form $|\\Phi>=\\sum|c_i\\Psi_i>$ where $c_i\\in C$, the field of complex numbers. In other words the coefficients of $|\\Psi>$'s in the expansion of $|\\Phi>$ are complex numbers. The Copenhagen interpretation tells us that when the observable corresponding to $B$ is measured it would result in a state corresponding to one of the $|\\Psi>$'s with the measurement yielding the eigenvalue $b$ to which the particular $|\\Psi>$ belongs, the probability of obtaining the value $b$ being given by the value of the relevant $|c|^2$. Before the measurement is made nothing can be said regarding the observable corresponding to $B$. According to Bohr, it is meaningless to talk of the state of the system with respect to $B$ as nothing could be observed. There is no knowledge regarding the observable corresponding to $B$ as it has not been observed. 
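A small numerical sketch may help make the expansion $|\Phi> = \sum_i c_i |\Psi_i>$ and the probabilities $|c_i|^2$ concrete (the operators below are the Pauli matrices of a spin-1/2 system, chosen only as a familiar example of two non-commuting observables; they are not tied to the double-slit setup):

```python
import numpy as np

# Two non-commuting Hermitian operators (Pauli Z and X for a spin-1/2 system).
A = np.array([[1, 0], [0, -1]], dtype=complex)   # observable "A"
B = np.array([[0, 1], [1, 0]], dtype=complex)    # its conjugate partner "B"

# Prepare the system in an eigenstate |Phi> of A (the eigenvalue +1 state).
eigvals_A, eigvecs_A = np.linalg.eigh(A)
phi = eigvecs_A[:, np.argmax(eigvals_A)]

# Expand |Phi> in the eigenbasis |Psi_i> of B: coefficients c_i = <Psi_i|Phi>.
eigvals_B, eigvecs_B = np.linalg.eigh(B)
c = eigvecs_B.conj().T @ phi
probs = np.abs(c) ** 2                           # Born probabilities |c_i|^2

print("A-value of |Phi>:", eigvals_A[np.argmax(eigvals_A)])
print("possible B-values:", eigvals_B)
print("Born probabilities |c_i|^2:", probs)      # [0.5, 0.5] for this choice
```

Since the expansion of $|\Phi>$ in the eigenbasis of $B$ has more than one nonzero coefficient, no definite $B$-value can be attributed to the state before a measurement, in line with the discussion above.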
The value or the knowledge of the observable is \"created\" by the observer who sets up an experiment to measure the value in respect of $B$. The observed depends on the observer and it makes no sense to talk of an observable unless it has been observed. This interpretation is rooted in positivism as opposed to realism in which the entire corpus of knowledge in Newtonian - Einsteinian Physics is based. This body of knowledge is also based in Aristotelian - Newtonian - Einsteinian ontology.\n\nAs a particular case one could refer to the conjugate Hermitian operators in respect of position and momentum of a particle in Quantum Mechanics. When the position of a particle is measured then its momentum is not known. According to the Copenhagen Interpretation, it can only be said that if an apparatus is set up to measure the momentum, the observer would observe one of the possible values for the momentum and that there is a certain probability of observing the particular value. Before the measurement is made the particle has no momentum, as such, and it is meaningless to talk of the momentum of the particle. The observer by his act of observation gives or creates a value for the momentum of the particle, so to speak of. Once the momentum is measured the observer has knowledge of the momentum but not before it. However, after the momentum is measured, the knowledge of the position of the particle is \"washed off\" and hence it becomes meaningless to talk of the position of the particle. The observer could have knowledge only of either the momentum or the position, but not of both. A version of this conclusion is sometimes referred to as the uncertainty principle.\n\nWhat we have been discussing in the proceeding paragraphs is the principle of superposition. A particle or a system with its position known is represented by a vector $|\\Phi>$in Hilbert space, which is an eigenvector of the Hermitian operator $A$ corresponding to the position. When the position of the particle or the system is known, the momentum is not known. If $B$ is the Hermitian operator corresponding to the momentum, then $|\\Phi>$ is not an eigenvector of $B$. However, $|\\Phi>$ can be expressed as a linear combination of the eigenvectors $|\\Psi>$'s of $B$ though the momentum is not observed. The superposition of the $|\\Psi>$'s cannot be observed, and neither can be resolved into observable constituent parts. This is different from the principle of superposition in Classical Physics, where the resultant can be resolved into its constituent parts.\n\nFor example as we have mentioned in the introduction the resultant magnetic field due to two magnets can be resolved into its two components and can be observed. One of the magnets can be taken off leaving only one of the constituent magnetic fields. The superposition is there to be observed and if the magnet that was taken off is brought back to its original position the resultant magnetic field reappears. In Quantum Physics the superposition cannot be observed without disturbing the system and when it is disturbed to measure the conjugate variable, only one of the states in the superposition could be observed and we would not have known in advance if that particular state were to appear as a result of the disturbance induced by us.\n\n# COPENHAGEN INTERPRETATION\n\nIn Classical Physics, as we have already stated, superposition is there to be observed. 
However, in Quantum Physics the superposition cannot be observed, and further unlike in Classical Physics interpretations are required to \"translate\" the abstract Mathematical apparatus and concepts into day to day language. In Classical Physics one knows what is meant by the position or the momentum of a particle and those concepts can be observed and understood without an intermediate interpretation. However, in Quantum Physics, the state of a particle or a system is represented by a vector in Hilbert space and observables are represented by Hermitian operators in Hilbert space. An interpretation or interpretations are needed to express these and other concepts to build a concrete picture out of the abstract apparatus. Copenhagen interpretation is one such interpretation and it is the standard interpretation as far as most of the Physicists are concerned.\n\nBohr more than anybody else was instrumental in formulating the Copenhagen interpretation, and he in turn was influenced by positivism and Chinese Ying - Yang Philosophy. As a positivist he believed that only the sensory perceptible phenomena exist and did not believe in the existence of that could not be \"observed\". When a state of a particle or system is represented by an eigenvector of an observable (Hermitian operator in Hilbert space) the corresponding value of the observable can be measured and the positivist school had no problem in accepting the existence of such state. For example if the momentum of a particle is known then the state of the particle is represented by a certain vector in Hilbert space, belonging to the particular eigenvalue that has been measured. However, the problem arises when the conjugate Hermitian operator, in this case the position, is considered, as in positivism the ontology is connected with observations and sensory perceptions. We are not considering logical positivism and there seems to be no interpretation of Quantum Physics in a logical positivist ontology.\n\nAs we have seen a given eigenstate of a Hermitian operator that has been observed can be expressed as a linear combination of the eigenstates of the conjugate operator. To a positivist, though the given eigenstate exists as it is observed, the eigenstates of the conjugate operator are not observable and it is meaningless for him to talk of such states. Thus if the momentum of a particle has been measured, the eigenstates belonging to the eigenvalues of the conjugate operator, which is the position, are not observed and the positivist would not say anything regarding the existence of such states. As far as the positivist is concerned, there is only a probability of finding the particle at some position, and the particle will be at some position only after the relevant measurement is carried out.\n\nIn the case of the double-slit experiment, this means that a positivist would not say whether the particle passes through a particular slit as it is not observed. However he assumes that it it is at one of the slits and not at both as the Aristotelian - Newtonian - Einsteinian ontology demands that the particle should be at one of the slits and not at both slits. (The positivists share with the realists the Aristotelian - Newtonian - Einsteinian ontology. They differ from the realists when they insist that nothing could be said of non observables.) 
If a measurement is made, that is, if an experiment is carried out to find out the slit where the particle is, then the particle would be found at one of the slits, washing out the \"interference pattern\". The superposition then collapses, \"decoherence\" sets in, and a \"chaotic pattern\" results.\n\nA realist differs from a positivist in that the former would want to know the slit at which the particle is (the slit through which the \"particle passes\") even without observing it. He would say that the particle passes through one of the slits whether one observes it or not, and that this is an integral property of the particle, independent of the observer. The Classical Physicists were realists. An object in Classical Physics has a momentum whether it is measured or not. The observer in Classical Physics measures the momentum that the particle already possesses. In Quantum Physics the positivists would say that the particle has no momentum before it is measured but acquires a momentum as a result of the measurement.\n\nWe will not go into further detail on the differences between the realist position and the positivist position. However, what is relevant to us is that both the realist and the positivist would agree that the particle \"goes through one slit\", meaning that at a \"given time\" the particle is found only at one of the slits. They would also agree on the wave nature of the particles. They have to depend on the wave nature as they assume that the particle passes through only one slit, and as such they would not be able to explain the \"interference patterns\" without the wave properties of the particles, as particles \"passing through\" only one slit would not produce \"interference patterns\".\n\n# A NEW INTERPRETATION\n\nWe differ from the positivists as well as the realists, since we believe that the particle is found at both slits and hence \"passes through both\", in the common parlance. In general we include the postulate that the eigenstates $|\\Psi_i>$'s in $|\\Phi>=\\sum c_i|\\Psi_i>$ exist in addition to $|\\Phi>$ (Postulate 3 below). We have also introduced the concept of a mode. A mode of a particle or a system is essentially a potential observable. A mode has the potential to be observed though it may not be observed at a particular instant. For example, position, momentum and spin are modes. A particle or a system can be in both modes corresponding to two conjugate Hermitian operators, though only one mode may be observed.\n\nA revised version of the postulates of the new interpretation formulated by Chandana and de Silva$^9$ is given below.\n\n1. A state of a Quantum Mechanical system is represented by a vector (ray) $\\chi$ in the Hilbert space, where $\\chi$ can be expressed as different linear combinations of the eigenvectors, in the Hilbert space, of Hermitian operators, each operator corresponding to a mode. In other words, a state of a Quantum Mechanical system can be represented by different linear combinations of eigenvectors of different modes, each linear combination being that of the eigenvectors of one of the modes. Thus a state could have a number of modes, each mode being a potential observable.\n\n2. If $\\chi$ is expressed as a linear combination of two or more eigenvectors of a Hermitian operator, that is, a mode, then the corresponding mode cannot be observed (or measured) by a human observer with or without the aid of an apparatus. 
In other words the particular mode cannot be observed and a value cannot be given to the observable, which also means that no measurement has been made on the observable.\n\n3. However, the non-observation of a mode does not mean that the mode does not \"exist\". We make a distinction between the \"existence\" of a mode and the observation of a mode with or without the aid of an apparatus. A mode corresponding to a given Hermitian operator could \"exist\" without being observed. The knowledge of the \"existence\" of a mode is independent of its observation or measurement. In other words, the knowledge of the \"existence\" of a mode of a Quantum Mechanical state is different from the knowledge of the value that the observable corresponding to the relevant Hermitian operator would take.\n\n4. If a mode of a Quantum Mechanical state is represented by a single eigenvector, and not by a linear combination of two or more eigenvectors, of a Hermitian operator, then the mode can be observed by a human observer with or without the aid of an apparatus, and the value of the corresponding observable (or the measured value) is given by the eigenvalue to which the eigenvector belongs. It has to be emphasised that only those modes of a Quantum Mechanical state, each represented by a single eigenvector, and not by a linear combination of eigenvectors, of a Hermitian operator can be observed at a given instant.\n\n5. If a mode of a Quantum Mechanical state is represented by an eigenvector of a Hermitian operator, then the mode corresponding to the conjugate operator cannot be represented by an eigenvector of the conjugate Hermitian operator. It can be expressed as a linear combination of two or more of the eigenvectors of the conjugate operator. This means that the mode corresponding to the conjugate operator cannot be observed, or in other words it cannot be measured. However, the relevant mode \"exists\" though it cannot be observed.\n\n6. It is not necessary that at least one of the modes corresponding to two conjugate operators should be represented by a single eigenvector of the relevant operator. It is possible that each mode is represented by a linear combination of two or more eigenvectors of the corresponding operator. In such situations neither of the modes can be observed.\n\n7. A state of a Quantum Mechanical system can be altered by an operation that changes a mode or modes of the state. However, not all operations correspond to measurements or observations. Only those operations that result in a mode being expressed as a single eigenvector, and not as a linear combination of the eigenvectors of an operator, correspond to measurements.\n\n8. A particle entangled with one or more other particles is in general represented by a linear combination of eigenvectors of a Hermitian operator with respect to a mode, while the whole system of particles is in general represented by a linear combination of tensor products of the eigenvectors. In the case of two particles it takes the form $\\sum c_{ij} |\\phi_i>|\\phi_j>$. If one of the particles is in a mode that is observed, then the particles entangled with it are also in the same mode as an observable. If a measurement is made in some other mode, then, instantaneously, the corresponding values in that mode for the entangled particles are also determined. In such a case, for two particles, the whole system is represented by vectors of the form $|\\phi_i>|\\phi_j>$. 
If the number of entangled particles is less than the dimension of the space of the eigenvectors of the Hermitian operator, then, if a measurement is made in the particular mode, the particle would be represented by one of the eigenvectors, while the other particles entangled with it would each be represented by a different eigenvector of the Hermitian operator. However, if the number of entangled particles is greater than the dimension of the space of the eigenvectors, then in some cases more than one particle would be represented by a given eigenvector.\n\nAccording to this interpretation, if the momentum of a particle is known then it has not one position but several positions. In other words, the particle can be at a number of positions in superposition, though we are not able to observe it at any one of those positions. The particle could be observed only if it is at one position. If an experiment is carried out to determine the position of the particle, the superposition or the wave function would collapse, and the particle would be located at one of the positions where it was before the measurement was made.\n\nSimilarly, if the particle is in the position mode that is observed, then it can have several momenta in superposition, but we would not be able to observe any one of them. If we perform an experiment to determine the momentum, that is, if a measurement is made, then the superposition of momenta would collapse to one of them, enabling us to determine the value of the momentum.\n\nWith respect to the double-slit experiment this implies that the particle is at both slits in superposition without being observed, and if we perform an experiment to determine the slit \"through which the particle passes\" (the slit where the particle is), then the superposition collapses and the particle would be found at only one of the positions. The positivists, while assuming that the particle \"passes through only one slit\", would not say anything about the slit \"through which the particle passes\", as it cannot be observed. For the positivist it is meaningless to speculate on something that cannot be observed. The realists too assume that the particle \"passes through\" only one slit, but would not be satisfied with the positivist position, and claim that a theory that is not able to determine the slit through which the particle passes is incomplete.\n\nWe make a distinction between being in existence and being observed. A particle or a system can exist in a certain mode without being observed. In this case the state of the particle or the system is expressed as a linear combination or superposition of the eigenstates of the relevant Hermitian operator, and the particle or the system exists in all the relevant eigenstates without being observed. The mode is observed only when the state of the particle or the system is expressed as a single eigenstate of the relevant Hermitian operator.\n\nThe existence of modes with more than one eigenstate has been known for some time. Monroe$^{10}$ and his colleagues in 1996 were able to demonstrate the simultaneous existence of two spin states of a Beryllium cation, without, however, observing them. One could say that the interference obtained by them could be understood on the basis of the existence of simultaneous spin states of the Beryllium cation. 
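The content of postulates 2-5 can be illustrated in the simplest possible case, that of a spin-1/2 particle. The following is only a minimal numerical sketch (in Python, using the standard spin matrices; the matrices and the small program are assumptions made for the illustration and are not part of the interpretation itself): a state that is a single eigenvector of the spin-$x$ operator, so that the spin-$x$ mode can be observed, is at the same time a linear combination of the two eigenvectors of the spin-$z$ operator, so that the spin-$z$ mode is a superposition which cannot be observed, although on the present interpretation it still "exists".

```python
import numpy as np

# Standard spin-1/2 operators (in units of hbar/2); these matrices are an
# illustrative assumption, not taken from the paper itself.
S_z = np.array([[1, 0], [0, -1]], dtype=complex)
S_x = np.array([[0, 1], [1, 0]], dtype=complex)

# Eigenvectors of each operator: each set of eigenvectors defines one "mode".
vals_z, vecs_z = np.linalg.eigh(S_z)
vals_x, vecs_x = np.linalg.eigh(S_x)

# Take a state that is a single eigenvector of S_x, so the spin-x mode is
# observable (postulate 4) ...
phi = vecs_x[:, 1]

# ... and expand the same state in the eigenvectors of S_z:
# phi = sum_i c_i |psi_i>, with two non-zero coefficients, so the spin-z
# mode is a superposition and cannot be observed (postulate 2), although
# according to postulate 3 it still "exists".
c = vecs_z.conj().T @ phi
print("coefficients in the S_z mode:", np.round(c, 3))
print("probabilities |c_i|^2:     ", np.round(np.abs(c) ** 2, 3))
```

Mathematically, the two "modes" are nothing but the expansions of the same vector in the two eigenbases; which of them is observable at a given instant is what postulates 2 and 4 specify.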
Since then, similar experiments have been carried out, and the existence of superpositions of eigenstates can no longer be ruled out.\n\n# A DIFFERENT ONTOLOGY AND LOGIC\n\nIn the ontology presented here no distinction is made between the existence of sensory perceptible objects and that of other entities. There is no absolute existence as such, and all existences are relative to the mind. It has been shown by de Silva$^{11}$ that even the mind could be considered as a creation of the mind, a phenomenon not in contradiction with cyclic thinking. It is the mind that creates concepts, including that of the self, and as such sensory perceptible objects do not have any preference over the others.\n\nAs we have mentioned, the positivists find it difficult to take cognizance of entities that are not sensory perceptible, and it is this ontology that prevents them from committing to the existence of unobserved \"objects\". In the present ontology all existences are only conventional and not absolute as such. Thus the existence of simultaneous eigenstates, or a superposition of eigenstates, is not ruled out in the present ontology. We have no inhibition in postulating the existence of such states, and it is not in contradiction with *catuskoti*, or fourfold logic, which may be identified as the logic of the ontology presented here.\n\nAs Jayatilleke$^{12}$ has shown, in fourfold logic, sometimes referred to as the tetralemma, unlike in twofold logic a proposition and its negation can both be true, or both be false. (However, we do not agree with the interpretation of fourfold logic given by Jayatilleke.) In twofold logic if a proposition is true then its negation is false, and if a proposition is false, then its negation is true. In addition to these two cases, fourfold logic has two more cases, in which the proposition and its negation can both be true, or both be false. Thus the proposition that a particle is at $A$, and the proposition that a particle is not at $A$, can both be true in fourfold logic. (According to fourfold logic the case could also arise where the particle may be neither at $A$ nor not at $A$.) We may deduce from this that a particle can be both at $A$ and at $B$ (not at $A$) at the \"same time\". In other words, a particle can be at both slits in the double-slit experiment, and in general a mode represented by a superposition of two or more eigenvectors can exist, as the particle or the system can be at a number of \"positions\" simultaneously in fourfold logic.\n\nIn twofold Aristotelian logic a particle has to be either at $A$ or not at $A$. Thus the Physicists, whether they are realists or positivists, find it difficult to accept that a particle can \"pass through both slits\" simultaneously, and they have to resort to the so-called wave nature in order to explain the interference patterns.\n\n# DISCUSSION\n\nIt is seen that both the wave picture and the ordinary particle picture fail to explain the interference patterns observed in the double-slit experiment. The wave picture fails as a weak-intensity stream of electrons (one electron at a time) exhibits no interference patterns in the case of a few electrons. The ordinary particle picture fails as a particle passing through only one slit would not produce interference patterns. 
The Physicists had to resort to the wave picture, as the logic of either positivism or realism would not permit a particle to pass through both slits.\n\nIn the case of the experiments conducted by Chandana, then at the University of Kelaniya, Sri Lanka, the wave picture as well as the classical particle picture run into further problems, as neither a wave nor an ordinary particle would be able to penetrate the aluminium sheets without being affected. These experiments justify our new interpretation, involving modes of the particle or the system, and the particle picture presented here, where a particle can be at both slits. In general we postulate that a particle or system can exist in a mode in which more than one eigenstate is in superposition. The position where a particle is found depends only on the relevant probability, and the so-called interference patterns are only collections of images formed by such particles striking the screen at different positions with the relevant probabilities.\n\nThe new postulates are consistent with the ontology in which the \"existence\" of a particle or an object does not necessarily mean that it can be observed or that it is sensory perceptible in general, and with fourfold logic. It appears that, unlike Classical Physics with its twofold logic and realist ontology, Quantum Physics is rooted not even in a \"positivist ontology\" but in a different ontology and fourfold logic, and we should be able to develop new concepts in Quantum Physics, especially regarding the motion of a Quantum particle from a point $A$ to another point $B$. It is not known how a particle \"moves\" from the double-slit to the screen in the experiments carried out by Chandana, nor how a particle with less energy than the height of a potential barrier \"scales the walls\". In the latter case all that the Physicists have done is to come up with concepts such as the \"tunnel effect\". It may be that it is neither the particle that left the point $A$ nor some other particle that reaches the point $B$, if we are to make use of the fourth case of fourfold logic. Chandana, in his M.Phil. thesis submitted to the University of Kelaniya in September 2008, has described a few more experiments that agree with the present ontology and fourfold logic.\n\nReferences\n\n------------------------------------------------------------------------\n\n1. Baggott, Jim, 1997. The Meaning of Quantum Theory, Oxford University Press.\n\n2. Afshar, S.S., 2005. Sharp complementary wave and particle behaviours in the same welcher weg experiment, Proc. SPIE 5866, 229-244.\n\n3. Baggott, Jim, 1997. The Meaning of Quantum Theory, Oxford University Press.\n\n4. Afshar, S.S., 2005. Sharp complementary wave and particle behaviours in the same welcher weg experiment, Proc. SPIE 5866, 229-244.\n\n5. Afshar, S.S., 2005. Sharp complementary wave and particle behaviours in the same welcher weg experiment, Proc. SPIE 5866, 229-244.\n\n6. Chandana S. and de Silva Nalin, 2004. On the double-slit experiment, Annual Research Symposium, University of Kelaniya, 57-58.\n\n7. Chandana S. and de Silva Nalin, 2007. Some experiments involving double-slits, Annual Research Symposium, University of Kelaniya, 133-134.\n\n8. Bohm D., 1980. Wholeness and the implicate order, Routledge, London.\n\n9. Chandana S. and de Silva Nalin, 2004. A new interpretation of Quantum Mechanics, Annual Research Symposium, University of Kelaniya, 59-60.\n\n10. Monroe C., Meekhof D. M., King B. E., Wineland D. J., 1996. 
A \"Schr\u00f6dinger Cat\" Superposition State of an Atom, Science, 272, 1132.\n\n11. de Silva Nalin, Sinhala Bauddha Manasa *www.kalaya.org\/files\/nps\/070405.pdf*.\n\n12. Jayatilleke, K. N.,1963. Early Buddhist Theory of Knowledge, Motilal Banarsidass.","meta":{"dup_signals":{"dup_doc_count":13,"dup_dump_count":4,"dup_details":{"curated_sources":1,"2024-10":1,"2024-26":1,"unknown":10}},"filename":"out\/1006.4712_extract_Quantum_Physics_in_a_different_ontology.tex.md"},"subset":"arxiv"} +{"text":"abstract: I try to clarify several confusions in the popular literature concerning chaos, determinism, the arrow of time, entropy and the role of probability in physics. Classical ideas going back to Laplace and Boltzmann are explained and defended while some recent views on irreversibility, due to Prigogine, are criticized.\nauthor: J.Bricmont \nPhysique Th\u00e9orique, UCL, \nB-1348 Louvain-la-Neuve, Belgium\ntitle: Science of Chaos or Chaos in Science?[^1]\n\n'=12 = -0.7cm\n\n# Introduction\n\nPopularization of science seems to be doing very well: the Big Bang, the theory of elementary particles or of black holes are explained in countless books for the general public. The same is true for chaos theory, irreversibility or self-organization. However, it seems also that a lot of confusion exists concerning these latter notions, and that at least some of the popular books are spreading misconceptions. The goal of this article is to examine some of them, and to try to clarify the situation.\n\nIn particular, I will make a critical evaluation of the various claims concerning chaos and irreversibility made by Prigogine and by Stengers, since \"La Nouvelle Alliance\". Several of those claims, especially the most recent ones, are rather radical: \"the notion of chaos leads us to rethink the notion of \"law of nature\".\" (, p.15)[^2] For chaotic systems, \"*trajectories are eliminated from the probabilistic description* \u2026The statistical description is *irreducible*.\" (, p.59) The existence of chaotic dynamical systems supposedly marks a radical departure from a fundamentally deterministic world-view, makes the notion of trajectory obsolete, and offers a new understanding of irreversibility. Prigogine and Stengers claim that the classical conception was unable to incorporate time in our view of the world (, chap.1) or to account for the irreversibility of macroscopic phenomena. Boltzmann's attempt to explain irreversibility on the basis of reversible laws failed (, p.41).\n\nOn the basis of these theories, a number of speculations are put forward on the notion of \"event\", on the place of human beings in Nature, or even on overcoming Cartesian dualism (see , chap.9, , p.106, and ). These writings have been indeed quite influential, mostly among non-experts. They are frequently quoted in philosophical or cultural circles, as an indication that chaos, nonlinear phenomena or the \"arrow of time\" have led to a profound revolution in our way of thinking.\n\nI want to develop quite different views on most of these issues. In my opinion, chaos does not invalidate in the least the classical deterministic world-view; the existence of chaotic dynamical systems actually strengthens that view (Sect. 2). Besides, the relationship between chaos and irreversibility is quite different from what is claimed e.g. in \"Les lois du chaos\" . 
Finally, when they are correctly presented, the classical views of Boltzmann perfectly account for macroscopic irreversibility on the basis of deterministic, reversible, microscopic laws (Sect. 3). Part of the difficulty in understanding those views comes from some confusions about the use of the words \"objective\" and \"subjective\", associated with probability or entropy. I will try to be careful with these notions (Sect. 4 and 5). In section 6, I will discuss the applications of probabilistic reasoning to complex phenomena and biology. I shall also argue that most of the speculation on the \"new alliance\" between the human sciences and the natural ones is misguided and that the people working in sociology or psychology have very little to learn from the alleged \"leap from Newtonianism to Prigoginianism\" (Sect. 7).\n\nOn the other hand, I believe that the ideas of Laplace and of Boltzmann are worth defending against various misrepresentations and misunderstandings. Quite independently of the work of Prigogine, there are serious confusions that are found in the literature on irreversibility, chaos or time (some of which go back to philosophers such as Popper, Feyerabend or Bergson). Besides, many textbooks or popular books on statistical mechanics are rather obscure, at least in the part concerning the foundations of the field (e.g., on the role played by ergodic theorems). I will try to clarify these questions too (Sect. 4).\n\nI wrote this paper in a not too technical language, relegating formulas to footnotes and remarks. Nothing of what I say is new[^3]. In fact, everything is quite standard and old, and it is a sad fact that those ideas that were so nicely explained by Boltzmann a century ago have to be reexplained over and over again.\n\nFinally, I have to emphasize that this is in no way a criticism of Prigogine's work in general, and even less of the Brussels' school. I shall only discuss the radical claims made in the popular books and, in particular, the idea that fundamental flaws have been found in the scientific world-view and that one has to rethink the *notion* of law of nature. I believe that a lot of interesting scientific ideas have been developed around Prigogine and that he has had an exceptional taste for discovering new directions in physics, whether in irreversible thermodynamics or in chaotic phenomena. But this does not put his views on foundational questions beyond criticism[^4].\n\n# Chaos and determinism: Defending Laplace.\n\nA major scientific development in recent decades has been popularized under the name of \"chaos\". It is widely believed that this implies a fundamental philosophical or conceptual revolution. In particular, it is thought that the classical world-view brilliantly expressed by Laplace in his \"Philosophical Essay on Probabilities\" has to be rejected[^5]. Determinism is no longer defensible. I think this is based on a serious confusion between *determinism* and *predictability*. I will start by underlining the difference between the two concepts. Then, it will be clear that what goes under the name of \"chaos\" is a major scientific progress but does not have the radical philosophical implications that are sometimes attributed to it.\n\nIn a nutshell, determinism has to do with how Nature behaves, and predictability is related to what we, human beings, are able to observe, analyse and compute. It is easy to illustrate the necessity for such a distinction. 
Suppose we consider a perfectly regular, deterministic *and* predictable mechanism, like a clock, but put it on the top of a mountain, or in a locked drawer, so that its state (its initial conditions) become inaccessible to us. This renders the system trivially unpredictable, yet it seems difficult to claim that it becomes non-deterministic[^6]. Or consider a pendulum: when there is no external force, it is deterministic and predictable. If one applies to it a periodic forcing, it may become unpredictable. Does it cease to be deterministic?\n\nIn other words, anybody who admits that *some* physical phenomena obey deterministic laws must also admit that some physical phenomena, although deterministic, are not predictable, possibly for \"accidental\" reasons. So, a distinction must be made[^7]. But, once this is admitted, how does one show that *any* unpredictable system is *truly* non-deterministic, and that the lack of predictability is not merely due to some limitation of our abilities? We can never infer indeterminism from our ignorance alone.\n\nNow, what does one mean exactly by determinism? Maybe the best way to explain it is to go back to Laplace :\" Given for one instant an intelligence which could comprehend all the forces by which nature is animated and the respective situation of the beings who compose it- an intelligence sufficiently vast to submit these data to analysis- it would embrace in the same formula the movements of the greatest bodies of the universe and those of the lightest atom; for it, nothing would be uncertain and the future, as the past, would be present before its eyes.\" The idea expressed by Laplace is that determinism depends on what the laws of nature are. Given the state of the system at some time, we have a formula (a differential equation, or a map) that gives in principle the state of the system at a later time. To obtain predictability, one has to be able to measure the present state of the system with enough precision, and to compute with the given formula (to solve the equations of motion). Note that there exist alternatives to determinism: there could be no law at all; or the laws could be stochastic: the state at a given time (even if it is known in every conceivable detail) would determine only a probability distribution for the state at a later time.\n\nHow do we know whether determinism is true, i.e. whether nature obeys deterministic laws? This is a very complicated issue. Any serious discussion of it must be based on an analysis of the fundamental laws, hence of quantum mechanics, and I do not want to enter this debate here[^8]. Let me just say that it is conceivable that we shall obtain, some day, a complete set of fundamental physical laws (like the law of universal gravitation in the time of Laplace), and then, we shall see whether these laws are deterministic or not[^9]. Any discussion of determinism outside of the framework of the fundamental laws is useless[^10]. All I want to stress here is that the existence of chaotic dynamical systems does not affect *in any way* this discussion. What are chaotic systems? The simplest way to define them is through sensitivity to initial conditions. This means that, for any initial condition of the system, there is some other initial condition, arbitrarily close to the first one so that, if we wait long enough, the two systems will be markedly different[^11]. In other words, an arbitrarily small error on the initial conditions makes itself felt after a long enough time. 
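The point can be made concrete with a minimal numerical sketch (the logistic map $x \mapsto 4x(1-x)$ is a standard textbook example of a chaotic map, used here purely as an assumed illustration; it is not one of the systems discussed in this paper): two initial conditions differing by $10^{-10}$ are iterated with exactly the same deterministic rule, and the tiny initial error grows until the two trajectories become completely uncorrelated.

```python
# A minimal sketch of sensitivity to initial conditions (illustration only;
# the logistic map at parameter 4 is assumed as a standard chaotic example).
def logistic(x):
    return 4.0 * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-10   # two initial conditions differing by 10^-10
for n in range(60):
    x, y = logistic(x), logistic(y)   # the same deterministic rule for both
    if n % 10 == 9:
        print(f"step {n+1:2d}: x = {x:.6f}  y = {y:.6f}  |x-y| = {abs(x-y):.2e}")
```

The rule itself is perfectly deterministic; only the initial error is amplified, which is precisely the distinction between determinism and predictability.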
Chaotic dynamical systems are of course unpredictable in practice, at least for long enough times[^12], since there will always be some error in our measurement of the initial conditions. But this does not have any impact on our discussion of determinism, since we are assuming from the beginning that the system obeys some deterministic law. It is only by analysing this deterministic system that one shows that a small error in the initial conditions may lead to a large error after some time. If the system did not obey any law, or if it followed a stochastic law, then the situation would be very different. For a stochastic law, two systems with the *same* initial condition could be in two very different states after a short time[^13].\n\nIt is interesting to note that the notion that small causes can have big effects (in a perfectly deterministic universe) is not new at all. Maxwell wrote: \"There is a maxim which is often quoted, that \"The same causes will always produce the same effects\"\". After discussing the meaning of this principle, he adds: \"There is another maxim which must not be confounded with that quoted at the beginning of this article, which asserts \"That like causes produce like effects.\" This is only true when small variations in the initial circumstances produce only small variations in the final state of the system\"(, p.13)[^14]. One should not conclude from these quotations[^15] that there is nothing new under the sun. A lot more is known about dynamical systems than in the time of Poincar\u00e9. But, the general idea that not everything is predictable, even in a deterministic universe, has been known for centuries. Even Laplace emphasized this point: after formulating universal determinism, he stresses that we shall always remain \"infinitely distant\" from the intelligence that he just introduced. After all, why is this determinism stated in a book on *probabilities*? The reason is obvious: for Laplace, probabilities lead to rational inferences in situations of incomplete knowledge (I'll come back below to this view of probabilities). So he is assuming from the beginning that our knowledge is incomplete, and that we shall never be able to *predict* everything. It is a complete mistake to attribute to some \"Laplacian dream\" the idea of perfect predictability[^16]. But Laplace does not commit what E. T. Jaynes calls the \"Mind Projection Fallacy\": \"We are all under an ego-driven temptation to project our private thoughts out onto the real world, by supposing that the creations of one's own imagination are real properties of Nature, or that one's own ignorance signifies some kind of indecision on the part of Nature\" [^17](, p.7). As we shall see, this is a most common error. But, whether we like it or not, the concept of dog does not bark, and we have to carefully distinguish between our representation of the world and the world itself.\n\nLet us now see why the existence of chaotic dynamical systems in fact supports universal determinism rather than contradicts it[^18]. Suppose for a moment that no classical mechanical system can behave chaotically. That is, suppose we have a theorem saying that any such system must eventually behave in a periodic fashion[^19]. It is not completely obvious what the conclusion would be, but certainly *that* would be an embarrassment for the classical world-view. 
Indeed, so many physical systems seem to behave in a non-periodic fashion that one would be tempted to conclude that classical mechanics cannot adequately describe those systems. One might suggest that there must be an inherent indeterminism in the basic laws of nature. Of course, other replies would be possible: for example, the period of those classical motions might be enormously long. But it is useless to speculate on this fiction since we know that chaotic behaviour is compatible with a deterministic dynamics. The only point of this story is to stress that deterministic chaos increases the explanatory power of deterministic assumptions, and therefore, according to normal scientific practice, *strengthens* those assumptions. And, if we did not know about quantum mechanics, the recent discoveries about chaos would not force us to change a single word of what Laplace wrote[^20].\n\nNow, I will turn to the main thesis of Prigogine and his collaborators on chaotic dynamical systems: the notion of trajectory should be abandoned, and replaced by probabilities. What does this mean? Let me quote Prigogine: \"Our leitmotiv is that the formulation of the dynamics for chaotic systems must be done at the probabilistic level\" (, p.60). Or: \" We must therefore eliminate the notion of trajectory from our microscopic description. This actually corresponds to a realistic description: no measurement, no computation lead strictly to a point, to the consideration of a *unique* trajectory. We shall always face a *set* of trajectories\" (, p.60)[^21].\n\nLet us first see how reasonable it is to \"eliminate the notion of trajectory\" for chaotic systems by considering a concrete example[^22]. Take a billiard ball on a sufficiently smooth table, so that we can neglect friction (for some time), and assume that there are suitable obstacles and boundaries so that the system is chaotic. Now suppose that we use an \"irreducible\" probabilistic description, that is, instead of assigning a position to the ball, we assign to it a probability distribution[^23]. Consider next the evolution of that probability distribution. Since we are dealing with a chaotic system, that distribution will spread out all over the billiard table. This means that after a rather short time, there will be an almost uniform probability of finding the ball in any given region of the table. Indeed, even if our initial probability distribution is well peaked around the initial position of the ball, there will be lots of nearby initial conditions that will give rise to very different trajectories (that is exactly what it means to say that the system is chaotic). But now we can hardly take the probability distribution after some time seriously as an \"irreducible\" *description* of the system. Indeed, whenever we look at the system, we find the ball somewhere, at a rather well defined position on the table. It is certainly not completely described by its probability distribution. The latter describes adequately our knowledge (or rather our ignorance) of the system, obtained on the basis of our initial information. But it would be difficult to commit the Mind Projection Fallacy more radically than to confuse the objective position of the ball and our best bet for it . In fact, chaotic systems illustrate this difference: if all nearby initial conditions followed nearby trajectories, the distinction between probabilities and trajectories would not matter too much. 
But chaotic systems show exactly how unreasonable is the assignment of \"irreducible\" probabilities, since the latter quickly spread out over the space in which the system evolves.\n\nOf course, nobody will deny that the ball is always somewhere. But this example raises the following question: what does it mean exactly to \"eliminate trajectories\"[^24]? Either the dynamics is expressed directly at the level of probability distributions, and we run into the difficulties mentioned in the previous paragraph, or the dynamics is *fundamentally* expressed in terms of trajectories (remembering that the discussion takes place in a classical framework), probabilities are a very useful tool, whose properties are *derived* mathematically from those of the trajectories, and nothing radically new has been done. In [^25], Prigogine emphasizes the \"irreducible\" spectral decompositions of the so-called Perron-Frobenius operator. This is a rather technical notion, which I will discuss in Appendix 2. It suffices to say here that this will not solve the dilemma raised above. If one reformulates the laws of physics, or understands them differently, or whatever, there is still presumably something that evolves, in some fashion. The question is: what evolves, and how?\n\nWhat the example of the billiard ball also shows is that we must distinguish different levels of analysis. First of all, we may describe the system in a certain way: we may assign to the ball at least an approximate position at each time, hence an approximate trajectory[^26]. Certainly the ball is not *everywhere*, as the \"irreducible\" probabilistic description would suggest. The next thing we can do is to try to find exact or approximate laws of motion for the ball. The laws of elastic reflection against obstacles, for example. Finally, we may try to solve the equations of motion. We may not be able to perform the last step. But this does not mean that one should give up the previous ones. We may even realize that our laws are only approximate (because of friction, of external perturbations, etc\u2026). But why give up the notion of (approximate) trajectories? Of course, since we are not able to predict the evolution of trajectories one may *choose* to study instead the evolution of probability distributions. This is perfectly reasonable, as long as one does not forget that, in doing so, we are not only studying the physical system but also our ability or inability to analyse it in more detail. This will be very important in the next Section.\n\nAt this point, I want to briefly discuss the classical status of probability in physics, i.e. of probability as \"ignorance\". This will also be very important in the next Section. To quote Laplace again: \"The curve described by a molecule of air or of vapour is following a rule as certainly as the orbits of the planets: the only difference between the two is due to our ignorance. Probability is related, in part to this ignorance, in part to our knowledge.\" Let us consider the usual coin-throwing experiment. We assign a probability $1\/2$ to heads and $1\/2$ to tails. What is the logic of the argument? We examine the coin, and we find out that it is fair. We also know the person who throws the coin and we know that he does not cheat. But we are unable to control or to know exactly the initial conditions for each throw. We can however determine the average result of a large number of throws. 
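This average behaviour is easy to check numerically; the following sketch (assuming a fair coin and arbitrarily chosen numbers of throws, both assumptions of the illustration) simply counts the fraction of heads over longer and longer sequences of throws.

```python
import random

# A short simulation of the coin-throwing experiment described above
# (a sketch: the coin is assumed fair, the sample sizes are arbitrary).
random.seed(0)
for n in (10, 1_000, 100_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(f"{n:7d} throws: fraction of heads = {heads / n:.4f}")
```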
This is simply because, if one considers as a single \"experiment\" $N$ consecutive throws of a coin, the overwhelming majority (for $N$ large) of the possible results will have an approximately equal number of heads and of tails. It is as simple as that, and there will be nothing conceptually more subtle in the way we shall use probabilities below. The part \"due to our ignorance\" is simply that we *use* probabilistic reasoning. If we were omniscient, it would not be needed (but the averages would remain what they are, of course). The part \"due to our knowledge\" is what makes the reasoning work. We could make a mistake: the coin could be biased, and we did not notice it. Or we could have a \"record of bad luck\" and have many more heads than tails. But that is the way things are: our knowledge *is* incomplete and we have to live with that. Nevertheless, probabilistic reasoning is extraordinarily successful in practice, but, when it works, this is due to our (partial) knowledge. It would be wrong to attribute any constructive role to our ignorance. And it is also erroneous to assume that the system must be somehow indeterminate, when we apply probabilistic reasoning to it. Finally, one could rephrase Laplace's statement more carefully as follows: \"Even if the curve described by a molecule of air follows a rule as certainly as the orbits of the planets, our ignorance would force us to use probabilistic reasonings\".\n\n# Irreversibility and the arrow of time\n\nWhat is the problem of irreversibility? The basic physical laws are reversible, which simply means that, if we consider an isolated system of particles, let it evolve for a time $t$, then reverse exactly the velocities of all the particles, and let the system again evolve for a time $t$, we get the original system at the initial time with all velocities reversed[^27]. Now, there are lots of motions that we see, without ever observing their associated \"time-reversed\" motion: we go from life to death but not vice versa, coffee does not jump out of the cup, mixtures of liquids do not spontaneously unmix themselves. Some of these examples taken from everyday life involve non-isolated systems, but that is not relevant[^28]. I shall center the discussion below on the canonical physical example (and argue that the other situations can be treated similarly): consider a gas that is initially compressed by a piston in the left half of a box; the piston is then released so that the gas expands into the whole container. We do not expect to see the particles go back to the left half of the box, although such a motion would be as compatible with the laws of physics as the motion that does take place. So, the question, roughly speaking, is this: if the basic laws are reversible, why do we see some motions but never their time-reversed ones?\n\nThe first point to clarify is that this irreversibility does not lead to a *contradiction* with the basic physical laws[^29]. Indeed, the laws of physics are always of the form: given some initial conditions, here is the result after some time. But they never tell us how the world *is or evolves*. In order to account for that, one always needs to assume something about the initial conditions. The laws of physics are compatible with lots of possible worlds: there could be no earth, no life, no humans. Nothing of that would contradict the fundamental physical laws. 
So, it is hard to see what kind of argument would imply a contradiction between the reversibility of the laws and the existence of irreversible phenomena. But no argument at all is given, beyond a vague appeal to intuition, as for example: \"No speculation, no body of knowledge ever claimed the equivalence between doing and undoing, between a plant that grows, has flowers and dies, and a plant that resuscitates, becomes younger and goes back to its primitive seed, between a man who learns and becomes mature and a man who becomes progressively a child, then an embryo, and finally a cell. Yet, since its origins, dynamics, the physical theory that identifies itself with the triumph of science, implied this radical negation of time.\" (, p.25. The first of these sentences is quoted again in , p.178). But nobody says that there is an \"equivalence\" between the two motions, only that both are compatible with the laws of physics. Which one, if any, occurs depends on the initial conditions. And, if the laws are deterministic, assumptions about the initial conditions are ultimately assumptions about the initial state of the Universe.\n\nOnce one has remarked that, a priori, there is no contradiction between irreversibility and the fundamental laws, one could stop the discussion. It all depends on the initial conditions, period. But this is rather unsatisfactory, because, if one thinks about it, one realizes that too many things could be \"explained\" by simply appealing to initial conditions. Luckily, much more can be said. It is perfectly possible to give a natural account of irreversible phenomena on the basis of reversible fundamental laws, and of suitable assumptions about initial conditions. This was essentially done a century ago by Boltzmann, and despite numerous misunderstandings and misguided objections (some of them coming from famous scientists, such as Zermelo or Poincar\u00e9), his explanation still holds today. Yet, Prigogine writes (, p.41): \"He (Boltzmann) was forced to conclude that the irreversibility postulated by thermodynamics was incompatible with the reversible laws of dynamics\"[^30]. This is in rather sharp contrast with Boltzmann's own words: \"From the fact that the differential equations of mechanics are left unchanged by reversing the sign of time without anything else, Herr Ostwald concludes that the mechanical view of the world cannot explain why natural processes run preferentially in a definite direction. But such a view appears to me to *overlook that mechanical events are determined not only by differential equations, but also by initial conditions*. In direct contrast to Herr Ostwald I have called it one of the most brilliant confirmations of the mechanical view of Nature that it provides an extraordinarily good picture of the dissipation of energy, as long as one assumes that the world began in an initial state satisfying certain initial conditions\" (italics are mine; quoted in , replies, p.115). I will now explain this \"brilliant confirmation of the mechanical view of Nature\", and show that all the alleged contradictions are illusory[^31].\n\n[^32]\n\nFirst of all, I should say that Boltzmann gives a *framework* in which to account for irreversible phenomena on the basis of reversible microscopic laws. He does not explain in detail every concrete irreversible phenomenon (like diffusion, or the growth of a plant). 
For that, more work is needed and, while the general framework that I shall discuss uses very little of the properties of the microscopic dynamics, the latter may be important in the explanation of specific irreversible phenomena[^33].\n\nLet us now see which systems do behave irreversibly. A good test is to record the behaviour of the system in a movie, and then to run the movie backwards. If it looks funny (e.g. people jump out of their graves), then we are facing irreversible behaviour. It is easy to convince oneself that all the familiar examples of irreversible behaviour involve systems with a large number of particles (or degrees of freedom). If one were to make a movie of the motion of one molecule, the backward movie would look completely natural. The same is true for a billiard ball on a frictionless billiard table[^34]. If, however, friction is present, then we are dealing with many degrees of freedom (the atoms in the billiard table, those in the surrounding air etc...).\n\nThere are two fundamental ingredients in the classical explanation of irreversibility, in addition to the microscopic laws. The first has already been introduced: initial conditions. The second is suggested by the remark that we deal with systems with many degrees of freedom: we *have* to distinguish between microscopic and macroscopic variables. Let us consider the phase space $\\Omega$ (see note ) of the system, so that the system is represented by a point ${\\bf x}$ in that space and its evolution is represented by a curve ${\\bf x}(t)=T^t({\\bf x})$. Various quantities of physical interest, for example the density, or the average energy, or the average velocity in a given cubic millimeter, can be expressed as functions on $\\Omega$[^35]. These functions (call them $F$) tend to be many-to-one, i.e. there are typically a huge number of configurations giving rise to a given value of $F$[^36]. For example, if $F$ is the total energy, then it takes a constant value on a surface in phase space. But if $F$ is the number of particles in a cubic millimeter, there are also lots of microscopic configurations corresponding to a given value of $F$. Now, let me make two statements, the first of which is trivial and the second not. Given a microscopic initial configuration ${\\bf x}_0$, giving rise to a trajectory ${\\bf x}(t)$, any function on phase space follows an induced evolution $F_0 \\rightarrow F_t$, where $F_0 =F({\\bf x}_0)$, and $F_t=F({\\bf x}(t))$ (here and below, I shall assume that $t$ is positive). That is the trivial part. The non-trivial observation is that, in many situations, one can find a suitable family of functions (I'll still denote by $F$ such a family) so that this induced evolution is actually (approximately) $autonomous$. That is, one can determine $F_t$ given $F_0$ alone, without having to know the microscopic configuration from which it comes[^37]. This means that the different microscopic configurations on which $F$ takes the value $F_0$, will induce the same evolution on $F_t$. A very trivial example is given by the globally conserved quantities (like the total energy): for all microscopic configurations, $F_t = F_0$, for all times. But that is not interesting. It is more interesting to observe that the solutions of all the familiar macroscopic equations (Navier-Stokes, Boltzmann, diffusion, \u2026) can be considered as defining such an induced evolution $F_0 \\rightarrow F_t$. 
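To see what such an induced evolution $F_0 \rightarrow F_t$ looks like in the crudest possible setting, here is a toy simulation (a sketch only: the non-interacting particles, the one-dimensional box and the choice of $F$ as the fraction of particles in the left half are simplifying assumptions made for the illustration, far cruder than the situations just mentioned). Starting from a microscopic configuration drawn at random among those with $F_0 = 1$, the induced evolution of $F_t$ relaxes towards $1/2$ and stays there; rerunning with a different random microscopic configuration compatible with the same $F_0$ gives essentially the same macroscopic curve.

```python
import random

# Toy illustration of an induced macroscopic evolution F_0 -> F_t
# (assumptions: non-interacting particles in a 1D box [0, 1] with
# reflecting walls; F = fraction of particles in the left half).
random.seed(1)
N = 10_000
x = [random.uniform(0.0, 0.5) for _ in range(N)]   # all particles start on the left
v = [random.uniform(-1.0, 1.0) for _ in range(N)]  # one random microscopic configuration

def step(x, v, dt=0.01):
    for i in range(len(x)):
        x[i] += v[i] * dt
        if x[i] < 0.0:                 # reflect at the left wall
            x[i], v[i] = -x[i], -v[i]
        elif x[i] > 1.0:               # reflect at the right wall
            x[i], v[i] = 2.0 - x[i], -v[i]

for t in range(0, 501):
    if t % 100 == 0:
        F = sum(1 for xi in x if xi < 0.5) / N
        print(f"t = {t * 0.01:4.1f}  F (fraction in left half) = {F:.3f}")
    step(x, v)
```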
Actually, there are several provisos to be made here: first of all, it is not true that *all* microscopic configurations giving rise to $F_0$ lead to the same evolution for $F_t$. In general, only the (vast) majority of microscopic configurations do that[^38]. Moreover, if we want that evolution to hold for all times, then this set of microscopic configurations may become empty[^39]. Finally, the laws used in practice may contain some further approximations.\n\nSo, the precise justification of a macroscopic law should be given along the following lines: given $F_0$, and given a (not too large) time $T$[^40], there exists a large subset of the set of ${\\bf x}$'s giving rise to $F_0$ (i.e. of the preimage in $\\Omega$, under the map $F$, of $F_0$) such that the induced evolution of $F_t$ is approximately described by the relevant macroscopic equations up to time $T$. It should be obvious that it is not easy to prove such a statement. One has to deal with dynamical systems with a large number of degrees of freedom, about which very little is known, and in addition one has to identify limits in which one can make sense of the approximations mentioned above (a large subset, a not too large time $T$ \u2026). Nevertheless, this can be done in some circumstances, the best known being probably the derivation of Boltzmann's equation by Lanford . In Appendix 1, I discuss a model due to Mark Kac which, while artificially simple, can be easily analysed and shows exactly what one would like to do in more complicated situations[^41].\n\nLet us come back to the problem of irreversibility: should we expect those macroscopic laws to be reversible? A priori, not at all. Indeed, I have emphasized in the abstract description above the role of initial conditions in their derivation[^42]. The macroscopic equations may be reversible or not, depending on the situation. But since *initial* conditions enter their derivation, there is no *logical* reason to expect them to be reversible[^43].\n\nLet me illustrate this explanation of irreversibility in a concrete physical example (see also Appendix 1 for a simple mathematical model). Consider the gas introduced in Section 3.1 that is initially compressed by a piston in the left half of a box, and that expands into the whole box. Let $F$ be the density of the gas. Initially, it is one (say) in one half of the box and zero in the other half. After some time $t$, it is (approximately) one half everywhere. The explanation of the irreversible evolution of $F$ is that the overwhelming majority of the microscopic configurations corresponding to the gas in the left half, will evolve deterministically so as to induce the observed evolution of $F$. There may of course be some exceptional configurations, for which all the particles stay in the left half. All one is saying is that those configurations are extraordinarily rare, and that we do not expect to see even one of them appearing when we repeat the experiment many times, not even once \"in a million years\", to put it mildly (see the end of note ()).\n\nThis example also illustrates the answer to the reversibility objection. Call \"good\" the microscopic configurations that lead to the expected macroscopic behaviour. Take all the good microscopic configurations in the left half of the box, and let them evolve until the density is approximately uniform. Now, reverse all the velocities. We get a set of configurations that still determines a density one half in the box. However, they are not good. 
Indeed, from now on, if the system remains isolated, the density just remains uniform according to the macroscopic laws. But for the configurations just described, the gas will move back to the left half, leading to a gross violation of the macroscopic law. What is the solution? Simply that those \"reversed-velocities\" configurations form a very tiny subset of all the microscopic configurations giving rise to a uniform density. And, of course, the original set of configurations, those coming from the left half of the box, also form such a small subset. Most configurations corresponding to a uniform density do not go to the left half of the box, neither in the future nor in the past (at least for reasonable periods of time, see Sect. 4.1). So that, if we prepare the system with a uniform density, we do not expect to \"hit\" even once one of those bad configurations[^44].\n\nNow comes a real problem. We are explaining that we never expect to get a microscopic configuration that will lead all the gas to the left of the box. *But we started from such a configuration*. How did we get there in the first place? The real problem is not to explain why one goes to equilibrium, but why there are systems out of equilibrium to start with. For the gas, obviously the system was not isolated: an experimentalist pushed the piston. But why was there an experimentalist? Human beings are also systems out of equilibrium, and they remain so (for some time) thanks to the food they eat, which itself depends on the sun, through the plants and their photosynthesis. Of course, in order to be able to take advantage of their food, humans also need their genetic program, which itself results from the long history of natural selection.\n\nAs discussed e.g. in Penrose , the earth does not gain energy from the sun (that energy is re-radiated by the earth), but low entropy (likewise, we seek low entropy rather than energy in our food); the sun sends (relatively) few high energy photons and the earth re-radiates more low energy photons (in such a way that the total energy is conserved). Expressed in terms of \"phase space\", the numerous low energy photons occupy a much bigger volume than the incoming high energy ones. So, the solar system, as a whole, moves towards a larger part of its phase space while the sun burns its fuel. That evolution accounts, by far, for what we observe in living beings or in other \"self-organized\" structures[^45]. I shall come back to this point in Section 6. Of course, for the sun to play this role, it has to be itself out of equilibrium, and to have been even more so in the past. We end up with an egg and hen problem and we have ultimately to assume that the Universe started in a state far from equilibrium, an \"improbable state\" as Boltzmann called it. To make the analogy with the gas in the box, it is as if the Universe had started in a very little corner of a huge box[^46].\n\nTo account in a natural way for such a state is of course a major open problem, on which I have nothing to say (see Penrose for further discussion, and Figure 7.19 there for an illustration), except that one cannot avoid it by \"alternative\" explanations of irreversibility. Given the laws of physics, as they are formulated now, the world could have started in equilibrium, and then we would not be around to discuss the problem[^47]. 
To summarize: the only real problem with irreversibility is not to explain irreversible behaviour in the future, but to account for the \"exceptional\" conditions of the Universe in the past.\n\nNow, I come to my basic criticism of the views of Prigogine and his collaborators, who argue that dynamical systems with very good chaotic properties, such as the baker's map, are \"intrinsically irreversible\". Let me quote from a letter of a collaborator of Prigogine, D. Driebe , criticizing an article of Lebowitz explaining Boltzmann's ideas. This letter is remarkably clear and summarizes well the main points of disagreement. \"If the scale-separation argument were the whole story, then irreversibility would be due to our approximate observation or limited knowledge of the system. This is difficult to reconcile with the constructive role of irreversible processes\u2026Irreversibility is not to be found on the level of trajectories or wavefunctions but is instead manifest on the level of probability distributions\u2026Irreversible processes are well observed in systems with few degrees of freedom, such as the baker and the multibaker transformations\u2026The arrow of time is not due to some phenomenological approximations but is an intrinsic property of classes of unstable dynamical systems\"[^48].\n\nLet us discuss these claims one by one. First of all, as I emphasized above, the scale-separation (i.e. the micro\/macro distinction) is not \"the whole story\". Initial conditions have to enter into the explanation (and also the dynamics, of course). Next, what does it mean that \"irreversible processes are observed in systems such as the baker transformation\"? This transformation describes a chaotic system with few degrees of freedom, somewhat like the billiard ball on a frictionless table[^49]. For those systems, there is no sense of a micro\/macro distinction: how could one define the macroscopic variables? To put it otherwise, we can make a movie of the motion of a point in the plane evolving under the baker's map, or of a billiard ball, or of any isolated chaotic system with few degrees of freedom, and run it backwards, we shall not be able to tell the difference. There is nothing funny or implausible going on, unlike the backward movie of any real irreversible macroscopic phenomenon. So, the first critique of this alleged connection between unstable dynamical systems (i.e. what I call here chaotic systems) and irreversibility is that one \"explains\" irreversibility in systems in which nothing irreversible happens, and where therefore there is nothing to be explained.\n\nIt is true that probability distributions for those systems evolve \"irreversibly\", meaning that any (absolutely continuous, see note ) probability distribution will spread out all over the phase space and will quickly tend to a uniform distribution. This just reflects the fact that different points in the support of the initial distribution, even if they are close to each other initially, will be separated by the chaotic dynamics. So, it is true, in a narrow sense, that \"irreversibility is manifest on the level of probability distributions\". But what is the physical meaning of this statement? A physical system, chaotic or not, is described by a trajectory in phase space, and is certainly not described adequately by the corresponding probability distributions. As I discussed in Section 2.2, the latter reflects, in part, our ignorance of that trajectory. 
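This spreading is easy to exhibit numerically. The sketch below (in which a cloud of points stands in for an absolutely continuous distribution, and a crude $4 \times 4$ grid serves as a measure of spreading; both are assumptions of the illustration) iterates the baker's map and watches an initially concentrated distribution invade the whole unit square, even though every single point in the cloud follows a perfectly definite trajectory.

```python
import random

# Spreading of a "probability distribution" (a cloud of points, an
# assumption of the sketch) under the baker's map; each point still
# follows a well-defined deterministic trajectory.
def baker(x, y):
    if x < 0.5:
        return 2.0 * x, y / 2.0
    return 2.0 * x - 1.0, (y + 1.0) / 2.0

random.seed(2)
# Initial "distribution": points concentrated in a small square.
cloud = [(random.uniform(0.4, 0.41), random.uniform(0.4, 0.41)) for _ in range(20_000)]

for t in range(13):
    if t % 4 == 0:
        # crude measure of spreading: how many cells of a 4 x 4 grid are occupied
        cells = {(int(4 * x), int(4 * y)) for x, y in cloud}
        print(f"iteration {t:2d}: occupied cells = {len(cells)}/16")
    cloud = [baker(x, y) for x, y in cloud]
```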
Their \"irreversible\" behaviour in this sense is therefore not a genuine physical property of the system. We can, if we want, focus our attention on probabilities rather than on trajectories, but that \"choice\" cannot have a basic role in our explanations.\n\nOne cannot stress strongly enough the difference between the role played by probabilities here and in the classical solution. In the latter, we use probabilities as in the coin-throwing experiment. We have some macroscopic constraint on a system (the coin is fair; the particles are in the left half of the box), corresponding to a variety of microscopic configurations. We predict that the behaviour of certain macroscopic variables (the average number of heads; the average density) will be the one induced by the vast majority of microscopic configurations, compatible with the initial constraints. That's all. But it works only because a large number of variables are involved, *in each single physical system*. However, each such system is described by a point in phase space (likewise, the result of many coin throwings is a particular sequence of heads and tails). In the \"intrinsic irreversibility\" approach, a probability distribution is assigned to *each single physical system*, as an \"irreducible\" description. The only way I can make sense of that approach is to consider a *large number* of billiard balls or of copies of the baker's map, all of them starting with nearby initial conditions. Then, it would be like the particles in the box, the average density would tend to become uniform, and we are back to the standard picture. But this does not force us to \"rethink the notion of law of nature\".\n\nI will now discuss the alleged \"subjectivity\" of this account of irreversibility (i.e., that it is due to our approximate observation or limited knowledge of the system). I shall consider in Section 6 the \"constructive role\" of irreversible processes, mentioned in Driebe's letter . Branding Boltzmann's ideas as \"subjective\" is rather common. For example, Prigogine writes: \"In the classical picture, irreversibility was due to our approximations, to our ignorance.\" (, p.37) But, thanks to the existence of unstable dynamical systems, \"the notion of probability that Boltzmann had introduced in order to express the arrow of time does not correspond to our ignorance and acquires an objective meaning\" (, p.42)[^50]. To use Popper's image: \"Hiroshima is not an illusion\" (I shall come back to Popper's confusions in Section 4.4.). This is only a dramatization of the fact that irreversible events are not subjective, or so it seems. The objection is that, if the microscopic variables behave reversibly and if irreversibility only follows when we *\"choose\"* to concentrate our attention on macroscopic variables, then our explanation of irreversibility is unavoidably tainted by subjectivism. I think that this charge is completely unfair, and reflects some misunderstanding of what irreversible phenomena really are. The point is that, upon reflection, one sees that all irreversible phenomena deal with these macroscopic variables. There is no subjectivism here: the evolution of the macroscopic variables is objectively determined by the microscopic ones, and they behave as they do whether we look at them or not. In that sense they are completely objective. 
But it is true that, if we look at a single molecule, or at a collection of molecules represented by a point in phase space, there is no sense in which they evolve \"irreversibly\", if we are not willing to consider some of the macroscopic variables that they determine.\n\nHowever, the apparently \"subjective\" aspect of irreversibility has sometimes been overemphasized, at least in the way people speak about it. Heisenberg wrote: \"Gibbs was the first to introduce a physical concept which can only be applied to an object when our knowledge of the object is incomplete. If for instance the motion and the position of each molecule in a gas were known, then it would be pointless to continue speaking of the temperature of the gas.\"(, p.38)[^51]. And Max Born said: \"Irreversibility is therefore a consequence of the explicit introduction of ignorance into the fundamental laws.\" (, p.72). These formulations, although correct if they are properly interpreted, lead to unnecessary confusions. For example, Popper wrote: \"It is clearly absurd to believe that pennies fall or molecules collide in a random fashion *because we do not know* the initial conditions, and that they would do otherwise if some demon were to give their secret away to us: it is not only impossible, it is absurd to explain objective statistical frequencies by subjective ignorance.\" (, p.106)[^52]. However, just after saying this, Popper gives what he calls \"an objective probabilistic explanation of irreversible processes\" (, p.107), attributed to Planck, which, as far as I can tell, is not very different from what I call the classical solution. The source of the confusion comes from two uses of the word \"knowledge\". Obviously, the world does what it does, whether we know about it or not. So, indeed, if \"some demon\" were to provide us with a detailed knowledge of the microscopic state of the gas in the left half of the box, nothing would change in the future evolution of that gas. But we may imagine situations where one can *control* more variables, hence \"know\" more about the system. When the piston forces the gas to be in the left half of the box, the set of available microscopic states is different from when the piston is not there, and obviously we have to take that \"knowledge\" into account. But there is nothing mysterious here.\n\nI believe that statistical mechanics would be easier for students to understand if it were presented without using anthropomorphic language and subjective-sounding notions such as information, observation or knowledge. Or, at least, one should explain precisely why these notions are introduced and why they do not contradict an objectivist view of natural phenomena (see the writings of Jaynes on this point ). But I also believe that the charge of subjectivity should be completely reversed: to \"explain\" irreversibility through the behaviour of probability distributions (which *are* describing our ignorance), as Prigogine does, is to proceed as if the limitations of human knowledge played a fundamental physical role.\n\n# Some misconceptions about irreversibility\n\nAccording to Prigogine (, p.23) Poincar\u00e9 did not recommend reading Boltzmann, because his conclusions were in contradiction with his premises. Discussing our example of a gas expanding in a container, Prigogine observes that \"if irreversibility was only that, it would indeed be an illusion, because, if we wait even longer, then it may happen that the particles go back to the same half of the container.
In this view, irreversibility would simply be due to the limits of our patience.\" (, p.24) This is basically the argument derived from the Poincar\u00e9 recurrence theorem (and used by Zermelo against Boltzmann ), which says that, if the container remains isolated long enough, then indeed the particles will return to the half of the box from which they started. Replying to that argument, Boltzmann supposedly said \"You should live that long\". For any realistic macroscopic system, the Poincar\u00e9 recurrence times (i.e. the time needed for the particles to return to the left half of the box) are much much larger than the age of the universe. So that again no contradiction can be derived, from a physical point of view, between Boltzmann's explanations and Poincar\u00e9's theorem. However, there is still a mathematical problem (and this may be what Poincar\u00e9 had in mind): if one tries to rigorously derive an irreversible macroscopic equation from the microscopic dynamics and suitable assumptions on initial conditions, the Poincar\u00e9 recurrence time will put a limit on the length of the time interval over which these statements can be proven. That is one of the reasons why one discusses these derivations in suitable limits (e.g. when the number of particles goes to infinity) where the Poincar\u00e9 recurrence time becomes infinite. But one should not confuse the fact that one takes a limit for mathematical convenience and the source of irreversibility. In the Kac model discussed in Appendix 1, one sees clearly that there are very different time scales: one over which convergence to equilibrium occurs, and a much larger one, where the Poincar\u00e9 recurrence takes place. But the first time scale is not an \"illusion\". In fact, it is on that time scale that all phenomena that we can possibly observe do take place.\n\nOne often hears that, for a system to reach \"equilibrium\", it must be ergodic, or mixing. The fact is that those properties, like the \"intrinsic irreversibility\" discussed above, *are neither necessary nor sufficient* for a system to approach equilibrium. Let me start with ergodicity. A dynamical system is *ergodic* if the average time spent by a trajectory in any region of the phase space is proportional to the volume of that region. To be more precise: average means in the limit of infinite time and this property has to hold for all trajectories, except (possibly) those lying in a subset of zero volume. One says that it holds for \"almost all\" trajectories. This property implies that, for any reasonable function on phase space, the average along almost all trajectories will equal the average over the phase space[^53]. Then, the argument goes, the measurement of any physical quantity will take some time. This time is long compared to the \"relaxation time\" of molecular processes. Hence, we can approximately regard it as infinite. Therefore, the measured quantity, a time average, will approximately equal the average over phase space of the physical quantity under consideration. But this latter average is exactly what one calls the equilibrium value of the physical quantity. So, according to the usual story, if a dynamical system is ergodic, it converges towards equilibrium. 
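The property invoked here is easy to display on a toy example. The following sketch (illustrative only; the map, the observable and the number of steps are chosen for convenience) iterates a rotation of the circle by an irrational angle, which is an ergodic transformation, and compares the time average of an observable along a single trajectory with its average over the whole "phase space". The two agree; note, however, that this system has a single degree of freedom and nothing irreversible happens in it, a point taken up below.

```python
import math

# Time average vs. phase-space average for an ergodic map: rotation of the
# circle [0,1) by an irrational angle.  The observable f(x) = x**2 has average
# 1/3 over the circle with respect to the uniform (Lebesgue) measure.
alpha = (math.sqrt(5) - 1) / 2        # irrational rotation angle

def f(x):
    return x * x

x, total, T = 0.1, 0.0, 1_000_000     # arbitrary starting point and number of steps
for _ in range(T):
    total += f(x)
    x = (x + alpha) % 1.0

print("time  average:", total / T)    # ~0.333
print("space average:", 1 / 3)
```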
This appeal to ergodicity in order to justify statistical mechanics is rather widespread[^54] even though it has been properly criticized for a long time by, e.g., Tolman , p.65, Jaynes , p.106, and Schwartz .\n\nLet us see the problems with this argument: a well-known, but relatively minor, problem is that it is very hard to give a mathematical proof that a realistic mechanical system is ergodic. But let us take such a proof for granted, for the sake of the discussion. Here is a more serious problem. Assume that the argument given above is true: how would it then be possible to observe or measure *any non-equilibrium* phenomenon? In the experiment with the box divided in two halves, we should not be able to see any intermediate stage, when the empty half gets filled, since the time for our measurements is supposed to be approximately infinite. So, where is the problem? We implicitly identified the \"relaxation time\" with what one might call the \"ergodic time\", i.e. the time taken by the system to visit all regions of phase space sufficiently often so that the replacement of time averages by spatial averages is approximately true. But, whatever the exact meaning of the word \"relaxation time\" (for a few molecules) is, the ergodic time is certainly enormously longer. Just consider how large is the volume in phase space that has to be \"sampled\" by the trajectory. For example, all the particles could be in the right half of the box, and ergodicity says that they will spend some time there (note that this is not implied by Poincar\u00e9's theorem; the latter only guarantees that the particles will return to the part of the box from which they started, i.e. the left half here). To be more precise, let us partition the phase space into a certain number of cells, of a given volume, and consider the time it takes for a given trajectory to visit each cell, even once, let us say[^55]. That, obviously, will depend on the size (hence, on the number) of the cells. By taking finer and finer partitions, we can make that time as large as one wishes. So, if one were to take the argument outlined above literally, the \"ergodic time\" is infinite, and speaking loosely about a relaxation time is simply misleading.\n\nAt this point of the discussion, one often says that we do not need the time and space average to be (almost) equal for all functions, but only for those of physical relevance (like the energy or particle densities). This is correct, but the criticism of the \"ergodic\" approach then changes: instead of not being *sufficient* to account for irreversibility, we observe that it is not *necessary*. To see this, consider another partition of phase space: fix a set of macroscopic variables, and partition the phase space according to the values taken by these variables (see e.g. figures 7.3 and 7.5 in Penrose for an illustration, and Appendix 1 here for an example). Each element of the partition consists of a set of microscopic states that give the same value to the chosen macroscopic variables. Now, these elements of the partition have very different volumes. This is similar to the law of large numbers. There are (for $N$ large) vastly more results of $N$ throws of a coin where the number of heads is approximately one half than throws where it is approximately one quarter (the ratio of these two numbers varies exponentially with $N$). By far the largest volumes correspond to the *equilibrium values* of the macroscopic variables (and that is how \"equilibrium\" should be defined). 
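How different these volumes are can be made explicit in the coin-throwing analogy (a small sketch with arbitrary values of $N$; the counts play the role of phase-space volumes). The number of $N$-toss sequences with exactly one half heads exceeds the number with exactly one quarter heads by a factor that grows exponentially with $N$, and the per-toss difference of the logarithms of the two counts settles quickly to a constant; this is the kind of "log of a volume" that reappears below when entropies are discussed.

```python
from math import comb, log

# Sizes of two "macrostates" of N coin tosses: exactly one-half heads
# (the equilibrium value) versus exactly one-quarter heads.
for N in (20, 100, 1000):
    half, quarter = comb(N, N // 2), comb(N, N // 4)
    print(N,
          half / quarter,                              # ratio of the two volumes: grows exponentially
          round((log(half) - log(quarter)) / N, 3))    # log-ratio per toss: roughly constant (~0.13)
```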
So, we need a much weaker notion than ergodicity. All we need is that the microscopic configuration evolves in phase space towards those regions where the relevant macroscopic variables take their equilibrium values. The Kac model (see Appendix 1) perfectly illustrates this point: it is not ergodic in any sense, yet, on proper time scales, the macroscopic variables evolve towards equilibrium.\n\nThere is a hierarchy of \"ergodic\" properties that are stronger than ergodicity: mixing, K-system, Bernoulli, see Lebowitz and Penrose . But none of these will help us to understand, in principle, irreversible behaviour any more than ergodicity.\n\nThe problem with all those approaches is that they try to give a purely mechanical criterion for \"irreversible behaviour\". Here is the basic dilemma: either we are willing to introduce a macro\/micro distinction and to give a basic role to initial conditions in our explanation of irreversibility, or we are not. If we make the first choice, then, as explained in Section 3, there is no deep problem with irreversibility, and subtle properties of the dynamics (like ergodic properties) play basically no role. On the other hand, nobody has ever given a consistent alternative, namely an explanation of irreversibility that would hold for *all* initial conditions or apply to *all* functions on configuration space (therefore avoiding the micro\/macro distinction). So, we have to make the first choice. But then, everything is clear and nothing else is needed.\n\nAnother critique of the \"ergodic\" approach is that systems with one or few degrees of freedom may very well be ergodic, or mixing, or Bernoulli (like the baker's transformation). And, as we discussed in Section 3.4, it makes no sense to speak about irreversibility for those systems. So, this is another sense in which the notion of ergodicity is not sufficient (see e.g. Vauclair ( p.197), where the approach to equilibrium is illustrated by the baker's transformation).\n\nTo avoid any misunderstandings, I emphasize that the study of ergodic properties of dynamical systems gives us a lot of interesting information on those systems, especially for chaotic systems. Besides, ergodic properties, like other concrete dynamical properties of a system, may play a role in the form of the macroscopic equations obeyed by the system, in the value of some transport coefficients or in the speed of convergence to equilibrium. But, and this is the only point I wanted to make, the usual story linking ergodicity (or mixing) and \"approach to equilibrium\" is highly unsatisfactory.\n\nSometimes it is alleged that, for some reason (the Poincar\u00e9 recurrences, for example), a truly isolated system will never reach equilibrium. But it does not matter, since true isolation never occurs and external (\"random\") disturbances will always drive the system towards equilibrium[^56]. This is true but irrelevant[^57].\n\nIn order to understand this problem of non-isolation, we have to see how to deal with idealizations in physics. Boltzmann compares this with Galilean invariance (see , p.170). Because of non-isolation, Galilean (or Lorentz) invariance can never be applied strictly speaking (except to the entire universe, which is not very useful). Yet, there are many phenomena whose explanation involves Galilean (or Lorentz) invariance. We simply proceed as if the invariance were exact and then argue that the fact that it is only approximate does not spoil the argument. One uses a similar reasoning in statistical mechanics.
If we can explain what we want to explain (e.g. irreversibility) by making the assumption that the system is perfectly isolated, then we do not have to introduce the lack of isolation in our explanations. We have only to make sure that this lack of isolation does not conflict with our explanation. And how could it? The lack of isolation should, in general, speed up the convergence towards equilibrium[^58]. Also, if we want to explain why a steamboat cannot use the kinetic energy of the water to move, we apply irreversibility arguments to the system boat$+$water, even though the whole system is not really isolated.\n\nAnother way to see that lack of isolation is true but irrelevant is to imagine a system being more and more isolated. Is irreversibility going to disappear at some point? That is, will different fluids not mix themselves, or will they spontaneously unmix? I cannot think of any example where this could be argued. And I cannot tell with a straight face to a student that (part of) our explanation for irreversible phenomena on earth depends on the *existence* of Sirius.\n\nHere, I will discuss various confusions that have been spread by some philosophers. Bergson was a rather unscientific thinker, and many readers may wonder why he belongs here. I have myself been very surprised to see how much sympathy Prigogine and Stengers seem to have for Bergson (see the references to Bergson in ). But Bergson has been extremely influential, at least in the French culture, and, I am afraid, still is[^59]. In particular, he is one source of the widespread confusion that there is contradiction between life and the Second Law of thermodynamics. Roughly speaking, Bergson saw a great opposition between \"matter\" and \"life\", and a related one between intellect and intuition. The intellect can understand matter, but intuition is needed to apprehend life[^60]. Bergson was not a precursor of the discovery of DNA, to put it mildly[^61]. The Second Law of thermodynamics, which he called the \"most metaphysical of the laws of physics\" (, p.264), was very important for him[^62]. It reinforced his \"vision of the material world as that of a falling weight.\" (, p.266), hence, that \"all our analyses show indeed in life an effort to climb the slope that matter has descended.\" (, p.267) \"The truth is that life is possible wherever energy goes down the slope of Carnot's law, and where a cause, acting in the opposite direction, can slow down the descent.\"(, p.278) It's all metaphorical, of course, but Bergson's philosophy *is* entirely a \"metaphorical dialectics devoid of logic, but not of poetry\", as Monod calls it (). In any case, life is perfectly compatible with the Second Law (see Section 3.3).\n\nTurning to Popper, we have already seen that he had lots of problems with statistical mechanics. Since Popper is generally considered positively by scientists[^63], it is worth looking more closely at his objections. He took too literally the claims of Heisenberg, Born and Pauli on irreversibility as \"subjective\" (see Section 3.5), which he thought (maybe rightly so) were precursors of the subjectivism of the Copenhagen interpretation of quantum mechanics (see ). Besides, he was strongly opposed to determinism and he was convinced that \"the strangely law-like behaviour of the statistical sequences remain, for the determinist, *ultimately irreducible and inexplicable*.\" (, p.102). As I discussed in Section 2.2, there is no problem in using probabilities, even in a deterministic universe. 
He then invented a rather obscure \"propensity\" interpretation of probabilities. He also felt that one should define \"objectively\" what a random sequence is. A sequence (of zeros and ones) will be random if there are (almost) as many zeros as ones, as many pairs $00$, $01$, $10$, $11$, etc\u2026 (see e.g. , p.112). He did not seem to realize that this is like saying that a \"microscopic configuration\" (a sequence) gives to certain \"macroscopic variables\" (the average number of occurrences of finite subsequences) the values which are given to them by the overwhelming majority of sequences. So the difference from what he calls the \"subjective\" viewpoint is not so great.\n\nFinally, Popper was very critical of Boltzmann. Although he admires Boltzmann's realist philosophy, he calls Boltzmann's interpretation of time's arrow \"idealist\" and claims that it was a failure. As we saw, any explanation of irreversibility ultimately forces us to say that the universe started in an \"improbable\" state. Boltzmann tried to explain it as follows: in an eternal and infinite universe globally in equilibrium, all kinds of fluctuations will occur. What we call our universe is just the result of one such gigantic fluctuation, on its way back to equilibrium. But this explanation does not really work. Indeed, the most probable assumption, if a fluctuation theory is to hold, is simply that my brain is a fluctuation out of equilibrium, just at this moment and in this small region of space, while none of the familiar objects of the universe (stars, planets, other human beings) exist and all my (illusory) perceptions and memories are simply encoded in the states of my neurons (a \"scientific\" version of solipsism). However improbable such a fluctuation is, it is still far more probable than a fluctuation giving rise to the observed universe, of which my brain is a part. Hence, according to the fluctuation theory, that \"solipsist\" fluctuation must actually have occurred many more times than the big fluctuation in which we live, and therefore no explanation is given for the fact that we happen to live in the latter (see Feynman and Lebowitz for a discussion of that fluctuation theory).\n\nBoltzmann's cosmology does not work. So what? When Popper wrote (1974), no one took Boltzmann's cosmology seriously anyway: it had long since been superseded by cosmologies based on general relativity. Besides, Popper does not raise the objection I just made. His criticism is, rather, that this view would render time's arrow \"subjective\" and make Hiroshima an \"illusion\". This is complete gibberish. Boltzmann gives a complete and straightforward explanation of irreversible processes in which Hiroshima is as objective as it unfortunately is (when it is described at the macroscopic level, which is what we mean by \"Hiroshima\"). Of course, questions remain concerning the initial state of the universe. In the days of Boltzmann, very little was known about cosmology. What the failure of Boltzmann's hypothesis on the origin of the initial state shows is that cosmology, like the rest of science, cannot be based on pure thought alone[^64].\n\nPopper was also too impressed by Zermelo's objections to Boltzmann, based on the Poincar\u00e9 recurrence theorem, and discussed above (see ). But he has even stranger criticisms: in , he argues that Brownian motion (where fluctuations may pull the particle against gravity) is a serious problem for the Second Law.
Maxwell had already observed that \"The Second Law is constantly being violated\u2026in any sufficiently small group of molecule\u2026As the number \u2026is increased \u2026the probability of a measurable variation \u2026 may be regarded as practically an impossibility.\" (, quoted in ) Going from bad to worse, Feyerabend invents a \"perpetuum mobile of the second kind\" (i.e. one respecting the first law but not the second) using *a single molecule* . He adds that he assumes \"frictionless devices\" (he had better do so!). Those claims are then repeated in his popular book \"Against Method\" , where it is explained that Brownian motion refutes the Second Law[^65]. This is how the general educated public is misled into believing that there are deep open problems which are deliberately ignored by the \"official science\"!\n\nUnfortunately, this is not the end of it. Contemporary (or post-modern) French \"philosophy\" is an endless source of confusions on chaos and irreversibility. Here are just a few examples. The well-known philosopher Michel Serres says, in an interview with the sociologist of science Bruno Latour, entitled paradoxically \"Eclaircissements\": \"Le temps ne coule pas toujours selon une ligne (la premi\u00e8re intuition se trouve dans un chapitre de mon livre sur Leibniz, pp.\u00a0284\u2013286) ni selon un plan, mais selon une vari\u00e9t\u00e9 extraordinairement complexe, comme s'il montrait des points d'arr\u00eat, des ruptures, des puits, des chemin\u00e9es d'acc\u00e9l\u00e9ration foudroyante, des d\u00e9chirures, des lacunes, le tout ensemenc\u00e9 al\u00e9atoirement, au moins dans un d\u00e9sordre visible. Ainsi le d\u00e9veloppement de l'histoire ressemble vraiment \u00e0 ce que d\u00e9crit la th\u00e9orie du chaos \u2026\"[^66] (). Another philosopher, Jean-Fran\u00e7ois Lyotard writes:\"L'id\u00e9e que l'on tire de ces recherches (et de bien d'autres) est que la pr\u00e9\u00e9minence de la fonction continue \u00e0 deriv\u00e9e comme paradigme de la connaissance et de la pr\u00e9vision est en train de dispara\u0131\u0302tre. En s'int\u00e9ressant aux ind\u00e9cidables, aux limites de la pr\u00e9cision du contr\u00f4le, aux quanta, aux conflits \u00e0 l'information non compl\u00e8te, aux \"*fracta*\", aux catastrophes, aux paradoxes pragmatiques, la science postmoderne fait la th\u00e9orie de sa propre \u00e9volution comme discontinue, catastrophique, non rectifiable, paradoxale. Elle change le sens du mot savoir, et elle dit comment ce changement peut avoir lieu. Elle produit non pas du connu, mais de l'inconnu. Et elle sugg\u00e8re un mod\u00e8le de l\u00e9gitimation qui n'est nullement celui de la meilleure performance, mais celui de la diff\u00e9rence comprise comme paralogie.\"[^67] (). A sociologist, Jean Baudrillard observes that \"Il faut peut-\u00eatre consid\u00e9rer l'histoire elle-m\u00eame comme une formation chaotique o\u00f9 l'acc\u00e9l\u00e9ration met fin \u00e0 la lin\u00e9arit\u00e9, et o\u00f9 les turbulences cr\u00e9\u00e9es par l'acc\u00e9l\u00e9ration \u00e9loignent d\u00e9finitivement l'histoire de sa fin, comme elles \u00e9loignent les effets de leurs causes. La destination, m\u00eame si c'est le Jugement dernier, nous ne l'atteindrons pas, nous en sommes d\u00e9sormais s\u00e9par\u00e9s par un hyperespace \u00e0 r\u00e9fraction variable. 
La r\u00e9troversion de l'histoire pourrait fort bien s'interpr\u00e9ter comme une turbulence de ce genre, due \u00e0 la pr\u00e9cipitation des \u00e9v\u00e9nements qui en inverse le cours et en ravale la trajectoire.\"[^68] (). Finally, Gilles Deleuze and F\u00e9lix Guattari understood chaos as follows: \" On d\u00e9finit le chaos moins par son d\u00e9sordre que par la vitesse infinie avec laquelle se dissipe toute forme qui s'y \u00e9bauche. C'est un vide qui n'est pas un n\u00e9ant, mais un *virtuel*, contenant toutes les particules possibles et tirant toutes les formes possibles qui surgissent pour dispara\u0131\u0302tre aussit\u00f4t, sans consistance ni r\u00e9f\u00e9rence, sans cons\u00e9quence (Ilya Prigogine et Isabelle Stengers, *Entre le temps et l'\u00e9ternit\u00e9*, pp.\u00a0162\u2013163).\"[^69] () Of course, Prigogine and Stengers are not responsible for *these* confusions (in that reference, they discuss the origin of the universe). But this illustrates the difficulties and the dangers of the popularization of science. Besides, Guattari wrote a whole book on \"Chaosmose\" (), which is full of references to non-existent concepts such as \"nonlinear irreversibility thresholds\" and \"fractal machines\"[^70].\n\n# Entropies\n\nThere is some kind of mystique about entropy. According to , , von Neumann suggested to Shannon to use the word \"entropy\" adding that \"it will give you a great edge in debates because nobody really knows what entropy is anyway\". But there is a very simple way to understand the notion of entropy. Just consider any set of macroscopic variables (at a given time) and consider the volume of the subset of phase space (of the microscopic variables) on which these macroscopic variables take a given value. The *Boltzmann entropy* (defined as a function of the values taken by the macroscopic variables) equals the logarithm of that volume. Defined this way, it looks quite arbitrary. We may define as many entropies as we can find sets of macroscopic variables. Furthermore, since the micro\/macro distinction is not sharp, we can always take finer grained entropies, until we reach the microscopic variables (the positions and the momenta of the particles), in which case the entropy is constant and equals zero (giving a volume equal to one to a single microstate, which is rather a quantum-mechanical way to count).\n\nBut one should make several remarks:\n\n1. These entropies are not necessarily \"subjective\". They are as objective as the corresponding macroscopic variables. Jaynes, following Wigner, calls these entropies \"anthropomorphic\" (, p.85). A better word might be \"contextual\", i.e. they depend on the physical situation and on its level of description.\n\n2. The \"usual\" entropy of Clausius, the one which is most useful in practice, corresponds to a particular choice of macroscopic variables (e.g. energy and number of particles per unit volume for a monoatomic gas without external forces). The derivative with respect to the energy of *that* entropy, restricted to equilibrium values, defines the inverse temperature. One should not confuse the \"flexible\" notion of entropy introduced above with the more specific one used in thermodynamics[^71].\n\n3. The Second Law seems now a bit difficult to state precisely. \"Entropy increases\"; yes, but which one? One can take several attitudes. 
The most conservative one is to restrict oneself to the evolution of a given isolated system between two equilibrium states; then the increasing entropy is the one discussed in point (2) above. The Second Law is then a rather immediate consequence of the irreversible evolution of the macroscopic variables: the microscopic motion will go from small regions of phase space to larger ones (in the sense of the partitions discussed in Section 4.2). The gas in the box goes from an equilibrium state in the left half of the box to another equilibrium state in the whole box. There are many more microscopic configurations corresponding to a uniform density than there are configurations corresponding to the gas being entirely in one half of the box. But this version of the Second Law is rather restrictive, since most natural phenomena to which we apply \"Second Law\" arguments are not in equilibrium. When used properly in non-equilibrium situations, reasoning based on the Second Law gives an extremely reliable way to predict how a system will evolve. We simply assume that a system will never go spontaneously towards a very small subset of its phase space (as defined by the macroscopic variables). Hence, if we observe such an evolution, we expect that some hidden external influence is forcing the system to do so, and we try to discover it (see also Jaynes for a nice discussion of apparent violations of the Second Law)[^72].\n\n4. In most non-equilibrium situations, most of these entropies are very hard to compute or even to estimate. However, Boltzmann was able to find an approximate expression of his entropy (minus his $H$ function), valid for dilute gases (e.g. for the gas in the box of Section 3, initially divided in two) and to write down an equation for the evolution of that approximate entropy. A lot of confusion is due to the identification of the \"general\" Boltzmann entropy defined above with the approximation to it given by (minus) the $H$-function (as emphasized by Lebowitz in ). Another frequent confusion about Boltzmann's equation is to mix two conceptually different ingredients entering its derivation[^73]: one is an assumption about *initial conditions* and the other is to make a particular approximation (i.e. one considers the Boltzmann-Grad limit, see Spohn , in which the equation becomes exact; in the Kac model in Appendix 1, this limit reduces simply to letting $n$ go to infinity for fixed $t$). To account for irreversible behaviour, one always has, as we saw, to assume something about initial conditions, and the justification of that assumption is statistical. But that part does not require, in principle, any approximation. To write down a concrete (and reasonably simple) equation, as Boltzmann did, one uses this approximation. Failure to distinguish these two steps leads one to believe that there is some deep problem with irreversibility outside the range of validity of that approximation[^74].\n\n5. Liouville's theorem[^75] is sometimes invoked against such ideas. For instance, we read in Prigogine and Stengers (, p.104): \"All attempts to construct an entropy function, describing the evolution of a set of trajectories in phase space, came up against Liouville's theorem, since the evolution of such a set cannot be described by a function that increases with time\"[^76] (see , p.8 for a similar statement). What is the solution of that \"paradox\"?
Here I consider *a single system* evolving in time and associate to it a certain set of macroscopic variables, to which in turn an entropy is attached. But, since the values of the macroscopic variables change with time, the corresponding set of microstates changes too. For the gas in the box, the initial set of microstates consists of all those where the particles are in the left half, while the final set consists of the microstates giving rise to a uniform density. In other words, I \"embed\" my microscopic state into different sets of microscopic states as time changes, and the evolution of that set should not be confused with a set of *trajectories*, whose volume is indeed forced to remain constant (by Liouville's theorem)[^77].\n\n6. A related source of confusion comes from the fact that Gibbs' entropy, $-\\int \\rho \\log\\rho d{\\bf x}$, which is sometimes viewed as more \"fundamental\" (because it is expressed via a distribution function $\\rho$ on phase space), is indeed constant in time (by Liouville's theorem again). But why should one use this Gibbs entropy out of equilibrium? In equilibrium, it agrees with the Boltzmann and Clausius entropies (up to terms that are negligible when the number of particles is large) and everything is fine[^78]. When we compare two different equilibrium states, all these entropies change, and the direction of change agrees with the Second Law[^79]. The reason is that the values taken by the macroscopic variables are different for different equilibrium states. Actually, trying to \"force\" the Gibbs entropy to increase by various coarse-graining techniques then gives the impression that irreversibility is only due to this coarse-graining and is therefore arbitrary or subjective (see e.g. Coveney (, p.412): \"Irreversibility is admitted into the description by asserting that we only observe a coarse-grained probability;\").\n\n7. Finally, why should one worry so much about entropy for non-equilibrium states? A distinction has to be made between two aspects of irreversibility: one is that macroscopic variables tend to obey irreversible laws, and the other is that, when an isolated system can go from one equilibrium state to another, the corresponding thermodynamic entropies are related by an inequality. Both aspects are connected, of course, and they can both be explained by similar ideas. But this does not mean that, in order to account for the irreversible behaviour of macroscopic variables, we have to introduce an entropy function that evolves monotonically in time. It may be useful or interesting to do so, but it is not required to account for irreversibility. All we really *need* is to define suitably the entropy for equilibrium states, and that was done a long time ago.\n\n8. Jaynes rightly says that he does not know what the entropy of a cat is ( p.86). The same thing could be said for a painting, an eye or a brain. The problem is that there is no well-defined set of macroscopic variables that is specified by the expression \"a cat\".\n\n# Order out of Chaos?\n\nIn this section, I will discuss the \"constructive role\" of irreversible processes[^80]. But I also want to discuss the impact of scientific discoveries on the cultural environment. At least since the Enlightenment and the Encyclopaedia, scientists have communicated their discoveries to society, and, through popular books and the educational system, have profoundly influenced the rest of culture. But one has to be very careful. In his recent book on Darwin, the philosopher D.
Dennett makes a list of popular misconceptions about the theory of evolution (, p.392). One of them is that one no longer needs the theory of natural selection, since we have chaos theory! He does not indicate the precise source of this strange idea, but this illustrates how easily people can be confused by loose talk, analogies and metaphors.\n\nI think that one should clearly reaffirm certain principles: first of all, no macroscopic system has ever jumped out of equilibrium spontaneously. Moreover, isolated macroscopic systems always evolve towards equilibrium. These are general qualitative statements that one can make about macroscopic mechanical systems. No violations of them have ever been found. Of course, nobody explicitly denies those principles, but I am nevertheless afraid that many people are confused about this point[^81].\n\nOf course, it has always been known that very complicated and interesting phenomena occur out of equilibrium, human beings, for example. But this raises two completely different problems. One is to explain those phenomena on the basis of the microscopic laws and of suitable assumptions about initial conditions. Much progress in this direction has been made, but we are far from understanding everything, and, of course, to account for the existence of human beings, Darwin's theory *is* needed.\n\nThe other question, a much easier one, is to understand why there is no *contradiction* between the general tendency towards equilibrium and the appearance of self-organization, of complex structures or of living beings. *That* is not difficult to explain qualitatively; see Section 3.3 and Penrose .\n\nGoing back to Popper (again), he wanted to solve the alleged contradiction between life and the Second Law (see note ) by turning to Prigogine and saying that \"*open systems in a state far from equilibrium* show no tendency towards increasing disorder, even though they produce entropy. But they can export this entropy into their environment, and can increase rather than decrease their internal order. They can develop structural properties, and thereby do the very opposite of turning into an equilibrium state in which nothing exciting can happen any longer.\" (, p. 173). This is correct, provided that part of the environment *is more ordered than the system*, where \"order\" is taken in a technical sense: the system plus its environment (considered as approximately isolated) is in a state of low entropy, or is in a small subset of its *total* phase space and moves towards a larger subset of that space[^82] (where the subsets are elements of a partition like the one discussed in Section 4.2). But it is misleading to suggest that order is created out of nothing, by exporting \"entropy\" into an unspecified environment[^83]. It is not enough to be an \"open system\"; the environment must be in a state of low entropy. While it is correct to say that the Second Law \"applies only to isolated systems\", it should not be forgotten that most systems can be considered, at least approximately, as subsystems of isolated ones, and that, therefore, the Second Law does imply some constraints even for open systems.\n\nHere are some examples which *may* create this confusion[^84]: In (, p.157) Prigogine wants to give an example of one of the \"many phenomena\" that cannot be understood through the \"general interpretation of the growth of entropy\" due to Boltzmann. He considers a system of particles (on a line), which start in a disordered configuration.
Then \"the strong interactions between those particles\" will push them to form an ordered crystal. It looks like a \"passage from a disordered situation to an ordered one\". But is this an isolated system? This is not clear if one considers the pictures. The final configuration looks like a perfect crystal. But if there are interactions between the particles favoring an ordered crystal, the disordered initial configuration must have been one of high potential energy, hence the \"ordered\" configuration will have a high kinetic energy, and oscillations will occur. Of course, if the total initial energy is sufficiently small, the oscillations will be small and the equilibrium state will be crystalline. But that is not incompatible with the \"general interpretation of the growth of entropy\". Equilibrium states maximize entropy, for a given energy, but may be crystalline (at least for higher dimensional lattices). This is one example where maximum entropy is not necessarily the same as maximal disorder (in the intuitive sense of the word). On the other hand, if dissipation takes place, the \"passage from a disordered situation to an ordered one\" is possible, even starting from a configuration of high potential energy. But this means that some environment absorbs the energy of the system, in the form of heat, hence it increases *its* entropy. And the environment must have been more \"ordered\" to start with. Again, this is in agreement with the \"general interpretation of the growth of entropy\".\n\nTo give another example, Prigogine and Stengers emphasize in (, p.427) that, for the B\u00e9nard instability[^85] to occur *one must provide more heat to the system*. As noticed by Meessen (, p.118) \"It is remarkable that the creation of a structure is initiated by a source of heat, which is usually a source of disorder\". This quotation shows clearly what is confusing: heating suggests an increase of disorder, while the result is the appearance of a self-organized structure. But what is needed, of course, is a temperature *difference* between the two plates. So, if one heats up from below, one must have some cooling from above. The cooling acts like a refrigerator, so it requires some \"ordered\" source of energy. The more one heats, the more efficient must be the cooling.\n\nThese are fairly trivial remarks, but which, I believe, have to be made, at least for the general public, if one wants to avoid giving the impression that processes violating the Second Law can occur: all the emergence of complex structures, of whatever one sees, is perfectly compatible with the universal validity of the \"convergence to equilibrium\", provided one remembers that our universe started (and still is) in a low entropy state[^86].\n\nBesides, one should be careful with the issue of determinism, at the level of macroscopic laws, for example when bifurcations occur. In many places, Prigogine and Stengers seem to attach a deep meaning to the notion of *event*: \"By definition, an event cannot be deduced from a deterministic law: it implies, one way or another, that what happened \"could\" not have happened.\" (, p.46)[^87] Let us consider Buridan's ass. One can describe it as being \"in between\" two packs of food. It could choose either. But that is a macroscopic description. Maybe one of the eyes of the ass is tilted in one direction, or some of its neurons are in a certain state favoring one direction. This is an example where the macroscopic description does not lead to an autonomous macroscopic law. 
At the macroscopic level, things are indeterminate, and the scheme of Section 3 does not apply: the microscopic configurations may fall into different classes, corresponding to different future evolutions for the macroscopic variables, and no single class constitutes an overwhelming majority. Thus, when we repeat the experiment (meaning that we control the same *macroscopic* variables), different outcomes will occur, because different experiments will correspond to microscopic variables that belong to different classes.\n\nThe same thing may happen in a variety of phenomena, e.g. which way a roll in a B\u00e9nard cell will turn. But that (true) remark has nothing to do with the issue of determinism, which is meaningful only at the microscopic level: in a perfectly deterministic universe (at that level) there will always be lots of situations where no simple autonomous macroscopic laws can be found, hence we shall have the illusion of \"indeterminism\" if we consider only the macroscopic level[^88].\n\nOne should avoid (once more) the Mind Projection Fallacy. The macroscopic description may be all that is accessible to us, hence the future becomes unpredictable, but, again, it does not mean that Nature is indeterminate[^89].\n\nI will conclude with some remarks on Boltzmann and Darwin, which may also clarify the relation between \"subjective\" evaluations of probabilities and what we call an \"explanation\". As we saw, Boltzmann had a great admiration for Darwin. While preparing this article, I read in \"La Recherche\" that \"the couple random mutations-selection has some descriptive value, but not at all an explanatory one\" (). That attitude is rather common (outside biology), but it goes a bit too far. Actually, there is an analogy between the kind of explanation given by Darwin and the one given by Boltzmann, and they are both sometimes similarly misunderstood[^90] (of course, Darwin's discovery, although less quantitative than statistical mechanics, had a much deeper impact on our culture). What does it mean to explain some fact, like evolution or irreversibility? As we saw, we claim to understand some macroscopically observed behaviour when, given some macroscopic constraints on a system, the overwhelming majority of the microscopic configurations compatible with those constraints (and evolving according to the microscopic laws) drive the macroscopic variables in agreement with that observed behaviour.\n\nTurning to Darwin, his problem was to explain the diversity of species and, more importantly, the *complexity* of living beings, \"those organs of extreme perfection and complication\", like eyes or brains, as Darwin called them[^91]. The fact is that we do not know, and shall never know, every microscopic detail about the world, especially about the past (such as every single mutation, how every animal died etc \u2026). Besides, the initial conditions of the world could be just such that complex organs are put together in one stroke. To use a common image, it would be like \"hurling scrap metal around at random and happening to assemble an airliner\" (Dawkins, p.8). This does not violate any known law of physics. But it would be similar to various \"exceptional\" initial conditions that we encountered before (e.g. the particles going back to the left half of the box). And we would not consider an explanation valid if it appealed to such \"improbable\" initial conditions.
But to say that such a scenario is \"improbable\" simply means that, given our (macroscopic) description of the world, there are very few microscopic configurations compatible with that description and giving rise to this scenario. And, indeed, if the world were four thousand years old, the existence of those complex organs would amount to a miracle.\n\nTo understand the Darwinian explanation, one must take into account four elements, at the level of the macroscopic description: natural selection (very few animals have offspring), variation (small differences between parents and offspring occur, at least in the long run), heritability, and time (the earth is much older than was once thought). Then, the claim is that the overwhelming majority of microscopic events (which mutations occur, which animals die without children) compatible with such a macroscopic description leads to the appearance of those \"organs of extreme perfection and complication\"[^92]. Note that we do not need to assume that mutations are genuinely \"random\". They may obey perfectly deterministic laws, and the randomness may reflect only our ignorance of the details.\n\nA final point which is common to Boltzmann and to Darwin (and his successors) is that they have provided \"brilliant confirmations of the mechanical view of Nature\"[^93]. Many people simply cannot swallow mechanical and reductionist explanations. They need some vital spirit, some teleological principle or some other animist view. Their philosophies \"thrive upon the errors and confusions of the intellect\". And this is probably why the theories of Boltzmann and of Darwin have been constantly attacked and misrepresented. Putting philosophical considerations aside, I believe that what we understand well, we understand in mechanical and reductionist terms. There is no such thing as a holist explanation in science. And thanks to people like Boltzmann and Darwin the \"mechanical view of Nature\" is alive and well, and is here to stay.\n\n# Conclusion: What makes poets happy?\n\nThis paper has been written mainly for scientists. However, many references to Prigogine are found in the literature of the human sciences and philosophy. But why should anybody in those fields worry about what happens in physics or chemistry? In his most recent book , Prigogine starts by opposing the objective scientific view of the world to the subjective view (our feeling of time or of \"free will\") which some philosophers take as their starting point. His goal is to reconcile both approaches through his new understanding of physics.\n\nOf course, it would be nice if one could fulfill that goal. But there are again basic confusions[^94]. Take the issue of free will. It is true that if the fundamental laws of physics are deterministic, and if one rejects dualism, then free will is, in some sense, an illusion. But it is not clear that an element of \"intrinsic randomness\" in the fundamental physical laws would make it less of an illusion. The only thing which is clear is that our inability to predict the future is not very relevant for this discussion. So the fact that I am unable to predict which way a B\u00e9nard cell will rotate is not going to make me feel \"free\". Ignorance does not explain anything. And there is no precise sense in which a \"narrow path\" has been found between \"blind laws\" and \"arbitrary events\" (, p.224).\n\nAnother confusion concerns the relationship between the natural and the social sciences.
In our discussion of the macroscopic level versus the microscopic one, we should locate the problems that psychology or the social sciences deal with at a very macroscopic level. Humans or societies are so many scales above molecules that modifications in the basic physical laws are (probably) almost irrelevant to the understanding of human actions[^95]. The main problem of the social sciences is to exist as sciences, i.e. to discover theories that are well tested and that explain some non-trivial aspect of human affairs. The only thing that people working in those fields might learn from the natural sciences is a general scientific attitude, what one might call the epistemology of the Enlightenment: a critical mind, not to rely on authorities, to compare theory with experiment, etc\u2026. But there is no need to ape what happens in the exact sciences. So, even if there really were a shift of paradigm (whatever that means) from Newtonianism to Prigoginianism in physics, that would be no reason at all for the social scientists to rush towards theories where randomness is important[^96]. And, of course, probabilistic models may be relevant in the social sciences, even if the fundamental laws are deterministic.\n\nThe final confusion concerns the \"end of certainties\" or the \"disillusion with science\" . The plain fact is that we know much more about the world than we did three centuries ago, or fifty, or twenty years ago. Even the discovery that one cannot predict the weather (for more than a few weeks) means that our understanding of the laws governing the weather has improved. The general feeling that there is a \"crisis in science\" in turn fuels various anti-scientific attitudes that combine an extreme skepticism towards science with an equally unreasonable openness towards pseudo-sciences and superstitions[^97]. In intellectual circles, this attitude is found in cultural and philosophical relativism or in some parts of the \"sociology of science\"[^98]. Of course, science is in a perpetual \"crisis\", because it is not a dogma, and is subject to revision. But what is not revisable is what I called the epistemology of the Enlightenment, and I have more than a suspicion that this epistemology is really what is being attacked by people who insist that there is a deep \"crisis in science\". It is interesting to note (but another article would be needed to develop that point) that skepticism with respect to science rests on two very different lines of thought: the first is based on traditional philosophical arguments going back to Berkeley, Hume or Kant. While some of these arguments are clever and interesting, the progress of science is such that these a priori skeptical arguments leave many people cold. Another, conceptually different, line of thought is to try to show that science itself has reached some kind of limit, or \"has to admit\" that one cannot go further. Quantum mechanics, Chaos, the Big Bang or G\u00f6del's theorem are usually cited as evidence for those claims. But this is basically pure confusion and misunderstanding, as I tried to show in this paper, at least for one of those examples. When all is said and done, science and reason are all we have. Outside of them, there is no hope.\n\n**APPENDIX 1.
The Kac ring model.**\n\nLet me analyse a simple model, due to Mark Kac ( p.99, see also Thompson ( p.23)), which nicely illustrates Boltzmann's solution to the problem of irreversibility, and shows how to avoid various misunderstandings and paradoxes.\n\nI shall describe a slightly modified version of the model and state the relevant results, referring to for the proofs (the quotations below come from ).\n\n\"On a circle we consider $n$ equidistant points\"; $m$ of the intervals between the points are marked and form a set called $S$. The complementary set (of $n-m$ intervals) will be called $\\bar S$.\n\n\"Each of the $n$ points is a site of a ball which can be either white $(w)$ or black $(b)$. During an elementary time interval each ball moves counterclockwise to the nearest site with the following proviso\".\n\nIf the ball crosses an interval in $S$, it changes color upon completing the move but if it crosses an interval in $\\bar S$, it performs the move without changing color.\n\n\"Suppose that we start with all white balls; the question is what happens after a large number of moves\". Below (after eq. 3), we shall also consider other initial conditions.\n\nLet us emphasize the analogy with mechanical laws. The balls are described by their positions and their (discrete) \"velocity\", namely their color. One of the simplifying features of the model is that the \"velocity\" does not affect the motion. The only reason I call it a \"velocity\" is that it changes when the ball collides with a fixed \"scatterer\", i.e. an interval in $S$. Scattering with fixed objects tends to be easier to analyse than collisions between particles. The \"equations of motion\" are given by the counterclockwise motion, plus the changing of colors (see eqs (5,6) below). These equations are obviously deterministic and reversible: if after a time $t$, we change the orientation of the motion from counterclockwise to clockwise, we return after $t$ steps to the original state[^99]. Moreover, the motion is strictly periodic: after $2n$ steps each interval has been crossed twice by each ball, hence they all come back to their original color. This is analogous to the Poincar\u00e9 cycles, with the provision that, here, the length of the cycle is the same for all configurations (there is no reason for this feature to hold in general mechanical systems). Moreover, it is easy to find special configurations which obviously do not tend to equilibrium: start with all white balls and let every other interval belong to $S$ (with $m=\\frac{n}{2}$). Then, after two steps, all balls are black, after four steps they are all white again, etc... The motion is periodic with period 4. 
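The rules above are simple enough to simulate directly. The sketch below (an illustration; the values of $n$, $m$ and the random choice of marked intervals are arbitrary) encodes white and black as $+1$ and $-1$, runs the deterministic dynamics, checks that reversing the motion undoes it exactly and that $2n$ steps restore any configuration, and tracks the "macroscopic" difference $(N_w - N_b)/n$, whose decay at the rate $(1 - 2m/n)^t$ is derived just below.

```python
import random

# Kac ring: n sites on a circle, each carrying a ball of colour +1 (white) or
# -1 (black); interval i lies between sites i and i+1 (mod n), and m of the n
# intervals are marked.  Each step, every ball moves one site counterclockwise
# and flips colour exactly when it crosses a marked interval.
random.seed(1)
n, m, T = 1000, 50, 20
marked = set(random.sample(range(n), m))

def step(balls, forward=True):
    new = [0] * n
    for i in range(n):
        flip = -1 if i in marked else 1
        if forward:                       # ball at site i crosses interval i to site i+1
            new[(i + 1) % n] = flip * balls[i]
        else:                             # reversed motion: ball at site i+1 returns to site i
            new[i] = flip * balls[(i + 1) % n]
    return new

balls = [1] * n                           # start with all white balls
for t in range(T + 1):
    if t % 5 == 0:                        # compare with the Boltzmann-style prediction
        print("t=%2d  (N_w-N_b)/n = %+.3f   (1-2m/n)^t = %.3f"
              % (t, sum(balls) / n, (1 - 2 * m / n) ** t))
    if t < T:
        balls = step(balls)

for t in range(T):                        # reversibility: undo the T steps exactly
    balls = step(balls, forward=False)
print("reversed to the initial state:", balls == [1] * n)

balls = [random.choice([1, -1]) for _ in range(n)]
start = balls[:]
for t in range(2 * n):                    # periodicity: 2n steps restore any configuration
    balls = step(balls)
print("period 2n:", balls == start)
```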
Turning to the solution, one can start by analysing the approach to equilibrium in this model \u00e0 la Boltzmann:\n\n*Analog of the Classical Solution of Boltzmann.* Let $N_w(t)(N_b(t))$ denote the total number of white (black) balls at time $t$ (i.e., after $t$ moves; $t$ being an integer) and $N_w(S;t)(N_b(S;t))$ the number of white (black) balls which are going to cross an interval in $S$ at time $t$.\n\n\"We have the immediate conservation relations: $$\\begin{aligned}\nN_w(t+1) &=& N_w(t) - N_w(S;t) + N_b (S;t) \\nonumber \\\\\nN_b (t+1) &=& N_b(t) - N_b (S;t) + N_w (S;t) \\label{A1}\n\\end{aligned}$$\n\nNow to follow Boltzmann, we introduce the assumption (\"Stosszahlansatz\" or \"hypothesis of molecular chaos\"[^100]) $$\\begin{aligned}\nN_w(S;t) &=& mn^{-1} N_w (t) \\nonumber \\\\\nN_b (S;t) &=& mn^{-1} N_b (t)\" \\label{A2}\n\\end{aligned}$$\n\nOf course, if we want to solve (1) in a simple way we have to make some assumption about $N_w(S;t), N_b (S;t)$. Otherwise, one has to write equations for $N_w(S;t), N_b (S;t)$ that will involve new variables and lead to a potentially infinite regress.\n\nThe intuitive justification for this assumption is that each ball is \"uncorrelated\" with the event \"the interval ahead of the ball belongs to $S$\", so we write $N_w (S;t)$ as equal to $N_w(t)$, the total number of white balls, times the density $\\frac{m}{n}$ of intervals in $S$. This assumption looks completely reasonable. However, upon reflection, it may lead to some puzzlement (just as the hypothesis of \"molecular chaos\" does): what does \"uncorrelated\" exactly mean? Why do we introduce a statistical assumption in a mechanical model? Fortunately here, these questions can be answered precisely and we shall answer them later by solving the model exactly. But let us return to the Boltzmannian story.\n\n\"One obtains $$N_w(t+1) - N_b (t+1) = (1-2mn^{-1})(N_w(t)-N_b(t))$$ Thus $$\\begin{aligned}\nn^{-1} [N_w (t) - N_b (t)] &=& (1-2mn^{-1})^t n^{-1}[N_w(0)-N_b(0)] \\nonumber \\\\\n&=& (1-2mn^{-1})^t \\label{A3}\n\\end{aligned}$$ and hence, if $2m < n$, $n^{-1}[N_w(t) - N_b(t)]$ tends to zero monotonically, i.e. the system approaches equipartition of the two colors.\n\n## Data\n\nWe used the National Institute of Environmental Health Science's Environmental Genome Project SNPs database\u00a0, which results from direct Sanger resequencing of environmental response genes in several populations. We considered all diallelic SNPs in 5.01 Mb of sequence from noncoding regions of 219 autosomal genes (supporting information). These data have been the subject of many publications, including\u00a0. As an assessment of quality, additional high-coverage short-read sequencing has recently been performed across 8 samples in this data set. Over 26,000 sites, the SNP concordance between this next-generation sequencing and the original Sanger sequencing averages 99.5% (D.\u00a0Nickerson, personal communication). Given the high quality of this data set, we do not incorporate sequencing error into our modeling. We believe such correction will be essential in future applications to less accurate short-read sequencing data, as inference based on the frequency spectrum is sensitive to rare alleles.\n\nTo estimate the ancestral allele, we aligned to the panTro2 build of the chimp genome\u00a0. Like other methods based on the unfolded AFS, our analysis is sensitive to errors in identifying the ancestral allele. We statistically corrected the AFS for ancestral misidentification\u00a0, using a context-dependent substitution model\u00a0. 
This procedure has been shown to perform better than aligning to multiple species\u00a0. To account for missing data and ease qualitative comparisons between populations, we projected all spectra down to 20 samples per population\u00a0 (supporting information).\n\nThe human-chimp divergence in the data is 1.13%. We assumed a divergence time of 6 My\u00a0 and a generation time of 25 years. This yielded an estimated neutral mutation rate of $\\mu = 2.35 \\times 10^{-8}$ per site per generation, which is comparable to direct estimates\u00a0. There is some controversy as to the appropriate generation time to assume in human population genetic studies\u00a0. In particular, the human generation time may differ between cultures and may have changed during our biological and cultural evolution. The bootstrap uncertainties reported in Tables\u00a0 and\u00a0 do not include systematic uncertainties in the human-chimp divergence or generation times. The generation time, however, formally cancels when converting between genetic and chronological times.\n\n## Nonsynonymous polymorphism\n\nIn our prediction of the distribution of nonsynonymous polymorphism, the distribution of selective effects assumed was a negative-gamma distribution with shape parameter $\\alpha = 0.184$ and scale $\\beta = 8200$\u00a0. The AFS was calculated by trapezoid-rule integration over this distribution, using 201 evaluations logarithmically spaced over $\\gamma = [-300, -10^{-6}]$. All demographic parameters, including the scaled mutation rate $\\theta$, were set to the maximum-likelihood values from our Out of Africa analysis.\n\n# Results\n\nFirst, we explored how various demographic forces affect the AFS, building intuition for our subsequent applications to real data. We then compared the performance of diffusion versus coalescent methods for evaluating the AFS, finding that the diffusion approach is substantially faster. We then applied our diffusion approach to infer parameters for plausible demographic models for the history of continental human populations. We first considered the expansion of humans out of Africa and then the settlement of the New World. In these applications, we inferred the maximum composite-likelihood parameters of our models using diffusion fits to the real data. To account for linkage in estimating variances and critical values for hypothesis tests, we then repeatedly fit both conventional and parametric bootstrap data sets. Finally, in an application incorporating selection, we predicted the distribution of nonsynonymous variation between populations in our Out of Africa model, finding good agreement with the available data.\n\n## Demographic effects on the AFS\n\nIn Fig\u00a0, we provide examples of the AFS under different demographic scenarios. Fig\u00a0B illustrates the isolation-with-migration model for which the spectra are calculated. The expected spectrum at zero divergence time is shown in Fig\u00a0C. Fig\u00a0D shows the expected spectrum at various divergence times under various demographic scenarios. Qualitatively, correlation between population allele frequencies declines with increasing divergence time, depopulating the diagonal of the AFS. On the other hand, migration prolongs and sustains correlation. Less obviously, AFS entries corresponding to shared low-frequency alleles distinguish between increased migration and reduced divergence time (supporting information). 
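These qualitative statements can be checked with a toy forward simulation (a sketch for intuition only, assuming numpy; it is not the diffusion machinery used in this work): after a split, drift erodes the across-locus correlation of allele frequencies between the two populations, while even modest migration sustains it.

```python
import numpy as np

def split_drift_corr(n_e=1000, t_div=200, mig=0.0, n_loci=5000, seed=1):
    """Toy Wright-Fisher illustration: after a population split, let allele
    frequencies drift in two demes of diploid size n_e for t_div generations,
    with symmetric per-generation migration rate mig, and return the
    across-locus correlation of the resulting frequencies."""
    rng = np.random.default_rng(seed)
    p1 = p2 = rng.uniform(0.05, 0.95, n_loci)        # shared frequencies at the split
    for _ in range(t_div):
        q1 = (1 - mig) * p1 + mig * p2               # migration mixes the demes...
        q2 = (1 - mig) * p2 + mig * p1
        p1 = rng.binomial(2 * n_e, q1) / (2 * n_e)   # ...then drift resamples them
        p2 = rng.binomial(2 * n_e, q2) / (2 * n_e)
    return np.corrcoef(p1, p2)[0, 1]

for t in (50, 200, 800):   # correlation decays with divergence time...
    print(t, round(split_drift_corr(t_div=t), 2),
          round(split_drift_corr(t_div=t, mig=0.01), 2))   # ...but migration sustains it
```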
Additionally, differences in genetic drift between populations with different effective sizes result in asymmetries in the AFS. These qualitative features of the AFS are also evident in human data; detailed modeling allows us to quantify our inference regarding the type, timing, and strength of demographic events that are consistent with the data.\n\n## Computational performance\n\nThe computer program implementing our method is named $\\partial$a$\\partial$i\u00a0(Diffusion Approximations for Demographic Inference). It is open-source and freely available at `http:\/\/dadi.googlecode.com`.\n\nFig\u00a0E compares $\\partial$a$\\partial$i\u00a0with a coalescent approach to evaluating the likelihood of frequency spectrum data. The coalescent simulator *ms*\u00a0 was used to generate a simulated data set from the model in Fig\u00a0B, with parameters $\\nu_1=0.9$, $\\nu_2 = 0.1$, $M=2$, $\\tau = 2$, $\\theta = 1000$, scaled total recombination rate $\\rho=1000$, and 20 samples per population. Coalescent-based estimates of the expected AFS were generated by averaging $10^5$ *ms* simulations, each run with $\\theta = 1$ and $\\rho = 0$. These estimates were scaled to $\\theta = 1000$ for comparison with the simulated data set. (This procedure is substantially faster than simulating with larger $\\theta$ and $\\rho$.) Each estimate took approximately 7.2 seconds of computation. The histogram in Fig\u00a0E shows the resulting distribution of estimated likelihoods of the data. Shown by the red line in Fig\u00a0E is the result from our diffusion approach (with grid sizes $G = \\{40,50,60\\}$), which took approximately 2.0 seconds of computation. The yellow line is the likelihood from $10^8$ coalescent simulations, illustrating the high accuracy of our diffusion approach. (Note that the coalescent approach we consider here is not necessarily optimal. We are, however, unaware of any such approach that is competitive in computational speed with the diffusion method.)\n\nThe computational advantage of the diffusion method is even larger when placed in the context of parameter optimization. Unlike the coalescent approach, there is no simulation variance, so efficient derivative-based optimization methods can be used. As examples, consider our applications to human data, which involve 20 samples per population. On a modern workstation, fitting a single-population three-parameter model took roughly a minute, while fitting a two-population six-parameter model took roughly 10 minutes. The fits of three-population models with roughly a dozen parameters typically took a few hours to converge from a reasonable initial parameter set. This speed allows us to use extensive bootstrapping to estimate variances, overcoming the limitations of composite likelihood.\n\n## Expansion out of Africa\n\nOur analysis of human expansion out of Africa used data from three HapMap populations: 12 Yoruba individuals from Ibadan, Nigeria (YRI); 22 CEPH Utah residents with ancestry from northern and western Europe (CEU); and 12 Han Chinese individuals sampled in Beijing, China (CHB). Because approaches based on the frequency spectrum are sensitive to miscalling of the ancestral state, we statistically corrected for ancestral misidentification using an approach that accounts for a myriad of mutation and context-dependent biases (such as CpG effects)\u00a0. To ease qualitative comparison among populations and account for missing data, we projected the data down to 20 sampled chromosomes per population\u00a0. 
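The projection step used above is a hypergeometric average over subsamples; a minimal one-dimensional sketch (assuming numpy and scipy, not the project's own code) is:

```python
import numpy as np
from scipy.special import comb

def project_afs_1d(fs, n_to):
    """Project a 1D frequency spectrum from n_from = len(fs) - 1 sampled
    chromosomes down to n_to chromosomes. fs[i] is the (expected) number of
    SNPs with i derived copies; the multi-population case applies the same
    average along each axis of the joint spectrum."""
    n_from = len(fs) - 1
    proj = np.zeros(n_to + 1)
    for i, count in enumerate(fs):
        for j in range(min(i, n_to) + 1):
            # P(j derived copies in a random subsample of n_to chromosomes,
            #   given i derived copies among the n_from sampled chromosomes)
            p = comb(i, j) * comb(n_from - i, n_to - j) / comb(n_from, n_to)
            proj[j] += count * p
    return proj

# e.g. project a 24-chromosome spectrum down to 20 chromosomes:
# fs20 = project_afs_1d(fs24, 20)
```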
Because this data set is of very high quality ($>$``{=html}99% concordance of sequenced SNPs with next-generation sequencing of the same individuals to high coverage; see Materials and Methods), we do not explicitly correct for sequencing errors here. We were left with 17,446 segregating diallelic single nucleotide polymorphisms (SNPs) from effectively 4.04 Mb of sequence. Fig\u00a0A shows the resulting AFS. For ease of visualization, the top row of Fig\u00a0C shows the two-population marginal spectra.\n\nThere are many possible three-population demographic models one could consider for these populations. To develop a parsimonious yet realistic model, we first considered the marginal AFS for each population and each pair of populations. Previous analyses found that the YRI spectrum is well-fit by a two-epoch model with ancient population growth\u00a0, and we found this as well (supporting information). Previous analyses of the CEU and CHB populations found that both populations went through bottlenecks\u00a0 concurrent with divergence\u00a0. Such models qualitatively fit the marginal CEU-CHB spectrum (supporting information).\n\n```latex\n\\begin{table*}\n\\caption{\n{\\bf Out of Africa inferred parameters}}\n\\begin{minipage}{\\textwidth}\n\\begin{tabular*}{\\hsize}{@{\\extracolsep{\\fill}}cccc}\n& & conventional & parametric bootstrap\\\\\n& maximum & bootstrap 95\\% & bias-corrected 95\\%\\\\\nparameter\\footnote{See Fig~\\ref{fig:YRIfit}B for model schematic. Growth rates $r$ and migration rates $m$ are per generation.} & likelihood & confidence interval & confidence interval\\\\\n\\hline\n$N_A$ & 7,300 & 4,400 -- 10,100 & 6,300 -- 9,200\\\\\n$N_{AF}$ & 12,300 & 11,500 -- 13,900 & 11,100 -- 13,100\\\\\n$N_B$ & 2,100 & 1,400 -- 2,900 & 1,700 -- 2,600\\\\\n$N_{EU0}$ & 1,000 & 500 -- 1,900 & 500 -- 1,500\\\\\n$r_{EU}$ (\\%) & 0.40 & 0.15 -- 0.66 & 0.26 -- 0.57\\\\\n$N_{AS0}$ & 510 & 310 -- 910 & 320 -- 750\\\\\n$r_{AS}$ (\\%) & 0.55 & 0.23 -- 0.88 & 0.32 -- 0.79\\\\\n$m_{AF-B}$ ($\\times 10^{-5}$) & 25 & 15 -- 34 & 19 -- 36\\\\\n$m_{AF-EU}$ ($\\times 10^{-5}$) & 3.0 & 2.0 -- 6.0 & 1.6 -- 7.6\\\\\n$m_{AF-AS}$ ($\\times 10^{-5}$) & 1.9 & 0.3 -- 10.4 & 0.7 -- 6.9\\footnote{One low-migration outlier was removed for each of these estimations.}\\\\\n$m_{EU-AS}$ ($\\times 10^{-5}$) & 9.6 & 2.3 -- 17.4$^b$ & 5.7 -- 20.2\\\\\n$T_{AF}$ (kya) & 220 & 100 -- 510 & 90 -- 410\\\\\n$T_{B}$ (kya) & 140 & 40 -- 270 & 60 -- 310\\\\\n$T_{EU-AS}$ (kya) & 21.2 & 17.2 -- 26.5 & 17.6 -- 23.9\\\\\n\\end{tabular*}\n\\end{minipage}\n\\label{tbl:YRIfit}\n\\end{table*}\n```\n\nCombining these demographic features yields the model illustrated in Fig\u00a0B. The maximum likelihood values for the 14 free parameters are reported in Table\u00a0. Qualitatively, the resulting model reproduces the observed spectra well, as seen in the second and third rows of Fig\u00a0C. (The correlation between adjacent residuals is due in part to our projection of the data down from a larger sample size (supporting information).) Allowing for asymmetric gene flow yielded very little improvement in fit, as did allowing for growth in the Eurasian ancestral population or allowing the CEU and CHB bottleneck and divergence times to differ (data not shown).\n\nOur composite likelihood function assumes that polymorphic sites are independent. Because it thus overestimates the number of effective independent data points, confidence intervals calculated directly from the composite likelihood function will be too liberal. 
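Concretely, the composite likelihood treats each entry of the AFS as an independent Poisson count whose mean is the corresponding entry of the model spectrum; a minimal sketch (assuming numpy and scipy, with uninformative fixed classes assumed to be masked out beforehand):

```python
import numpy as np
from scipy.special import gammaln

def poisson_composite_loglik(model_fs, data_fs):
    """Composite log-likelihood: sum over AFS entries of the Poisson
    log-probability of the observed count given the model expectation."""
    model = np.asarray(model_fs, float).ravel()
    data = np.asarray(data_fs, float).ravel()
    keep = model > 0                      # assume uninformative cells are masked out
    return float(np.sum(data[keep] * np.log(model[keep]) - model[keep]
                        - gammaln(data[keep] + 1)))
```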
To control for linkage, we performed both conventional and parametric bootstraps. Because our sequenced genes are typically well separated, they can be treated as independent, and our conventional bootstrap resampled from the 219 sequenced loci. For the parametric bootstrap, simulated data sets that incorporate linkage and the EGP's sequencing strategy were generated with *ms*\u00a0.\n\nTable\u00a0 reports parameter 95% confidence intervals from both the conventional and bias-corrected parametric bootstraps. The parametric bootstraps yield slightly smaller confidence intervals than the conventional bootstrap, suggesting that some variability in the data has not been accounted for by our simulations. This variability may involve small varied selective forces on the sequenced regions, or slight relatedness between sampled individuals. The parametric bootstrap results additionally show that our method possesses very little bias in parameter inference (supporting information).\n\nAs seen in Table\u00a0, the times for growth in the African ancestral population and divergence of the Eurasian ancestral population ($T_{AF}$ and $T_{B}$) have particularly wide confidence intervals, likely a consequence of the high inferred migration rate $m_{AF-B}$ between the African and Eurasian ancestral populations. $T_{AF}$ shows high correlation with the ancestral population size $N_A$, while $T_B$ shows no strong linear correlation with any other single parameter (supporting information). We found that 92 out of our 100 conventional bootstrap fits yield $N_{AS0} < N_{EU0}$, supporting the contention that the CHB population suffered a more severe bottleneck than the CEU population\u00a0.\n\nWe used several metrics to assess our model's goodness-of-fit, in addition to visual inspection of the residuals seen in Fig\u00a0C. Fig\u00a0D compares the decay of linkage disequilibrium (LD) in the data and in the parametric bootstrap simulations. The agreement seen is notable because our demographic inference used no LD information in building and fitting the model. This LD comparison thus serves as independent validation of both our model and bootstrap simulations. We also asked whether the likelihood $\\mathcal{L}$ found in the real data fit is atypical of fits to simulated data. Out of fits to 100 simulated data sets, 2 produced a smaller likelihood (worse fit) than the real data fit (Fig\u00a0E), yielding a p-value of $\\approx$0.02. One can craft examples in which a likelihood-based goodness-of-fit test fails to exclude very poor models\u00a0. Thus we also applied Pearson's $\\chi^2$ goodness-of-fit test, a more robust and standard method for data that is in Poisson-distributed bins, such as the AFS\u00a0. In our case, we must use our parametric bootstraps to assess the significance of the sum-of-squared-residuals test statistic $X^2$, because many entries in the AFS are small and because they are not strictly independent. Fig\u00a0E shows the bootstrap-derived empirical distribution of $X^2$. Two of the bootstraps yielded a larger $X^2$ (worse fit) than the real data fit, giving a p-value of $\\approx$0.02, identical to that from the likelihood-based test. (The two simulations that yield a higher $X^2$ than the real fit are not the same two that yield a lower $\\mathcal{L}$, suggesting that these tests are somewhat independent.) In some cases specific frequency classes of SNPs, such as rare alleles, may be of particular interest. 
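For reference, the statistic and its bootstrap-calibrated p-value can be written compactly (a sketch, not the project's code; the bootstrap values are assumed to come from refitting simulated data sets as described above):

```python
import numpy as np

def pearson_x2(model_fs, data_fs):
    """Sum-of-squared-residuals statistic X^2 = sum (obs - exp)^2 / exp,
    over AFS entries with nonzero model expectation."""
    model = np.asarray(model_fs, float).ravel()
    data = np.asarray(data_fs, float).ravel()
    keep = model > 0
    return float(np.sum((data[keep] - model[keep]) ** 2 / model[keep]))

def empirical_pvalue(x2_real, x2_bootstrap):
    """Fraction of bootstrap refits whose X^2 is at least as large (as bad)
    as that of the real-data fit; small values indicate a poor fit."""
    return float(np.mean(np.asarray(x2_bootstrap) >= x2_real))
```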
In the supporting information, we provide comparisons of the joint distribution of rare alleles seen in the data with that from our simulations. These comparisons indicate that our model also reproduces well this interesting region of the frequency spectrum. Finally, in Fig\u00a0 we compare the model and data using larger bins of SNPs specific to specific populations or segregating at high or low frequency. In all cases the model agrees within the uncertainty of the bootstrapped data. Taken together, these tests suggest that our model provides a reasonable, though not complete, explanation of the data, lending credence to our demographic estimates.\n\nThe inferred contemporary migration parameters ($m_{AF-EU}$, $m_{AF-AS}$ and $m_{EU-AS}$) are small, raising the question as to whether they are statistically distinguishable from zero. Figure\u00a0F shows that the improvement in fit to the real data upon adding contemporary migration to the model is much larger than would be expected if there were no such migration, implying that the contemporary migration we infer is highly statistically significant. Omitting ancient migration ($m_{AF-B}$) reduced fit quality even more, indicating that the data also demand substantial ancient migration.\n\n## Settling the New World\n\nTo study the settlement of the Americas, we used the previously considered 22 CEU and 12 CHB individuals plus an additional 22 individuals of Mexican descent sampled in Los Angeles (MXL). Data were processed as in our Out of Africa analysis, yielding 13,290 segregating SNPs from effectively 4.22 Mb of sequence. Fig\u00a0A shows the resulting AFS, while Fig\u00a0C shows the marginal spectra.\n\nA model in which the CEU and CHB diverge from an equilibrium population did not reproduce the AFS well (supporting information). Interestingly, a model allowing a prior size change in the ancestral population better fit the AFS but very poorly fit the observed LD decay (supporting information). Thus, reproducing the AFS does not guarantee reproduction of LD, at least given a historically unrealistic model. To develop a more realistic model, we endeavored to include the effects of Eurasian divergence from and migration with the African population. Computational limits precluded us from considering all 4 populations simultaneously, so we dropped the African population from the simulation upon MXL divergence (Fig\u00a0B).\n\n```latex\n\\begin{table*}\n\\caption{\n{\\bf Settlement of New World inferred parameters}}\n\\begin{minipage}{\\textwidth}\n\\begin{tabular*}{\\hsize}{@{\\extracolsep{\\fill}}cccc}\n& & conventional & parametric bootstrap\\\\\n& maximum & bootstrap 95\\% & bias-corrected 95\\%\\\\\nparameter\\footnote{See Fig~\\ref{fig:MXLfit}B for model schematic. Growth rates $r$ and migration rates $m$ are per generation. 
$f_{MX}$ is the average European admixture proportion of the Mexican-Americans sampled.} & likelihood & confidence interval & confidence interval\\\\\n\\hline\n$N_{EU0}$ & 1,500 & 700 -- 2,100 & 900 -- 2,200\\\\\n$r_{EU}$ (\\%) & 0.23 & 0.08 -- 0.45 & 0.16 -- 0.34\\\\\n$N_{AS0}$ & 590 & 320 -- 800 & 410 -- 790\\\\\n$r_{AS}$ (\\%) & 0.37 & 0.16 -- 0.60 & 0.24 -- 0.51\\\\\n$N_{MX0}$ & 800 & 160 -- 1,800 & 140 -- 1,600\\\\\n$r_{MX}$ (\\%) & 0.50 & 0.14 -- 1.17 & 0.41 -- 0.98\\\\\n$m_{EU-AS}$ ($\\times 10^{-5}$) & 13.5 & 7.5 -- 32.2 & 9.9 -- 20.8\\\\\n$T_{EU-AS}$ (kya) & 26.4 & 18.1 -- 43.1 & 21.7 -- 30.7\\\\\n$T_{Mx}$ (kya) & 21.6 & 16.3 -- 26.9 & 18.6 -- 24.7\\\\\n$f_{MX}$ (\\%) & 48 & 42 -- 60 & 41 -- 55\\\\\n\\end{tabular*}\n\\end{minipage}\n\\label{tbl:MXLfit}\n\\end{table*}\n```\n\nTable\u00a0 records the maximum-likelihood parameter values inferred for this model. Because this fit did not include African data, we could not reliably infer demographic parameters involving the African population. Thus, for this point estimate we fixed the Africa-related parameters $N_A$, $N_{AF}$, $N_B$, $m_{AF-B}$, $m_{AF-EU}$, $m_{AF-AS}$, $T_{AF}$ and $T_{B}$ to their maximum-likelihood values from Table\u00a0. Fig\u00a0C compares the model and data spectra. The residuals show little correlation, with the possible exception that the model may underestimate the number of high-frequency segregating alleles.\n\nParameter confidence intervals are reported in Table\u00a0. To account for our uncertainty in those parameters derived from the Out of Africa fit, for each conventional bootstrap fit we used a set of Africa-related parameters randomly chosen from the sets yielded by our Out of Africa conventional bootstrap. For the parametric bootstrap, we used the maximum-likelihood point estimates. Again, we see that the conventional bootstrap confidence intervals are comparable to, although slightly wider than, the parametric bootstrap intervals. Several parameters in this analysis have direct correspondence with our Out of Africa analysis. Of particular note, the confidence intervals for the CEU-CHB divergence time $T_{EU-AS}$ overlap.\n\nIn assessing goodness of fit, Fig\u00a0D shows that this model does indeed reproduce the observed pattern of LD decay. Unlike in our Out of Africa analysis, however, here the LD decay was used to choose the form of the model (although not its parameter values), so this is not a completely independent assessment of fit. Of our 100 parametric bootstrap fits, 13 yielded a worse likelihood than the real fit (Fig\u00a0E), for a p-value of $\\approx0.13$. Applying Pearson's $\\chi^2$ test, we find that 23 of 100 bootstrap fits yield a higher (worse) $X^2$ than the fit to the real data, for a p-value of $\\approx0.23$, similar to that of the likelihood analysis. Comparing distributions of rare alleles, our model typically reproduces the observed distribution well, although it may be somewhat overestimating the proportion of alleles that are rare or absent in the CHB population (supporting information). In sum, our model appears to be a reasonable explanation of this data, somewhat better than in our Out of Africa analysis.\n\nAn essential feature of the Mexican-American individuals considered here is that they are typically admixed from Native American and European ancestors. The $\\approx$``{=html}50% average European admixture proportion we inferred for the MXL population is consistent with previous estimates for Los Angeles Latinos\u00a0. 
We have no direct data from the Native American populations ancestral to MXL, but our model does account for their divergence from East Asia. A model neglecting this divergence (by setting $T_{MX}$ to zero) fit the data substantially worse and yielded an unrealistically high average European admixture proportion into MXL of 0.68.\n\nNot only are Mexican-American individuals admixed, their admixture proportions also vary, and this subtlety is not directly accounted for in our analysis. To assess its effect on our results, we first roughly estimated the ancestry proportion of each individual, using essentially a maximum-likelihood version\u00a0 of the algorithm used in *structure*\u00a0 (supporting information). (Methods based on \"admixture LD\", which identify breakpoints between regions of Native American and European ancestry, may be more powerful\u00a0. However, the strategy used by the EGP of sequencing widely spaced genes will resolve few of these breakpoints, limiting the applicability of these methods.) We then performed additional parametric bootstrap analyses, using simulations with a distribution of individual ancestry chosen to mimic that seen in the data and, to further test the method, with an extremely wide distribution. These simulations showed that variation in individual ancestry does not bias our parameter inferences (supporting information). Remarkably, it does not even change our statistical power. This is evidenced by the fact that these bootstrap simulations yielded confidence intervals identical to our original simulations without variation in ancestry proportion (supporting information). Nevertheless, future studies may profit by incorporating individual ancestry information\u00a0, perhaps inferred from admixture LD.\n\nFinally, our model allowed us to assess the role recurrent migration from Asia played in the settlement of the New World\u00a0. When we added CHB-MXL migration to our model, we found that the maximum likelihood migration rate was $1.7 \\times 10^{-5}$ per generation. As shown in Fig\u00a0F, the resulting improvement in likelihood is typical (p-value $\\approx$0.45) of fits including CHB-MXL migration to data simulated without it. Our data and analysis thus yielded no evidence of recurrent migration in the settlement of the New World. Note, however, that this simple test does not necessarily rule out more complex scenarios, in which migration may vary over time.\n\n## Nonsynonymous polymorphism\n\nPolymorphisms that change protein amino acid sequence are of medical interest because they are particularly likely to affect gene function\u00a0. Correspondingly, they are often subject to natural selection. Diffusion approaches are particularly useful for studying such nonsynonymous polymorphism, because they easily incorporate selection. Although the diffusion approximation assumes that sites are unlinked, nonsynonymous segregating sites are rare enough that this is often a reasonable approximation\u00a0.\n\nAs an illustration, we used our Out of Africa demographic model to predict the distribution of such variation between continental populations. To do so, we must specify a distribution for the selective effects of nonsynonymous mutations that enter the population. For this we adopted a negative gamma distribution whose parameters were recently inferred\u00a0. The resulting distribution of segregating variation is shown in Figure\u00a0A. (To ease comparison, we have assumed the same scaled mutation rate as in the neutral case of Fig\u00a0C.) 
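A minimal sketch of this DFE-averaging step (assuming numpy and scipy; `afs_at_gamma` is a hypothetical stand-in for a per-gamma expected spectrum from the diffusion solver, and the small tail mass of the gamma distribution beyond the integration range is simply dropped, following the grid described in Materials and Methods):

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import gamma

def dfe_averaged_afs(afs_at_gamma, alpha=0.184, beta=8200.0,
                     g_min=1e-6, g_max=300.0, n_grid=201):
    """Average per-gamma expected spectra over a gamma-distributed DFE of
    deleterious effects, on a logarithmically spaced |gamma| grid."""
    gammas = np.logspace(np.log10(g_min), np.log10(g_max), n_grid)
    weights = gamma.pdf(gammas, a=alpha, scale=beta)         # density of |gamma|
    spectra = np.array([afs_at_gamma(-g) for g in gammas])   # deleterious: gamma < 0
    # Trapezoid rule over the DFE; mass beyond g_max is neglected in this sketch.
    return trapezoid(spectra * weights[:, None], gammas, axis=0)
```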
As expected, selection sharply reduces the amount of segregating polymorphism. Figure\u00a0B shows the proportion of variants within various classes. Also as expected, selection shifts nonsynonymous variation toward lower frequencies, raising the proportion of singletons and lowering the proportion at frequency greater than 10%. Less obviously, it also reduces the proportion of variation that is shared between populations. In the neutral case, 43% of polymorphism is predicted to be present in more than one population, while in the selected case only 35% is. Thus genetic inferences from coding polymorphism may be less transferable between populations than might be expected from neutral patterns of allele sharing.\n\nIn the data considered here, there are about 400 nonsynonymous polymorphisms segregating in the three populations considered. This is too few for a detailed goodness-of-fit test of our predicted distribution. (Although see supporting information for a direct AFS comparison.) Nevertheless, we observe that our predictions shown in Figure\u00a0B all lie within the bootstrap 95% confidence intervals from the data.\n\n# Discussion\n\nOur diffusion approximation to the joint allele frequency spectrum is a powerful tool for population genetic inference. Although the diffusion approximation neglects linkage between sites, our method's computational efficiency allows us to use extensive bootstrap simulations to account for the effects of linkage. (Let us reiterate that linkage does not affect the expected site-frequency spectrum of neutral sites, so our diffusion-based approach is estimating the same AFS that coalescent simulations are estimating, but in a small fraction of the time). We applied our method to human expansion out of Africa and settlement of the New World, using public resequencing data from the Environmental Genome Project. The flexibility of the diffusion approach also allowed us to consider the distribution of non-neutral variation, which is difficult to address with other approaches. Although no model can capture in detail the complete history of any population, the models presented here help refine our understanding of human expansion across the globe.\n\nOur demographic results are broadly consistent with previous analyses of human populations. In particular, single-population analyses have also inferred African population growth and European and Asian bottlenecks\u00a0. Also, the migration rates we infer are similar to those inferred by Schaffner et al.\u00a0 but somewhat smaller than those of Cox et al.\u00a0. On the other hand, Keinan et al.\u00a0 inferred no significant migration between CEU and CHB. Finally, our estimate of a New World founding effective population size in the hundreds is compatible with other inferences\u00a0.\n\nPerhaps our most interesting demographic results are the inferred divergence times. Other studies\u00a0 have estimated divergence times between Europeans and East Asians similar to the $\\approx$23 kya we infer. Interestingly, archeological evidence places humans in Europe much earlier ($\\approx$40 kya)\u00a0. Our inferred divergence time of $\\approx$22 kya between East Asians and Mexican-Americans is somewhat older than the oldest well-accepted New World archeological evidence\u00a0. The divergence we infer may reflect the settlement of Beringia, rather than the expansion into the New World proper\u00a0. 
Finally, the divergence time of $\\approx$140 kya we infer between African and Eurasian populations is consistent with archeological evidence for modern humans in the Middle East $\\approx$100 kya\u00a0, but it is much older than other inferences of $\\approx$50 kya divergence from mitochondrial DNA\u00a0. This discrepancy may be explained by our inclusion of migration in the model. Migration preserves correlation between population allele frequencies, so an observed correlation across the genome can be explained by either recent divergence without migration or ancient divergence with migration. In fact, the African-Eurasian migration rate we infer of $\\approx$$25\\times 10^{-5}$ per generation is comparable to the $\\approx$$100\\times10^{-5}$ inferred from census records between modern continental Europe and Britain\u00a0.\n\nOne difficulty in interpreting our divergence times is that the sampled populations may not best represent those in which historically important divergences occurred. For example, the Yoruba are a West African population, so the divergence time we infer between Yoruba and Eurasian ancestral populations may correspond to divergence within Africa itself. Future studies of more populations\u00a0 will help alleviate this difficulty.\n\nAnother difficulty is that the genic loci we study here may not be ideal for demographic inference. Although we consider only noncoding sequence in fitting our historical model, selection on regulatory or linked coding sites may skew the AFS\u00a0. In fact, the EGP data have been shown to differ in some ways (e.g. Tajima's\u00a0$D$) from intergenic regions\u00a0. Nevertheless, we use the EGP data because it is currently the largest public resource of noncoding human genetic variation, and we fit a neutral model because disentangling the small expected effects of selection on these sites from demographic effects will require additional data. The rapidly declining cost of sequencing will give future studies access to many more loci that are likely to be less influenced by selection. Importantly, the computational burden of our method is independent of the amount of sequence used to construct the AFS. Additional loci will also increase power to discriminate between models and incorporate more detail.\n\nThe AFS encodes substantial demographic information. It has been shown, however, that an isolated population's AFS does not uniquely and unambiguously identify its demographic history\u00a0; we expect a similar result to hold for multiple interacting populations. Moreover, the AFS does not capture all the information in the data. As illustrated by the alternative New World models we considered, patterns of linkage disequilibrium encode additional information. Future studies may profit from coupling our efficient AFS simulation with methods that address other aspects of the data.\n\nWe have developed a powerful diffusion-based method for demographic inference from the joint allele frequency spectrum. We applied our method to human expansion out of Africa and the settlement of the New World, developing models of human history that refine our knowledge and raise intriguing questions. We also applied our method to predict the distribution of nonsynonymous variation across populations, and this prediction is consistent with the available data. 
Our methods and the models inferred from it offer a foundation for studying the history and evolution of both our own species and others.","meta":{"dup_signals":{"dup_doc_count":11,"dup_dump_count":3,"dup_details":{"curated_sources":1,"2015-18":1,"unknown":9}},"filename":"out\/0909.0925_extract_toArXiv.tex.md"},"subset":"arxiv"} +{"text":"abstract: The next-generation astronomy archives will cover most of the universe at fine resolution in many wavelengths. One of the first of these projects, the Sloan Digital Sky Survey (SDSS) will create a 5-wavelength catalog over 10,000 square degrees of the sky. The 200 million objects in the multi-terabyte database will have mostly numerical attributes, defining a space of 100+ dimensions. Points in this space have highly correlated distributions. The archive will enable astronomers to explore the data interactively. Data access will be aided by multidimensional spatial indices. The data will be partitioned in many ways. Small tag objects consisting of the most popular attributes speed up frequent searches. Splitting the data among multiple servers enables parallel, scalable I\/O. Hashing techniques allow efficient clustering and pairwise comparison algorithms. Randomly sampled subsets allow debugging otherwise large queries at the desktop. Central servers will operate a data pump that supports sweeping searches that touch most of the data.\nauthor: Alexander S. Szalay, Peter Kunszt, Anirudha Thakar; Jim Gray and Don Slutz\ntitle: The Sloan Digital Sky Survey$^1$ and its Archive\n\n# Introduction\n\nAstronomy is undergoing a major paradigm shift. Data gathering technology is riding Moore's law: data volumes are doubling quickly, and becoming more homogeneous. For the first time data acquisition and archival is being designed for online interactive analysis. Shortly, it will be much easier to download a detailed sky map or object class catalog, than wait several months to access a telescope that is often quite small. Several multi-wavelength projects are under way: SDSS, GALEX, 2MASS, GSC-2, POSS2, ROSAT, FIRST and DENIS, each surveying a large fraction of the sky. Together they will yield a Digital Sky, of interoperating multi-terabyte databases. In time, more catalogs will be added and linked to the existing ones. Query engines will become more sophisticated, providing a uniform interface to all these datasets. In this era, astronomers will have to be just as familiar with mining data as with observing on telescopes.\n\n# The Sloan Digital Sky Survey\n\nThe Sloan Digital Sky Survey (SDSS) will digitally map about half of the Northern sky in five spectral bands from ultraviolet to the near infrared. It is expected to detect over 200 million objects. Simultaneously, it will measure redshifts for the brightest million galaxies (see http:\/\/www.sdss.org\/). The SDSS is the successor to the Palomar Observatory Sky Survey (POSS), which provided a standard reference data set to all of astronomy for the last 40 years. Subsequent archives will augment the SDSS and will interoperate with it. The SDSS project thus consists of not only of building the hardware, and reducing and calibrating the data, but also includes software to classify, index, and archive the data so that many scientists can use it. The SDSS will revolutionize astronomy, increasing the amount of information available to researchers by several orders of magnitude. 
The SDSS archive will be large and complex: including textual information, derived parameters, multi-band images, spectra, and temporal data. The catalog will allow astronomers to study the evolution of the universe in great detail. It is intended to serve as the standard reference for the next several decades. After only a month of operation, SDSS found the two most distant known quasars. With more data, other exotic properties will be easy to mine from the datasets. The potential scientific impact of the survey is stunning. To realize this potential, data must be turned into knowledge. This is not easy - the information content of the survey will be larger than the entire text contained in the Library of Congress.\n\nThe SDSS is a collaboration between the University of Chicago, Princeton University, the Johns Hopkins University, the University of Washington, Fermi National Accelerator Laboratory, the Japanese Participation Group, the United States Naval Observatory, and the Institute for Advanced Study, Princeton, with additional funding provided by the Alfred P. Sloan Foundation, NSF and NASA. The SDSS project is a collaboration between scientists working in diverse areas of astronomy, physics and computer science.\n\nThe survey will be carried out with a suite of tools developed and built especially for this project - telescopes, cameras, fiber spectrographic systems, and computer software. SDSS constructed a dedicated 2.5-meter telescope at Apache Point, New Mexico, USA. The telescope has a large, flat focal plane that provides a 3-degree field of view. This design balances the areal coverage of the instrument against the detector's pixel resolution. The survey has two main components: a photometric survey, and a spectroscopic survey. The photometric survey is produced by drift scan imaging of 10,000 square degrees centered on the North Galactic Cap using five broadband filters that range from the ultraviolet to the infrared. The effective exposure is 55 sec. The photometric imaging uses an array of 30x2Kx2K Imaging CCDs, 22 Astrometric CCDs, and 2 Focus CCDs. Its 0.4 arcsec pixel size provides a full sampling of the sky. The data rate from the 120 million pixels of this camera is 8 Megabytes per second. The cameras can only be used under ideal conditions, but during the 5 years of the survey SDSS will collect more than 40 Terabytes of image data. The spectroscopic survey will target over a million objects chosen from the photometric survey in an attempt to produce a statistically uniform sample. The result of the spectroscopic survey will be a three-dimensional map of the galaxy distribution, in a volume several orders of magnitude larger than earlier maps. The primary targets will be galaxies, selected by a magnitude and surface brightness limit in the r band. This sample of 900,000 galaxies will be complemented with 100,000 very red galaxies, selected to include the brightest galaxies at the cores of clusters. An automated algorithm will select 100,000 quasar candidates for spectroscopic follow-up, creating the largest uniform quasar survey to date. Selected objects from other catalogs will also be targeted.\n\nThe spectroscopic observations will be done in overlapping 3$^\\circ$ circular tiles. The tile centers are determined by an optimization algorithm, which maximizes overlaps at areas of highest target density. The spectroscopic survey will utilize two multi-fiber medium resolution spectrographs, with a total of 640 optical fibers. 
Each fiber is 3 seconds of arc in diameter, that provide spectral coverage from 3900 - 9200 \u00c5. The system can measure 5000 galaxy spectra per night. The total number of galaxy spectra known to astronomers today is about 100,000 - only 20 nights of SDSS data! Whenever the Northern Galactic cap is not accessible, SDSS repeatedly images several areas in the Southern Galactic cap to study fainter objects and identify variable sources. SDSS has also been developing the software necessary to process and analyze the data. With construction of both hardware and software largely finished, the project has now entered a year of integration and testing. The survey itself will take about 5 years to complete.\n\n## The SDSS Archives\n\nThe SDSS will create four main data sets: a photometric catalog, a spectroscopic catalog, images, and spectra. The photometric catalog is expected to contain about 500 distinct attributes for each of one hundred million galaxies, one hundred million stars, and one million quasars. These include positions, fluxes, radial profiles, their errors, and information related to the observations. Each object will have an associated image cutout (\"atlas image\") for each of the five filters. The spectroscopic catalog will contain identified emission and absorption lines, and one-dimensional spectra for 1 million galaxies, 100,000 stars, and 100,000 quasars. Derived custom catalogs may be included, such as a photometric cluster catalog, or quasar absorption line catalog. In addition there will be a compressed 1TB Sky Map. These products add up to about 3TB.\n\nThe collaboration will release this data to the public after a period of thorough verification. This public archive is expected to remain the standard reference catalog for the next several decades. This long-lifetime presents design and legacy problems. The design of the SDSS archival system must allow the archive to grow beyond the actual completion of the survey. As the reference astronomical data set, each subsequent astronomical survey will want to cross-identify its objects with the SDSS catalog, requiring that the archive, or at least a part of it, be dynamic with a carefully defined schema and metadata.\n\nObservational data from the telescopes is shipped on tapes to Fermi National Laboratory (FNAL) where it is reduced and stored in the Operational Archive (OA), protected by a firewall, accessible only to personnel working on the data processing. Data in the operational archive is reduced and calibrated via method functions. Within two weeks the calibrated data is published to the Science Archive (SA). The Science Archive contains calibrated data organized for efficient science use. The SA provides a custom query engine that uses multidimensional indices. Given the amount of data, most queries will be I\/O limited, thus the SA design is based on a scalable architecture, ready to use large numbers of cheap commodity servers, running in parallel. Science archive data is replicated to Local Archives (LA) within another two weeks. The data gets into the public archives (MPA, PA) after approximately 1-2 years of science verification, and recalibration. A WWW server will provide public access.\n\nThe Science Archive and public archives employ a three-tiered architecture: the user interface, an intelligent query engine, and the data warehouse. This distributed approach provides maximum flexibility, while maintaining portability, by isolating hardware specific features. 
Both the Science Archive and the Operational Archive are built on top of Objectivity\/DB, a commercial OODBMS.\n\nQuerying these archives requires a parallel and distributed query system. We have implemented a prototype query system. Each query received from the User Interface is parsed into a Query Execution Tree (QET) that is then executed by the Query Engine. Each node of the QET is either a query or a set-operation node, and returns a bag of object-pointers upon execution. The multi-threaded Query Engine executes in parallel at all the nodes at a given level of the QET. Results from child nodes are passed up the tree as soon as they are generated. In the case of aggregation, sort, intersection and difference nodes, at least one of the child nodes must be complete before results can be sent further up the tree. In addition to speeding up the query processing, this data push strategy ensures that even in the case of a query that takes a very long time to complete, the user starts seeing results almost immediately, or at least as soon as the first selected object percolates up the tree (Thakar etal 1999).\n\n## Typical Queries\n\nThe astronomy community will be the primary SDSS user. They will need specialized services. At the simplest level these include the on-demand creation of (color) finding charts, with position information. These searches can be fairly complex queries on position, colors, and other parts of the attribute space. As astronomers learn more about the detailed properties of the stars and galaxies in the SDSS archive, we expect they will define more sophisticated classifications. Interesting objects with unique properties will be found in one area of the sky. They will want to generalize these properties, and search the entire sky for similar objects.\n\nA common query will be to distinguish between rare and typical objects. Other types of queries will be non-local, like \"find all the quasars brighter than r=22, which have a faint blue galaxy within 5 arcsec on the sky\". Yet another type of a query is a search for gravitational lenses: \"find objects within 10 arcsec of each other which have identical colors, but may have a different brightness\". This latter query is a typical high-dimensional query, since it involves a metric distance not only on the sky, but also in color space. Special operators are required to perform these queries efficiently. Preprocessing, like creating regions of attraction is not practical, given the number of objects, and that the sets of objects these operators work on are dynamically created by other predicates.\n\n# Data Organization\n\nGiven the huge data sets, the traditional Fortran access to flat files is not a feasible approach for SDSS. Rather non-procedural query languages, query optimizers, database execution engines, and database indexing schemes must replace traditional \"flat\" file processing. This \"database approach\" is mandated both by computer efficiency, and by the desire to give astronomers better analysis tools.\n\nThe data organization must support concurrent complex queries. Moreover, the organization must efficiently use processing, memory, and bandwidth. It must also support the addition of new data to the SDSS as a background task that does not disrupt online access.\n\nIt would be wonderful if we could use an off-the-shelf SQL, OR, or OO database system for our tasks, but we are not optimistic that this will work. As explained presently, we believe that SDSS requires novel spatial indices and novel operators. 
It also requires a dataflow architecture that executes queries concurrently using multiple disks and processors. As we understand it, current systems provide few of these features. But, it is quite possible that by the end of the survey, some commercial system will provide these features. We hope to work with DBMS vendors towards this end.\n\n## Spatial Data Structures\n\nThe large-scale astronomy data sets consist primarily of vectors of numeric data fields, maps, time-series sensor logs and images: the vast majority of the data is essentially geometric. The success of the archive depends on capturing the spatial nature of this large-scale scientific data.\n\nThe SDSS data has high dimensionality \u2013 each item has thousands of attributes. Categorizing objects involves defining complex domains (classifications) in this N-dimensional space, corresponding to decision surfaces.\n\nThe SDSS teams are investigating algorithms and data structures to quickly compute spatial relations, such as finding nearest neighbors, or other objects satisfying a given criterion within a metric distance. The answer set cardinality can be so large that intermediate files simply cannot be created. The only way to analyze such data sets is to pipeline the answers directly into analysis tools. This data flow analysis has worked well for parallel relational database systems (DeWitt 92). We expect these data river ideas will link the archive directly to the analysis and visualization tools.\n\nThe typical search of these multi-Terabyte archives evaluates a complex predicate in k-dimensional space, with the added difficulty that constraints are not necessarily parallel to the axes. This means that the traditional indexing techniques, well established with relational databases, will not work, since one cannot build an index on all conceivable linear combinations of attributes. On the other hand, one can use the fact that the data are geometric and every object is a point in this k-dimensional space (Samet 1990a,b). Data can be quantized into containers. Each container has objects of similar properties, e.g. colors, from the same region of the sky. If the containers are stored as clusters, data locality will be very high - if an object satisfies a query, it is likely that some of the object's \"friends\" will as well. There are non-trivial aspects of how to subdivide, when the data has large density contrasts (Csabai etal 96).\n\nThese containers represent a coarse-grained density map of the data. They define the base of an index tree that tells us whether containers are fully inside, outside or bisected by our query. Only the bisected container category is searched, as the other two are wholly accepted or rejected. A prediction of the output data volume and search time can be computed from the intersection.\n\nThe SDSS data is too large to fit on one disk or even one server. The base-data objects will be spatially partitioned among the servers. As new servers are added, the data will repartition. Some of the high-traffic data will be replicated among servers. It is up to the database software to manage this partitioning and replication. In the near term, designers will specify the partitioning and index schemes, but we hope that in the long term, the DBMS will automate this design task as access patterns change.\n\nThere is great interest in a common reference frame the sky that can be universally used by different astronomical databases. 
The need for such a system is indicated by the widespread use of the ancient constellations - the first spatial index of the celestial sphere. The existence of such an index, in a more computer friendly form will ease cross-referencing among catalogs. A common scheme, that provides a balanced partitioning for all catalogs, may seem to be impossible; but, there is an elegant solution, a 'shoe that fits all': that subdivides the sky in a hierarchical fashion. Our approach is described in detail by Kunszt etal (1999).\n\n## Broader Metadata Issues\n\nThere are several issues related to metadata for astronomy datasets. One is the database schema within the data warehouse, another is the description of the data extracted from the archive and the third is a standard representation to allow queries and data to be interchanged among several archives. The SDSS project uses Platinum Technology's Paradigm Plus, a commercially available UML tool, to develop and maintain the database schema. The schema is defined in a high level format, and a script generator creates the .h files for the C++ classes, and the .ddl files for Objectivity\/DB. This approach enables us to easily create new data model representations in the future (SQL, IDL, XML, etc).\n\nAbout 20 years ago, astronomers agreed on exchanging most of their data in self-descriptive data format. This format, FITS, standing for the Flexible Image Transport System (Wells 81) was primarily designed to handle images. Over the years, various extensions supported more complex data types, both in ASCII and binary form. FITS format is well supported by all astronomical software systems. The SDSS pipelines exchange most of their data as binary FITS files. Unfortunately, FITS files do not support streaming data, although data could be blocked into separate FITS packets. We are currently implementing both an ASCII and a binary FITS output stream, using such a blocked approach. We expect large archives to communicate with one another via a standard, easily parseable interchange format. We plan to define the interchange formats in XML, XSL, and XQL.\n\nThe Operational Archive exports calibrated data to the Science Archive as soon as possible. Datasets are sent in coherent chunks. A chunk consists of several segments of the sky that were scanned in a single night, with all the fields and all objects detected in the fields. Loading data into the Science Archive could take a long time if the data were not clustered properly. Efficiency is important, since about 20 GB will be arriving daily. The incoming data are organized by how the observations were taken. In the Science Archive they will be inserted into the hierarchy of containers as defined by the multi-dimensional spatial index, according to their colors and positions.\n\nData loading might bottleneck on creating the clustering units - databases and containers - that hold the objects. Our load design minimizes disk accesses, touching each clustering unit at most once during a load. The chunk data is first examined to construct an index. This determines where each object will be located and creates a list of databases and containers that are needed. Then data is inserted into the containers in a single pass over the data objects.\n\n## Scalable Server Architectures\n\nAccessing large data sets is primarily I\/O limited. Even with the best indexing schemes, some queries must scan the entire data set. 
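The container accept/reject/scan decision described under Spatial Data Structures can be sketched for box-shaped containers in attribute space (an illustration only; the real clustering units follow the multidimensional spatial index rather than simple boxes):

```python
def classify_container(c_lo, c_hi, q_lo, q_hi):
    """Classify a container with per-attribute bounds [c_lo, c_hi] against a
    rectangular range query [q_lo, q_hi]: 'outside' and 'inside' containers are
    rejected or accepted wholesale; only 'bisected' containers must be scanned."""
    if any(ch < ql or cl > qh for cl, ch, ql, qh in zip(c_lo, c_hi, q_lo, q_hi)):
        return "outside"        # disjoint from the query in at least one attribute
    if all(ql <= cl and ch <= qh for cl, ch, ql, qh in zip(c_lo, c_hi, q_lo, q_hi)):
        return "inside"         # wholly contained in the query; no scan needed
    return "bisected"           # partially overlapping; scan its objects

# e.g. a query on (g-r color, r magnitude):
# classify_container((0.1, 20.0), (0.3, 21.0), (0.2, 18.0), (0.8, 22.0))  -> 'bisected'
```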
Acceptable I\/O performance can be achieved with expensive, ultra-fast storage systems, or with many of commodity servers operating in parallel. We are exploring the use of commodity servers and storage to allow inexpensive interactive data analysis. We are still exploring what constitutes a balanced system design: the appropriate ratio between processor, memory, network bandwidth, and disk bandwidth.\n\nUsing the multi-dimensional indexing techniques described in the previous section, many queries will be able to select exactly the data they need after doing an index lookup. Such simple queries will just pipeline the data and images off of disk as quickly as the network can transport it to the astronomer's system for analysis or visualization. When the queries are more complex, it will be necessary to scan the entire dataset or to repartition it for categorization, clustering, and cross comparisons. Experience will teach us the ratio between processor power, memory size, IO bandwidth, and system-area-network bandwidth.\n\nOur simplest approach is to run a scan machine that continuously scans the dataset evaluating user-supplied predicates on each object (Acharya 95). Consider building an array of 20 nodes, each with 4 Intel Xeon 450 Mhz processors, 256MB of RAM, and 12x18GB disks (4TB of storage in all). Experiments show that one such node is capable of reading data at 150 MBps while using almost no processor time (Hartman 99). If the data is spread among the 20 nodes, they can scan the data at an aggregate rate of 3 GBps. This half-million dollar system could scan the complete (year 2004) SDSS catalog every 2 minutes. By then these machines should be 10x faster. This should give near-interactive response to most complex queries that involve single-object predicates.\n\nMany queries involve comparing, classifying or clustering objects. We expect to provide a second class of machine, called a hash machine that performs comparisons within data clusters. Hash machines redistribute a subset of the data among all the nodes of the cluster. Then each node processes each hash bucket at that node. This parallel-clustering approach has worked extremely well for relational databases in joining and aggregating data. We believe it will work equally well for scientific spatial data.\n\nThe hash phase scans the entire dataset, selects a subset of the objects based on some predicate, and \"hashes\" each object to the appropriate buckets - a single object may go to several buckets (to allow objects near the edges of a region to go to all the neighboring regions as well). In a second phase all the objects in a bucket are compared to one another. The output is a stream of objects with corresponding attributes.\n\nThese operations are analogous to relational hash-join, hence the name (DeWitt 92). Like hash joins, the hash machine can be highly parallel, processing the entire database in a few minutes. The application of the hash-machine to tasks like finding gravitational lenses or clustering by spectral type or by redshift-distance vector should be obvious: each bucket represents a neighborhood in these high-dimensional spaces. We envision a non-procedural programming interface to define the bucket partition and analysis functions.\n\nThe hash machine is a simple form of the more general data-flow programming model in which data flows from storage through various processing steps. Each step is amenable to partition parallelism. The underlying system manages the creation and processing of the flows. 
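The two hash-machine phases described above can be sketched as follows (a flat-sky toy that ignores spherical geometry and right-ascension wrap-around; the field names `ra`, `dec`, `color`, and `id` are assumed for illustration, and the color cut stands in for the lens-style similarity test):

```python
from collections import defaultdict
from itertools import combinations
from math import hypot

ARCSEC = 1.0 / 3600.0

def hash_phase(objects, cell=0.01, border=10 * ARCSEC):
    """Phase 1: assign each object to every cell its border-neighbourhood touches,
    so that any pair closer than `border` shares at least one bucket."""
    buckets = defaultdict(list)
    for obj in objects:
        ra, dec = obj["ra"], obj["dec"]
        for i in range(int((ra - border) // cell), int((ra + border) // cell) + 1):
            for j in range(int((dec - border) // cell), int((dec + border) // cell) + 1):
                buckets[(i, j)].append(obj)
    return buckets

def compare_phase(buckets, max_sep=10 * ARCSEC, max_color_diff=0.05):
    """Phase 2: compare objects within each bucket; emit close, similar-color pairs."""
    emitted = set()
    for bucket in buckets.values():
        for a, b in combinations(bucket, 2):
            key = tuple(sorted((a["id"], b["id"])))    # de-duplicate replicated pairs
            if key in emitted:
                continue
            if (hypot(a["ra"] - b["ra"], a["dec"] - b["dec"]) < max_sep
                    and abs(a["color"] - b["color"]) < max_color_diff):
                emitted.add(key)
                yield a, b
```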
This programming style has evolved both in the database community (DeWitt 92, Graefe 93, Barclay 95) and in the scientific programming community with PVM and MPI (Gropp 98). It has since evolved into a general programming model, as typified by a river system (Arpaci-Dusseau 99).\n\nWe propose to let astronomers construct dataflow graphs where the nodes consume one or more data streams, filter and combine the data, and then produce one or more result streams. The outputs of these rivers either go back to the database or to visualization programs. These dataflow graphs will be executed on a river machine similar to the scan and hash machines. The simplest river systems are sorting networks. Current systems have demonstrated that they can sort at about 100 MBps using commodity hardware and 5 GBps if using thousands of nodes and disks (Sort benchmark).\n\nWith time, each astronomy department will be able to afford local copies of these machines and the databases, but for now, they will be a network service. The scan machine will be interactively scheduled: when an astronomer has a query, it is added to the query mix immediately. All data that qualifies is sent back to the astronomer, and the query completes within the scan time. The hash and river machines will be batch scheduled.\n\n## Desktop Data Analysis\n\nMost astronomers will not be interested in all of the hundreds of attributes of each object. Indeed, most will be interested in only 10% of the entire dataset - but different communities and individuals will be interested in a different 10%. We plan to isolate the 10 most popular attributes (3 Cartesian positions on the sky, 5 colors, 1 size, 1 classification parameter) into small 'tag' objects, which point to the rest of the attributes. Then we will build a spatial index on these attributes. These will occupy much less space, and thus can be searched more than 10 times faster, if no other attributes are involved in the query.\n\nLarge disks are available today, and within a few years 100GB disks will be common. This means that all astronomers can have a vertical partition of the 10% of the SDSS on their desktops. This will be convenient for targeted searches and for developing algorithms. But full searches will still be much faster on the server machines because the servers will have much more IO bandwidth and processing power. Vertical partitioning can also be applied by the scan, hash, and river machines to reduce data movement and to allow faster scans of popular subsets. We also plan to offer a 1% sample (about 10 GB) of the whole database that can be used to quickly test and debug programs. Combining partitioning and sampling converts a 2 TB data set into 2 gigabytes, which can fit comfortably on desktop workstations for program development.\n\nIt is obvious that, with multi-terabyte databases, not even the intermediate data sets can be stored locally. The only way this data can be analyzed is for the analysis software to directly communicate with the Data Warehouse, implemented on a server cluster, as discussed above. Such an Analysis Engine can then process the bulk of the raw data extracted from the archive, and the user needs only to receive a drastically reduced result set.\n\nGiven all these efforts to make the server parallel and distributed, it would be inefficient to ignore IO or network bottlenecks at the analysis level. 
Thus it is obvious that we need to think of the analysis engine as part of the distributed, scalable computing environment, closely integrated with the database server itself. Even the division of functions between the server and the analysis engine will become fuzzy - the analysis is just part of the river-flow described earlier. The pool of available CPU's will be allocated to each task.\n\nThe analysis software itself must be able to run in parallel. Since it is expected that scientists with relatively little experience in distributed and parallel programming will work in this environment, we need to create a carefully crafted application development environment, to aid the construction of customized analysis engines. Data extraction needs to be considered also carefully. If our server is distributed and the analysis is on a distributed system, the extracted data should also go directly from one of the servers to one of the many Analysis Engines. Such an approach will also distribute the network load better.\n\n# Summary\n\nAstronomy is about to be revolutionized by having a detailed atlas of the sky available to all astronomers. With the SDSS archive it will be easy for astronomers to pose complex queries to the catalog and get answers within seconds, and within minutes if the query requires a complete search of the database. The SDSS datasets pose interesting challenges for automatically placing and managing the data, for executing complex queries against a high-dimensional data space, and for supporting complex user-defined distance and classification metrics. The SDSS project is \"riding Moore's law\": the data set we started to collect today - at a linear rate - will be much more manageable tomorrow, with the exponential growth of CPU speed and storage capacity. The scalable archive design presented here will be able to adapt to such changes.\n\nWe would like to acknowledge support from the Astrophysical Research Consortium, the HSF, NASA and Intel's Technology for Education 2000 program, in particular George Bourianoff (Intel).","meta":{"dup_signals":{"dup_doc_count":65,"dup_dump_count":51,"dup_details":{"curated_sources":5,"2022-27":2,"2021-10":1,"2019-09":1,"2018-30":1,"2018-22":1,"2018-13":1,"2018-09":1,"2017-51":1,"2017-47":1,"2017-43":1,"2017-39":1,"2017-34":1,"2017-30":1,"2017-26":1,"2017-22":1,"2017-17":1,"2017-09":1,"2017-04":1,"2016-50":1,"2016-44":1,"2016-40":1,"2016-36":1,"2016-30":1,"2016-26":1,"2016-22":1,"2016-18":1,"2016-07":1,"2015-48":1,"2015-40":1,"2015-35":1,"2015-32":1,"2015-27":1,"2015-22":1,"2015-14":1,"2014-52":1,"2014-49":2,"2014-42":4,"2014-41":2,"2014-35":2,"2014-23":2,"2014-15":2,"2023-14":1,"2017-13":1,"2015-18":1,"2015-11":1,"2015-06":1,"2014-10":1,"2013-48":1,"2013-20":2,"2024-18":1}},"filename":"out\/astro-ph9912382_extract_O1-02.tex.md"},"subset":"arxiv"} +{"text":"abstract: To truly eliminate Cartesian ghosts from the science of consciousness, we must describe consciousness as an aspect of the physical. Integrated Information Theory states that consciousness arises from intrinsic information generated by dynamical systems; however existing formulations of this theory are not applicable to standard models of fundamental physical entities. Modern physics has shown that fields are fundamental entities, and in particular that the electromagnetic field is fundamental. Here I hypothesize that consciousness arises from information intrinsic to fundamental fields. 
This hypothesis unites fundamental physics with what we know empirically about the neuroscience underlying consciousness, and it bypasses the need to consider quantum effects.\nauthor: Adam B.\u00a0Barrett[^1] \n \n*Sackler Centre for Consciousness Science* and *Department of Informatics* \nUniversity of Sussex, Brighton BN1 9QJ, UK\ndate: \\[Published Feb.\u00a04, 2014 in the *Consciousness Research* specialty section of *Frontiers in Psychology*, article no.\u00a05(63).\\]\ntitle: An Integration of Integrated Information Theory with Fundamental Physics\n\n# Introduction\n\nThe key question in consciousness science is: \"Given that consciousness (i.e., subjective experience) exists, what are the physical and biological mechanisms underlying the generation of consciousness?\". From a basic property of our phenomenology, namely that conscious experiences are integrated representations of large amounts of information, Integrated Information Theory (IIT) hypothesizes that, at the most fundamental level of description, consciousness is integrated information, defined as information generated by a whole system, over and above its parts (Tononi, 2008). Further, given the private, non-externally observable nature of consciousness, IIT considers consciousness to be an intrinsic property of matter, as fundamental as mass, charge or energy. Thus, more precisely, IIT posits that consciousness is intrinsic integrated information, where by intrinsic information it is meant that which is independent of the frame of reference imposed by outside observers of the system. The quantity of consciousness generated by a system is the amount of intrinsic integrated information generated (Balduzzi and Tononi, 2008), whilst the qualities of that consciousness arise from the precise nature of informational relationships between the parts of the system (Balduzzi and Tononi, 2009).\n\nIIT has garnered substantial attention amongst consciousness researchers. However, it has been criticized for its proposed measures of integrated information not successfully being based on an intrinsic perspective (Gamez, 2011; Beaton and Aleksander, 2012; Searle, 2013). The proposed \"$\\Phi$\" measures are applicable only to networks of discrete nodes, and thus for a complex system depend on the observer choosing a particular graining. More broadly, information can only be intrinsic to fundamental physical entities, and descriptions of information in systems modeled at a non-fundamental level necessarily rely on an extrinsic observer's choice of level (Floridi, 2009, 2010; Gamez, 2011). Here I propose a potential solution to this problem, what might be called the field integrated information hypothesis (FIIH). Modern theoretical physics describes the universe as being fundamentally composed of continuous fields. Electrical signals are the predominant substrate of information processing in brains, and the electromagnetic field that these produce is considered fundamental in physics, i.e., it is not a composite of other fields. Thus, I hypothesize that consciousness arises from information intrinsic to fundamental fields, and propose that, to move IIT forward, what is needed is a measure of intrinsic information applicable to the configuration of a continuous field.\n\nThe remainder of this article is laid out as follows. First I discuss the concept of fundamental fields in physics, and how if one takes the view that consciousness is an intrinsic property of matter, then it must be a property arising from configurations of fields. 
In the following section, I discuss the hypothesis that consciousness arises from integrated information intrinsic to fundamental fields, the shortcomings of existing approaches to integrated information, and the possibility of constructing a measure that can successfully measure this quantity for field configurations. I then explain how IIT and the FIIH imply a limited form of panpsychism, and why this should not be considered a problem, before contrasting the FIIH with previously proposed field theories of consciousness, such as that of Pockett (2000). Finally, the summary includes some justification for this theoretical approach to consciousness.\n\n# Fundamental fields and consciousness\n\n| | **Mass (GeV\/$c^2$)** | **Electric charge** | **Strong charge** | **Weak charge** |\n|:---|:---|:---|:---|:---|\n| **LEPTONIC MATTER** | | | | |\n| electron neutrino ($\\nu_e$) | $<1.3\\times10^{-10}$ | 0 | No | Yes |\n| electron (e) | 0.0005 | -1 | No | Yes |\n| muon neutrino ($\\nu_\\mu$) | $<1.3\\times10^{-10}$ | 0 | No | Yes |\n| muon ($\\mu$) | 0.106 | -1 | No | Yes |\n| tau neutrino ($\\nu_\\tau$) | $<1.4\\times10^{-10}$ | 0 | No | Yes |\n| tau ($\\tau$) | 1.78 | -1 | No | Yes |\n| **QUARK MATTER** | | | | |\n| up (u) | 0.002 | 2\/3 | Yes | Yes |\n| down (d) | 0.005 | -1\/3 | Yes | Yes |\n| charm (c) | 1.3 | 2\/3 | Yes | Yes |\n| strange (s) | 0.1 | -1\/3 | Yes | Yes |\n| top (t) | 173 | 2\/3 | Yes | Yes |\n| bottom (b) | 4.2 | -1\/3 | Yes | Yes |\n| **BOSONS** | | | | |\n| **Electromagnetic force:** | | | | |\n| photon ($\\gamma$) | 0 | 0 | No | No |\n| **Strong force:** | | | | |\n| gluon (g) | 0 | 0 | Yes | No |\n| **Weak force:** | | | | |\n| $W^-$ | 80 | -1 | No | No |\n| $W^+$ | 80 | 1 | No | No |\n| Z | 91 | 0 | No | No |\n| **Gravity:** | | | | |\n| graviton$^*$ | 0 | 0 | No | No |\n| **Higgs mechanism:** | | | | |\n| Higgs (H) | 126 | 0 | No | Yes |\n| | | | | |\n\nTable of the fields\/particles that are considered fundamental. Familiar matter arises from leptons and quarks, while the forces of nature arise from interactions of matter with \"carrier\" bosons. Mass is given in giga electron volts per speed of light squared (Gev\/$c^2\\approx 2\\times10^{-27}$kg). Electric charge is in standard units relative to minus the charge of the electron, i.e., one unit equals $1.6\\times10^{-19}$ Coulombs. A description of the group theoretic strong and weak charges is beyond the scope of this article, but the table shows which fields have strong and weak charges. \\*The gravity field is considered fundamental and is well-studied, but the gravity particle (graviton) has not to date explicitly been observed; at quantum (i.e., very microscopic) spatial scales, a consistent set of field equations for gravity have yet to be constructed.\n\nContemporary physics postulates that \"fields\" are the fundamental physical ingredients of the universe, with the more familiar quantum particles arising as the result of microscopic fluctuations propagating across fields, see e.g., Oerter (2006) for a lay person's account, or Coughlan et al.\u00a0(2006) for an introduction for scientists. In theoretical terms, a field is an abstract mathematical entity, which assigns a mathematical object (e.g., scalar, vector) to every point in space and time. (Formally a field is a mapping $F$ from the set $S$ of points in spacetime to a scalar or vector field $X$, $F: S \\to X$.) So, in the simplest case, the field has a number associated with it at all points in space. 
At a very microscopic scale, ripples, i.e., small perturbations, move through this field of numbers, and obey the laws of quantum mechanics. These ripples correspond to the particles that we are composed of, and there is precisely one fundamental field for each species of fundamental particle. At the more macroscopic level, gradients in field values across space give rise to forces acting on particles. The Earth's gravitational field, or the electromagnetic field around a statically charged object, are examples of this, and the classical (as opposed to quantum) description is a good approximation at this spatial scale. However, both levels of description can be considered equally fundamental if the field is fundamental, i.e., not some combination of other simpler fields. Note that the electromagnetic and gravitational fields are both examples of fundamental fields, with the corresponding fundamental particles being the photon and the graviton. Particles are divided up into matter particles and force-carrying particles, but all types of particle have associated fields; all the forces of nature can be described by field theories which model interactions, i.e., exchanges of energy, between fields. See Table 1 for a list of fields\/particles that are considered fundamental according to this so-called \"Standard Model\" of particle physics.\n\nTo be consistent with modern theoretical physics, a theory of consciousness that considers consciousness to be a fundamental attribute of matter must describe how consciousness manifests itself in the behavior of either fundamental fields or quantum particles. Since we know that the brain generates electric fields with a rich spatiotemporal structure, and that, for the main part, information processing in the brain is carried out by electrical signaling between neurons operating mostly in the classical (as opposed to quantum) regime (Koch and Hepp, 2006), empirical evidence favors the former. Thus, on the view that consciousness is a fundamental attribute of matter, it must be the structure and\/or dynamics of the electromagnetic field (which is an example of a fundamental field) that is fundamentally the generator of brain-based consciousness.\n\nOnce one ascribes electromagnetic fields with the potential to generate consciousness, it is natural to ask whether other fields might also have the potential to generate consciousness. According to modern physics, there was a symmetry between all fields at the origin of the universe, although these symmetries were broken as the universe began to cool (Georgi and Glashow, 1974; see Hawking, 2011 for a lay-person's account). It could be argued by Occam's razor that it makes more sense to posit that potential for consciousness existed at the outset, and hence potential for consciousness is a property of all fields, than that it emerged only during symmetry breaking. However, in practice, it is unlikely that any complex consciousness could exist in any field other than the electromagnetic field, for reasons to do with the physics and chemistry of the electromagnetic field compared with other fields. 
Considering the four forces: strong, weak, electromagnetic and gravitational, the strong and weak forces don't propagate over distances much larger than the width of the nucleus of an atom, and gravity alone cannot generate complex structures by virtue of being solely attractive; in contrast, the electromagnetic field can propagate over macroscopic scales, is both repulsive and attractive, and is fundamentally what enables non-trivial chemistry and biology. Considering fields associated with matter, these in general do not have any undulations at spatial scales larger than the quantum scale; the non-trivial structures in these fields are essentially just the ripples associated with the familiar quantum matter particles, i.e., electrons and quarks, and various \"exotic\" particles detectable in particle physics experiments (see Table 1). Finally, the recently discovered Higgs field has essentially a uniform structure; quantum interactions exist between the Higgs field and many of the other fields, and this is fundamentally the origin of mass in the universe (see e.g., Coughlan et al., 2006; Oerter, 2006). Thus, the physics of the electromagnetic field uniquely lends itself to the generation of complex structures.\n\n# The Field Integrated Information Hypothesis\n\nGiven the above, I propose that the principal conceptual postulates of IIT should be restated as follows. Consciousness arises from information intrinsic to the configuration of a fundamental field. The amount of consciousness generated by a patch of field is the amount of integrated information intrinsic to it. When a patch of field generates a large quantity of intrinsic integrated information, mathematically there is a high-dimensional informational structure associated with it (Tononi, 2008; Balduzzi and Tononi, 2009). The geometrical and topological details of this structure determine the contents of consciousness. The task now is to correctly mathematically characterize intrinsic integrated information, and construct equations to measure it.\n\nA true measure of intrinsic integrated information must be frame invariant, just like any fundamental quantity in physics. That is, it must be independent of the point of view of the observer: independent of the units used to quantify distance or time, independent of which direction is up, and independent of the position of the origin of the coordinate system; and also independent of the scale used for quantifying charge, or field strength.\n\nThe \"$\\Phi$\" measures put forth by existing formulations of IIT (Balduzzi and Tononi, 2008; Barrett and Seth, 2011) are not applicable to fields because they require a system with discrete elements, and fields are continuous in space. One could ask, however, whether a perspective on a system in terms of discrete elements could actually be equivalent to an intrinsic field-based perspective, thus obviating the need for a field-based measure. To see explicitly that this is not the case, let us revisit the photodiode, which, according to the existing theory (Tononi, 2008), has 1 bit of intrinsic information by virtue of having two states, on or off. There is a wire inside the photodiode, and the electrons inside the wire are all individually fluctuating amongst many different states. The electromagnetic field generated by the diode, and the circuit to which it is connected has two stable configurations for as long as the circuit is connected. But other more general configurations for an electromagnetic field are ruled out by each of these states. 
Considering the system at this level of description yields a distinct perspective, and would lead one to deduce that the amount of information generated by the system's states is some quantity other than 1 bit. Thus the field-based perspective is not equivalent to the observer-dependent discrete perspective.\n\nThe idea here is that a formula should be obtained that could in theory be applied universally to explore the intrinsic information in any patch of spacetime, without requiring an observer to do any modeling, i.e., one would just measure field values in as fine a graining as possible to get the best possible approximations to the intrinsic informational structure. Only a formula in continuous space and time would allow this. If a discrete formula were to be applied, there would always be the possibility of encountering an informational structure on a finer scale than that of the formula. (Unless the graining required by the formula were the Planck scale, i.e., the scale of the hypothesized superstring, on which continuous models of physics break down; however there do not exist complex structures at that scale.) In practice however, observations of systems are necessarily discrete, so discrete approximations to a continuous formula could be useful for empirical application. See Balduzzi (2012) for some recent work on the information-theoretic structure of distributed measurements.\n\nWe don't yet know how to properly calculate intrinsic information, so must remain agnostic on the precise amount of intrinsic integrated information generated by photodiodes, or of anything. However, the failure of existing approaches does not rule out the construction in the future of a successful formula. While it is beyond the scope of this present paper to make a serious attempt at solving this problem, I speculate that a formula in terms of thermodynamic entropy as opposed to Shannon entropy might be more likely to succeed, as the former is inherently an intrinsic property, whereas the latter was constructed for the purpose of describing an external observer's knowledge of a system (Floridi, 2009, 2010; Gamez, 2011; Beaton and Aleksander, 2012).\n\n# Integrated Information Theory and panpsychism\n\nSearle (2013) criticizes IIT for its stance that integrated information always produces consciousness, stating that this ludicrously ascribes consciousness to all kinds of everyday objects and would mean that consciousness is \"spread thinly like a jam across the universe\". Koch and Tononi (2013) counter that only \"local maxima\" of integrated information exist (over spatial and temporal scales): \"my consciousness, your consciousness, but nothing in between\". If local maxima of intrinsic integrated information in field configurations always generate consciousness, then there must be minute amounts, say \"germs\", of consciousness all over the universe, even though there would be no superordinate consciousness amongst groups of people. Thus, IIT and the FIIH do imply a form of panpsychism. However, the phenomenology assigned to an isolated electron in a vacuum, or even a tree, which has no complex electromagnetic field, would be very minimal. Since the only consciousness we can be certain of is our own, the positing by integrated information theories of germs of consciousness everywhere is no reason to dismiss them. 
A theory should stand or fall on whether or not it can elegantly and empirically describe human consciousness.\n\nFor those uncomfortable with subscribing to a panpsychist theory, a possible way round the problem is to assign an attribute \"potential consciousness\" to matter at the most fundamental level. Then, the quantity of potential consciousness is simply the quantity of integrated intrinsic information. But only when there is a large amount of intrinsic integrated information with a sufficiently rich structure to be worthy of being compared to a typical healthy adult human waking conscious moment, should we say that the integrated information has \"actual consciousness\" associated with it. A line could thus be drawn somewhere between the potential consciousness of an isolated electron in a vacuum and the actual consciousness generated by my brain as I write this article. The problem with such a distinction however is that potential consciousness would still be assigned phenomenal content, so it is perhaps more elegant to just use a single term \"consciousness\" for the whole spectrum of integrated information. On the other hand, since consciousness is defined by some as any mental content, but by others as only self-reflective mental content, there is no single terminology that appeals to everybody. The key point, irrespective of the precise definition of consciousness, is that on the theory discussed here, intrinsic integrated information is what underlies subjective experience at the most fundamental level of description. Alternatively, one could further imagine different lines being drawn for different purposes. For example, a threshold of conscious awareness above which surgery cannot be performed; or thresholds at which various people are comfortable eating animals.\n\n# Relation to previous electromagnetic field theories of consciousness\n\nThere have been several other theories of consciousness put forward that identify consciousness with various types or configurations of fields, see Pockett (2013) for a review. Notably, Pockett's electromagnetic field theory (EMT) of consciousness (Pockett, 2000, 2011, 2012) posits that \"conscious perceptions (and sensations, inasmuch as they can be said to have independent existence) are identical with certain spatiotemporal electromagnetic patterns generated by the normal functioning of waking mammalian brains\" (Pockett, 2013). In the most recent formulation of this theory, the key feature of field patterns underlying consciousness is the presence of a neutral region in the middle of a radial pattern. This hypothesis was motivated by the observation that such field patterns appear during recurrent cortical activity, (with the neutral region in layer 4), and the empirical association of consciousness with recurrent processing (Pockett, 2012).\n\nA problem common to previous field theories of consciousness (Libet, 1994; Pockett, 2000, 2013; McFadden, 2002) is that they claim that cutting outgoing neural connections from a slab of cortex that generates a conscious experience will not affect the ability to report that conscious experience. EMT argues that the electromagnetic field within such an isolated hypothetical slab would still propagate through space and enable communication between the conscious field generated by the slab and the spatially contiguous larger conscious mental field. This is not however compatible with the laws of physics. 
Any cutting of synapses to or from regions of cortex that are generating consciousness will alter the field, and will therefore alter the conscious experience. There is no electromagnetic field residing in the brain other than that generated specifically by all of the neural and chemical activity. And it does not make sense to talk of the brain's electromagnetic field and its firing neurons and synapses as being able to exist independently of each other. On the theory put forward here, neurons can be considered the scaffolding that enable very complex electromagnetic field configurations to be sustained. As far as describing the mechanisms of perception and cognition that generate the specific contents of consciousness in any given scenario, the current paradigm of associating it with neural activity is of course the only valid and useful level of description. However, in terms of explaining more fundamentally how matter gives rise to consciousness, a description in terms of fields would be much more elegant than a description in terms of the complex entities that are neurons.\n\nAnother shortcoming of previous field theories of consciousness is that none of them relate physical properties of proposed correlates of consciousness to properties of phenomenology, i.e., they do not posit \"explanatory correlates of consciousness\" (Seth, 2009). The FIIH raises for the first time the possibility of constructing a field theory of consciousness that can account for a fundamental aspect of phenomenology, namely that conscious experiences are integrated representations of large amounts of information.\n\n# Discussion\n\nIn this paper I have hypothesized that, at the most fundamental level of description, human consciousness arises from information intrinsic to the complex electromagnetic fields generated by the brain. This \"FIIH\" builds on the axioms of IIT, namely that consciousness is integrated information, and that consciousness is an intrinsic and fundamental property of matter analogous to mass or charge. However, it also implies that a new mathematical formalism is required to properly quantify intrinsic integrated information, since electromagnetic fields are continuous in space, and existing \"$\\Phi$\"-type measures of integrated information are applicable only to discrete systems (which require an observer dependent perspective). The idea that consciousness can be identified with certain spatiotemporal electromagnetic patterns has been previously put forward in other electromagnetic field theories of consciousness. But by suggesting that integrated information is the key factor, the theory here connects, for the first time, such electromagnetic field theories of consciousness to basic aspects of phenomenology.\n\nThe hypothesis is admittedly rather speculative, and any proposed mathematical formula for conscious level in terms of information intrinsic to an electromagnetic field will be difficult to test directly, simply because we do not have the technological tools or the computational resources to record in full detail the three-dimensional electromagnetic field structure generated by the brain. Rather, this can only be sampled at a spatial scale that is sparse compared to the finest scale of its undulations. However, there is a strong case to be made that the theoretical development of the ideas presented here has substantial value. Theories in physics have been vigorously pursued for their logic and beauty, in the absence of imminent direct experimental tests. 
For example, there is a vast amount of work being conducted on string theory; there, rather than experimental verification, the goal is an elegant explanation of our existing empirical knowledge of particle physics and gravity. If there already existed several analogous theories of consciousness, then one could argue that it would not be useful to add to the speculation. However, there is as yet no compellingly believable set of equations for describing, fundamentally, how consciousness is generated. IIT has potential in this direction, but a major step forward for the theory would be a truly plausible formula for intrinsic information applicable to fundamental physical entities. The FIIH provides a conceptual starting point for achieving this. All this is not to say that such a theory will aid understanding of all aspects of consciousness; indeed the multi-faceted nature of consciousness requires descriptions at many different levels. Non-reductionist frameworks are required to understand the complexity of the biological machinery that enables the brain to do any kind of information processing, conscious or unconscious, and to understand the differences between conscious and unconscious cognitive processes neural dynamics and behavior must necessarily be modeled at multiple levels of description.\n\nFinally, any theory can potentially indirectly make predictions. Indeed IIT has already inspired heuristic measures of information integration\/complexity that have been successfully applied to recorded electrophysiological data and are able to distinguish the waking state from diverse unconscious states, i.e., sleep and anaesthesia under various anaesthetics (Massimini et al., 2005; Ferrarelli et al., 2010; Casali et al., 2013). The results are in broad agreement with the predictions of IIT and provide encouragement for further theoretical work on the relationship between information integration and consciousness. Theories built from the FIIH could make new and distinct predictions about the types of structural and\/or functional neuronal architectures that are capable of generating consciousness; and new theory can only further inform the quest for ever more reliable measures of consciousness that can be applied to observable brain variables.\n\n# Acknowledgements\n\nI thank Emily Lydgate and Anil Seth for invaluable discussions during the writing of this paper, and Daniel Bor and David Gamez for very useful comments on draft manuscripts. ABB is funded by EPSRC grant EP\/L005131\/1.\n\n[^1]: email@example.com","meta":{"dup_signals":{"dup_doc_count":34,"dup_dump_count":31,"dup_details":{"curated_sources":3,"2022-33":1,"2022-27":2,"2021-31":1,"2021-10":1,"2020-29":1,"2020-10":1,"2019-51":1,"2019-43":1,"2019-35":1,"2019-26":1,"2019-18":1,"2019-04":1,"2018-47":1,"2018-43":1,"2018-34":1,"2018-22":1,"2018-13":1,"2017-47":1,"2017-39":1,"2017-26":1,"2017-22":1,"2017-09":1,"2016-44":1,"2016-40":1,"2016-36":1,"2016-30":1,"2023-14":1,"2024-10":1,"2017-13":1,"2024-18":1}},"filename":"out\/1407.4706.tex.md"},"subset":"arxiv"} +{"text":"author: [^1] \n\nThe rejection of the contamination, or background, from low-energy strong interactions at hadron collider experiments is a topic that has received significant attention in the field of particle physics. This article builds on a particle-level view of collision events, in line with recently-proposed subtraction methods. 
While conventional techniques in the field usually concentrate on probability distributions, our study is, to our knowledge, the first attempt at estimating the frequency distribution of background particles across the kinematic space inside individual collision events. In fact, while the probability distribution can generally be estimated given a model of low-energy strong interactions, the corresponding frequency distribution inside a single event typically deviates from the average and cannot be predicted a priori. We present preliminary results in this direction, and establish a connection between our technique and the particle weighting methods that have been the subject of recent investigation at the Large Hadron Collider.\n\n\u00a0 \n**Keywords:** 29.85.Fj; High Energy Physics; Particle Physics; Large Hadron Collider; LHC; soft QCD; pile-up; mixture models; Gibbs sampler; Markov Chain Monte Carlo; Expectation Maximisation.\n\n# Nomenclature and general remarks\n\n- Collisions: proton-proton collisions at the Large Hadron Collider.\n\n- Events: triggered proton-proton collisions.\n\n- Bunch crossings: intersections between colliding proton beam bunches.\n\n- Physics processes: either the high-energy parton scattering of interest or low-energy strong interactions.\n\n- Missing transverse energy: event-level energy imbalance measured on a plane perpendicular to the direction of the colliding particle beams.\n\n- Particle transverse momentum, $p_T$: absolute value of the component of the particle momentum vector on a plane perpendicular to the direction of the colliding beams.\n\n- Particle pseudorapidity, $\\eta$: kinematic quantity expressed in terms of the particle polar angle in the laboratory frame, $\\theta$, by $\\eta=-\\mbox{log}\\left[\\mbox{tan}(\\theta\/2)\\right]$.\n\n- Whenever neutral particles are referred to in the text, neutrinos are not considered.\n\n# Introduction\n\nThe subtraction of the contamination, or background, from soft, i.e.\u00a0low-energy, physics processes described by Quantum Chromodynamics (QCD) that take place in proton-proton collisions is a critical task at the Large Hadron Collider (LHC). The impact of the correction is going to become even more significant in the upcoming scenarios whereby the hard, i.e.\u00a0high-energy, parton scattering of interest will be superimposed with a higher number of low-energy interactions from collisions between other protons, the so-called pile-up events. This is an important aspect at the LHC, and one that is going to have an even more significant impact at the High-Luminosity LHC (HL-LHC), i.e.\u00a0at the accelerator that is going to be built following the LHC upgrade project. In fact, the contribution of pile-up particles to the events of interest often makes the study of rare processes particularly challenging.\n\nSubtraction techniques are well established, and typically combine tracking information for charged particles with estimates of the energy flow associated with neutral particles that originate from low-energy QCD interactions . In particular, pile-up subtraction is a key component in the data processing pipelines responsible for the reconstruction and calibration of jets, i.e.\u00a0of collections of particles interpreted as originating from the same scattered parton.\n\nIn this context, an important role has been played by correction procedures based on jet area , which provides a measure of the susceptibility of reconstructed jets to the soft QCD energy flow. 
Such methods work by subtracting from the total momentum of the high-energy jets a quantity proportional to an event-level estimate of the background momentum density as well as to the area of the jet of interest. Therefore, this takes into account event-to-event background variability, and, since the correction can be calculated in a kinematics-dependent way, also the presence of different levels of pile-up activity in different kinematic regions inside events. However, due to the quantum nature of the underlying physics, the density of pile-up particles can be different even inside jets with very similar kinematics in the same collision event. While techniques based on jet area cannot correct for this, more recent methods exploiting information encoded in the substructure of jets can effectively take this into account.\n\nIn this article, we explore a different perspective in order to estimate the frequency distribution of soft QCD particles inside events. We build on a view of collision events as collections of particles whereby soft QCD particles are rejected upstream of jet reconstruction, in line with the particle-level pile-up subtraction algorithms that are currently being evaluated at the LHC .\n\nDue to the quantum nature of the underlying physics processes and the limited number of final-state particles inside individual collisions, the particle multiplicity across the kinematic space inside events will generally vary across collisions even when the physics processes involved are exactly the same. More precisely, the soft QCD particle-level frequency distribution will normally deviate from the corresponding probability distribution, and will be different in different events. What is discussed in this article is a data-driven method of estimating the soft QCD particle multiplicity across the kinematic space inside each event, using the following:\n\n- The kinematic probability distributions of soft QCD particles and of particles originating from the signal hard scattering, e.g.\u00a0obtained using simulated data;\n\n- The average fraction of soft QCD particles in the events;\n\n- The observed particle multiplicity, i.e. the observed number of particles in different kinematic regions in the event.\n\nOur approach relies on particles from high-energy scattering processes having a pseudorapidity distribution more peaked at $\eta=0$, as well as higher values of $p_T$, on average, than those originating from low-energy strong interactions. This is essentially due to the higher transverse momentum transferred between the colliding protons, and results in different kinematic distributions of the final-state particles on the ($\eta, p_T$) plane, as illustrated in Fig. with reference to the control sample. Although different signal processes will generally be associated with different kinematic signatures, the dissimilarity between background soft QCD and hard scattering signal particles in terms of their $\eta$ and $p_T$ distributions typically outweighs the variability associated with the choice of signal process.\n\nMoreover, as a filtering stage upstream of physics analysis, reconstructed events at the experiments are usually subdivided, or \"skimmed\", into multiple data streams enriched in different signal processes. 
For the purpose of this discussion, the signal model can be thought of as describing the particle-level kinematics corresponding to the high-energy processes that the events analysed are enriched in.\n\nThe kinematic variables used in this study are those that the relevant signal and background probability distributions can be written as functions of, namely particle pseudorapidity, $\\eta$, and transverse momentum, $p_T$. To our knowledge, this is the first method of estimating how the frequency distribution of soft QCD particles inside individual events deviates from the expectation due to the non-deterministic nature of the underlying processes.\n\nThis article reports preliminary results on simulated collision events at the LHC, showing that the algorithm produces reasonable estimates of the number of soft QCD particles in different $(\\eta, p_T)$ regions inside events regardless of the presence in those regions of particles from the hard scattering. Given that assessing the performance of this method requires knowledge of the true frequency distribution of soft QCD particles in each event, which is not available at the experiments, the validation was performed on simulated data, using an event generator commonly employed in the field . Specifically, background and signal particles were generated using soft QCD processes and $gg\\rightarrow t\\bar{t}$, respectively.\n\nOur interest in the estimation of the multiplicity of soft QCD particles across the kinematic space inside individual collision events relates to the development of further-improved methods of rejecting pile-up in high-luminosity regimes. Since our approach is based on a different principle and works in a different way as compared to established techniques, we expect its combined use with existing methods to result in enhanced pile-up subtraction in the upcoming higher-luminosity regimes at the LHC. We speculate that this can also lead to improved missing transverse energy resolution and to higher-quality estimates of particle isolation as the pile-up rates increase.\n\nIt should be noted that, in addition to pile-up, another source of soft QCD particles at the LHC is the so called Underlying Event (UE), which consists of particles from low-energy parton interactions taking place in the same proton-proton collision that contains the particles produced by the hard parton scattering. Pile-up and UE particles originate from similar processes: for this reason, with regard to estimating the frequency distribution of soft QCD particles inside events, it is expected that UE particles will contribute to the background category, i.e.\u00a0that they will count towards the number of soft QCD particles together with those that originate from pile-up. In any case, although the distinction between pile-up and UE particles is conceptually important, pile-up is by far the primary source of soft QCD contamination at the ATLAS and CMS experiments at the LHC.\n\nThe algorithm that we describe in this article is a simplified deterministic variant of the Markov Chain Monte Carlo technique used in , where we discussed the idea of filtering collision events particle by particle upstream of jet reconstruction. Both our previous contributions and the present article relate to the development of new subtraction methods, with a view to improving further on the rejection of contamination from low-energy strong interactions in high-luminosity hadron collider environments. 
In particular, it is our opinion that the simplicity and parallelisation potential of this technique make it a promising candidate for inclusion in particle-by-particle event filtering procedures at the reconstruction level at future high-luminosity hadron collider experiments.\n\n# The algorithm\n\nBy construction, the probability density function (PDF) describing the kinematics of particles originating from a given process, e.g.\u00a0with reference to soft QCD interactions, can be estimated as the limit of the corresponding frequency distribution, averaged over a large enough number of events. On the other hand, the corresponding frequency distribution inside a single event normally deviates from the PDF due to the limited number of particles. In fact, even when the processes involved are exactly the same, different collisions contain independent, and therefore different, realisations of the underlying quantum processes. For this reason, the shapes of the corresponding particle-level frequency distributions, e.g.\u00a0that of soft QCD particles, generally vary across collisions.\n\nLet $f_0$ and $f_1$ denote the kinematic PDFs of background and signal particles, respectively, normalised such that $\\int\\int f_i(\\eta, p_T) d\\eta dp_T = 1,~i=0,1$. For the purpose of this study, we describe collision events as statistical populations of particles originating from soft QCD interactions and from the hard scattering, using a mixture model of the form $\\alpha_0 f_0(\\eta, p_T) + \\alpha_1 f_1(\\eta, p_T)$, where $\\alpha_0$ is the fraction of soft QCD particles in the events, and $\\alpha_1 = 1-\\alpha_0$.\n\nIn this context, the probability for a given particle to originate from soft QCD interactions can be expressed using the following quantity:\n\n$$w_0(\\eta, p_T) \\equiv \\displaystyle \\frac{\\alpha_0 f_0(\\eta, p_T)}{\\alpha_0 f_0(\\eta, p_T) + \\alpha_1 f_1(\\eta, p_T)}.\n\\label{eq:w0}$$\n\nInside each collision event, although the actual numbers of background and signal particles in the different $(\\eta, p_T)$ bins are not known, it is possible to estimate the corresponding expected numbers, $\\nu_b(\\eta, p_T)$ and $\\nu_s(\\eta, p_T)$, given a background and a signal model, respectively.\n\nFor the purpose of this discussion, we express $\\nu_b$ in terms of $\\nu_b(\\eta, p_T) = N \\alpha_0 f_0(\\eta, p_T)\\Delta\\eta \\Delta p_T$, where $N$ is the total number of particles in the event, and $\\Delta\\eta$ and $\\Delta p_T$ are the bin widths along the $\\eta$ and $p_T$ axes, respectively. The corresponding expected number of signal particles in the bin, $\\nu_s(\\eta, p_T)$, can be calculated in a similar way using $f_1$.\n\nIf one denotes the unknown true numbers of signal and soft QCD particles as functions of particle $\\eta$ and $p_T$ in each event by $n_s^*(\\eta, p_T)$ and $n_b^*(\\eta, p_T)$, respectively, then $n(\\eta, p_T) = n_s^*(\\eta, p_T) + n_b^*(\\eta, p_T)$, where $n(\\eta, p_T)$ is the corresponding number of particles in the data. When one considers LHC events with a number of proton-proton interactions in line with what is expected in the upcoming higher-luminosity regimes, the final-state particle multiplicities associated with soft QCD interactions and with the signal hard scattering are such that the expected number of signal particles in each bin, $\\nu_s(\\eta, p_T)$, is on average much lower than the corresponding number of soft QCD particles, i.e. 
$\\left<\\nu_s(\\eta, p_T)\\right> \\ll \\left$, where the average is taken over the $(\\eta, p_T)$ space.\n\nOne therefore expects that the statistical fluctuations on the observed number of particles in each $(\\eta, p_T)$ bin will also be dominated by those on the number of soft QCD particles, i.e.\u00a0$\\left<\\sigma_{n_s}(\\eta, p_T)\\right> \\ll \\left<\\sigma_{n_b}(\\eta, p_T)\\right>$.\n\nIt should be noted that, if the variability of the number of signal particles can be neglected, the quantity\n\n$$\\hat{n}_b(\\eta, p_T) = w_0(\\eta, p_T) n(\\eta, p_T)\n\\label{eq:nbhat}$$\n\nis expected to provide a more reliable estimate of the unknown number of soft QCD particles, $n^*_b(\\eta, p_T)$, than $\\nu_b(\\eta, p_T)$ does. In fact, if the true number of soft QCD particles in each $(\\eta, p_T)$ bin inside a given event deviates from the expected value by an amount $\\Delta n_b$, i.e.\u00a0if $n^*_b = \\nu_b + \\Delta n_b$, then a fraction $w_0$ of $\\Delta n_b$ will contribute to $\\hat{n}_b$ in that bin. On the other hand, $\\nu_b$ reflects an expectation and therefore does not contain any information about statistical fluctuations inside events.\n\nGiven the estimated number of soft QCD particles in each ($\\eta, p_T$) bin, $\\hat{n}_b(\\eta, p_T)$, the corresponding unknown actual number can be treated as a random variable following a binomial distribution with mean given by expression () and standard deviation\n\n$$\\sigma_{n_b} = \\sqrt{n w_0 (1-w_0)}.\n\\label{eq:sigma_nbhat}$$\n\nBased on expression (), for the purpose of estimating the background particle multiplicity inside each event, the discrimination between soft QCD interactions and the hard parton scattering exploits the different shapes of the corresponding PDFs as functions of $\\eta$ and $p_T$. Specifically, as anticipated, the discrimination relies on particles from the hard scattering having a pseudorapidity distribution more peaked at $\\eta=0$, as well as having on average higher values of $p_T$.\n\nThe use of expression () for $w_0(\\eta, p_T)$ in order to estimate the multiplicity of soft QCD particles across the kinematic space inside the events is essentially equivalent to weighting particles against PDFs that summarise our knowledge of the kinematics of the underlying processes, thereby taking into account the average fraction of soft QCD particles in the events. As far as the hard scattering is concerned, in addition to $gg\\rightarrow t\\bar{t}$, which is used to illustrate our method in the present article, the algorithm has also been run on particles originating from decays of the Standard Model Higgs boson produced via vector boson fusion . In fact, such a process does not involve colour exchange between the colliding protons, and is therefore expected to lead to a lower degree of particle activity around $\\eta=0$ when compared to $gg\\rightarrow t\\bar{t}$.\n\nThe following discussion refers to neutral particles, since the identification of charged pile-up particles is made significantly easier by the availability of information from the tracking detectors at the experiments.\n\nIt is worth pointing out that, although the signal and background PDFs were derived using simulated collision events in the context of this study, similar information can in principle be obtained using control samples from the data. 
As for the estimation of the soft QCD particle fraction among the neutral particles in the events, $\\alpha_0$, it was decided to use the corresponding fraction of charged particles averaged over the events generated. In fact, the investigation of a more sophisticated approach including the use of a kinematic correction factor obtained from Monte Carlo showed no significant performance improvement . It should also be emphasised that the present estimate of $\\alpha_0$ based on charged particles was exclusively obtained for the purpose of this investigation, and that more information will typically be available at the experiments, e.g.\u00a0in the form of data on the neutral energy flow provided by the calorimeters.\n\nWhereas $\\alpha_0$ was defined as a global event-level quantity, the possibility of introducing a dependence on $\\eta$ and $p_T$ is worth investigating in the future, as this could lead to further-improved results. Nonetheless, it was decided to rely on as simple an approach as possible for the purpose of this feasibility study.\n\nAn overview of this method is given in Fig. , which highlights what information is extracted from the models and what comes from the data. We will show that this approach produces a more accurate estimate of the background particle multiplicity inside the events than would be obtained using exclusively the expected number, $\\nu_b$, as long as the statistical fluctuations in the data are dominated by those on the number of soft QCD particles.\n\n# Results\n\nWe discuss the results of a proof-of-concept study on Monte Carlo data at the generator level. We used Pythia 8.176 to generate 1,000 events, each consisting of a $gg\\rightarrow t\\bar{t}$ hard parton scattering at $\\sqrt{s} =$ 14\u00a0TeV, superimposed with 50 soft QCD interactions to simulate the presence of pile-up. Soft QCD interactions were generated with \"SoftQCD:all\", \"PartonLevel:ISR\", \"PartonLevel:FSR\" and \"PartonLevel:MI\" set to \"on\". The performance of this method on a reference event will be discussed for the sake of illustration, and distributions on all events generated will then be shown in order to confirm the consistency of the results.\n\nAs a prerequisite for the execution of this method, the particle-level $(\\eta, p_T)$ space in each event was subdivided into bins of widths $\\Delta\\eta = 0.5$ and $\\Delta p_T = 0.05~\\mbox{GeV}\/c$. Whereas our $\\Delta p_T$ binning will have to be revised in the context of a full detector simulation study, which is outside the scope of this article, we are using this choice of bins as a starting point to illustrate our method. Our analysis focusses on particles with $0 < p_T < 1~\\mbox{GeV}\/c$, which are the majority of those produced by soft QCD interactions.\n\nSignal and background PDF templates as functions of particle $\\eta$ and $p_T$ were obtained using a control sample dataset containing ${\\sim}300,000$ particles from $gg\\rightarrow t\\bar{t}$ and ${\\sim}300,000$ from soft QCD interactions. The corresponding Monte Carlo truth ($\\eta, p_T$) distributions of neutral soft QCD particles and of neutral particles from the hard scattering are shown in Fig. (a) and Fig. (b), respectively, each normalised to unit volume.\n\nThe above-mentioned collections of ${\\sim}300,000$ particles, although significantly-lower statistics than the large datasets normally used in the field, are considered adequate for the purpose of estimating the signal and background probability distributions in the context of this study. 
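A possible way of filling such templates from a control sample is sketched below, assuming arrays `eta` and `pt` for the control-sample particles of one category and the binning quoted above; the $\eta$ range and all names are illustrative rather than those used for this study.

```python
import numpy as np

d_eta, d_pt = 0.5, 0.05                           # bin widths quoted in the text (pT in GeV/c)
eta_edges = np.arange(-5.0, 5.0 + d_eta, d_eta)   # illustrative pseudorapidity range
pt_edges = np.arange(0.0, 1.0 + d_pt, d_pt)       # focus on 0 < pT < 1 GeV/c

def fill_template(eta, pt):
    # Histogram the control-sample particles on the (eta, pT) grid and
    # normalise so that the template integrates to one over the grid.
    h, _, _ = np.histogram2d(eta, pt, bins=[eta_edges, pt_edges])
    return h / (h.sum() * d_eta * d_pt)

# f0 = fill_template(eta_soft_qcd, pt_soft_qcd)   # background template
# f1 = fill_template(eta_hard, pt_hard)           # signal template
```

With ${\sim}300,000$ particles per category, the statistical noise in such templates is small compared to the event-level fluctuations that are the focus here.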
In fact, our emphasis is on how the soft QCD frequency distributions inside individual events deviate from the corresponding probability distribution. For this purpose, ${\\sim}300,000$ particles are high-enough statistics for the local features in the frequency distributions due to the presence of statistical fluctuations to be averaged out.\n\nThe distributions shown in Fig. were used together with the previously-mentioned estimate of the average fraction of soft QCD particles over all neutrals in the events, $\\alpha_0$, in order to calculate $w_0$ according to expression (). The distribution of $w_0(\\eta, p_T)$ is shown in Fig. (a) in relation to the Monte Carlo event chosen to illustrate our results.\n\nFig. (a) displays the true multiplicity of neutral soft QCD particles as a function of particle $\\eta$ and $p_T$ in the reference event. As expected, the frequency distribution deviates from the corresponding higher-statistics distribution, shown in Fig. (a), due to the presence of local features that are typically washed out when multiple events are lumped together.\n\nThe estimate of the multiplicity of neutral soft QCD particles across the $(\\eta, p_T)$ space obtained using this method is shown in Fig. (b) with reference to the same event. A comparison with the corresponding true distribution in Fig. (a) suggests that the local features of the distribution are reasonably well described, e.g.\u00a0the excess at $\\eta\\simeq 2.5$ and $p_T\\simeq 0.2~\\mbox{GeV}\/c$. The performance is discussed further below with reference to all the events generated.\n\nThe absolute and relative statistical uncertainties on $\\hat{n}_b$ in the reference event are displayed in Fig. (b) and Fig. (a), respectively, where the absolute uncertainty, $\\sigma_{n_b}$, was estimated using expression (). In particular, Fig. (b) suggests that the precision of this method is better than 1 particle, although, in order for this claim to be made, the precision over all events generated also needs to be assessed, as discussed in the following.\n\nIt is worth emphasising that, in general, it is not known which $(\\eta, p_T)$ bins in the event contain signal particles and which do not. Therefore, once it is observed that $\\hat{n}_b$ is a better estimate of $n^*_b$ than $\\nu_b$ is, and that it is sufficiently precise, it is also necessary to verify that $\\hat{n}_b$ is more accurate than the estimate that would be obtained if the possible presence of signal particles in the bins was simply neglected. For this purpose, the absolute deviation of the estimated number of neutral soft QCD particles from the true value, normalised to the true number of signal particles, $\\left|\\hat{n}_b-n^*_b\\right|\/n^*_s$, is displayed in Fig. (b) above $0.4~\\mbox{GeV}\/c$ particle $p_T$ in those ($\\eta, p_T$) bins that contain at least 1 signal particle. As it can be seen, $\\left|\\hat{n}_b-n^*_b\\right|\/n^*_s \\lesssim 1$, i.e.\u00a0the absolute deviation of the estimated number of background particles from the true number is lower than the number of signal particles across the kinematic space in the event, corresponding to the pile-up rate considered.\n\nAs anticipated, Fig. (b) and Fig. relate to one single event. In order to verify the precision and the accuracy of the algorithm in more detail, the corresponding heat maps were obtained over all events generated. Fig. (a) and Fig. 
(b) display $\\left<\\sigma_{n_b}\\right>$ and $\\left<\\left|\\hat{n}_b-n^*_b\\right|\/n^*_s\\right>$ across the $(\\eta, p_T)$ kinematic space, respectively, where the average is taken over events. The plots confirm that the algorithm produces consistent results, with $\\left<\\sigma_{n_b}\\right>$ below 1 particle and $\\left<\\left|\\hat{n}_b-n^*_b\\right|\/n^*_s\\right>$ significantly lower than 1 across the ($\\eta, p_T$) space.\n\nIt is worth noticing that the present investigation relies on a particle-level kinematic comparison between soft QCD interactions and a specific hard scattering process, namely $gg\\rightarrow t\\bar{t}$. Choosing a different signal process will normally change the final-state particle kinematics, although the difference between particles originating from a hard scattering and soft QCD particles is generally expected to be more pronounced than differences across signal processes. As discussed in section , this is supported by the study documented in , where this method was applied to vector boson fusion Standard Model Higgs production. In any case, the potential dependence of the performance of this technique on the choice of signal process deserves further investigation, in order for the results presented in this article to be generalised.\n\n# Conclusion\n\nThe contamination, or background, from particles produced by low-energy strong interactions is a major issue at the Large Hadron Collider. Although well-established correction procedures are in use at the experiments, new techniques are also being investigated in the field. The primary objective of this new line of development is to meet the requirements that will be posed by the upcoming higher-luminosity operational regimes of the accelerator.\n\nWe have investigated a different perspective to mainstream methods, thereby estimating the kinematics of background particles inside individual collision events in terms of their frequency distribution. Whereas the use of probability distributions is traditional, our emphasis on the frequency distributions is, to the best of our knowledge, a distinctive and unique feature of our approach.\n\nOur hope and expectation is that the ability to describe the kinematic frequency distribution of the contaminating particles collision by collision will help towards the development of improved subtraction methods at higher luminosity. In particular, we expect that our method will become useful as a complement to existing correction algorithms applied to jets, i.e.\u00a0to collections of final-state particles interpreted as originating from the same scattered parton.\n\nThe preliminary results discussed in this article suggest that our method can produce more accurate estimates of the number of contaminating particles in different kinematic regions inside collision events than would be possible by relying exclusively on the expected numbers. Although the possible impact of mismodelling remains to be investigated in more detail, this encourages further studies in this direction.\n\nHowever, it should be stressed that a proper quantitative test of the proposed method will require a detailed assessment on specific observables, which is beyond the scope of the present feasibility study.\n\nIt should also be emphasised that the algorithm is inherently parallel. 
In fact, different kinematic regions inside events can be processed independently, and the calculation of the only global variable, i.e.\u00a0of the average fraction of contaminating particles per event, can be performed in advance on a control sample. For this reason, this method is potentially suitable for inclusion in future particle-level event filtering procedures upstream of jet reconstruction.\n\nThe possible complementarity between our method and the particle weighting techniques that have recently been proposed at the Large Hadron Collider will also be worth exploring. In fact, our estimate of the probability for individual particles to originate from low-energy QCD processes as opposed to the high-energy signal scattering is based on information that is currently not employed by any particle weighting algorithms. In this context, we anticipate that multivariate combinations of different weighting schemes can prove beneficial with a view to improving further on the rejection of contaminating particles.\n\nFinally, it will be useful to study the impact of this method, when used in conjunction with existing techniques, on the resolution of the missing transverse energy as well as on estimates of particle isolation, with a view to quantifying the associated benefit at the level of physics analysis. The source code used for the purpose of this study is made available upon request.\n\n# Competing interests\n\nThe authors declare that there is no conflict of interest regarding the publication of this paper.\n\n[^1]: Email: firstname.lastname@example.com","meta":{"dup_signals":{"dup_doc_count":12},"filename":"out\/1610.04749_extract_khan_v3_R1_arxiv.tex.md"},"subset":"arxiv"} +{"text":"abstract: Cybersecurity attacks are a major and increasing burden to economic and social systems globally. Here we analyze the principles of security in different domains and demonstrate an architectural flaw in current cybersecurity. Cybersecurity is inherently weak because it is missing the ability to defend the overall system instead of individual computers. The current architecture enables all nodes in the computer network to communicate transparently with one another, so security would require protecting every computer in the network from all possible attacks. In contrast, other systems depend on system-wide protections. In providing conventional security, police patrol neighborhoods and the military secures borders, rather than defending each individual household. Likewise, in biology, the immune system provides security against viruses and bacteria using primarily action at the skin, membranes, and blood, rather than requiring each cell to defend itself. We propose applying these same principles to address the cybersecurity challenge. This will require: (a) Enabling pervasive distribution of self-propagating securityware and creating a developer community for such securityware, and (b) Modifying the protocols of internet routers to accommodate adaptive security software that would regulate internet traffic. The analysis of the immune system architecture provides many other principles that should be applied to cybersecurity. Among these principles is a careful interplay of detection and action that includes evolutionary improvement. 
However, achieving significant security gains by applying these principles depends strongly on remedying the underlying architectural limitations.\nauthor: Blake C.\u00a0Stacey and [Yaneer Bar-Yam](http:\/\/necsi.edu\/faculty\/bar-yam.html)\ndate: June 1, 2008 \/ public February 28, 2013\ntitle: [Principles of Security: Human, Cyber, and Biological](http:\/\/www.necsi.edu\/research\/military\/cyber\/)\n\n$$\\begin{array}{lccccccr}\n\\includegraphics[height=5cm]{n_logo.pdf} & \\qquad & \\qquad & \\qquad & \\qquad & \\qquad & \\qquad & \\qquad \\includegraphics[height=5.5cm]{SSG_logo.pdf} \\\\\n\\end{array}$$\n\n**Principles of Security: Human, Cyber and Biological**\n\nBlake Stacey and Yaneer Bar-Yam New England Complex Systems Institute 24 Mt. Auburn St. Cambridge, MA 02139\n\nReported to William G. Glenney, IV Chief of Naval Operations Strategic Studies Group\n\nPreface\n\nIn developing revolutionary concepts of Naval Warfare, the Chief of Naval Operations Strategic Studies Group (SSG) has anticipated concepts that are being now adopted widely. Among these concepts are Network Centric Warfare and FORCEnet. Modern technology enables a new focus on the interactions and relationships of action comprising a network of human beings and technology as central to advantage in military conflict. More significantly, the challenges of 21st century warfare require significantly greater levels of capability for which networks appear to be well suited, and even necessary.\n\nIn support of the SSG, Yaneer Bar-Yam, president of the New England Complex Systems Institute (http:\/\/www.necsi.edu), has provided a fundamental scientific basis using multiscale representations for analysis of the capabilities of organizations. \\[12-21\\] His analysis characterizes the capabilities and limitations of traditional and networked military organizations.\n\nThrough periodic lectures at the SSG beginning in January 2000 that have informed SSG reports, and through papers addressing specific questions in military force organization and transformation, a basis for understanding is being developed of modern military conflict that enables generalization from experience gained to new and novel military responsibilities.\n\nOne of the central insights provided by multiscale representations is that there are distinct types of networks that are relevant to military operations. One of these is a network of agents that are individually capable of action, e.g. warfighters in a battlespace. A second is a network of decision makers that gathers information widely and makes decisions collectively about highly specific actions to perform, where the action is then executed, perhaps by a large and centrally directed force. The former can be considered in analogy to the human immune system where the individual agents are the blood cells that battle harmful agents such as infections within the human body. The second can be considered in analogy to the human neuro-muscular system where the individual agents of the network are the nerve cells whose collective decision making ability far exceeds that of a single cell, and the decision making process results in the actions performed by the muscles.\n\nDistributed networks are particularly needed when facing enemies that are themselves distributed networks\u2014as is apparent in counter terrorism and in cyber security. Both are growing challenges. The attached report focuses on cyber security. 
Today, cyber exploits\u2014spam, malware, denial of service, and security breaches\u2014are widespread. As cyber space becomes an increasingly integral part of the functioning of all human activities security in this domain becomes an increasingly critical and unmet challenge. This report presents a framework for understanding the inherent limitations of existing cyber security and how to fundamentally improve its capabilities.\n\nExecutive Summary\n\nIncreasing global interdependence has resulted in distributed human terror networks and cyber security challenges. Actions by and on networks can result in major damage even though individual agents causing it are difficult to locate or identify. Traditional methods of centralized control and local response cannot meet these challenges due to the fundamental constraints on distributed information gathering, detection and coordinated action. Distributed networked security systems are necessary for effective response. Rather than addressing specific challenges of today, the vulnerability caused by global communication and transportation must be met by creating security systems with necessary capabilities to meet a wide range of challenges that are possible in these systems. With traditional approaches to security cyber systems are particularly at risk given the rapidity of action and necessary response.\n\nSecurity challenges follow certain general patterns. Recognizing the principles of effective security response provides guidance to meeting new high complexity challenges in global terror networks and cyber security. Key principles can be found from analyzing security systems in other domains. In this paper we analyze the biological immune system which is responsible for analogous security problems within human physiology. The immune system has evolved a dispersed distributed-control system that can successfully respond to generic and novel threats and can distinguish threat agents from self and attack threats\/invaders with necessary force.\n\nThe human immune system consists of billions of cells that coordinate response to security challenges in the human body. The activity of the immune system cannot be understood as a centrally managed process, but rather as arising from a large number of local communications among specialized components to achieve an emergent response. It achieves high success rates, is robust, scalable, flexible, and is generally capable of distinguishing self from non-self in actions. It dynamically refines its ability to detect pathogens by evolutionary selection.\n\nActions of the immune system can be divided into three layers. The first layer consists of barriers between security domains, including the skin and membranes between compartments of the body. The second layer consists of responses to damage of the system affecting many cells, including repairing barriers and tissues. The third layer, the adaptive immune system, responds to the finest scale challenges, detection and response to cells or molecules distributed through the system. Of particular concern are the viruses and bacteria that can replicate rapidly from a small to large number. All layers of the immune system have analogs in human and cyber security. The most difficult to understand and therefore the focus of our attention is the adaptive immune system.\n\nThe central capabilities of the adaptive immune system are detection and action, whose generalized application must be found in any security system. 
Detection is achieved by mapping the space of observed structures or behaviors onto a reduced set of possibilities, which are partitioned into threat and non-threat, i.e. non-self and self. When a match is found in the environment to a non-self template, an action is triggered. Detection and action are separated functions of components of the system, requiring careful regulatory interplay between them to enable action without damage to self, corresponding to collateral damage. Specific details of communication protocols and interplay of detection and action agents reveal the strategies the immune system employs in a system wide response whose detection ability increases rapidly in order to eliminate even individual molecular or cellular harmful agents. This requires the ability to replicate and distribute the detection mechanisms and action response rapidly throughout the system. The rapid development of improved response involves progressive refinement of detection by a process corresponding to evolutionary selection, similar to breeding of desired features in agriculture. Improved detection templates developed for a particular threat are distributed throughout the system to enable effective local response.\n\nThe relevance of the immune system model to cyber security arose when computer communication systems transitioned from infrequent and low bandwidth processes to increasingly pervasive, persistent and high speed networks. The importance of security is growing as the Internet is used not just as an adjunct to activities for communication about them, but rather as a necessary component of a wide range of economic and social functions. The need for security is apparent in the large impact of spam, spyware, phishing, zombie networks, denial of service attacks, internet fraud, identify theft, and breaches of high security systems. The underlying cause of the need for security is the transition of the Internet connected computers to behavior as a tightly linked system analogous to a multicellular organism.\n\nExisting cyber security systems have parallels to the immune system but these parallels are incomplete. Corresponding to the barriers found in the first level of security are firewalls and separation of distinct networks, e.g. ATM and bank transaction networks where these are separated from the Internet. The second layer of security consists of response to widespread exploits, including Domain Name Server Black Lists (DNSBLs) for blocking spam from their sources. The third layer of security includes virus scanners and e-mail filters that detect malware or spam on an individual computer. Detection makes use of a characterization of programs or e-mail by features (bit strings and logical operations on these bit strings) that provide signatures similar to the detection templates of the immune system. Considering the correspondence with the immune system reveals that the response to detection in cyber security is not pervasive throughout the system.\n\nOur analysis of the immune response identifies the principles of successful response. While there are similarities to the immune system response in cyber security, we find two major gaps in the architecture of the Internet. Without addressing these issues, improved cyber security will be difficult or impossible to attain. First, there is no mechanism for distribution of a response throughout the system. While individuals can subscribe to security systems that are distributed to some degree, it is optional. 
Moreover, local information about detection is not redistributed throughout the system. Note that malware does have the capability of replication and distribution. The absence of parallel capabilities in security agents enables attackers an inherent advantage over security efforts. Second, the existing security system is not a collective security system in that it does not protect the Internet but rather protects individual components of the Internet. The ability of any part of the Internet to send messages to any other part of the Internet without encountering security systems implies that weakest elements can be attacked, compromised, or controlled to enable progressively larger infestation of the system. Conversely, the inability to mount a pervasive defense is a fundamental limitation on security which is diametrically counter to the corresponding processes of immune system response. Without a collective security system, the only methods for progress are to harden each element of the Internet, a much larger security burden destined to be ineffective. Specifically, the only means are to improve the security of the operating system on each computer. Even so, computers that are operated by individuals that desire to cause harm would continue to be security risks. The implications of these gaps in the security system are that there is no mechanism within cyberspace for security actions to counter-attack the sources of attacks. Moreover, there is no mechanism for blocking their attacks at point of entry into the Internet rather than at point of attack at another node.\n\nAddressing cyber security would require either or both: (a) Making pervasive distribution of self-propagating but non-destructive security ware acceptable and create a developer community for such security ware. (b) Modifying the protocols of internet routers to accommodate adaptive security software that would regulate internet traffic of other kinds and self-regulate. These modifications would alter the perspective of the \"rights\" of the Internet, the right of transmission and the right of any node to communicate to any other node of the system. An effective security system requires that this right be limited, as best as possible, to those who do not cause damage to the system.\n\nThe analysis of the immune system architecture provides many other principles of security that can be applied to cyber security. Among these principles is a careful interplay of detection and action that includes evolutionary improvement of the detection and response capability. However, any advances in applying such principles will have minimal impact as long as the underlying architectural limitations persist.\n\n# Overview on Security\n\nThe current strategies used in human and cyber security are not capable of handling threats in our increasingly interdependent world. Challenges in human security are changing through global terror networks. Cyber security, by virtue of its rapid and hidden processes is arguably an even greater challenge that is poorly met by existing systems. Severe exploits and massive system burden from actors in cyberspace are common today. Were this extent of malware (viruses, trojans, spyware) and spam found at the human level, it would be considered a major breakdown of a social system.\n\nThe demands of addressing current challenges in human and cyber security are motivating the development of fundamentally new approaches. An essential feature of new challenges is their distributed nature. 
Global transportation and communication systems enable distributed groups of individuals to cause major physical or informational damage, elevating the global challenge of maintaining security at any location. On the one hand, traditional police forces with solely local authority cannot respond to global relationships and associations. On the other hand, the many possible actions of diverse forms can overwhelm centralized responses due to the inability to gather and process information, determine courses of local action, and distribute control messages appropriately. The distributed nature of these challenges demands a distributed, and correspondingly complex, response.\n\nTraditional security revolves around localized agents of various sizes, from individual criminals to national armies. Such agents manifest in the local damage they cause. To combat such agents, the source of the damage must be identified and an appropriate local response mounted. In general, the possible damage is limited to the scope of the agent. Small actions are harder to detect, but are commensurately responsible for less damage. Larger actions responsible for larger-scale damage are easier to observe by virtue of their size. From this perspective, coordinated, distributed actions present a new challenge in that any one of them is difficult to detect, the impact of any one agent may be in a different location, and the collective effect of multiple agents can be large. Moreover, an additional universal aspect of these challenges is their ability to proliferate so that a small scale challenge can grow by recruitment to a large scale over a short period of time.\n\nAn essential aspect of distributed, coordinated action is the availability of necessary communication and transportation mechanisms. It is essential to recognize that modern social and technological changes create the opportunities for communication, transportation and recruitment by virtue of the structures that are developed for the functioning of society itself. Changes in global connectivity are driving vulnerability to security challenges that are inherent in the structure of the system.\n\nRecognizing that security challenges stem from changes in social connectivity is important. Otherwise we may be fooled into addressing only specific current challenges, and fail to meet the next challenge that arises. Solving the current challenge will not eliminate the general vulnerability. Instead, it is necessary to create systems that are able to address challenges of various kinds that surely will arise in the future.\n\nThe changes in global communication and transportation systems are easily recognized in the human sphere given the increasing ease and frequency of global transportation and communication among people. However, it is even more dramatic in the cyber space context where changes can be more rapid and the time scale of response may be shorter.\n\nA useful paradigm for the study of distributed response security systems is the biological immune system. The immune system is the biological system responsible for internal security. The analogy between biological attackers and computer attackers is well established in the notion of \"computer viruses.\" Still, the key to developing effective systems of security is understanding the principles of immune system function so they can be generalized. 
In this paper, we frame the essential issues to guide a more general approach to using the structure and processes of the immune system to inform distributed human and information technology security. We focus on the part of the immune system designed to address the most complex distributed challenges corresponding to the highly complex distributed challenges in human and cyber security. In this context we detail how the distributed structure and the key interactions between agents enables the remarkable level of success achieved in a demanding security environment. This success is measured by the ability to eliminate to the last one adversaries that are capable of rapid proliferation if left unchecked.\n\nThis paper is organized upon the concept that identifying principles from one security system can serve as a foundation for modeling\/organizing other security systems with comparable complexity. Understanding principles of security from the immune system is not the same as making an analogy between biological and social or technological systems. Principles embody correspondences that reflect essential logical or mathematical relationships between elements, structures and behaviors. While we do not provide here the formal correspondence, we strive to make the nature of the formal relationships clear in the presentation.\n\nOur primary conclusions are that the existing structure of the Internet does not allow for a security system that is able to address its security challenges. This architectural problem supersedes all specifics, as it is necessary to address the underlying architectural problems in order to enable implementation of effective security.\n\nIn order to establish the fundamental gap in the ability of the current security systems we explain the functioning of the biological immune systems solution to its own security problem. We then discuss the mechanisms that have been used to address cyber security, why they fail, and what changes are necessary to enable success.\n\n# The Immune System\n\n## Introduction to the immune system\n\nHuman beings possess an immune system responsible for the well-being of trillions of cells, itself comprised of billions of interacting pieces, with no single central focus of control. The immune system is a medically significant arena of study, as it provides our defense against diseases, and failures of its operation lead to auto-immune disorders. It also provides an important context for developing our understanding of universal principles that apply across diverse complex systems including social and technological ones.\n\nThe immune system can be understood as a system with emergent response. The many components of the immune system function by collectively responding to challenges. The natural scale of its response locally may involve only a few cells, throughout the system it involves many cells responding to potentially distributed challenges in a coordinated fashion. When necessary, the response in any one location can increase dramatically to achieve a macroscopic response as is found in visible infections. Collective actions are distributed among multiple cells, achieving a balance where each cell is important but generally not essential. This emergent behavior cannot be understood by description of individual agents. 
It must be understood by the coupling of individual cell actions through inter-cellular interaction.\n\nThe immune system has properties that are desirable in any security system: first, it is *robust and resilient,* as individual components of the system can be removed without compromising the functionality of the whole. Second, the system is naturally *scalable* without substantial modification. Third, it is *flexible,* often able to cope with pathogens which have never been seen before. Fourth, it displays *specialization,* with different cell types performing the different functions necessary to provide system functionality. Fifth, it is able to distinguish *self from nonself,* even when the molecules it produces during its normal affairs are themselves novel chemical structures.\n\nThe immune system exploits the dynamics of evolution to achieve its remarkable success. Evolution is a central process in the formation of any complex system and is fundamentally necessary to achieve success in complex tasks. Evolution enables improvement of a system to face challenges that cannot be anticipated. Understanding how this works is therefore necessary for our ability to address many challenges. In particular, it is important to understand that not only is the immune system itself a product of evolution, but it also applies evolution to address specific immune responses. This latter evolution takes place within a single organism.\n\n## Immune system architecture\n\nThe biological immune system guards against infection in multiple ways, providing a \"layered defense\" with increasing specificity. Pathogens which penetrate system barriers (*e.g.,* skin) are challenged by the *innate immune system* and the *adaptive immune system.* These mechanisms are constantly active, meeting a dynamically changing and large set of challenges at all times. Significant illness occurs only when they fail.\n\nThe first layer of security consists of the barriers themselves. Barriers distinguish the self from other at the large scale. They separate space into regions that are controlled, not controlled, or subject to different levels of control. The immune system includes the skin and other membranes that separate specific compartments of the body. The skin is the boundary that separates the internal space, where the immune system controls the fine scale security, from the space outside the system, which is not subject to security.\n\nThe second layer of the immune system is designed to meet failures of the first layer. This includes disruptions of the barrier itself, repairing the skin, and responding to challenges that occur when there are significant breaches of the skin. A principle of innate immunity is to characterize damage and threats that are of intermediate scale \u2014 involving many cells \u2014 and therefore to respond to generic features of large classes of hazards rather than to features specific to the attacker types. These include responses to blood loss (clotting) or to tissue damage (inflammation).\n\nThe third layer of the immune system addresses the finest scale challenges. 
In particular, the role of adaptive immunity is to identify specific individual threats that do not trigger the innate immune response: individual molecules, cells or loosely associated groups of molecules or cells distributed through the system.\n\nWhile there are many potential causes of harm that the immune system guards against, there are two that are of particular note because they are able to self-replicate and thus require a response that both can address a large number of such attackers and yet detect a very small number, even down to a single individual. There are two primary types of such foreign agents: molecular attackers, viruses, and cellular attackers, commonly bacteria but also other replicating single celled organisms.\n\n## Specific fine scale (adaptive) response\n\nOur focus is on the adaptive immune system whose role is to identify and attack specific intruders or self cells that change behavior to become a threat to survival. Any security system must perform certain key tasks.\n\n1. **Detection.** Hostile elements introduced into the system or arising within the system must be noted, distinguished from non-hostile elements and identified before action can be taken.\n\n2. **Action.** Once a threat has been identified and characterized, a response must be mounted which is appropriate to both the quantity and the nature of the threat.\n\nWe now address these points in more detail.\n\n# Adaptive Immune Response\n\n## Identification\n\nThe essence of recognizing a threat is distinguishing signatures of its structure or behavior. The identification of a foreign \"antigen\" in the immune system is analogous to distinguishing undesirable from desirable elements in any system. While this is a commonly necessary task, this form of pattern recognition is fundamentally difficult. Understanding how it is accomplished is key to the design of any security system.\n\nCentral to identification is a way to map any structure or behavior that is to be characterized onto a smaller and better defined set of \"possibilities.\" This set of possibilities is then partitioned, perhaps dynamically, into a set that are considered a threat and those that are not a threat.\n\nThe immune system performs this distinction by starting from a large set of prototypes within the space of possibilities, and rejects from this set those that are matches to components of self. The remaining set are used for identification of antigens. Also, perhaps, damage caused may be used as evidence. When an antigen is detected, action is initiated, and the prototypes are refined by evolutionary processes to achieve better matching, and thus more rapid response.\n\n## Capacity for action\n\nThe identification of the presence of a threat must trigger a response that will be effective in meeting that threat. The response could be of various kinds, but it must modify to render harmless, or eliminate the threat from the system.\n\nIn a distributed system, such as the immune system, an appropriate level of response must be recruited from nearby or from far away depending on the size of the response needed. Once the existence of a threat has been identified, the ongoing need for detection should be simplified for the responding agents. 
This enables action to be performed by components that are not as capable of performing differentiation by themselves, but are instead specialized for action against a threat.\n\nThe existence of responders who are then less capable of discrimination, however, increases the likelihood of actions that are against self rather than against threats. This challenge must be met by careful regulatory processes for triggering action. The need to identify threats, and then selectively trigger action requires balanced protocols to safeguard self while defeating threats. Examples of this issue in human security include the problems of \"friendly fire\" and \"collateral damage\" in warfare, which constitute harm to responders and to bystanders respectively. While these terms imply unintended harm, we can also include in this category intentional actions by individual agents that are not consistent with collective goals, such as war crimes. There are two types of methods for avoiding self-damage. The first are within an individual agent, analogous to intention of the individual, and the second involve regulatory interactions between agents at the time of action.\n\nIn computer security, similar issues of self-inflicted damage today include issues of \"false positives\" such as the blocking of desired e-mails as spam, or blocking of legitimate users from accessing a system. The extra effort involved in forgotten or mistyped passwords and password systems can be included as well.\n\nMultiple types of cell and multiple cells of each type are involved in each action of the immune system. Biological and biochemical details will be discussed in the Appendix; for the moment, we note that the ability to spread messages and agents throughout the body via the circulatory system allows initial signals to provoke a systemic reaction to resolve a challenge. This systemic reaction is not just a call for others to respond to a particular place, but a pervasive distribution throughout the system of the ability to respond to attackers that can appear in a distributed fashion and replicate.\n\n## Evolution as method\n\nThe central principle of evolution is the \"non-random survival of randomly varying replicators.\" In each generation, the replicators which are better able to survive are able to leave more offspring, and so advantageous traits can spread throughout the population. If the environment changes, the mixture of traits within the population will change in response; populations which cannot adapt in this fashion will quite generally be supplanted by those who can. We emphasize that the variation that takes place in the traits through, e.g. mutations, is random, while the selection which acts upon replicators has a much more deterministic character.\n\nSelection creates a functionality in the replicators over time, where that functionality can be defined by the ability to meet the criteria of selection. By arranging criteria of selection to serve a particular purpose, the evolutionary process can serve to induce that purpose in the replicators.\n\nSuch a directed evolutionary process has been variously used both historically and today. The breeding of animals is an artificial way of applying selection criteria to serve a purpose that human beings determine. In technological applications, so called genetic or evolutionary algorithms, specify an artificial measure of \"fitness\" of a computer-based replicator and apply random variation and selection to improve the replicators in meeting that measure. 
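As a toy illustration of this "non-random survival of randomly varying replicators" used as directed optimization, the sketch below breeds bit-string detection templates toward a better match with a fixed target signature. The target, the fitness measure, and all names are invented for illustration and are not tied to any specific immune or security model.

```python
import random

TARGET = "1011001110001011"   # stand-in for an adversary signature (toy example)

def fitness(template: str) -> int:
    """Selection criterion: number of positions where the template matches the target."""
    return sum(a == b for a, b in zip(template, TARGET))

def mutate(template: str, rate: float = 0.05) -> str:
    """Random variation: flip each bit with a small probability."""
    return "".join(b if random.random() > rate else ("1" if b == "0" else "0")
                   for b in template)

def evolve(pop_size: int = 50, generations: int = 100) -> str:
    # Start from a population of random prototype templates.
    population = ["".join(random.choice("01") for _ in TARGET) for _ in range(pop_size)]
    for _ in range(generations):
        # Non-random survival: keep the better-matching half of the population ...
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # ... and refill it with randomly varied copies of the survivors.
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(population, key=fitness)

print(evolve())   # typically converges on, or very near, the target signature
```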
In this context the process of evolution can be considered a form of optimization. Still, computer based evolutionary processes only incorporate some features of the evolutionary process in nature.\n\nIf the goal of a system is relatively easy to specify but the path to achieve that goal is unclear, then a properly implemented evolutionary process is an approach that can be effective.\n\nThe process of evolution is used by the immune system to improve the most information intensive aspect of the response process, detection. This process requires effective use of the existence of some detection to initiate the improvement process so that progressive improvement of detection is possible. Subsequently, improved detection accelerates the ability of the system to respond. This requires rapid distribution of the improved capabilities throughout the system through global dissemination.\n\n# The Cyber Security Challenge\n\nWhen early \"mainframe\" computers were replaced by personal computers, an individual computer still remained largely isolated, interacting with its owner. Programs or data were transferred via floppy disk or, sometimes, over telephone lines. As networking became ubiquitous, the rate of ongoing interactions created a global system in which most of the communications are not observed directly by human beings and individual human response times are insufficient to monitor the dynamics of the communications. Moreover, the extent and rate of communication continues to grow.\n\nThe widespread use of the Internet for communication and commerce has increased the need for cyber security. It is important to recognize the change from a system that is an adjunct and a convenient substitute to more conventional communications, to when such interactions are an integral part of society. As commerce is increasingly done through the Internet, the disruption of economic activity through the disruption of such communications becomes increasingly important. The next stage of development, already occurring in many contexts, is the integration of the Internet into response systems. Thus the operation of almost every organization is undergoing a transition from isolated computer stations, to communications, to real time interactions, to integrating essential responses through the network. Increasingly, individual actions require information from multiple sources distributed through the network, and actions themselves become distributed among multiple individuals interacting through the network. Potential disruptions of the network system increase in impact as this occurs.\n\nFor example, computers introduced into medical operations might first be used for tracking appointments and keeping financial records. Then they might be used for sending prescriptions from physician to pharmacy. Third, they can be used for real time monitoring of procedures. Finally, they can be used for remote controlling of procedures. Once the final stage is reached, the disruption of service has real time impacts not only on the communication about the procedure but the procedure itself.\n\nThe need for security has become apparent in the success of Internet fraud, including breaches of high security systems and theft of personal records. The extent and variety of \"cyber\" actions has increased with the ubiquity of spam, spyware, phishing, zombie networks, denial of service attacks, etc. 
A spam-blocking service reports over 4.7 billion spam messages intercepted since November 2005, almost ten times the amount of legitimate traffic over that period. This includes only the spam which was intercepted. Many of the spam messages advertise fraudulent products or otherwise attempt to defraud their readers; they contain links to unlawful sites\u2014which also serves to skew search engines like Google in their favor; and they often originate from otherwise legitimate computers whose security has been compromised.\n\nThe need for the biological immune system arose when single celled organisms evolved to form multicellular organisms. As with the change in human social and cyber connectivity, the connectivity in multicellular organisms leads to a collective vulnerability and a need for a system to guard against it.\n\n# Current Cyber Security\n\nThere are a number of generally used cyber security systems that are parallel in some way to the immune system operations. In addition, there are specific efforts to adapt concepts from the immune system for cyber security.\n\n## Layered defense\n\nThe first layer of cyber security consists of barriers, including firewalls and the separation of distinct networks, e.g. for ATMs and bank transactions. These security systems prevent malware from entering a system as skin protects an organism. Barriers within a system, just as membranes within the body, generally are semi-permeable, using mechanisms to differentiate what can pass. In cyber security these include password authentication and S\/Key challenges.\n\nThe second layer of cyber security includes detection of exploits and generic responses to them. This includes Domain Name Server Black Lists (DNSBLs), often called RBLs (Real-time Blackhole Lists). These are services that gather and provide lists of IPs that are sources of spam and other malware. Institutional mail servers can automatically implement policies that use this information to block domains of the Internet that are sources of spam. The sources of spam may include servers set up for this purpose, or zombies, which are computers that have been compromised by malware so that they transmit spam and malware on behalf of others. In effect, zombies are the analog of virus infected cells that become factories for viruses and other pathogens. The large number of these exploits today results in a response which is akin to a generic immunity response.\n\nThe third layer of cyber security includes virus scanners and e-mail filters, which are the analogs of the adaptive immune system. These applications search programs stored on disk or incoming e-mail messages for signatures of malware and spam. If the detection system is not specific enough, valid programs and valid e-mail are rejected. Alternatively, malware or spam may not be rejected. The desired versus undesired categories are analogous to the discrimination of \"self\" from \"other\" in the adaptive immune system, where self consists of legitimate software and desired e-mail, and other is spam and the malware (virus, Trojan Horse, etc.) which would compromise the system. The existence of false positives and false negatives that misclassify spam or malware as legitimate, or legitimate content as a threat, is similar to errors of classification in the immune system as well.\n\n## Detection\n\nA computer immune system must detect both known and hitherto unknown viruses or spam. For this purpose, a program fragment, or a small piece of data from a larger set, can be used as a detection template. 
This extracted data can be compared with correspondingly extracted data from a virus or spam. The latter is known as the \"signature\" of the virus or spam. Finding the \"signature\" within a piece of software indicates that the software has been infected. Various signatures can be constructed based upon procedures specified by individuals (heuristic rules), or statistical pattern detection (Bayesian filtering), and collaborative identification (when voluntary human communities manually specify spam signatures that are shared).\n\nSome detection systems are local in that the software itself learns from labeling by the user what is spam and what isn't. In this case the user manually identifies spam and non-spam, and signatures are extracted automatically that differentiate between the spam and non-spam by the software. Others are centrally directed by service providers who provide the software and revise it to add templates for new malware. Revisions arise after reports due to the detection of the virus. Such detection occurs when individuals observe activity of processes on computers outside of normal operations, or of damage due to such processes.\n\nSimilar issues arise at the level of computer network operation security. Such security systems operate at the level of the pattern of traffic rather than the content of individual messages. To detect a deviant computer system that may be the source of other attacks, a pattern detection system has a representation of the types of patterns that can arise. Among these a set of \"self-patterns\" is created, representing the legitimate ways in which traffic can flow amongst the computers of a Local Area Network (LAN). Abnormal traffic, such as a computer suddenly sending thousands of e-mail messages to the external Internet, is a \"non-self pattern\" and is considered a sign of infection (in this example, the computer may have been co-opted by a spammer and used as a \"zombie\" to spread spam and malware).\n\n# Why Current Cybersecurity Cannot Work\n\nThe examples of Internet based security suggest that existing systems have some similarities with immune system operations. Still, they do not capture the dynamics of communication and interplay of detection and action that should provide better security and better self-protection. These limitations include the manner of detection and sharing of signatures of malware (local, centralized and limited distribution systems), as well as the limits in implementing actions to prevent or stop attacks and exploits.\n\nIndeed, while cyber security systems today provide some protection for malware and spam, the ongoing presence of large volumes of spam and malware, and exploits, suggests that the existing protections are too limited in their abilities and greater attention to the principles of security as embedded in immune system operations would give rise to improved outcomes.\n\nThere are two fundamental reasons that the current approaches to cybersecurity cannot work effectively.\n\nFirst: There is no mechanism for rapid pervasive distribution of security processes that can respond to new types of malware or spam. The voluntary use of community or centrally generated malware or spam signatures, while widespread, is not pervasive. One way to understand the ineffectiveness of security distribution is to compare the distribution of security with that of malware. Malware is much more pervasively and rapidly distributed than the security that is designed to guard against it. 
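By way of contrast with the pervasive self-propagation of malware, here is a toy, in-memory sketch of what redistribution of detection templates among cooperating nodes could look like. The `Node` class, the flooding scheme, and the byte-string signature are all invented for illustration; no existing protocol or product is implied.

```python
class Node:
    """Toy model of a host that both detects threats and shares what it learns."""

    def __init__(self, name: str):
        self.name = name
        self.signatures = set()   # known malware/spam signatures (byte strings)
        self.peers = []           # other Nodes this node pushes updates to

    def detect(self, payload: bytes) -> bool:
        """Signature-based detection against everything this node currently knows."""
        return any(sig in payload for sig in self.signatures)

    def learn(self, signature: bytes) -> None:
        """Record a newly extracted signature and propagate it to peers."""
        if signature not in self.signatures:      # guard against re-flooding
            self.signatures.add(signature)
            for peer in self.peers:
                peer.learn(signature)             # naive flooding; real systems would gossip

# Usage sketch: once node "a" learns a signature, "b" and "c" can detect it too.
a, b, c = Node("a"), Node("b"), Node("c")
a.peers, b.peers = [b], [c]
a.learn(b"\xde\xad\xbe\xef toy-malware-marker")
assert b.detect(b"payload \xde\xad\xbe\xef toy-malware-marker") and c.detect(b"\xde\xad\xbe\xef toy-malware-marker")
```

In the terms used in this report, malware already enjoys this kind of propagation; the point of the sketch is only that nothing technical prevents defenders from having it as well.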
In the immune system, viruses can be considered to be analogs of antibodies. Both are able to replicate rapidly using cellular mechanisms and both are able to attack cells or other molecules. This correspondence means that viruses and antibodies have similar capabilities. Furthermore, T\u00a0cells are similar to bacteria in their ability to replicate and attack other cells. This correspondence of defenders and attackers implies that there are no major gaps in the capabilities of the attackers as compared to the defenders. By contrast, there is no defensive analog of malware, in that the anti-malware software is centrally controlled instead of being distributed in origin. A better correspondence would be a security system that would operate on the basis of a peer-to-peer protocol. As computer viruses and other malware develop, so would defenders that would be redistributed automatically across the web. Such a peer-to-peer system would open the door to more opportunities for malware, but this architecture would give attackers and defenders equal capabilities, unlike the current situation where attackers have a wider range of options, with potentially much greater capabilities.\n\nThe closest system to the architecture of the immune system from this perspective is an existing distributed collective response system where multiple users share signatures of spam. Still, unless these systems are universally adopted, they do not prevent widespread infection and thus widespread attack \u2014 i.e. they are not collective security systems but rather localized security systems.\n\nSecond: The current architecture of the Internet is based upon a protocol (IP) that transmits messages independent of their content. The basic premise is that any valid message is delivered to its destination. The delivery process involves transfer from the point of origin through a sequence of nodes (routers) of the Internet. Each router reads the target destination specified by the message (packet), identifies a node to transfer the message to that will enable eventual delivery to the destination, and transfers it. In this process, there is no evaluation of what the message contains. Individual messages may be lost in transmission due to network overload, but not due to evaluation of the contents of the message. This implies that, as far as the sender and receiver are concerned, the network is transparent. Any message is transferred from one node of the network to any other node of the network without intervention. In considering the transfer of messages, it is important to recognize that a message is also an action that can be harmful. If we consider each destination node of the network to be like a \"home,\" and the network to be like the streets, then from the point of view of security, this is equivalent to having no police on the streets or military at the borders. Each household, or individual, must defend him or herself, using means of protection (e.g. guns, etc.) purchased on the market. That the protection is left to the individual home reflects the open nature of the Internet. The corresponding system in a biological sense would enable any cell to approach and attack any other cell of the body. Under these untenable conditions there would be no collective security in the medium in which cells are located. 
Analogously, there is no protection in the medium of the Internet.\n\nThe two fundamental limitations of the architecture of the Internet from a security perspective imply that there is no mechanism for a security system to prevent actions consisting of nodes attacking other nodes in the Internet. Attacks that are launched cannot be stopped before they arrive at their destination. There is also no process that enables removal or elimination of the originators of an attack. Security systems do not have access to nodes of the Internet except as they voluntarily participate in security actions by subscribing to security communities or services. Thus, preventive action or removal is only possible if the originating node voluntarily participates in a security action. Without such participation, the best that can be done is to protect from attack at its destination. This implies that an intentional attacker can gain strength and evolve increasingly sophisticated attacks. Attacks can be focused on particular nodes, either because of their vulnerability or value. In contrast, to be effective, security must defend all parts of the system it wishes to protect.\n\nIn order to develop an effective collective security system similar to the immune or human security systems, substantial architectural changes must be implemented. Collective security preventing attacks would require that the routers of the Internet themselves would need to have protocols that allow refusal of transmission based upon content or extrinsic information such as point of origin. The routers of the Internet serve as the transmission medium for the nodes of the Internet. This corresponds to the intercellular fluids, including the blood, of physiological systems, the primary locus of immune system activity. Such an approach was implemented against spam transmission early in the history of DNSRBLs ; however, it appears to be abandoned. A router based security system would curtail the \"right of transmission,\" which may be considered fundamental in discussions of Freedom of speech.\n\nAbsent a router based security system, the second alternative is to enable automatic transmission of security software among all terminal (non-router) nodes of the internet. This would enable rapid and pervasive distribution throughout the system. This is a similar propagation to that of viruses and other malware. Such automated transmission might be considered to be less desirable than router based security, as it involves partial loss of control by owners of the activities on their computers in favor of security operations. Corresponding software capabilities exist in peer-to-peer systems, and in existing voluntary security communities.\n\nThus far we have not discussed the use of human legal systems to pursue human originators of malware and spam. In this regard there are difficulties inherent in international law for pursuing such attacks as crime. Perhaps more critically, the different domains and time scales of Internet activity and human legal activity suggest that this approach will be difficult to utilize to any significant effect. Criminal prosecution is a high cost and time effort that can be effective in disrupting non-normative activities but not in curtailing widespread actions. Indeed, the existing success in prosecuting Internet crime is limited.\n\n# Conclusions\n\nWe performed an analysis of principles of security using the immune system as a model that provides insight into the functioning of any security system. 
We identified central functional attributes of a security system. However, we found that these properties cannot be effectively implemented in cyber security because of current limitations of the Internet structure. This is not a limitation of the security principles but rather of the architecture of the Internet.\n\nWithin the current limitations on security systems, security is largely in the hands of individual nodes. Improving the resistance of individual nodes to attack is critical because a compromised node will be the source of attacks on other nodes. The main source of weakness in the system is the vulnerability of operating systems on individual computers. Vulnerable operating systems should be avoided not just for the security of the individual node but for the security of the system as a whole.\n\nA focus on individual nodes, however, will miss the essential nature of a collective system and its vulnerabilities that arise due to rapid communication. We therefore recommend that major modifications be considered that allow security to be performed within the router systems of the Internet rather than in the end nodes. This would entail router protocols that can be used to reject messages based upon content, with distributed detection systems to identify which content corresponds to harmful acts. Such a security system must be carefully designed to avoid harm to self through blocking valid Internet traffic. Indeed, it is the fear of such blocking that has limited the implementation of such a system. The use of principles of security obtained from analysis of other security systems, specifically the immune system, can provide guidance to enable effective processes that minimally impede or interfere with valid traffic. To this end we summarize our findings on immune system security.\n\nWe found that the three subsystems of the immune system are designed to address challenges at three scales. The largest scale is the separation of internal and external domains, i.e. the skin, as well as other partitions of the system by membranes to separate different security domains. The intermediate scale corresponds to manifest system damage, including damage to the boundaries or tissues, that requires repair. The finest scale and highest complexity system involves responding to distributed collections of molecules or cells that are difficult to identify in the context of the complex physiology of the body, and are often able to replicate to create larger scale damage, e.g. viruses and bacteria.\n\nThe most specific and distributed system for security involves a remarkable level of emphasis on detection. Detection can be described in a general fashion that is relevant to security systems at other levels of organization, including human societal and cyber security. Detection involves a standard set of templates that are able to characterize potential invaders sufficiently broadly, but with specificity sufficient to distinguish adversaries from self. Once detection occurs, multiple processes are used to enhance the ability to detect the adversaries rapidly and in small numbers.\n\nThe detection process is intimately linked to action even though different agents perform detection and action. The interaction of action agents and detection agents is a local process involving identifying individual elements as adversaries and mounting a proportionate response. 
These local interactions are carefully designed to avoid self-inflicted damage.\n\nAll of the immune system actions utilize widespread communication and transportation to transport the detection mechanisms throughout the system, to recruit additional defenders, and to communicate ongoing improvements in detection.\n\nA review of the current major systems for cyber security suggests that there are aspects of cyber security systems that correspond to the immune system. However, the fundamental differences in architecture prevent communication and refinement of responses to adversaries throughout the system. These constraints also severely limit the possible counter attacks, including the ability to block sources of attacks at the points of origin. With such limitations, overall success in suppressing the large numbers of cyber attacks will be limited. Since the transportation and communication systems can be exploited by attackers to achieve widespread damage, they must also be used as part of a successful response by defenders.\n\n# Appendix: Detection and Response in the Immune System\n\nIn this appendix we provide details about the immune system response to identify the mechanisms by which it provides a capacity for detection and for corrective action. These are key aspects of immune system response but are generally not currently applicable to cyber security because of the inherent architecture of the Internet. As explained in the main paper, should the structure of the Internet be modified in a manner that allows for improved security, these mechanisms can provide guidance for how to proceed.\n\nThe adaptive immune system has a high degree of specificity: a \"non-self\" antigen (a signature of an adversary) is recognized and responses are tailored to that particular pathogen (adversary). We will show how the biological system identifies adversary signatures, transmits information to agents who can take action, determines which actions are appropriate, and how memory functions.\n\nThe cells of the adaptive immune system are known as lymphocytes, which are special types of white blood cells (*leukocytes*). The two principal categories of lymphocyte, *B cells* and *T cells,* are further divided into subcategories as explained below. Both derive from stem cells found in bone marrow, the *hematopoietic* or blood-producing cells.\n\nB cells have a primary role in identification of adversaries, and T\u00a0cells are the primary agents of action in response to the adversaries. The tight coupling of identification and action results in mutual regulation of activity by B and T\u00a0cells. When detection occurs, B\u00a0cells trigger T\u00a0cell response, and T\u00a0cells increase the action of B\u00a0cells to accelerate detection as necessary.\n\n## Detection mechanism\n\nDetection starts with pattern matching between a detection template and entities in the environment that might be characterized as adversaries. One type of cell, B\u00a0cells, and one type of molecule, antibodies, are the biological agents primarily responsible for detection of threats.\n\nThe primary template in the adaptive immune system is the antibody. The antibody is a molecule that binds to anything whose shape matches (complements) the antibody binding site. Antibodies can be found in two primary forms. First, as part of a cell, where it is attached on the surface membrane to the cell signaling system so that a match between the antibody and something in the environment triggers a cell response. 
Second, antibodies can be released into the blood and when floating around attach themselves to matching entities.\n\nThe key to the use of the antibody as a pattern matching template is the possibility of generating many different shapes of the molecule binding site. This ability arises because the antibody is formed of several parts, the parts can be combined together in flexible ways, and the parts themselves can be varied. An antibody is comprised of four peptide chains arranged to form a Y-shape. Each of the Y-shape's two \"arms\" has a variable region.\n\nThe matching between an antibody through binding to another molecule (called an antigen) occurs at the molecular level: the shape of an antibody is complementary to that of the antigen to which it binds. More precisely, it will bind to a part of the antigen called the *epitope,* typically about 600 \u00c5$^2$ in area.\n\nAntibodies are generated internally by B\u00a0cells. The mechanisms for their manufacture are through the read-out of genetic templates for each of the components and their combination into a single molecule. B\u00a0cells also are designed to carry the antibodies they generate on surface membrane receptors as a detection mode. When the antibody binds to something in the environment the B\u00a0cell can change its behavior accordingly.\n\nA B\u00a0cell identifies a threat when the antibodies present on its surface bind to a specific foreign antigen. B\u00a0cells that have identified an adversary by binding to it perform a series of actions to improve future detection of similar adversaries and initiate response. The actions they take to initiate response will be discussed in the next section.\n\n## Detection templates\n\nB cells determine which templates to use for detection of invaders through a process which requires multiple steps, each of which is central to the functioning of the immune system.\n\nThe first step is that multiple templates can be generated by cellular mechanisms. By recombining genetic elements from a \"toolbox\" of short sequences, a large volume in the shape space of all possible antigens can be covered.\n\nThe second is the process of generating the set of templates that are found in the body as B\u00a0cells are formed, mostly in bone marrow. B\u00a0cells are produced from a type of stem cell. They progress through several developmental stages marked by changes in the genetic sequences which define the active sites of their antibody molecules. This variation during their development gives rise to a remarkable diversity of templates associated with distinct B\u00a0cells. This large array of prototype templates don't exhaustively cover the set of possibilities, but are widely dispersed throughout the set of possibilities.\n\nThe third is eliminating templates that correspond to self molecules. Mounting an immune response to the body's own molecules and cells is extremely undesirable. The body avoids this (not always perfectly) by inactivating lymphocytes which happen to be self-reactive after they are generated. During the last stages of their maturation, B cells are presented with self molecules. B cells which react to these molecules are triggered into programmed cell death (apoptosis) or modifying their receptors. This process occurs in part after developing B cells move to the spleen, where they complete their maturation process.\n\nThe fourth is enhancing the presence of B\u00a0cells whose antibodies have responded in the past to antigens. This serves as a form of memory. 
Cells that have not been exposed to antigen develop into different forms once they have encountered their target antigen. Some become *effector cells,* which go on to play an active role in immune response; others become *memory cells,* which do not immediately participate in anti-pathogen defense but are sensitized to an antigen so that a second encounter will stimulate them to become effector cells. Over the short term their stimulation by antigens that are present in the current attack contributes to the strength of the response. Memory cells are much longer-lived than effector cells, persisting sometimes until the death of the animal instead of dying within days, so they can facilitate rapid immune responses to threats which have been encountered before.\n\nThe fifth is the enhancement of templates matching to current invaders by the process of evolution, discussed in the next section.\n\nThe sixth is the communication between B\u00a0cells that enables multiple distinct antibodies to be formed to the same adversary. This is discussed below in the section on B-cell to B-cell communications.\n\nThe seventh is the use of indicators of cellular damage that may reduce the threshold for detection or trigger it.\n\n## Evolution of detection\n\nWhen a body is attacked by a reproducing agent, such as a virus or a bacterium, the rate of detection of that invader is critical to the success of the response. Detection of even individual ones is necessary before they can reproduce, requiring remarkably fast and pervasive detection. The detection can be enhanced if the detection template is improved to be a better match to the invader. This is accomplished by a process of progressive improvement through selecting improved versions, i.e. evolution.\n\nThe evolutionary process occurs primarily in specific organs of the body called lymph nodes. When B\u00a0cells detect an invader with their antibodies, they take the piece of the invader that binds to the antibody. They travel to a lymph node and place the antigen in the wall of the lymph node. B\u00a0cells reproduce very rapidly in a lymph node. In doing so they vary by high rates of mutation their own DNA which codes for the antibody they produce. They use their antibody to test its binding to the antigens in the wall of the lymph node. The less successful ones rapidly die by programmed cell death. The ones that are most successful, i.e. the best detectors, are sent out into the blood stream to engage with the adversary.\n\nAs B\u00a0cells bring pieces of the invaders into the lymph nodes, and the process of selecting improved binders continues, the B\u00a0cells released back into the blood stream are continually improved in their ability to detect the invader. This leads to a dramatic change in the sensitivity of the immune system to that invader, enabling the system to completely eliminate the invader.\n\n## B-cell to B-cell communication: Improving detection\n\nOnce an adversary has been detected, identifying multiple ways to detect it both increases the ability of detection and reduces the possibility of an adversary acting to avoid detection by modifying their structure and behavior.\n\nB cells that have identified an invader by binding to it, use this detection to enhance the ability of other cells to identify the invader. The method for doing this is to make less specific parts of the invader to which other B\u00a0cells may be able to bind. 
The B\u00a0cell breaks parts of the invader into smaller components and displays these components on the B\u00a0cell surface. Other B\u00a0cells then bind to these molecules, triggering their own response systems. This creates additional methods for invader detection beyond the original one detected by the first B\u00a0cell.\n\nSince adversaries are not individual molecules or cells, but rather are replicating viruses and bacteria, the immune system utilizes massive parallelism: many types of pattern detection templates (antibodies) can be tested simultaneously across the system. This enables using multiple methods to detect the same type of adversary.\n\nThe communication by B\u00a0cells to other B\u00a0cells in this way is a key form of cooperation between cells of the adaptive immune system. The network of interactions also includes communication from B\u00a0cells to T\u00a0cells, and reciprocally from T\u00a0cells to B\u00a0cells.\n\n## B-cell to T-cell communication and intelligence\n\nCell to cell communication enables B\u00a0cells to communicate the discovery of antibodies to the cells responsible for action \u2014 T\u00a0cells. Cell to cell communication is also important for information gathering by T\u00a0cells from non-immune cells \u2014 the analog of human intelligence.\n\nT\u00a0cells use a particular \"secure\" communication channel to receive messages from B\u00a0cells and other cells of the body. The security of the communication is provided by MHC molecules. These are a diverse set of molecules that are highly specific to each individual. The specific nature of these molecules is maintained by high genetic variability of the MHC molecules over evolutionary time and within the human population at any one time. Secure communications are important to prevent invaders from \"spoofing\" messages to the T\u00a0cells.\n\nWhen a B\u00a0cell communicates to T\u00a0cells, the \"information\" is in the form of a molecular fragment on the surface of the B\u00a0cell bound to an MHC molecule. T\u00a0cells have the ability to bind to and recognize the MHC molecule and the molecular fragment that is bound to it.\n\nAs described before, a B cell whose antibodies bind to something in the environment (the antigen) takes the antigen\/antibody complex and breaks it into small units, which are returned to the surface. At the surface the fragments are combined with an MHC molecule. The surface display of the MHC with the parts of the antigen\/antibody complex is then detected by T\u00a0cells in the vicinity, triggering T\u00a0cell action.\n\nSimilarly, any cell of the body may display MHC molecules with fragments of molecules that are found inside the cell. This enables T\u00a0cells to check for normalcy or cell infection (e.g. by viruses) from the surface display on a cell.\n\n## Action agents: T\u00a0cells and phagocytes\n\nThe primary agents responsible for action against threats in the immune system are T\u00a0cells. T\u00a0cells secrete molecular messengers which can either kill other cells or increase the growth rate of other cells.\n\nT\u00a0cells can secrete cytotoxins which can kill other cells. T\u00a0cells target foreign cells or self cells that have been infected by foreign agents such as viruses. T\u00a0cell action is triggered by their binding to complexes of antibody and MHC molecule, i.e. they are triggered to action by B\u00a0cell signaling.\n\nT\u00a0cells also are responsible for signaling B\u00a0cells into a more active state. 
This active state results in two effects. The first is rapid growth and proliferation. The second is the manufacturing of more antibodies and their release into the blood and other intercellular fluids so that they bind to antigens throughout the system and not just when a B\u00a0cell encounters them. This is a process that makes the detection itself a form of action. The binding of the antibodies can disable the pathogens directly or mark them for destruction. The former is accomplished by interfering with the surface molecules which pathogens use to infect cells or by binding to the toxins produced by bacteria.\n\nThe two different functions of T cells are divided between killer T cells and helper T cells. The dynamics of killer and helper T-cell response is regulated in part by their responsiveness to distinct MHC molecules.\n\nFinally, a method is necessary to clean up both the dead cells and molecules, and even live but marked cells, from the system. This is accomplished by phagocytes. Phagocytes, are cells that can engulf and use digestive juices to, in effect, eat other cells and molecules. Their action is triggered by the presence of the antibodies bound to molecules and cells after their release by B\u00a0cells.\n\n## Post response: Reducing the cohort and memory\n\nThe rapid growth of the number of B\u00a0cells, antigens and T\u00a0cells in response to an adversary also requires a mechanism to reduce that number when the response is successful. As the number of antigens declines, the short lifespan of the B\u00a0cells and the absence of triggers to produce additional ones leads to their decline. Additional triggers to accelerate this process may also take place but must be carefully designed to avoid exploitation.\n\nThe immune system is costly, in that providing an immunological defense against foreign pathogens requires resources \u2014 such as energy and nutrients \u2014 which are also needed for muscular and neural activities, reproduction, growth and other biological processes.\n\nStill, once a specific threat has been identified, it is more likely to reappear. This justifies retention of identification templates, in antigen and B\u00a0cell form, for future use. This retention accelerates the response when the same threat reappears.\n\nSome B\u00a0cells preserve the memory of prior responses by continuing to survive for extended periods of time after a response. Since some death is unavoidable, they also replicate without mutation to preserve their antibodies in their offspring. These \"Memory cells\" maintain the capacity to generate their specialized responses, so that a later infection by the same pathogen can be countered quickly.\n\nOrganisms without an adaptive immune system lack such immunological memory, whereby a pathogen can be \"remembered\" by its signature antigens.\n\n## Security failures\n\nThe immune system, like other systems, has modes of failure that demonstrate how its processes work and fail to work in addressing key challenges.\n\nThe first type of failure is when an invading organism cannot be overcome by the immune system. Bacteria are significant challenges to the immune system, and before antibiotics were available many people died due to failure of the immune system to respond sufficiently effectively to bacterial diseases. Such failures reflect the tight competitive balance between bacterial attackers and immune system response. 
When the immune system is weakened due to lack of nutrition, compound diseases or particularly effective pathogenic attackers the immune system response may not be sufficient to prevent growth and replication of the bacteria.\n\nThe second type of failure is auto-immune disease. This is a failure to distinguish self from non-self which causes the immune system to attack self cells. This is most common for the case of some special types of cells, such as insulin-producing cells. It is believed that the small number of such cells leads to their vulnerability as the immune system may not sufficiently be exposed to these cells and therefore is more likely to identify them as \"other\" and attack them, thereby giving rise to some types of diabetes.\n\nThe significance of both of these failure modes for other forms of security is that even with the best system possible, success is not guaranteed.","meta":{"dup_signals":{"dup_doc_count":11,"dup_dump_count":2,"dup_details":{"curated_sources":2,"unknown":9}},"filename":"out\/1303.2682_extract_comp-immune15.tex.md"},"subset":"arxiv"} +{"text":"abstract: I argue that the current financial crisis highlights the crucial need of a change of mindset in economics and financial engineering, that should move away from dogmatic axioms and focus more on data, orders of magnitudes, and plausible, albeit non rigorous, arguments. An edited version of this essay appeared in Nature.\nauthor: JP Bouchaud, Science & Finance, Capital Fund Management, \n6 Bd Haussmann, 75009 Paris, France\ntitle: Economics needs a scientific revolution\n\nCompared to physics, it seems fair to say that the quantitative success of the economic sciences is disappointing. Rockets fly to the moon, energy is extracted from minute changes of atomic mass without major havoc, global positioning satellites help millions of people to find their way home. What is the flagship achievement of economics, apart from its recurrent inability to predict and avert crises, including the current worldwide credit crunch?\n\nWhy is this so? Of course, modelling the madness of people is more difficult than the motion of planets, as Newton once said. But the goal here is to describe the behaviour of large populations, for which statistical regularities should emerge, just as the law of ideal gases emerge from the incredibly chaotic motion of individual molecules. To me, the crucial difference between physical sciences and economics or financial mathematics is rather the relative role of concepts, equations and empirical data. Classical economics is built on very strong assumptions that quickly become axioms: the rationality of economic agents, the 'invisible hand' and market efficiency, etc. An economist once told me, to my bewilderment: *These concepts are so strong that they supersede any empirical observation*. As Robert Nelson argued in his book, *Economics as Religion*, the marketplace has been deified.\n\nPhysicists, on the other hand, have learned to be suspicious of axioms and models. If empirical observation is incompatible with the model, the model must be trashed or amended, even if it is conceptually beautiful or mathematically convenient. So many accepted ideas have been proven wrong in the history of physics that physicists have grown to be critical and queasy about their own models. 
Unfortunately, such healthy scientific revolutions have not yet taken hold in economics, where ideas have solidified into dogmas that obsess academics as well as decision-makers high up in government agencies and financial institutions. These dogmas are perpetuated through the education system: teaching reality, with all its subtleties and exceptions, is much harder than teaching a beautiful, consistent formula. Students do not question theorems they can use without thinking. Though scores of physicists have been recruited by financial institutions over the last few decades, these physicists seem to have forgotten the methodology of natural sciences as they absorbed and regurgitated the existing economic lore, with no time or liberty to question its foundations.\n\nThe supposed omniscience and perfect efficacy of a free market stems from economic work in the 50's and 60's, which with hindsight looks more like propaganda against communism than a plausible scientific description. In reality, markets are not efficient, humans tend to be over-focused in the short-term and blind in the long-term, and errors get amplified through social pressure and herding, ultimately leading to collective irrationality, panic and crashes. Free markets are wild markets. It is foolish to believe that the market can impose its own self-discipline, as was promoted by the US Securities and Exchange Commission in 2004 when it allowed banks to pile up new debt.\n\nReliance on models based on incorrect axioms has clear and large effects. The 'Black-Scholes model' was invented in 1973 to price options assuming that price changes have a Gaussian distribution, i.e. the probability of extreme events is deemed negligible. Twenty years ago, unwarranted use of the model to hedge the downside risk on stock markets spiraled into the October 1987 crash: a -23% drop in a single day, dwarfing the recent hiccups of the markets. Ironically, it is the very use of the crash-free Black-Scholes model that destabilized the market! This time around, the problem lay in part in the development of structured financial products that packaged sub-prime risk into seemingly respectable high-yield investments. The models used to price them were fundamentally flawed: they underestimated the probability that multiple borrowers would default on their loans simultaneously. In other words, these models again neglected the very possibility of a global crisis, even as they contributed to triggering one. The financial engineers who developed these models did not even realize that they helped the credit mongers of the financial industry to smuggle their products worldwide \u2013 they were not trained to decipher what their assumptions really meant.\n\nSurprisingly, there is no framework in classical economics to understand 'wild' markets, even though their existence is so obvious to the layman. Physics, on the other hand, has developed several models allowing one to understand how small perturbations can lead to wild effects. The theory of complexity, developed in the physics literature over the last thirty years, shows that although a system may have an optimum state (such as a state of lowest energy, for example), it is sometimes so hard to identify that the system in fact never settles there. This optimal solution is not only elusive, it is also hyper-fragile to small changes in the environment, and therefore often irrelevant to understanding what is going on. 
There are good reasons to believe that this complexity paradigm should apply to economic systems in general and financial markets in particular. Simple ideas of equilibrium and linearity (the assumption that small actions produce small effects) do not work. We need to break away from classical economics and develop altogether new tools, as attempted in a still patchy and disorganized way by 'behavioral' economists and 'econophysicists'. But their fringe endeavour is not taken seriously by mainstream economics.\n\nWhile work is done to improve models, regulation also needs to improve. Innovations in financial products should be scrutinized, crash tested against extreme scenarios and approved by independent agencies, just as we have done with other potentially lethal industries (chemical, pharmaceutical, aerospace, nuclear energy, etc.). In view of the present mayhem spilling over from the financial industry into every day life, a parallel with these other dangerous human activities seems relevant.\n\nMost of all, there is a crucial need to change the mindset of those working in economics and financial engineering. They need to move away from what Richard Feynman called *Cargo Cult Science*: a science that follows all the apparent precepts and forms of scientific investigation, while still missing something essential. An overly formal and dogmatic education in the economic sciences and financial mathematics are part of the problem. Economic curriculums need to include more natural science. The prerequisites for more stability in the long run are the development of a more pragmatic and realistic representation of what is going on in financial markets, and to focus on data, which should always supersede perfect equations and aesthetic axioms.","meta":{"dup_signals":{"dup_doc_count":25,"dup_dump_count":2,"dup_details":{"curated_sources":2,"unknown":23}},"filename":"out\/0810.5306.tex.md"},"subset":"arxiv"} +{"text":"abstract: In this paper, we explore and compare multiple solutions to the problem of data augmentation in image classification. Previous work has demonstrated the effectiveness of data augmentation through simple techniques, such as cropping, rotating, and flipping input images. We artificially constrain our access to data to a small subset of the ImageNet dataset, and compare each data augmentation technique in turn. One of the more successful data augmentations strategies is the traditional transformations mentioned above. We also experiment with GANs to generate images of different styles. Finally, we propose a method to allow a neural net to learn augmentations that best improve the classifier, which we call neural augmentation. We discuss the successes and shortcomings of this method on various datasets.\nauthor: Jason Wang \nStanford University \n450 Serra Mall \n`firstname.lastname@example.com`; Luis Perez \nStanford University \n450 Serra Mall \n`email@example.com`\nbibliography: egbib.bib\ntitle: The Effectiveness of Data Augmentation in Image Classification using Deep Learning\n\n# Introduction\n\nWe propose exploring the problem of data augmentation for image and video classification, and evaluating different techniques. It is common knowledge that the more data an ML algorithm has access to, the more effective it can be. Even when the data is of lower quality, algorithms can actually perform better, as long as useful data can be extracted by the model from the original data set. 
For example, text-to-speech and text-based models have improved significantly due to the release of a trillion-word corpus by Google. This result is despite the fact that the data is collected from unfiltered Web pages and contains many errors. With such large and unstructured data sets, however, the task becomes one of finding structure within a sea of unstructured data. However, alternative approaches exist. Rather than starting with an extremely large corpus of unstructured and unlabeled data, can we instead take a small, curated corpus of structured data and augment it in a way that increases the performance of models trained on it? This approach has proven effective in multiple problems. Both data augmentation guided by expert knowledge and more generic image augmentation have been shown to be effective in image classification. \nThe motivation for this problem is both broad and specific. Specialized image and video classification tasks often have insufficient data. This is particularly true in the medical industry, where access to data is heavily protected due to privacy concerns. Important tasks such as classifying cancer types are hindered by this lack of data. Techniques have been developed which combine expert domain knowledge with pre-trained models. Similarly, small players in the AI industry often lack access to significant amounts of data. At the end of the day, we've realized that a large limiting factor for most projects is access to reliable data, and as such, we explore the effectiveness of distinct data augmentation techniques in image classification tasks. \nThe datasets we examine are the tiny-imagenet-200 data and MNIST. Tiny-imagenet-200 consists of 100k training, 10k validation, and 10k test images of dimensions 64x64x3. There are a total of 500 images per class with 200 distinct classes. MNIST consists of 60k handwritten digits in the training set and 10k in the test set, in gray-scale, with 10 classes and image dimensions of 28x28x1. To evaluate the effectiveness of augmentation techniques, we restrict our data to two classes and build convolutional neural net classifiers to correctly guess the class. \nIn particular, we will train our own small net to perform a rudimentary classification. We will then proceed to use typical data augmentation techniques, and retrain our models. Next, we will make use of CycleGAN to augment our data by transferring styles from images in the dataset to a fixed predetermined image, such as a Night\/Day theme or Winter\/Summer. Finally, we explore and propose a different kind of augmentation in which we combine neural nets that transfer style and classify, so that instead of standard augmentation tricks, the neural net learns the augmentations that best reduce classification loss. For all of the above, we will measure classification performance on the validation dataset as the metric to compare these augmentation strategies.\n\n# Related Work\n\nThis section provides a brief review of past work that has augmented data to improve image classifier performance. The problem with small datasets is that models trained with them do not generalize well to data from the validation and test sets. Hence, these models suffer from the problem of over-fitting. To reduce overfitting, several methods have been proposed. The simplest could be to add a regularization term on the norm of the weights. Another popular technique is dropout. Dropout works by probabilistically removing a neuron from designated layers during training or by dropping certain connections. 
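As a generic illustration of these last two ideas (this snippet is ours, not taken from any of the works cited here; the layer sizes, penalty strength, and dropout rate are arbitrary assumptions), both a weight-norm penalty and dropout can be added to a Keras model in a few lines:

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

# A toy fully connected block showing the two regularizers discussed above:
# an L2 penalty on the weight norm, and dropout applied during training.
model = tf.keras.Sequential([
    layers.Dense(128, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-4)),  # weight-norm penalty
    layers.Dropout(0.5),   # probabilistically drops neurons while training
    layers.Dense(10),
])
```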
Another popular technique is batch normalization, which normalizes layers and allows us to train the normalization weights. Batch normalization can be applied to any layer within the net and hence is very effective, even when used in generative adversarial networks such as CycleGAN. Finally, transfer learning is a technique in which we take pre-trained weights of a neural net trained on some similar or more comprehensive data and fine-tune certain parameters to best solve a more specific problem. \nData augmentation is another way we can reduce overfitting on models, where we increase the amount of training data using information only in our training data. The field of data augmentation is not new, and in fact, various data augmentation techniques have been applied to specific problems. The main techniques fall under the category of *data warping*, an approach which seeks to directly augment the input data to the model in *data space*. The idea can be traced back to augmentation performed on the MNIST set. \nA very generic and accepted current practice for augmenting image data is to perform geometric and color augmentations, such as reflecting the image, cropping and translating the image, and changing the color palette of the image. All of these transformations are affine transformations of the original image that take the form: $$y = Wx + b$$\n\nThe idea has been carried further in later work, where an error rate of $0.35\\%$ was achieved by generating new training samples using data augmentation techniques at each layer of a deep network. Specifically, digit data was augmented with elastic deformations, in addition to the typical affine transformations. Furthermore, data augmentation has found applicability in areas outside simply creating more data. It has been shown to be helpful in generalizing from computer models to real-world tasks. \nGenerative Adversarial Nets (GANs) have been a powerful technique to perform unsupervised generation of new images for training. They have also proven extremely effective in many data generation tasks, such as novel paragraph generation. By using a min-max strategy, one neural net successively generates better counterfeit samples from the original data distribution in order to fool the other net. The other net is then trained to better distinguish the counterfeits. GANs have been used for style transfer, such as transferring images from one setting to another (CycleGAN). These generated images could be used to train a car to drive at night or in the rain using only data collected on sunny days, for instance. Furthermore, GANs have been effective even with relatively small sets of data by applying transfer learning techniques. Additionally, they have been shown to be extremely good at augmenting data sets, for example by increasing the resolution of input images. \nFinally, we explore methods where we train the neural net to both augment and classify simultaneously. A similar approach has been tried before, though that approach learned different weights for combining already existing techniques. In our case, we can train a style transfer network to learn how to best generate data augmentations. The goal is not only to reduce over-fitting via augmentation but also to augment data in a way that best improves the classifier. These methods do not necessarily generate images that resemble the training set as techniques like affine transformations or GANs would. 
Therefore, this saves the effort of hand-crafting transformations, or of relating images generated by a method like GANs back to the original images.\n\n# Methods\n\nWe propose two different approaches to data augmentation. The first approach is to generate augmented data before training the classifier. For instance, we apply GANs and basic transformations to create a larger dataset. All images are fed into the net at training time; at test time, only the original images are used for validation. The second approach attempts to learn the augmentation through a prepended neural net. At training time, this neural net takes in two random images from the training set and outputs a single \"image\" so that this image matches either in style or in context with a given image from the training set. This output, which represents an augmented image produced by the network, is fed into the second, classifying network along with the original training data. The training loss is then back-propagated to train the augmenting layers of the network as well as the classification layers of the network. At test time, images from the validation or test set are run through only the classification network. The motivation is to train a model to identify the best augmentations for a given dataset. The remainder of this section describes in detail the data augmentation techniques we tried. \n\n## Traditional Transformations\n\nTraditional transformations consist of using a combination of affine transformations to manipulate the training data. For each input image, we generate a \"duplicate\" image that is shifted, zoomed in\/out, rotated, flipped, distorted, or shaded with a hue. Both image and duplicate are fed into the neural net. For a dataset of size $N$, we generate a dataset of size $2N$.\n\n![image](simpleTransform.png) Figure I: Traditional Transformations\n\n## Generative Adversarial Networks\n\nFor each input image, we select a style image from a subset of 6 different styles: Cezanne, Enhance, Monet, Ukiyoe, Van Gogh and Winter. A styled transformation of the original image is generated. Both original and styled image are fed to train the net. More detail about the GANs and style transfer can be found in the cited paper.\n\n![image](gans) Figure II: Style Transformations via GANs\n\n## Learning the Augmentation\n\nDuring the training phase, there are two parts to the network. The augmentation network takes in two images from the same class as the input image and returns a layer the same size as a single image. This layer is treated as an \"augmented\" image. The augmented image as well as the original input image are then passed into the second network, the classification network. The classification loss at the end of the network is a cross entropy loss on the sigmoids of the class scores. An additional loss is computed at the end of the augmentation network to regulate how similar the augmented image should be to the input image. The overall loss is a weighted sum of these two losses. We try three different approaches:\n\n1. Content loss\n\n2. Style loss via gram matrix\n\n3. No loss is computed at this layer\n\nMore details about the architecture of the layers will be described in the Experiments section. We implement a small 5-layer CNN to perform augmentation. The classifier is a small 3-layer net with batch normalization and pooling, followed by 2 fully connected layers with dropout. This is similar to VGG16 in structure, but smaller in the interest of faster training for evaluation. 
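As a concrete illustration of the traditional transformations described above, the following is a minimal sketch of the duplicate-generation step. It is our own illustration rather than the exact pipeline used here, and the flip probability, shift range, and brightness factor are arbitrary assumptions:

```python
import numpy as np

def traditional_augment(image, rng=np.random):
    """Return a randomly flipped / shifted / shaded copy of an HxWx3 uint8 image."""
    out = image.copy()
    if rng.rand() < 0.5:                    # random horizontal flip
        out = np.fliplr(out)
    dy, dx = rng.randint(-8, 9, size=2)     # random shift of up to 8 pixels
    out = np.roll(out, shift=(dy, dx), axis=(0, 1))
    factor = rng.uniform(0.8, 1.2)          # crude stand-in for hue/shading changes
    out = np.clip(out.astype(np.float32) * factor, 0, 255).astype(np.uint8)
    return out

def augment_dataset(images):
    """For a dataset of size N, return the 2N-sized set of originals plus duplicates."""
    duplicates = np.stack([traditional_augment(im) for im in images])
    return np.concatenate([images, duplicates], axis=0)
```

Rotation, zooming, and distortion can be added in the same way with an image library such as PIL; only the original images are used at validation time, as described above.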
To be clear, we are not aiming for the best classifier. We are exploring how augmentation tricks improve classification accuracy, reduce over-fitting, and help the networks converge faster.\n\n# Datasets and Features\n\nThere are three sets of images that we experimented with. Each dataset is a small dataset with two classes. A small portion of the data is held aside for testing. The remaining images are divided by an 80:20 split between training and validation. \nOur first data set is taken from tiny-imagenet-200. We take 500 images of dogs and 500 images of cats. 400 images from each class are allocated to the training set. The remaining 100 in each class form the validation set. The images are 64x64x3. RGB values are also normalized for each color in the preprocessing step. \nThe second data set is also taken from tiny-imagenet-200, except we replace cats with goldfish. The reason for this change is that goldfish look very different from dogs, whereas cats are visually very similar. Hence CNNs tend to have a harder time distinguishing cats. Finally, cats and dogs have similar styles, whereas images of goldfish tend to have very bright orange styles. \nLastly, the final dataset is 2k images from MNIST, 1000 from each class. We perform the task of distinguishing 0's from 8's. MNIST images are 28x28x1 and are in gray scale. Again, images are normalized in the preprocessing step. MNIST is much more structured than imagenet, in that digits are always centered. The motivation is that MNIST provides a very simple dataset with simple images. Are patterns in the more complex images also observed in simpler images?\n\n# Experiments\n\nTo test the effectiveness of the various augmentations, we run 10 experiments on the image-net data. The results of the experiments are tabulated in the tables below. All experiments are run for 40 epochs at a learning rate of 0.0001 using Adam optimization. The highest validation accuracy over all epochs is reported as the best score. \nOnce we obtain the augmented images, we feed them into a neural net that does classification. We name this neural net SmallNet, since it only has 3 convolution layers paired with batch normalization and max pool layers, followed by 2 fully connected layers. The output is a matrix of scores for each class. The layers of the network are detailed below, although the specific net is not very important. Any net that can reliably predict the classes suffices. Hence, one can replace this net with VGG16, with fine-tuning on the fully connected and last convolution layers to allow for sufficient training. \n\n**SmallNet**\n\n1. Conv with 16 channels and 3x3 filters. Relu activations. \n2. Batch normalization. \n3. Max pooling with 2x2 filters and 2x2 stride. \n4. Conv with 32 channels and 3x3 filters. Relu activations. \n5. Conv with 32 channels and 3x3 filters. Relu activations. \n6. Batch normalization. \n7. Max pooling with 2x2 filters and 2x2 stride. \n8. Fully connected with output dimension 1024. Dropout. \n9. Fully connected layer with output dimension 2. \n\nAugmenting data via a neural net is achieved by concatenating two images of the same class to create an input 6 channels deep (2 if gray scale). The goal of this layer is to use a CNN to generate an image with the same height and width as the input and 3 channels deep. We can also add an additional loss term at the end of this layer that compares the output of the augmentation layers to a third image from the same class. 
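The SmallNet layer list above translates almost line by line into tf.keras. The sketch below is our rendering of that list; padding, the placement of activations, and the dropout rate are not specified in the text and are therefore assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_smallnet(input_shape=(64, 64, 3), num_classes=2):
    """Our reading of the SmallNet layer list; unspecified details are assumptions."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=input_shape),
        layers.Conv2D(16, 3, padding="same", activation="relu"),  # 1
        layers.BatchNormalization(),                               # 2
        layers.MaxPooling2D(pool_size=2, strides=2),               # 3
        layers.Conv2D(32, 3, padding="same", activation="relu"),   # 4
        layers.Conv2D(32, 3, padding="same", activation="relu"),   # 5
        layers.BatchNormalization(),                               # 6
        layers.MaxPooling2D(pool_size=2, strides=2),               # 7
        layers.Flatten(),
        layers.Dense(1024, activation="relu"),                     # 8 (fully connected)
        layers.Dropout(0.5),                                       # 8 (dropout)
        layers.Dense(num_classes),                                 # 9 (class scores)
    ])
```

For the MNIST experiments the same builder can be called with `input_shape=(28, 28, 1)`.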
Returning to the loss term just mentioned: in this arrangement, the augmented layer generates outputs that are similar to the third image, which acts as a regularizer. Regardless, the motivation behind this idea is that we can use paired data from a small data set to create new \"images\" for better training. A dataset of size $N$ can create $N^2$ pairs, a dramatic increase. The architecture of the augmentation network is detailed below.\n\n**Augmentation Network**\n\n1. Conv with 16 channels and 3x3 filters. Relu activations. \n2. Conv with 16 channels and 3x3 filters. Relu activations. \n3. Conv with 16 channels and 3x3 filters. Relu activations. \n4. Conv with 16 channels and 3x3 filters. Relu activations. \n5. Conv with 3 channels and 3x3 filters. \n\nAt training time, we generate a batch of images called the training batch. This batch is fed into SmallNet and gradients are back-propagated to help improve SmallNet. Subsequently, pairs of images are randomly sampled from the same class and fed into AugNet to generate an augmented image, which is passed into SmallNet. The weights of both neural nets are updated. At test time, images are fed into SmallNet, which does all the work to classify the image.\n\n![image](trainModel.png) Figure III: Training model\n\n![image](testModel.png) Figure IV: Testing\/Validation model\n\nTo determine the loss, we wanted a combination of a classification loss, $L_c$, and an augmentation loss, $L_a$. There are two augmentation losses we considered. The content loss is the mean squared error between the augmented image $A$ and the target image $T$, where $D$ is the side length of images $A$ and $T$: $$L_a^{content} = \\frac{1}{D^2} \\sum_{i,j}(A_{ij} - T_{ij})^2$$\n\nThe style loss is a content loss on the gram matrix of the augmented image $A$ and target image $T$. The gram matrix of a feature map $F$ is defined below. We apply the gram matrix to the raw images.\n\n$$G_{ij} = \\sum_k F_{ik} F_{jk}$$\n\nThen the loss is defined below, where $C$ is the number of channels.\n\n$$L_a^{style} = \\frac{1}{C^2} \\sum_{i,j}(G_{ij}^A - G_{ij}^T)^2$$\n\nFinally, we consider the case where no loss is computed. The classification loss is a multi-class cross entropy loss on the sigmoids of the scores produced by SmallNet. The final loss is a weighted sum of the two losses. Note that setting $\\beta = 0$ is equivalent to having no augmentation loss.\n\n$$\\alpha L_c + \\beta L_a$$\n\nModels are trained with a learning rate of 0.0001 using Adam optimization. TensorFlow on GPU was used to train the models, although they train reasonably fast on CPU as well (around 30 seconds per epoch).\n\n# Results\n\nThe first set of experiments involves classifying dogs vs cats. Several experiments were done to explore the effectiveness of augmentation techniques. \n1. EXPERIMENTS ON TRADITIONAL TRANSFORMATIONS\n\nAfter manually performing traditional augmentation on images from each class, we train SmallNet on all images. \n2. EXPERIMENTS ON GANS\n\nRandomly select a style to generate via GANs for each image and train SmallNet. \n3. EXPERIMENTS ON NEURAL NET AUGMENTATION\n\nFor each image, we select two random images from the same class and concatenate them so that the first 3 channels are image 1 and the last 3 channels are image 2. This is input into the Augmentation net, which returns an augmented image of size 64x64x3. Another random image is selected and an augmentation loss is computed. We explore content and style loss, explained in the Experiments section, between the randomly selected image and the output of the Augmentation net. 
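Putting the pieces of this section together, the sketch below shows one way to express the augmentation network, the two augmentation losses, and the combined loss in TensorFlow, together with a joint training step. It reflects our reading of the equations above, using the loss weights $\alpha = 0.75$ and $\beta = 0.25$ reported with the experiments; padding, the optimizer setup, and other unstated details are assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_augnet(height=64, width=64):
    """Our rendering of the 5-layer augmentation network: two stacked images
    (6 channels) go in, one 3-channel 'augmented' image comes out."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(height, width, 6)),
        layers.Conv2D(16, 3, padding="same", activation="relu"),
        layers.Conv2D(16, 3, padding="same", activation="relu"),
        layers.Conv2D(16, 3, padding="same", activation="relu"),
        layers.Conv2D(16, 3, padding="same", activation="relu"),
        layers.Conv2D(3, 3, padding="same"),
    ])

def content_loss(augmented, target):
    # Mean squared error between the augmented and target images (L_a^content).
    return tf.reduce_mean(tf.square(augmented - target))

def gram_matrix(images):
    # G_ij = sum_k F_ik F_jk on the raw images: flatten the spatial dimensions
    # so F has shape (batch, H*W, C), then contract over the pixel index.
    channels = images.shape[-1]
    f = tf.reshape(images, (tf.shape(images)[0], -1, channels))
    return tf.matmul(f, f, transpose_a=True)          # shape (batch, C, C)

def style_loss(augmented, target):
    # Mean squared difference of the gram matrices (the 1/C^2-normalized L_a^style).
    return tf.reduce_mean(tf.square(gram_matrix(augmented) - gram_matrix(target)))

def combined_loss(logits, labels, augmented, target, alpha=0.75, beta=0.25):
    # alpha * L_c + beta * L_a, with a sigmoid cross-entropy classification loss.
    l_c = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
        labels=tf.one_hot(labels, depth=2), logits=logits))
    l_a = content_loss(augmented, target)             # or style_loss, or 0.0
    return alpha * l_c + beta * l_a

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4)

@tf.function
def train_step(augnet, smallnet, img_a, img_b, target, labels):
    # img_a, img_b, target are batches of same-class images; the original
    # training batch is fed through SmallNet in a separate, ordinary step.
    with tf.GradientTape() as tape:
        augmented = augnet(tf.concat([img_a, img_b], axis=-1), training=True)
        logits = smallnet(augmented, training=True)
        loss = combined_loss(logits, labels, augmented, target)
    variables = augnet.trainable_variables + smallnet.trainable_variables
    optimizer.apply_gradients(zip(variables, tape.gradient(loss, variables)))
    return loss
```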
Both the original and the augmented image are separately used to train SmallNet, and a classification loss is computed. The overall loss is a linear combination of the two losses. We found that the exact ratio didn't matter, but we selected a weight of $\\beta = 0.25$ for the augmentation loss and $\\alpha = 0.75$ for the classification loss. We also explored not using any augmentation loss at all. \n4. SAME EXPERIMENTS ON DOG VS GOLDFISH\n\nThe 3 experiments above were replicated on goldfish vs dog classification. This classification problem is an easier one than dogs vs cats, since goldfish typically look distinctly different, with lots of orange color. We want to see if the augmentation strategies are robust to different data. \n5. CONTROL EXPERIMENT\n\nA possible explanation for any improved validation accuracy is that we used a more complicated net, which typically prevents overfitting and allows for finer training. To control against this, we input the same two images into the Augmentation Net. The result of the net and the input image are both used to train SmallNet. The effect is a proxy for training a large 10-layer net on a single image without augmentation (pairing of different images). The control experiment is performed on both dogs vs cats and dogs vs goldfish. \n6. EXPERIMENTS WITH MNIST\n\nWe train on MNIST data to explore neural augmentation strategies and look qualitatively at how data is augmented. These images should provide us with insight into how neural augmentation works with structured data, since digits are centered and relatively the same size. Imagenet data is unstructured and the key features of each image appear in many orientations and positions. Hence, loss functions such as style loss and content loss tend to produce \"mean\" or \"average\" images that are duller and noisy. Finally, because we're using greyscale data, the gram matrix wouldn't be meaningful, so we only experiment with the effects of content loss. \nThe results of the experiments are tabulated below. The best validation accuracy in 40 epochs is recorded. Table I shows all experiments on dog vs goldfish data. Table II shows the results of experiments on dog vs cat data. Table III explores neural augmentation on MNIST data. \n\n| Augmentation | Val. Acc. |\n|:----------------------|:-----------------|\n| None | 0.855 |\n| Traditional | 0.890 |\n| GANs | 0.865 |\n| Neural + No Loss | **0.915** |\n| Neural + Content Loss | 0.900 |\n| Neural + Style | 0.890 |\n| Control | 0.840 |\n\nTable I: Quantitative Results on Dogs vs Goldfish\n\n| Augmentation | Val. Acc. |\n|:----------------------|:-------------|\n| None | 0.705 |\n| Traditional | **0.775** |\n| GANs | 0.720 |\n| Neural + No Loss | 0.765 |\n| Neural + Content Loss | 0.770 |\n| Neural + Style | 0.740 |\n| Control | 0.710 |\n\nTable II: Quantitative Results on Dogs vs Cats\n\n| Augmentation | Val. Acc. |\n|:----------------------|:-----------------|\n| None | 0.972 |\n| Neural + No Loss | **0.975** |\n| Neural + Content Loss | 0.968 |\n\nTable III: MNIST 0's and 8's\n\nNeural augmentation performs remarkably better than no augmentation. In the dogs vs goldfish problem, neural augmentation performs the best, at 91.5% compared to 85.5% with no augmentation. In the dogs vs cats problem, neural augmentation performed second best, at 77.0% compared to 70.5%. 
While the traditional augmentation performs almost as well at a smaller time expense, this doesn't preclude us from combining different augmentation strategies. A strategy that first performs traditional augmentations and then pairs up data for neural augmentation could potentially beat all the experiments we tested. \nOnly the control (see experiment 5) does worse than no augmentation. Perhaps we are dealing with a larger net and are using the wrong learning rate. This hypothesis is supported by the inability of the net to converge on the training data (the loss doesn't decrease to zero and\/or the net cannot perfectly classify the training data). The lack of improvement provides evidence that adding layers to a classifier trained on little data doesn't reduce overfitting or help the model generalize better. \nWe also note that neural nets with the various augmentation losses (or none) perform roughly the same. In fact, during training, the content loss and style loss barely decreased. The content loss went from about 1.6 to 1.3-1.5 after 30 epochs and never converged. The style loss hovered around 0.5. Assuming that we can actually minimize these losses is equivalent to claiming that we could create an oracle to perfectly recreate images of any dog from any pair of images of dogs. This is an outlandish task given just convolutional layers. That is not to say these losses are completely useless. They act like regularization terms, ensuring that the Augmentation net doesn't generate images too different from the data in the training set. \nNeural augmentation has no effect on the MNIST dataset. We hypothesize that a simple CNN already performs so well on MNIST that neural augmentation provides no benefit. We also hypothesize that the digits are already so simple that combining features doesn't really add any additional information. Finally, there are so many variations among the digits that the augmented data don't provide \"new\" images that the network has not seen before. \nSome of the images generated by neural augmentation are quite remarkable. In most cases, the augmented images are a combination of the source images. The neural net picks out the golden bodies of the fish and merges them together. For instance, in sample II, the neural augmentation picks out the large goldfish in the second source image and the orange fish on the left in the first source image while smoothing out the background grass.\n\n![image](goldfish2) Figure V: Goldfish sample I\n\n![image](goldfish1) Figure VI: Goldfish sample II\n\nThe samples for dogs are quite similar. The augmented image picks up the characteristics of the second source image while preserving only the contours of the other dog. Features such as the contours of the nose and the legs of the other dog are somewhat visible.\n\n![image](dog1) Figure VII: Dog sample I\n\nHowever, not all augmented images have a clear visual meaning. Even though no defining shape of a dog is created in the following two images, the augmented images contain contours of ears and legs, which are defining characteristics of dogs.\n\n![image](dog2) Figure VIII: Dog sample II\n\n![image](dog4) Figure IX: Dog sample III\n\nIn all cases, it seems that the generated images that best improve performance have some form of regularization, so that the color is faded and the background noise is faded out. Regarding colors, orange background colors in dog images are always picked up in the background to contrast with images of goldfish. 
The next figure shows this property where the yellow wall paper is transformed into an orange hue. It's really fascinating that the despite the dogs' ears matching the tones of the background in both source images, the augmented image picks up on those details and colors them greenish in contrast to the orange background.\n\n![image](dog3) Figure X: Dog sample IV\n\nThe accuracy plot at each epoch shows that neural augmentation helps a little in preventing overfitting. For most of the first 20 epochs of training, the training accuracy with augmentation is slightly lower than the training accuracy without augmentation. This implies that learning augmentation helps with generalizing the classifier. A comparison of the losses would be more apt but not viable in this case since the various experiments use different loss functions.\n\n![image](loss1) Figure XI: Accuracy plots\n\n# Conclusion\/Future Work\n\nData augmentation has been shown to produce promising ways to increase the accuracy of classification tasks. While traditional augmentation is very effective alone, other techniques enabled by CycleGAN and other similar networks are promising. We experimented with our own way of combining training images allowing a neural net to learn augmentations that best improve the ability to correctly classify images. If given more time, we would like to explore more complex architecture and more varied datasets. To mimic industrial applications, using a VGG16 instead of SmallNet can help us determine if augmentation techniques are still helpful given complex enough networks that already deal with many overfitting and regularization problems. Finally, although GANs and neural augmentations do not perform much better than traditional augmentations and consume almost 3x the compute time or more, we can always combine data augmentation techniques. Perhaps a combination of traditional augmentation followed by neural augmentation further improves classification strength. \nGiven the plethora of data, we would expect that such data augmentation techniques might be used to benefit not only classification tasks lacking sufficient data, but also help improve the current state of the art algorithms for classification. Furthermore, the work can be applicable in more generic ways, as \"style\" transfer can be used to augment data in situations were the available data set is unbalanced. For example, it would be interesting to see if reinforcement learning techniques could benefit from similar data augmentation approaches. We would also like to explore the applicability of this technique to videos. Specifically, it is a well known challenge to collect video data in different conditions (night, rain, fog) which can be used to train self-driving vehicles. However, these are the exact situations under which safety is the most critical. Can our style transfer method be applied to daytime videos so we can generate night time driving conditions? Can this improve safety? If such methods are successful, then we can greatly reduce the difficulty of collecting sufficient data and replace them with augmentation techniques, which by comparison are much more simpler.","meta":{"dup_signals":{"dup_doc_count":12,"dup_dump_count":3,"dup_details":{"curated_sources":2,"2024-18":1,"unknown":9}},"filename":"out\/1712.04621_extract_deep_learning_augmentation.tex.md"},"subset":"arxiv"} +{"text":"abstract: Computers have profoundly changed the way scientific research is done. 
Whereas the importance of computers as research tools is evident to everyone, the impact of the digital revolution on the representation of scientific knowledge is not yet widely recognized. An ever increasing part of today's scientific knowledge is expressed, published, and archived exclusively in the form of software and electronic datasets. In this essay, I compare these digital scientific notations to the the traditional scientific notations that have been used for centuries, showing how the digital notations optimized for computerized processing are often an obstacle to scientific communication and to creative work by human scientists. I analyze the causes and propose guidelines for the design of more human-friendly digital scientific notations.\nauthor: Konrad Hinsen \nCentre de Biophysique Mol\u00e9culaire (UPR4301 CNRS) \nRue Charles Sadron, 45071 Orl\u00e9ans C\u00e9dex 2, France \nfirstname.lastname@example.com; \nSynchrotron SOLEIL, Division Exp\u00e9riences \nB.P. 48, 91192 Gif sur Yvette, France \ntitle: Scientific notations for the digital era\n\n*Note*: This article is [also available](http:\/\/www.sjscience.org\/article?id=527) in the [Self-Journal of Science](http:\/\/www.sjscience.org\/), where it is open for public discussion and review.\n\n# Introduction\n\nToday's computing culture is focused on results. Computers and software are seen primarily as tools that get a job done. They are judged by the utility of the results they produce, by the resources (mainly time and energy) they consume, and by the effort required for their construction and maintenance. In fact, we have the same utilitarian attitude towards computers and software as towards other technical artifacts such as refrigerators or airplanes.\n\nIn scientific research, however, the path that leads to a result is as important as the result itself. Drawing conclusions from an experimental measurement requires a good understanding of the experimental setup that was used to obtain the measurement. A scientist interpreting them must know the reliability and precision of the devices that were used, and be familiar with potential artifacts that could lead to misinterpretations. Likewise, the computational results obtained from scientific software can only be interpreted with a good understanding of what exactly the software does. Scientific software therefore has the same status in science as experimental setups and theoretical models.\n\nIn scientific discourse and in particular in the evaluation of a research publication, results are therefore scrutinized together with the path that lead to them. We expect experimentalists to explain the materials and methods they have used, and theoreticians to explain their reasoning in sufficient detail that their peers can understand it. We should thus treat computational science in the same way and require scientific software to be published and scrutinized in peer review. While publication of scientific software is slowly becoming common, peer review of this software remains exceptional. Its necessity is well recognized in principle, but the effort required for such a review is prohibitive. This is the most visible symptom of the problem that is the topic of this essay. 
More generally, this problem is that digital scientific knowledge is today expressed using notations such as programming languages, which are not suitable for communication between human scientists.\n\nIn the following, I will present a detailed analysis of this problem, and propose some general guidelines for improving the situation. The main audience is computational scientists, who have practical experience with doing science using computers but no formal training in computer science or in scientific epistemology. Readers with a computer science background may skim over much of the second part. Note that I will not propose *the* solution to the problem, nor even *a* solution. My goal is to convince computational scientists that there *is* a problem, and that it *can* be solved. Finding solutions that work well is likely to require many years and the participation of many people willing to test different ideas in practice.\n\nThe analysis that I present is applicable to all branches of science whose models are based on continuous mathematics, such as algebraic, differential, or integral equations. This includes almost all of physics and chemistry, a good part of biology and the quantitative social sciences, and all domains of applied research that build on foundations in physics and chemistry. Much of what I say also applies to models based on discrete mathematics, such as graphs or cellular automata, but I will not consider them for the sake of simplicity. The examples I will use for illustration reflect my own background in computational biophysics, but readers shouldn't find it difficult to substitute examples from their own field of work.\n\n## Outline\n\nI start by summarizing [the structure of scientific knowledge](#scientific-knowledge), explaining factual, procedural, and conceptual knowledge and why only the first two categories are used in computation. Next, I outline how scientific communication and the notations used for it [evolved in the course of history](#evolution). These two sections prepare the discussion of [digital scientific knowledge](#digital) and why we should care about it more than we do at the moment. This should provide sufficient motivation for the reader to work through the more technical [section on formal languages](#formal-languages), a well-known concept in computer science that unifies two categories that computational scientists tend to see as distinct: file formats for data and programming languages, the two dominant forms of digital scientific notation today.\n\nAfter this more theoretical part, I explain the importance of [composition of information items](#composition) using as an example the simulation of celestial mechanics, with an emphasis on the constraints on the [composition of digital knowledge](#composition-digital). My goal is to illustrate what one should be able to do with a proper digital scientific notation. I then compare to the [state of the art](#state-of-the-art) in computational science, pointing out how it is inadequate for the study of [complex systems](#complex-systems). One obstacle to improvement is a perceived dichotomy between [software and data](#software-data), which has its roots in computing technology but has no counterpart in the structure of scientific knowledge.\n\nAn important point that is often overlooked is the status of formal languages as the main [human-computer interface](#HCI) in computational science. Doing research is a different task from developing software, and requires a different interface. 
In particular, we should pay more attention to the difference between [human and computational semantics](#HCI-semantics) and to the need for simplicity and [flexibility](#flexibility) of a notation suitable for humans doing creative research. Moreover, a digital scientific notation must permit precise [references to the scientific record](#sr-references).\n\nIn the last part of this essay, I consider solution strategies for the problems that I have identified. I show two examples of how formal languages can be made [simple and flexible](#simple-and-flexible) while providing a straightforward mechanism for composition: [XML](#XML) with its namespace mechanism for composition, and the [Lisp](#lisp) family of programming languages with its macro system for creating small embedded formal languages. I conclude by proposing [design guidelines](#design-guidelines) for digital scientific notations.\n\n# The structure of scientific knowledge\n\nFor the following discussion of scientific notation, it is useful to classify scientific knowledge into three categories: factual, procedural, and conceptual knowledge. Factual knowledge consists of the kind of information one can record in tables, diagrams, or databases: the density of water, the names of the bones in the human body, the resolution of an instrument, etc. Procedural knowledge is about doing things, such as using a microscope or finding the integral of a function. Conceptual knowledge consists of principles, classifications, theories, and other means that people use to organize and reason about facts and actions.\n\nFactual and procedural knowledge relies on conceptual knowledge. A table listing the density of water at different temperatures refers to the concepts of density and temperature. Instructions for using a microscope refer to concepts such as sample or focus. Interpreting factual or procedural knowledge requires a prior knowledge of the underlying conceptual knowledge.\n\nConceptual knowledge has a hierarchical structure, with the definition of every concept referring to more fundamental concepts. This leads to the question of where this recursive process ends, i.e.\u00a0what the most fundamental concepts are. When considering human knowledge as a whole, this is a non-trivial problem in epistemology. For a discussion of scientific knowledge, and in particular for the present discussion of scientific notation, it is sufficient to consider the concepts of everyday life as a given base level.\n\nFactual and procedural knowledge often refer to each other. The statement \"The orbit of the Moon around the Earth is reproduced to precision A by solving Newton's equations for the solar system using numerical algorithm B and initial values C\" is factual knowledge, once specific A, B, and C are provided. But algorithm B is procedural knowledge, which in turn refers to some other factual knowledge, such as the masses of the Sun and its planets.\n\nA final missing piece is metadata. Every piece of factual and procedural knowledge comes with information attached to it that describes its provenance and known limits of validity. A table showing the density of water at different temperatures should state how, when, under which conditions, and by who the listed values were obtained. 
It should also provide an estimate of the values' accuracy.\n\nIn summary, the structure of scientific knowledge can be described as a web of factual and procedural knowledge items that refer to each other, and which are expressed in terms of concepts from the universe of conceptual knowledge. The latter consists of multiple layers, with concepts from everyday life at the bottom. Every layer refers to concepts from lower, more fundamental layers.\n\n# The evolution of scientific communication\n\nMost of scientific communication takes place through research articles, which are narratives that propose new factual and procedural knowledge, occasionally also new concepts, and try to convince the scientific community of the pertinence of this information. Over time, as a given subject area becomes better understood, the scientific community usually reaches a consensus about which concepts are the most useful for describing its phenomena. The knowledge from a large number of research articles is then distilled into review articles, monographs, and other reference works. The knowledge considered most fundamental ends up in textbooks for transmission to the next generation of scientists. Each unit of scientific communication is written for a specific audience, and relies on a stack of conceptual layers that this audience is expected to be familiar with.\n\nBefore the use of computers, scientific knowledge was mainly recorded on paper, using three forms of notation: written language, images, and tables. Written text combines plain language, domain-specific vocabulary, and shorthand notation such as mathematical formulas. Images include both drawings and observations captured in photographs, radiographs, etc. Tables represent datasets, which are most often numerical.\n\nThere is a close relation between the conceptual knowledge on which a narrative relies and the notation that it employs. Domain-specific vocabulary directly names relevant concepts. Shorthand notation replaces frequently used words and lengthy sentences that involve these concepts. For example, Newton's laws of motion are commonly written as\n\n$$F = m \\cdot a$$\n\nwhose full-length equivalent is \"The force acting on a point mass is equal to the product of its mass and its acceleration.\" Force, mass, and acceleration are concepts from mechanics and $F$, $m$, and $a$ are conventional shorthands for them. The symbols $=$ and $\\cdot$ are shorthands for the concepts of equality and product, both of which come from more fundamental conceptual layers in mathematics.\n\nThe standardization of scientific notation is variable and usually related to the stability of the concepts that it expresses. Well-established conceptual layers come with a consensus notation, whereas young conceptual layers can be expressed very differently by different authors. Scientists \"play around\" with both the concepts and the notations in rapidly evolving fields, before eventually settling on a consensus that has proven to work well enough. Even the most basic aspects of mathematical notation that we take for granted today were at some time the subject of substantial tinkering (1). Moreover, even a consensus notation is not completely rigid. Personal and disciplinary tastes and preferences are one cause of variation. As an example, there are several common notations for distinguishing vectors from scalars in geometry. Another cause is the limited number of concise names and labels. 
For example, the preference for one-letter names in mathematical formulas, combined with the useful convention of each letter having only one meaning in a given document, often imposes deviations from consensus naming schemes.\n\nThis pattern of a high variability during innovation phases giving way to consensus as a field or technology matures is ubiquitous in science and engineering. The time scale of the consolidation process is often decisive for reaching a satisfactory consensus. The lack of consensus in mature technology is felt as a nuisance by its users. A good example is the pointless diversity in chargers for mobile phones. On the other hand, premature consensus creates badly adapted technology that is difficult to get rid of. Computing technology is particularly affected by this problem. In fact, most of the standardized technology in computing \u2013 file formats, programming languages, file systems, Internet protocols, etc. \u2013 is no longer adequate for today's requirements. The reason is that the technical possibilities \u2013 and, as a consequence, user demands \u2013 evolve too fast for an orderly consensus formation, whose time scale is defined by human cognitive processes and social interactions that, unlike technological progress, have not seen any spectacular acceleration.\n\n# Digital scientific knowledge\n\nIn the context of computing, factual knowledge is stored in *datasets*, whereas procedural knowledge takes the form of *algorithms*. Conceptual knowledge is not affected by the transition from manual to mechanized computation. Like research articles and reference tables, datasets and algorithms implicitly refer to conceptual knowledge to give meaning to their contents. However, the concepts are not explicitly represented in the computer, because they are not required to perform a computation. Applying algorithms to datasets is a purely mechanical operation that does not require any knowledge or understanding of the underlying concepts. What *does* require an understanding of the concepts is the verification that a given computation is scientifically meaningful.\n\nIt is of course *possible* to store and process conceptual knowledge using computers, e.g.\u00a0in the form of [ontologies](http:\/\/en.wikipedia.org\/wiki\/Ontology_%28information_science%29), which represent conceptual knowledge as factual knowledge at a different semantic level. Such approaches are finding their place in scientific communication in the form of *semantic publishing* (2), whose goal is to make the scientific record machine-readable and thus accessible to automated analysis. However, performing a computation and managing information *about* this computation are different and independent operations, just like using a microscope is a different activity from researching the history of microscopy. I will come back to the role of digital scientific notations in semantic publishing [later](#sr-references).\n\nThis dissociation of mechanical computation from the conceptual knowledge base that defines its meaning has been recognized as a problem in various domains of digital knowledge management, for example in database design (3). The typical symptom is the existence of electronic datasets that nobody can interpret any more, because the original designers of the software and data formats did not document their work sufficiently well for their colleagues and successors. A frequent variant is people modifying software and data formats without updating the documentation. 
Every computational scientist has probably experienced the difficulties of dealing with datasets stored in undocumented formats, and with software whose inner workings are not described anywhere in an understandable form.\n\nThe most vicious manifestation of this problem relates to scientific software. Even when software is developed respecting the best practices of software engineering, it may nevertheless compute something other than what its users think it computes. Documentation can help to some degree by explaining the authors' intentions to the users, but there is no way to verify that the documentation is complete and accurate. The only way to make sure that users understand what a piece of software computes is to make the software's source code comprehensible to human readers. Today, most scientific software source code is unintelligible to its users, and sometimes it even becomes unintelligible to its developers over time.\n\nSome of the problems we are observing in computational science today are direct consequences of the fact that scientists have an insufficient understanding of the software they use. In particular, the field suffers from rampant software errors (4,5) and the near-universal non-reproducibility of computational results (6,7). The scientific community has failed so far to fully appreciate the double role of scientific software as a tool for performing computations and as a repository of scientific knowledge (8). It has uncritically adopted notations for digital knowledge that are not adapted to human communication. As a consequence, the all-important critical discourse that makes scientific research self-correcting in the long run does not adequately cover digital scientific knowledge.\n\n# Formal languages\n\nThe defining characteristic of digital scientific knowledge is the use of [*formal languages*](http:\/\/en.wikipedia.org\/wiki\/Template:Formal_languages_and_grammars), rather than the informal languages of human communication. The term \"formal language\" is commonly used in computer science, but in computational science we usually speak of \"data formats\", \"file formats\", and \"programming languages\", all of which are specific kinds of formal languages. In this section, I will give a minimal overview of the characteristics of formal languages, which is necessary for understanding their implications for digital scientific knowledge.\n\nAt the hardware level of a digital computer, a computation is a multi-step process that transforms an input bit sequence into an output bit sequence. Information processing by computers thus requires all data to be expressed as bit sequences. Dealing with bit sequences is, however, very inconvenient for humans. We therefore use data representations that are more suitable for human brains, but still exactly convertible from and to the bit sequences that are stored in a computer's memory. These representations are called formal languages. The definition of a formal language specifies precisely how some piece of information is encoded in sequences of bits. Many formal languages use text characters instead of bits for another level of convenience. Since the mapping from text characters to bit sequences is straightforward (the currently dominant mapping is called Unicode (9)), this makes little difference in practice.\n\nThe definition of a formal language consists of two parts, syntax and semantics. Syntax defines which bit patterns or text strings are valid data items in the language.
Syntax rules can be verified by a program called a parser. Semantics define the *meaning* of syntactically correct data items. With one important exception, semantics are mere conventions for the interpretation of digital data. As I explained above, meaning refers to conceptual knowledge that a computer neither has nor needs, since all it does is process bit sequences. The exception concerns formal languages for expressing programs, i.e.\u00a0the rules used by the computer for transforming data. The semantics of a programming language define how each operation transforms input data into output data. Writing down such transformation rules obviously requires a notation for the data that is being worked on. For that reason, a programming language also defines the syntax and semantics of data structures. In fact, a programming language can express all aspects of a computation. We use separate languages for data (\"file formats\") only as a convenience for users and for improving the efficiency of our computers.\n\nThere is a huge number of formal languages today, which can be organized into a hierarchy of abstraction layers, such that languages at a higher level can incorporate languages from lower levels. As a simple example, a programming language such as Fortran incorporates formal languages defining individual data elements - integers, floating-point numbers, etc. At the lowest level of this hierarchy, close to the bit level at which computing hardware operates, we have formal languages such as Unicode (9) for text characters or the floating-point number formats of IEEE standard 754 (10). One level up we find the memory layout of Fortran arrays, the layout of UTF-8 encoded text files, and many other basic data structures and file formats. Structured file formats such as XML (11) or HDF5 (12) are defined on the next higher level, as they incorporate basic data structures such as trees, arrays, or text strings. Programming languages such as Fortran or C reside on that level as well.\n\nDefining the semantics of a programming language is not a straightforward task. For non-programming formal languages, semantics are mere conventions and therefore defined by a document written for human readers. The same approach can be adopted for a programming language, resulting for example in the C language standard (13). But the semantics of programs also matter for their execution by a computer, and therefore a \"computer-readable\" definition of the semantics is required as well. It takes the form of either a program that translates the programming language into processor instructions, called a compiler, or a program that directly performs the actions of the programming language, called an interpreter. We thus have the C language standard defining the semantics of the C language for human readers, and a C compiler defining the semantics for execution by the computer. Unfortunately, there is no way to ensure or verify that the two definitions are equivalent. A computer program cannot do it, because the C language standard is not written in a formal language. A human computing expert cannot do it reliably, because a C compiler is much too complicated for verification by inspection.\n\nThis is in fact the same situation as I described in the last section for scientific software: the compiler is the equivalent of the scientific software, and the language definition is the equivalent of its documentation. 
This is not just a superficial analogy: there is in fact no profound difference between a compiler and a piece of scientific software. Both transform input data into output data according to complex rules that are explained to human readers in a separate document. Compilers are executable implementations of programming languages in the same way as scientific software is an executable implementation of scientific models. This analogy is useful because computer scientists have invested considerable effort into bridging the gap between executable and human-readable specifications of programming languages. Most of the ideas and some of the tools developed in this process can thus be adapted to scientific software.\n\nThe basic idea is to introduce [*formal specifications*](https:\/\/en.wikipedia.org\/wiki\/Formal_specification), which are written in formal languages and thus computer-readable, but which are simpler than the software whose behavior they specify, and therefore more comprehensible to human readers. Specifications are simpler than the actual software for several reasons. One of them is that a specification can neglect many usability issues of software: performance, use of resources, portability between platforms, user interfaces, etc. are all irrelevant for specifying the core computations that the software performs. More simplification is possible if one accepts mere *definitions* instead of *algorithms*. A definition makes it possible to test whether a result is correct, but is not sufficient to obtain a result. As a simple example, consider sorting. The definition of a sorted list of items is \"an ordered list whose elements are the same as those of the input list\". Any algorithm for actually performing the sort operation is much more complicated. For a human reader, the definition is usually sufficient to understand what is going on, and testing procedures can verify that the algorithms implemented in software actually conform to the definition.\n\nLike specification languages, formal languages for representing digital scientific knowledge must aim for simplicity to facilitate comprehension by human scientists, in particular those not directly involved with the development of scientific software. Much of the experience gained from work on specification languages can probably be applied in the design of formal languages for science, but there are also differences to be taken into account. In particular, scientific knowledge differs from software in that its principal purpose is not to compute something. Computation in science is a means to an end, which is understanding nature. In the next section, I will show a few examples of scientific information items and how they are used in the construction of scientific software while also serving different purposes.\n\n# Composition of information items\n\nA key operation in information management is the composition of data from various sources into a more complex assembly. Composition is a well-known concept in software development, as software is usually assembled from building blocks (procedures, classes, modules, \u2026), including preexisting ones taken from libraries. But composition is also an everyday task in the theoretical sciences, even though it is not labeled as such and in fact rarely identified as a distinct activity.\n\n## Example: composing a model for the solar system\n\nSuppose you want to predict the positions of the planets of our solar system over a few years.
You would start with Newton's 17th-century description of celestial mechanics and compose a model from the following ingredients:\n\n1. Newton's law of motion: $F = m \cdot a$\n\n2. Newton's law of gravitation: $F_{ij} = G \frac{m_i \cdot m_j}{|r_i-r_j|^2}$\n\n3. The masses $m_i$ of the Sun and the planets.\n\n4. A set of parameters, derived from past astronomical observations, to define the initial state.\n\nAll these put together define the positions of the celestial bodies at all times in the past and future. But each of these items has a meaning independently of the others, and can be put to other uses, such as computing how fast an apple falls to the ground. You can also use the first two ingredients to prove energy conservation in celestial mechanics, or to derive Kepler's laws. Moreover, each of these pieces comes from a different source (observation, theoretical hypothesis, \u2026) that requires a specific approach to validation. We want to be able to compose them into a new entity called \"model for the solar system\", but we also want each piece to retain its own identity for other uses. Ideally, we want to present our solar system model as a composition that references the individual ingredients. And in the traditional printed-paper system of scientific communication, that's exactly what we do.\n\nLet's move on to computation. To make an actual prediction, you have to add some more ingredients. The model as composed above only *defines* the planetary orbits, but doesn't tell you how to *compute* them. So you need to add:\n\n5. A numerical solver for ordinary differential equations (ODEs), such as Runge-Kutta.\n\n6. Suitable parameters for that solver, depending on your accuracy and precision requirements. For Runge-Kutta, that's the size of the integration time step.\n\n7. A finite-size number representation with associated rules for arithmetic, because you can't compute with real numbers.\n\nYou can then take a large stock of pencils and paper and start to compute. If you prefer to delegate the grunt work to a computer, you need one final ingredient:\n\n8. A programming language, implemented in the form of a compiler or interpreter.\n\nYour final composition is then a simulation program for celestial mechanics, made from eight distinct ingredients. Ideally, you would publish each ingredient and the composition separately as nine machine-readable nanopublications (14). Unfortunately, with the current state of the art in computational science, that is not yet possible.\n\n## Composition of digital knowledge\n\nIn the pre-digital era, composition was never much of a problem. A scientist would take a few research articles or monographs describing the various ingredients, and then write down their composition on a fresh sheet of paper. Variations in the notations across different sources would be no more than an inconvenience. Our pre-digital scientist would translate notation into concepts when reading each source, and the concepts into his or her preferred notation when writing down the composition. As long as the concepts match, as they do in any mature field of science, that is routine work.\n\nComposition of digital knowledge is very different. The items to be composed must be matched not only in terms of (human) concepts, but also in terms of the syntax and semantics of a formal language.
And that means that all ingredients must be expressed in *the same* formal language, which is then also the language of the composed assembly.\n\nIf we start from ingredients expressed in different languages, we have basically two options: translate everything to a common language, or define a new formal language that is a superset of all the languages used for expressing the various ingredients. We can of course choose a mixture of these two extreme approaches. But both of them imply a lot of overhead and add considerable complexity to the composed assembly. Translation requires either tedious and error-prone manual labor, or writing a program to do the job. Defining a superlanguage requires implementing software tools for processing it.\n\nAs an illustration, consider a frequent situation in computational science: a data processing program that reads a specific file format, and a dataset stored in a different format. The translation option means writing a file format converter. The superlanguage option means extending the data processing program to read a second file format. In both cases, the use of multiple formal languages adds complexity to the composition that is unrelated to the real problem to be solved, which is the data analysis. In software engineering, this is known as \"accidental complexity\", as opposed to the \"essential complexity\" inherent in the task (15).\n\nAs a second example, consider writing a program that is supposed to call a procedure written in language A and another procedure written in language B. The translation option means writing a compiler from A to B or vice-versa. The superlanguage option means writing a compiler or interpreter that accepts both languages A and B. A mixed approach could use two compilers, one for A and one for B, that share a common target language. The latter solution seems easy at first sight, because compilers from A and B to processor instructions probably already exist. However, the target language of a compiler is not \"processor instructions\" but \"the processor instruction set plus specific representations of data structures and conventions for code composition and memory management\". It is unlikely that two unrelated compilers for A and B have the same target language at this level of detail. Practice has shown that combining code written in different programming languages is always a source of trouble and errors, except when using tools that were explicitly designed from the start for implementing the superlanguage.\n\nMany of the chores and frustrations in the daily life of a computational scientist are manifestations of the composition problem for digital knowledge. 
Some examples are\n\n- file format conversion, as explained above\n\n- combining code in different languages, also explained above\n\n- software installation, which is the composition of an operating system with libraries and application-specific software into a functioning whole\n\n- package management, which is an attempt to facilitate software installation that re-creates the problem it tries to solve at another level\n\n- software maintenance, which is the continuous modification of source code to keep it composable with changing computational environments\n\n- I\/O code in scientific software, which handles the composition of software and input data into a completely specified computation\n\n- workflow management, which is the composition of datasets with multiple independently written and installed software packages into a single computation\n\nThese examples should be sufficient to show that the management of composition must be a high-priority consideration when designing formal languages for digital scientific knowledge.\n\n# The state of the art in managing digital scientific knowledge\n\nIn the last section I have listed the ingredients that need to be combined in order to make a solar system simulator. Let's look at how such a simulator is actually structured using today's scientific computing technology. We have the following clearly identifiable pieces:\n\n1. A simulation program, written in a programming language such as Fortran or C++, which incorporates ingredients 1, 2, 5, 7, and 8.\n\n2. An input file for that program, written in a special-purpose formal language defined by the author of the simulation program, containing ingredients 3, 4, and 6.\n\nThe structure of the input file is usually simple, meaning that it is straightforward to isolate ingredients 3, 4, and 6 from it. There is even a good chance that the input file will permit annotation of these items, indicating the sources they were taken from. If we are really lucky, the formal language of the input file is documented and designed to permit the extraction of information for other uses.\n\nThe simulation program itself is almost certainly a monolithic piece of information that combines 1, 2, 5, 7, and 8 in an inextricable way. None of the ingredients is easy to identify by inspection, and we'd better not even envisage extracting them using computational tools for other uses. If we want to change something, e.g.\u00a0use a different ODE solver or a different finite-size number representation, we'd probably rewrite large parts of the program from scratch. Worse, changing the finite-size number representation might actually force us to rewrite the program in a different language.\n\nThis is how today's scientific software is typically written, but let's also look at what we *could* do, using today's technology, if we were making a special effort to maintain the modular structure of our knowledge assembly.\n\nThe easiest part to factor out is number 5, the ODE solver. We could use one from a program library, and even choose a library that proposes several solvers. But using such a library comes at an additional cost in combining all the parts. We have to write ingredients 1 and 2 according to the rules of the library, and accept for 7 and 8 whatever the library allows us to use. In fact, the library modifies the formal language we use for writing our software, adding features but also imposing constraints. 
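To make this concrete, here is a minimal sketch of such a composition in Python, with SciPy's `solve_ivp` standing in for the ODE library. The function and variable names are mine, the masses and initial values are rough illustrative numbers rather than a curated dataset, and the code does not reproduce any particular simulation package:

```python
import numpy as np
from scipy.integrate import solve_ivp   # ingredient 5: an ODE solver taken from a library

G = 6.674e-11                                # gravitational constant (SI units)
masses = np.array([1.989e30, 5.972e24])      # ingredient 3, reduced to Sun and Earth

def derivatives(t, state):
    """Ingredients 1 and 2, reduced to the numerical evaluation of accelerations."""
    n = len(masses)
    pos = state[:3 * n].reshape(n, 3)
    vel = state[3 * n:].reshape(n, 3)
    acc = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * masses[j] * r / np.linalg.norm(r) ** 3
    return np.concatenate([vel.ravel(), acc.ravel()])

# Ingredient 4: initial positions (m) and velocities (m/s), placeholder values.
state0 = np.array([0.0, 0.0, 0.0,  1.496e11, 0.0, 0.0,
                   0.0, 0.0, 0.0,  0.0, 2.978e4, 0.0])

# Ingredient 6: solver parameters. Ingredients 7 and 8 (IEEE floating point,
# the Python language itself) are absorbed implicitly by the tools we use.
orbit = solve_ivp(derivatives, (0.0, 3.15e7), state0, max_step=8.64e4)
print(orbit.y[3:6, -1])    # Earth's position after about one year
```

Note how the library dictates both the calling convention of `derivatives` and the layout of the state vector.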
Fortran plus ODEPACK, for instance, is not the same language as Fortran on its own.\n\nSuperficially, we can also factor out ingredients 1 and 2, which define the equations fed to the ODE solver. We could isolate these ingredients in the form of procedures (also called subroutines or functions). But those procedures do *not* represent the original equations. They only represent one aspect of the equations: the numerical evaluation of some of their subterms. We could not use these procedures to prove energy conservation, nor to derive Kepler's laws.\n\nFinally, we could envisage factoring out ingredient 7, the number representation. For example, we could use a library such as MPFR (16) to get access to a wide range of floating-point formats. But the same remark applies as for the use of an ODE library: we would have to translate everything else into the C + MPFR language with its rather peculiar requirements. Moreover, it's either MPFR or an ODE library, unless we can find an ODE library written specifically for use with MPFR. The reason why we can't freely combine an ODE library with a finite-size arithmetic library is the same one that prevents us from using the ODE-specific equation-evaluation procedures for other purposes: an ODE library does not contain ODE solver algorithms, but specific *implementations* of such algorithms that are less versatile than the algorithms themselves.\n\nLeaving the narrow realm of development tools for numerical software, we could try to factor out the equations, ingredients 1 and 2, using a computer algebra system. Such a system lets us write down the equations as such, not only the numerical computation of their subterms. While the idea looks promising, the state of today's computer algebra systems doesn't make this a practically useful approach. They are not designed as parts of an ecosystem for scientific knowledge management. The formal languages they use for expressing terms and equations are insufficiently documented, and for commercial systems they are even partly secret. Some computer algebra systems have export functions that generate numerical code in a language like C or Fortran, but the exact semantics of this export are again opaque for lack of documentation.\n\n## Complex systems\n\nFor the example I have used in the last section, there is no real problem in practice because the whole model is rather simple. Ingredients 1 to 7 can be written down and composed on a single sheet of paper. We use computers only because the actual computation takes very long to perform. It is quite feasible to do all the theoretical work by hand, and write a simulation program just for doing the computation. That was in fact the dominant use of computers in science during their first few decades.\n\nThe situation changes drastically when we consider complex systems. If instead of the solar system we wish to simulate a protein at the atomic scale, we use a model that is overall very similar except for the second ingredient. Instead of Newton's law of gravitation, a one-line formula, we have an expression for the interatomic forces made up of tens of thousands of terms. The list of these terms is constructed from the molecular structure by an algorithm, meaning that we need a computer \u2013 and thus formal languages \u2013 not only for *simulating* our model but already for *defining* it. The model itself is digital scientific knowledge.\n\nSince we do not have adequate formal languages for writing down such digital models today, we cannot express them at all.
We cannot analyze or discuss the model, nor compare it in depth to competing models. All we have are research papers describing the design principles behind the model, and software written to perform a numerical evaluation. The software source code is impenetrable to anyone but its authors. Moreover, there is obviously no way to verify that the software evaluates the model correctly, because that would require some other expression of the model for comparison. This is again an instance of the problem that I discussed [earlier](#formal-languages) for the definition of the semantics of programming languages. Our model should be part of the *specification* of our software, rather than being completely absorbed into its source code.\n\nIn the case of the popular models for biomolecular simulations, each of them is implemented by several different software packages, with each program producing somewhat different numbers for the same protein. On a closer look, each program actually implements its own variation of the model, with modifications made for performance reasons, or because the software authors believe the modification to be an improvement. In the end, what we think of as a model is really a family of different models derived from common design principles. In the absence of human-readable specifications of each variant, we cannot compile a detailed list of the differences, let alone estimate their impact on the results we obtain.\n\nSimilar situations exist wherever scientific models have become too complex to be written down on paper. As a second example, consider the Community Earth System Model (17), a popular model for the evolution of the Earth's climate. One would expect such a model to consist of a large number of coupled partial differential equations describing the behavior of the atmosphere and the oceans, and their complex interactions. But it really is a software package that implements a numerical solver for the equations. Contrary to the situation in biomolecular simulation, a significant effort is made to ensure that this software package can be considered a reliable reference implementation. But even if we trust the software to reliably evaluate the model numerically, we have still lost all the non-numerical uses of a scientific model.\n\n## Software and data in computational science\n\nIt is customary in computational science to distinguish between computer programs, also called *software*, and the *data* that these programs process. But the above discussion of formal languages shows that this distinction between software and data is not fundamental. We could very well use a single language to define all aspects of a computation, and obtain the result in the same language. This is in fact very easy to do, by hard-coding all input data into the source code of the program. In today's computing environments, that would be inconvenient in practice, but that is mostly due to the way our tools work.\n\nFrom the point of view of digital knowledge management, it is desirable to identify the individual pieces of information we wish to handle, and the operations we wish to perform on them. The above analysis of a solar system simulation provides a simple example. We would then design formal languages specifically as digital scientific notations for our knowledge items. Software tools would be just tools, consuming and transforming scientific knowledge but not absorbing it into their source code.
In other words, all scientific knowledge would become *data*.\n\nSome recent developments can be seen as stepping stones towards this goal. I will mention a single example, the specification of differential equations in FEniCS (18). FEniCS is a software package that solves partial differential equations numerically using the Finite Element method. A feature that distinguishes FEniCS from similar software packages is that it allows its users to write the differential equations to be solved in a notation very similar to traditional mathematics. In particular, the equations are written down as distinct information items, i.e.\u00a0they are *data*. They are *not* absorbed into program code that is structured according to the needs of software development. Similar approaches are used in other mathematical software packages. However, a crucial final step remains to be taken: Differential equations for FEniCS are written in a FEniCS-specific formal language that is not suitable for anything else than solving the equations in FEniCS. The scientific knowledge must be reformulated to fit to the tool. What we should have instead is a formal language for expressing all aspects of differential equations, and many tools, FEniCS being just one of them, that can process this formal language. In particular, we would like to be able to *compose* differential equations describing some physical system from individual ingredients, much like the equations governing the solar system are composed from the law of motion and the law of gravity.\n\nOne psychological barrier to considering all scientific knowledge as data is the fact that scientific knowledge includes algorithms. In the example of the solar system simulation, the numerical method for solving Newton's equation is an algorithm. The formal languages used to represent data in computational science do not permit the expression of algorithms. For most computational scientists, algorithms are parts of programs, and thus expressed in a programming language. However, it is easy to see that algorithms are just another kind of data. Compilers translate algorithms from one formal language to another, i.e.\u00a0they process algorithms as data. The same can be said of many tools we use every day to develop and analyze software. The only novelty in my proposal is that algorithms that count as scientific knowledge should be available for all kinds of scrutiny *in addition* to being executable by a computer.\n\nWe can also envisage an intermediate stage in which software tools continue to incorporate digital scientific knowledge just like they do today, but in which we also express and publish all digital scientific knowledge in human-friendly formal languages. The human-friendly version would then be part of the software's specification, and the equivalence of the two formulations would be verified as part of software testing.\n\n# Human-computer interactions through formal languages\n\nIf computers are to be powerful tools for scientific research, the computer's user interface must be designed to make the interaction of scientists with computers fluent and error-free. Whereas most other uses of computers happen through relatively simple interfaces (forms, graphical representations, command lines, \u2026), the interface between a scientist and a computer includes the formal languages in which scientific information is encoded for computation. 
In this respect, computational science resembles software development, where the human-computer interface includes programming languages. This similarity explains why techniques and tools from software engineering are increasingly adopted by computational science.\n\nIt is widely recognized in software engineering that software source code should be written primarily to explain the working of a program to other programmers, with executability on a computer being a technical constraint in this endeavor rather than its main objective. Some authors even go further and claim that the human understanding of the program, shared by the members of its development team, is the primary output of software development, because it is what enables the team to maintain and adapt the program as requirements evolve (19). Software engineering research has therefore started to investigate the usability of programming languages by programmers (20).\n\nIn scientific research, human understanding takes an even more prominent role because developing an understanding of nature is the ultimate goal of science. Research tools, including software, are only a means to this end. Digital scientific notations are the main human-computer interface for research, and must be developed with that role in mind. The use of formal languages is a technical constraint, but suitability for research work and communication by human scientists must be the main design criterion.\n\nToday's digital scientific notations are programming languages and more or less well-defined file formats. In this section, I will outline the lessons learned from working with these adopted notations, and the consequences we should draw for the design of proper scientific notations in the future.\n\n## Human vs.\u00a0computational semantics\n\nA programming language fully defines the meaning of a program, and thus completely defines the result of a computation.[^1] However, software source code has a second semantic layer which matters only for human readers: references to conceptual domain knowledge in the choice of identifiers. Mismatches between what a program does and what its source code suggests it does are a common source of mistakes in scientific software.\n\nAs an illustration, consider a short Python procedure like the one sketched below. To a human reader, the names `product`, `numbers`, and `factor` clearly suggest that this procedure multiplies a list of numbers. A careful reader would notice the `+` sign, indicating addition rather than multiplication. The careful reader would thus conclude that this is a multiplication program containing a mistake. This is exactly how a scientist reads formulas in a journal article: their meaning is inferred from the meaning of all their constituents, using an understanding of the context and an awareness of the possibility of mistakes.\n\nFor a computer, the procedure simply computes the sum of a list of numbers. The identifiers carry no meaning at all; all that matters is that different identifiers refer to different things. As a consequence, the procedure is executed without any error message.\n\nIf we analyze the situations that typically lead to program code like this example, the careful human reader turns out to be right: most probably, the intention of the author is to perform multiplication, and the plus sign is a mistake.
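A procedure of the kind described above might read as follows; this is a sketch matching the description (the names `product`, `numbers`, and `factor`, five lines, and a `+` where a `*` is presumably intended), not a verbatim listing:

```python
def product(numbers):
    result = 1
    for factor in numbers:
        result += factor   # the '+' discussed in the text; '*' was presumably intended
    return result
```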
It is highly unlikely that the author wanted to perform an addition and chose multiplication-related terms to confuse readers.\n\nSince the program is perfectly coherent from a formal point of view, approaches based on algorithmic source code analysis, such as type checking, cannot find such mistakes. Software testing can be of help, unless similar mistakes enter into both the application code and the test code. Of today's software engineering techniques, the ones most likely to be of help are pair programming and code review. Like peer review of scientific articles, they rely on critical inspection by other humans.\n\nCode review is also similar to peer review in that it is reliable only if the reviewer is an expert in the domain, even more so than the original author. In the case of software, the reviewer must have a perfect understanding of the programming languages and libraries used in the project. This is not obvious from the above example, which is particularly short and simple. Any careful reader will likely spot the mistake, even without much programming experience. But more subtle mistakes of this type do happen and do go unnoticed, in particular when advanced language features are used that perhaps even the code's author does not fully understand. As an example, few scientists with basic Python knowledge are aware of the fact that the above five-line function could in fact compute almost anything at all, depending on the context in which it is used. All it takes is a class definition that defines addition in an unexpected way.[^2]\n\nThe main conclusion to draw from this is that digital scientific knowledge must be written in terms of very simple formal languages, in order to make human reviewing effective for finding mistakes. All the semantic implications of a knowledge item must be clear from the information itself and from the definition of the formal language it is written in. Moreover, a scientist working in the same domain should be able to read, understand, and memorize the language definition with reasonable effort and ideally pick it up while acquiring the domain knowledge itself, which is how we learn most of traditional scientific notation.\n\n## Flexibility in scientific notation\n\nAs I mentioned [above](#evolution), traditional pre-digital scientific notation is the result of an evolutionary process. In principle, scientists can use whatever notation they like, on the condition that they explain it in their publication. However, there is social pressure towards using well-established notation rather than inventing new ones. In practice, this leads to variable notation for new concepts that becomes more uniform over time as the concepts are better understood and consensus is reached about representing them in writing.\n\nIn contrast, formal languages used in computing are very rigid. The reasons are numerous and include technical aspects (ease of design and implementation) as well as historical ones (the advantages of flexibility were not immediately recognized). Perhaps the biggest barrier to flexibility left today is the near universal acceptance of rigidity as normal and inevitable, in spite of the problems that result from it. Most data formats used in computational science do not permit any variation at all. When data formats turn out to be insufficient for a new use case, the two possible choices are to \"bend the rules\" by violating some part of the definition, or to define a new format. 
Since bending the rules is often the solution of least effort in the short run, many data formats become ambiguous over time, with different software packages implementing different \"dialects\" of what everyone pretends to be a common format.[^3] Since computer programs lack the contextual background of humans, they cannot detect such variations, leading to an erroneous interpretation of data.\n\nProgramming languages are vastly more complex than data formats. In particular, implementing a programming language by writing a compiler or interpreter is a significant effort, and requires competences that most computational scientists do not have. As a consequence, the programming languages used for computational science are few in number. Moreover, they are under the control of the individuals, teams, or institutions that produce their implementations. For all practical purposes, computational scientists consider programming languages as imposed from outside. The only choice left to the individual scientist or team is which of the existing languages to use, and then to work around its limitations.\n\nA digital scientific notation should offer the same level of flexibility as traditional scientific notation: a scientist should be able to state \"I use conventions X and Y with the following modifications\", defining the modifications in a formal language to make them usable by computers. Social pressure, e.g.\u00a0in peer review, would limit abuses of this flexibility and lead to consensus formation in the long run.\n\n## References to the scientific record\n\nThe main infrastructure of science as a social process is the *scientific record*, which consists of the totality of scientific knowledge conserved in journal articles, monographs, textbooks, and electronic databases of many kinds. Scientists refer to the scientific record when they base new studies on prior work, but also when they comment on work by their peers, or when they summarize the state of the art in a review article, a monograph, or a textbook.\n\nIn scientific narratives, references to the scientific record are often imprecise, citing only a journal article and leaving it to the reader to find the relevant part of this article. It is, however, quite possible to refer to a specific figure or equation by a number. For computational work, references must be more precise: a dataset, a number, an equation. A digital scientific notation must therefore encourage the use of small information items that can be referenced individually while at the same time keeping track of their context. It matters that composite information items can be referenced as a whole but also permit access to their individual ingredients, as I have illustrated in my [celestial mechanics example](#composition).\n\nThe rapidly increasing volume of scientific data and facts is creating a need for computer-aided analysis of the network of scientific knowledge. This has motivated the development of *semantic publishing* (2), which consists in publishing scientific findings in a machine-readable form where concepts become references to ontologies. Current research in semantic publishing focuses on giving machine-readable semantics to non-quantitative statements that are typically transmitted by the narrative of a journal article. The development of digital scientific notations that I wish to encourage by this essay can be seen as a variant of semantic publishing applied to computational methods.
In this analogy, today's scientific software is similar to today's journal articles in that neither form of expression permits the automated extraction of embedded knowledge items.\n\n# Simple and flexible formal languages\n\nThe criteria exposed in the [last section](#HCI) lead to a technical requirement for digital scientific notations: it must accommodate a large number of small and simple formal languages and make it straightforward to define variants of them. This may well seem impossible to many computational scientists. Large, rigid, general-purpose languages are today's standard for software development, whereas small, rigid, and undocumented languages dominate scientific data storage. However, there are examples of more flexible formal languages, which can serve as a source of inspiration for the development of digital scientific notations. I will describe two of them in this section.\n\nThe main technical obstacle to flexibility in formal languages is the requirement for composition that I have [discussed earlier](#composition-digital): the information items that enter into a composition must all be expressed in the same language. If that condition is not satisfied, an additional effort must be invested in the form of language conversion or more complex software that can process multiple languages.\n\nThe solution is to design a *framework* for a *family* of formal languages, and develop generic tools that can process any member of this family and also compositions of different members. In other words, flexibility enters the design and the support infrastructure at a very early stage. This principle should become clearer from two concrete examples: XML and Lisp.\n\n## XML: composable data formats\n\nXML (11) is a framework for defining formal languages that express tree-structured data. The central concept in XML is the *element*, which is a node in a tree whose type is identified by a tag. The tag also defines which attributes the element can have, and which conditions its child elements must satisfy. A concrete XML-based data format is defined by a *schema*, which contains an exhaustive list of the allowed tags and the constraints on the element types defined by each tag. Given a data file and the schema it is supposed to respect, generic XML processing tools can validate the data file, i.e.\u00a0check that it conforms to the schema, and also perform many types of data transformation that do not depend on the semantics of the data. Finally, writing programs that do semantics-dependent processing is facilitated by support libraries that take care of the semantics-independent operations, in particular parsing and validating the incoming information and producing correct result files. Because of these advantages, XML has become very popular and a large variety of schemas has been defined. Examples that may be familiar to computational scientists are MathML and OpenMath for mathematical formulas, SVG for vector graphics, CML for chemical data, and SBML for systems biology.\n\nComposition of XML data means constructing a tree from elements defined in different schemas. This was made possible with the introduction of XML *namespaces*. A single-schema XML document starts with a reference to its schema. A multi-schema XML document lists multiple schemas and associates a unique name with each of them. That name is then prefixed to each tag in the document. 
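As a small illustration, consider a document that composes elements from two hypothetical schemas; the element names and namespace URIs below are invented for the example. Generic tools resolve the prefixes against the declared namespaces, as this Python sketch using the standard `xml.etree.ElementTree` module shows:

```python
import xml.etree.ElementTree as ET

# A document composing elements from two (hypothetical) schemas,
# distinguished by the prefixes "m" (mechanics) and "u" (units).
doc = """
<m:simulation xmlns:m="http://example.org/mechanics"
              xmlns:u="http://example.org/units">
  <m:body name="Earth">
    <m:mass><u:quantity value="5.972e24" unit="kg"/></m:mass>
  </m:body>
</m:simulation>
"""

root = ET.fromstring(doc)
# The parser expands each prefixed tag to {namespace-URI}localname,
# so homonymous tags from different schemas cannot collide.
for element in root.iter():
    print(element.tag)
```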
This prefix ensures that even in the presence of tag homonyms in the document's schemas, each element has a unique and well-defined tag.\n\nThe XML namespace mechanism is an implementation of the superlanguage approach that I [have described earlier](#composition-digital). Processing such superlanguages is made straightforward because the mechanisms for defining them are part of the XML definition. All modern XML processing software implements namespaces, and therefore can handle arbitrary superlanguages inside the XML universe.\n\nXML namespaces are not a magical solution to composing unrelated data items. Any software that performs semantics-dependent processing still needs to deal with each schema individually. But the tasks of defining languages, processing them, and processing compositions are enormously simplified by the XML framework. Defining an XML schema is much simpler than designing a complete data format, let alone a data format open for extensions. Processing someone else's XML data is also much simpler than processing someone else's ad-hoc data format, because the schema provides a minimum of documentation. Finally, the namespace mechanism encourages the definition of small schemas that can then be composed, making well-designed XML-based data formats easier to understand for human readers.\n\n## Lisp: extensible programming languages\n\nMost programming languages used today are constructed in much the same way. A syntax definition specifies which sequences of text characters are legal programs. This syntax definition is set in stone by the language designer. Some syntactical elements define fundamental data types, others fundamental executable operations. These basic building blocks can be combined by the programmer into larger-scale building blocks using language constructs for defining data structures, procedures, classes, etc. In fact, programming is almost synonymous with defining such entities and giving them names for later referring to them. In other words, programming means extending the language by new building blocks, the last of which is the program to be run. The programmer cannot modify the syntax in any way, nor take any features away from the basic language. This means in particular that the programmer cannot make the language any *simpler*.\n\nOne of the oldest family of programming languages, the Lisp family, differs from this picture in an important way. Its syntax is defined in two stages. The first stage merely defines how a central data structure called a *list* is written in terms of text characters. The elements of a list can be any basic Lisp data type, e.g.\u00a0numbers or symbols, but also other lists. Nested lists are equivalent to trees, and in fact Lisp's nested lists are very similar to the trees of elements that I have described in the [section on XML](#XML). The second stage of Lisp's syntax defines which lists are legal programs. The general convention is that the first element of a list specifies a language construct to which the remaining elements are parameters. For example, the list `(+\u00a02\u00a03)` means \"perform the + operation on the numbers 2 and 3\", whereas the list `(define\u00a0x\u00a0(+\u00a02\u00a03))` means \"set variable x to the value of the expression defined by the list `(+\u00a02\u00a03)`\".\n\nThis two-stage syntax is exploited in what is a very rare feature in programming languages: the second syntax level can be modified by the programmer, using a language construct called a *macro*. 
Technically, a macro is a function called as part of the compilation of Lisp code. When the compiler hits a list whose first element specifies a macro, it runs the macro function and substitutes the original macro-calling list by the macro function's return value, which is then compiled instead.\n\nTo understand the power of this construct, consider that a compiler is a program that transforms another program written in language A into an equivalent program written in language B. That is exactly what a macro does: it translates a program written in some language M into basic Lisp. The language M is defined by the macro itself, just like any compiler is an operational definition of a language as I [explained before](#formal-languages). Whatever the macro accepts as arguments is a valid program in M. A macro thus *is* a compiler, and by defining macros a programmer can define his or her own languages with no other restrictions than respecting the top layer of Lisp's syntax, i.e.\u00a0the list syntax. Most macros merely define small variations on the basic Lisp language, but nothing stops you from writing a `fortran` macro to implement a language equivalent to Fortran except that its syntax is defined in terms of nested lists.\n\nThe use of macros as building blocks of compilers has been pushed to a very advanced level in the Racket language (23), a Lisp dialect which its developers describe as a \"programmable programming language\". The path from the first Lisp macros of the 1960s via Scheme's hygienic macros to today's Racket has been a long one. For example, it turned out that making macros composable is not trivial (24). Today's Racket programming environment contains a large number of languages for various purposes. Plain \"racket\" is a standard general-purpose programming language. A core subset of \"racket\" is available as \"racket\/base\". Several languages are simplified forms of \"racket\" for teaching purposes. The simplification does not merely take out language features, but exploits the gain in simplicity for providing better error messages. Other languages are extensions, such as \"typed\/racket\" which adds static type checking. But Racket also lifts the traditional Lisp restriction of list-based syntax, providing a mechanism to write language-specific parsers. Both Java and Python have been implemented in Racket in this way. A language definition in Racket is nothing but a library (25), meaning that any number of languages can co-exist. Moreover, a new language can be based on any existing one, making it straightforward to define small modifications.\n\nA big advantage of the Lisp\/Racket approach to implementing new languages is that all those languages are interoperable, because they are all compiled to basic Lisp\/Racket. This is an implementation of the translation approach to composing different languages that I have [described before](#composition-digital). Another advantage is that defining new languages becomes much easier. Implementing a big language such as Python remains a difficult task even in Racket. 
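Although writing a full compiler is beyond most computational scientists, the essential mechanism, rewriting a nested-list program into a simpler one before it is evaluated, fits in a few lines. The following sketch is purely illustrative and written in Python rather than Lisp: programs are nested Python lists, and a hypothetical `when` macro is expanded into the more basic `if` form before evaluation.

```python
# Illustrative sketch: programs are nested lists, and a "macro" is a function
# that rewrites one list form into a more basic one before evaluation;
# this mirrors the idea of Lisp macro expansion.

def expand_when(form):
    # ["when", test, body]  ->  ["if", test, body, None]
    _, test, body = form
    return ["if", test, body, None]

MACROS = {"when": expand_when}

def expand(form):
    """Recursively expand macros in a nested-list program."""
    if not isinstance(form, list):
        return form
    if form and isinstance(form[0], str) and form[0] in MACROS:
        return expand(MACROS[form[0]](form))
    return [expand(f) for f in form]

def evaluate(form, env):
    """Evaluate the fixed core language: numbers, symbols, +, if."""
    if form is None or isinstance(form, (int, float)):
        return form
    if isinstance(form, str):
        return env[form]
    head, *args = form
    if head == "if":
        test, then, alternative = args
        return evaluate(then, env) if evaluate(test, env) else evaluate(alternative, env)
    if head == "+":
        return sum(evaluate(a, env) for a in args)
    raise ValueError("unknown form: " + str(head))

program = ["when", "x", ["+", 2, 3]]            # written in the extended language
print(evaluate(expand(program), {"x": True}))   # prints 5
```

The point is not the toy language itself but the division of labour: `expand` acts as a small compiler from the extended language into the core one, while `evaluate` only ever sees the fixed core.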
But implementing a small variation on an existing language \u2013 take away some parts, add some others \u2013 is simple enough to be accessible to an average software developer.\n\n# Designing digital scientific notations\n\nThe main conclusion from the analysis that I have presented in this essay is that digital scientific notations should be based on formal languages with the following properties:\n\n- **Small and simple**: each formal language must be so small and simple that a scientist can memorize it easily and understand its semantics in detail.\n\n- **Flexible**: a scientist must be able to create modifications of existing languages used in his\/her field in order to adapt them to new requirements and personal preferences.\n\n- **Interoperable**: composition of digital knowledge items expressed in different languages must be possible with reasonable effort.\n\nThe [two examples](#simple-and-flexible) I have presented above suggest that a good approach is to define a framework of languages and implement generic tools for common manipulations. The foundation of this framework should provide basic data types and data structures:\n\n- numbers (integers, rationals, floating-point, machine-level integers)\n\n- symbols\n\n- text\n\n- N-dimensional arrays\n\n- trees\n\n- sets\n\n- key-value maps (also called associative arrays, hash tables, or dictionaries)\n\nThe representation of these fundamental data types in terms of bit sequences can be based on existing standards such as XML (text) or HDF5 (binary). It is probably inevitable to have multiple such representations to take into account conflicting requirements of different application domains. As long as automatic loss-less interconversion can be ensured, this should not be an obstacle to interoperability. An added advantage of keeping the lowest level of representation flexible is the possibility to adapt to future technological developments, for example IPFS (26) whose \"permanent Web\" approach seems well adapted to preserving the scientific record.\n\nThere should also be a way to represent algorithms, but it is less obvious how this should best be done. Any of the common Turing-complete formalisms (lambda calculus, term rewriting, \u2026) could be used, but it may turn out to be useful to have access to less powerful formalisms as well, because they facilitate the automated analysis of algorithms.\n\nA next layer could introduce domain-specific but still widely used data abstractions, e.g.\u00a0from geometry. For much of mathematics, the OpenMath content dictionaries (27) could be adopted. On top of this layer, each scientific community can build its own digital scientific notations, and each scientist can fine-tune them to specific needs.\n\nAn illustration of how these principles can be applied is given by the MOlecular SimulAtion Interchange Conventions (MOSAIC) (28), which define a digital notation for molecular simulations. MOSAIC lacks the common layer of data types listed above, and is therefore not easily interoperable with other (future) digital notations. It does, however, define data structures specific to molecular simulations in terms of more generic data structures, in particular arrays. MOSAIC defines two bit-level representations, based on XML and HDF5. 
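As a purely illustrative sketch of what such an array-based, bit-level representation can look like, the following Python fragment uses the `h5py` package to store a small set of arrays in an HDF5 file. The group and dataset names are invented for this example and are not the actual MOSAIC schema.

```python
# Illustrative only: store a tiny simulation "universe" as named arrays in an
# HDF5 file. The group and dataset names are invented, not the MOSAIC schema.
import numpy as np
import h5py

positions = np.random.rand(100, 3)                      # 100 particles in 3D
elements = np.array([b"C", b"N", b"O"] * 33 + [b"H"])   # 100 element symbols

with h5py.File("universe.h5", "w") as f:
    group = f.create_group("universe")
    group.attrs["convention"] = "illustration-only"
    group.create_dataset("positions", data=positions)
    group.create_dataset("elements", data=elements)

with h5py.File("universe.h5", "r") as f:
    print(f["universe/positions"].shape)                # (100, 3)
```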
A Python library (29) proposes three further implementations in terms of Python data structures, and implements I\/O to and from the XML and HDF5 representations.\n\nTraditional scientific notations have evolved as a byproduct of scientific research, and digital scientific notations will have to evolve in the same way in order to be well adapted to the task. In this spirit, the ideas listed in this section are merely the basis I intend to use in my own future work, but they may well turn out to be a dead end in the long run. I would like to encourage computational scientists to develop their own approaches if they think they can do better. As I have stated in the introduction, my goal with this essay is not to propose solutions, but to expose the problem. If computational scientists start to think about \"digital scientific notation\" rather than \"file formats\" and \"programming languages\", I consider my goal achieved.\n\n# References\n\n1\\. History of mathematical notation. In: Wikipedia, the free encyclopedia \\[Internet\\]. 2016 \\[cited 2016 Apr 25\\]. Available from: \n\n2\\. Shotton D. Semantic publishing: The coming revolution in scientific journal publishing. Learned Publishing \\[Internet\\]. 2009 Apr 1 \\[cited 2016 Apr 20\\];22(2):85\u201394. Available from: \n\n3\\. Borgida A, Mylopoulos J. Data Semantics Revisited. In: Bussler C, Tannen V, Fundulaki I, editors. Semantic Web and Databases \\[Internet\\]. Springer Berlin Heidelberg; 2004 \\[cited 2016 Apr 19\\]. pp. 9\u201326. (Lecture notes in computer science). Available from: \n\n4\\. Soergel DAW. Rampant software errors undermine scientific results. F1000Research \\[Internet\\]. 2014; Available from: \n\n5\\. Merali Z. Computational science: .Error. Nature \\[Internet\\]. 2010;775\u20137. Available from: \n\n6\\. Stodden V, Bailey DH, Borwein J, LeVeque RJ, Rider W, Stein W. Setting the Default to Reproducible \\[Internet\\]. 2013 Feb pp. 1\u201319. Available from: \n\n7\\. Peng RD. Reproducible research in computational science. Science \\[Internet\\]. 2011;334(6060):1226\u20137. Available from: \n\n8\\. Hinsen K. Computational science: Shifting the focus from tools to models. F1000Research \\[Internet\\]. 2014;3. Available from: \n\n9\\. Unicode 8.0.0 \\[Internet\\]. 2015 \\[cited 2016 Apr 21\\]. Available from: \n\n10\\. IEEE Standard for Floating-Point Arithmetic. IEEE Std 754-2008. 2008 Aug;1\u201370.\n\n11\\. Extensible Markup Language (XML) \\[Internet\\]. 1998\u20132016 \\[cited 2016 Apr 21\\]. Available from: \n\n12\\. The HDF Group. Hierarchical Data Format, version 5 \\[Internet\\]. 1997\u20132016. Available from: \n\n13\\. ISO\/IEC 9899:2011 - Information technology \u2013 Programming languages \u2013 C \\[Internet\\]. 2011 \\[cited 2016 Apr 21\\]. Available from: \n\n14\\. Groth P, Gibson A, Velterop J. The anatomy of a nanopublication. Information Services & Use \\[Internet\\]. 2010 Jan 1 \\[cited 2016 Apr 18\\];30(1-2):51\u20136. Available from: \n\n15\\. Brooks FPJ. No Silver Bullet: Essence and Accidents of Software Engineering. Computer. 1987 Apr;20(4):10\u20139.\n\n16\\. Fousse L, Hanrot G, Lef\u00e8vre V, P\u00e9lissier P, Zimmermann P. MPFR: A Multiple-precision Binary Floating-point Library with Correct Rounding. ACM Trans Math Softw \\[Internet\\]. 2007 Jun \\[cited 2016 Apr 21\\];33(2). Available from: \n\n17\\. Community Earth System Model \\[Internet\\]. 1983\u20132016 \\[cited 2016 Apr 21\\]. Available from: \n\n18\\. Aln\u00e6s M, Blechta J, Hake J, Johansson A, Kehlet B, Logg A, et al. 
The FEniCS Project Version 1.5. Archive of Numerical Software \\[Internet\\]. 2015 Dec 7 \\[cited 2016 Apr 21\\];3(100). Available from: \n\n19\\. Naur P. Programming as theory building. Microprocessing and Microprogramming \\[Internet\\]. 1985 \\[cited 2016 Mar 31\\];15(5):253\u201361. Available from: \n\n20\\. PLATEAU '10: Evaluation and Usability of Programming Languages and Tools \\[Internet\\]. New York, NY, USA: ACM; 2010. Available from: \n\n21\\. Regehr J. A Guide to Undefined Behavior in C and C++ \\[Internet\\]. 2010 \\[cited 2016 Apr 20\\]. Available from: \n\n22\\. wwPDB. Atomic Coordinate Entry Format Version 3.3 \\[Internet\\]. 2011 \\[cited 2016 Apr 21\\]. Available from: \n\n23\\. Flatt M, PLT. Reference: Racket. PLT Design Inc. 2010. Report No.: PLT-TR-2010-1.\n\n24\\. Flatt M. Composable and Compilable Macros: You Want It when? In: Proceedings of the Seventh ACM SIGPLAN International Conference on Functional Programming \\[Internet\\]. New York, NY, USA: ACM; 2002 \\[cited 2016 Apr 14\\]. pp. 72\u201383. (ICFP '02). Available from: \n\n25\\. Tobin-Hochstadt S, St-Amour V, Culpepper R, Flatt M, Felleisen M. Languages As Libraries. In: Proceedings of the 32Nd ACM SIGPLAN Conference on Programming Language Design and Implementation \\[Internet\\]. New York, NY, USA: ACM; 2011 \\[cited 2016 Apr 6\\]. pp. 132\u201341. (PLDI '11). Available from: \n\n26\\. Benet J. IPFS - Content Addressed, Versioned, P2P File System. 2014 Jul 14 \\[cited 2016 Apr 26\\]; Available from: \n\n27\\. OpenMath society. OpenMath \\[Internet\\]. 2000\u20132013 \\[cited 2016 Apr 21\\]. Available from: \n\n28\\. Hinsen K. MOSAIC: A data model and file formats for molecular simulations. J Chem Inf Model \\[Internet\\]. 2014;54(1):131\u20137. Available from: \n\n29\\. Hinsen, Konrad. pyMosaic 0.3.1. 2014 \\[cited 2016 Apr 21\\]; Available from: \n\n[^1]: At least it does in theory. The definitions of many popular languages are incomplete and ambiguous (21).\n\n[^2]: Lest more experienced Pythonistas put up a smug grin reading this, I suggest they ask themselves if they fully understand how far the code's result can be manipulated from the outside using metaclasses. I am the first to admit that I don't.\n\n[^3]: Readers familiar with computational structural biology have probably had bad surprises of this kind with the PDB (22) format.","meta":{"dup_signals":{"dup_doc_count":33,"dup_dump_count":30,"dup_details":{"curated_sources":4,"2023-23":1,"2022-49":1,"2022-27":1,"2022-05":1,"2021-39":1,"2021-31":1,"2020-10":1,"2019-51":1,"2019-39":1,"2019-35":1,"2019-30":1,"2019-18":1,"2019-09":1,"2018-51":1,"2018-43":1,"2018-39":1,"2018-34":1,"2018-30":1,"2018-22":1,"2018-13":1,"2018-05":1,"2017-47":1,"2017-39":1,"2017-30":1,"2017-26":1,"2017-17":1,"2023-50":1,"2017-13":1,"2024-22":1}},"filename":"out\/1605.02960.tex.md"},"subset":"arxiv"} +{"text":"Computer Simulations for Biological Ageing and Sexual Reproduction\n\nD. Stauffer$^1$, P.M.C. de Oliveira, S. Moss de Oliveira, T.J.P. Penna and J.S. S\u00e1 Martins$^2$\n\nInstituto de F\u0131\u0301sica, Universidade Federal Fluminense,\n\nAv. Litor\u00e2nea s\/n, Boa Viagem, Niter\u00f3i 24210-340, RJ, Brazil:\n\n$^1$visiting from Inst. 
for Theoretical Physics, Cologne University, D-50923 K\u00f6ln, Euroland\n\n$^2$ Colorado Center for Chaos and Complexity\/CIRES,\n\nUniversity of Colorado, Boulder CO 80309, USA\n\nThe sexual version of the Penna model of biological ageing, simulated since 1996, is compared here with alternative forms of reproduction as well as with models not involving ageing. In particular we want to check how sexual forms of life could have evolved and won over earlier asexual forms hundreds of million years ago. This computer model is based on the mutation-accumulation theory of ageing, using bits-strings to represent the genome. Its population dynamics is studied by Monte Carlo methods.\n\nKeywords: parthenogenesis, genome, menopause, testosteron, Monte Carlo simulation\n\n# Introduction\n\nCan physicists contribute to understand biological subjects? Since the first attempts by the Nobel laureate Schr\u00f6dinger (1944), there were a lot of tentative answers to this question, probably most of them useless. What particular knowledge can physicists bring to Biology? One particular tentative, biased answer for this second question is presented below. It is biased because it concerns just the authors' traditional line of research.\n\nCritical phenomena appear in macroscopic physical systems undergoing continuous phase transitions. An example is water crossing the critical temperature of $374^{\\rm o}\\,$C, above which one can no longer distinguish liquid from vapour. Another is a ferromagnetic material which loses its spontaneous magnetisation when heated above its critical temperature. Such systems present unusual behaviours. For instance, some quantities increase without limits as one approaches more and more the critical point, as the water compressibility: a very small pressure leads to an enormous volume shrinkage. Analogously, by applying a very small magnetic field, one can drastically increase the magnetisation of a ferromagnetic sample. In both examples, also the specific heat diverges at the critical point, meaning that the system can absorb or deliver a large amount of heat, without any sensible temperature variation. Needless to mention the important practical applications of such a behaviour, according to which a fine tuning of some quantity can lead to an enormous variation of another related quantity. All modern electronics, for instance, is based on the possibility of getting an electric current passing through an otherwise insulating device, simply by applying a small electric field. Also, some plastic materials can undergo very large volume expansions under very small electric or magnetic impulses: they are used for manufacturing of artificial muscles, catheters which unblock arteries, microengines, etc.\n\nThese features have attracted the attention of physicists since more than a century. They discovered an also unusual behaviour concerning the mathematical description of such systems: the appearence of **power-laws**, i.e. $Q \\sim \\vert T-T_{\\rm c} \\vert^{-\\gamma}$ or $C \\sim\n\\vert T-T_{\\rm c} \\vert^{-\\alpha}$, where $Q$ is the quoted diverging quantity (compressibility or magnetic susceptibility), $C$ is the specific heat, and $\\vert T-T_{\\rm c} \\vert$ measures how far the system is from its own critical point. The symbol $\\sim$ represents proportionality. 
Critical exponents like $\\gamma$, $\\alpha$, etc are characteristic of the corresponding quantity, $Q$, $C$, etc.\n\nThe most interesting feature of these phenomena is the so-called universality: the precise values of the exponents $\\gamma$, $\\alpha$, etc are the same for entire classes of completely different systems. For instance, $\\alpha = 0.12$ for both water and any ferromagnet in which magnetisation presents uni-axial symmetry. Also, $\\gamma = 1.24$ for both the water compressibility and the magnetic susceptibility of the ferromagnetic material. Besides critical exponents, many other qualitative and quantitative characteristics of the various systems belonging to the same universality class coincide as well. In spite of having been observed much before, these coincidences remained unexplained until the work of Wilson (1971), three decades ago, who was awarded with the Nobel prize because of this work (see also: Wilson & Kogut 1974; Wilson 1979). The key concept needed to understand this phenomenon is the decaying of correlations with increasing distances. Suppose one picks two points inside the system, separated by a distance $x$. How much a perturbation performed at one of these points will be felt at the other? The correlation $I$ between these two points is a measure of this mutual influence, and generally decays for larger and larger values of $x$, according to the exponential behaviour\n\n$$I \\sim \\exp{(-x\/\\xi)}\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\n{\\rm (for\\,\\, non\\,\\, critical\\,\\, situations)},\\eqno({\\rm NC})$$\n\nwhere $\\xi$ is the so-called correlation length. Would one take two points distant from each other a distance $x$ larger than $\\xi$, the correlation $I$ would be negligible. This means that one does not need to study the macroscopic system as a whole, with its enormous number of component units: it is enough to take a small piece of the system with linear dimensions of the same order as $\\xi$ (for instance, a sphere with radius, say, $10\\xi$). Once one knows, for instance, the specific heat of this small piece, that of the whole system is obtained by a simple volume proportionality $C \\sim V$ or $Q \\sim V$.\n\nHowever, the nearer the system is to its critical point, the larger is $\\xi$, and the larger is the \"small\" piece representing the whole, i.e. $\\xi \\sim \\vert T-T_{\\rm c} \\vert^{-\\nu}$. Just **at** the critical point, one can no longer break the system into small pieces: the macroscopic critical behaviour of the system is no longer proportional to its volume. Instead, critical quantities become non-linear, non-extensive, and behave as $Q \\sim V^{\\Phi_\\gamma}$ or $C \\sim V^{\\Phi_\\alpha}$, where $\\Phi_\\gamma = \\gamma\/3\\nu$, $\\Phi_\\alpha = \\alpha\/3\\nu$, etc. Also, the above exponential form (NC) for $I$ concerns only the dominating decay valid for a finite $\\xi$. **At** the critical point where $\\xi \\to\n\\infty$, however, other sub-dominating terms enter into the scene, i.e.\n\n$$I \\sim x^{-\\eta}\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\n{\\rm (for\\,\\, critical\\,\\, situations)},\\eqno({\\rm C})$$\n\nwhere $\\eta$ is another critical exponent.\n\nBoth forms (NC) and (C) mean that correlations decay for larger and larger distances. The important conceptual difference is that in (NC) they decay much faster, according to a characteristic length scale $\\xi$ above which correlations become negligible. 
By contrast, there is no characteristic length scale in the critical case (C): correlations are never negligible even between two points very far from each other, inside the system. Thus clusters and holes are observed at all sizes, a crucial property e.g. for electrophoresis.\n\nThis is a serious difficulty for theoretical physicists: since any attempt to break such systems into small pieces fails, they are forced to treat them as a whole. Fortunately, this same strange non-linear, critical behaviour leads to another very important property: most microscopic details of the system are irrelevant to its critical behaviour, since large distances dominate the scenario. That is why water compressibility presents exactly the same critical exponent $\gamma$ as the magnetic susceptibility of any uni-axial ferromagnet, as well as the same values for the other exponents $\alpha$, $\nu$, $\eta$, etc, and thus the same critical behaviour. This holds not only for water and such ferromagnetic materials, but also for any other natural or artificial system which belongs to the same (huge) universality class. One example of such mathematical toys is the famous Ising model: each point on a regular lattice holds a binary variable (a number 0 or 1), and interacts only with its neighbouring sites. No movement at all, no molecules, no atoms, no electrons interacting through complicated quantum rules. The only similarities between this toy model and real water are two very general ingredients: the three-dimensionality of the space and the one-dimensionality of the main variables involved (the numbers 0 or 1 within the toy, and the liquid-vapour density difference within water, also a number, as opposed to a three-dimensional vector). Nevertheless, one can use this very simple toy in order to obtain the critical behaviour common to all much more complicated systems belonging to the same universality class.\n\nHowever, even the study of these toy models is far from trivial, due to the impossibility, already mentioned, of breaking the system into small, separate pieces. Thus, the main instrument is the computer, where one can store the current state of each unit, i.e. a number 0 or 1, in a single bit of memory. By programming the computer to follow the evolution of this artificial system time step after time step, i.e. by repeatedly flipping 0s into 1s (or vice-versa) according to some prescribed microscopic rule, one can measure the various quantities of interest. Note that this approach has nothing to do with the numerical solution of a well-posed mathematical problem defined by specific equations. Instead, the idea is **to simulate** the real **dynamical** behaviour of the system on the computer, and **to measure** the interesting quantities. During the last half century, this \"almost experimental\" technique was tremendously developed by the (now so-called) computational physicists, a fast-growing scientific community to which the authors belong (Stauffer & Aharony 1994; de Oliveira 1991; Moss de Oliveira et al. 1999).\n\nBiological evolution (Darwin 1859) also presents the same fundamental mathematical ingredient that characterises physical critical systems: power-laws. A great deal of evidence is known today (see, for instance, Kauffman 1993 and 1995; Bak 1997).
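As an aside, it may help to see how little code such a toy-model simulation actually requires. The following sketch is a minimal single-spin-flip Metropolis update for the two-dimensional Ising model described above; it is a standard textbook algorithm, shown here only as an illustration of repeatedly flipping binary variables according to a prescribed microscopic rule, with arbitrary illustrative parameters.

```python
# Minimal Metropolis simulation of the 2D Ising model: binary variables on a
# square lattice, flipped one at a time according to a simple local rule.
import numpy as np

rng = np.random.default_rng(0)
L = 32                                    # lattice of L x L sites
spins = rng.choice([-1, 1], size=(L, L))  # binary variables (here -1/+1)
T = 2.27                                  # temperature close to the critical point

for sweep in range(500):
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        # sum over the four neighbouring sites (periodic boundaries)
        neighbours = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                      + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2 * spins[i, j] * neighbours  # energy cost of flipping this spin
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] = -spins[i, j]     # accept the flip

print("magnetisation per site:", spins.mean())
```

Returning to the biological evidence: power-law signatures of the same kind appear directly in evolutionary data.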
A simple and well known example is the number $A$ of still alive lineages within an evolving population: it decays in time according to the power-law $A \\sim t^{-1}$, where the exponent $-1$ can be exactly obtained from the coalescence theory (see, for instance, Excoffier 1997). According to this, after many generations, all individuals of the population are descendents of a single lineage-founder ancestor. The number of generations one needs to wait for this coalescence is proportional to the number of founder individuals, due to the value $-1$ of the exponent. Also, during the whole evolution of the population, the number $E$ of already-extinct lineages with $n$ individuals behaves as $E \\sim n^{-0.5}$, where the new exponent $-0.5$ is also exactly known. The interesting point is that these exponents are **universal**, i.e. their precise values do not change for different microscopic rules dictating how individuals die, how they are born, etc. Another simple example is the evolution of a recessive disease: the frequency of the recessive gene among the evolving population also decays in time as a power-law, thus **without a characteristic extinction time**. Due to this particular mathematical decaying feature, the recessive gene extinction is postponed forever (Jacquard 1978). An explanation for the narrow relation between biological evolution and critical dynamics is presented by de Oliveira (2000).\n\nThe Penna model for biological ageing (Penna 1995) is entirely based on Darwinian evolution, and may be compared with the Ising model, for this particular evolutionary phenomenon: genes are also represented by binary variables (0 for ordinary genes, 1 for harmful ones). It spreads widely during the last half decade, and was applied to many different biological problems involving ageing, always within the general interpretation above: **a very simple model supposed to reproduce the universal features of much more complicated, real phenomena.**\n\nSenescence, or biological ageing, can mean many things; for computer simulations it is best defined as the increase of mortality with increasing age. It seems not to exist for bacteria, where even the concept of death is difficult to define, but for humans as well as for other organisms (Vaupel et al. 1998) this rapid increase of the probability to die, after childhood diseases are overcome, is well known. Fig.1 shows typical human data for a rich country.\n\nThe reasons for ageing are controversial (Watcher & Finch 1997; see also the whole special issues of *La Recherche*: July\/August 1999 and *Nature*: November 9th, 2000). There may be exactly one gene for longevity, or senescence comes from wear and tear like for insect wings and athlete's limbs, from programmed cell death (apoptosis - Holbrook et al. 1996), from metabolic oxygen radicals destroying the DNA (see for instance Azbel 1994), or from mutation accumulation (Rose 1991). The computer simulations reviewed here use this last assumption, which does not exclude all the other reasons. For example, the oxygen radicals may produce the mutations which then accumulate in the genome transmitted from one generation to the next. 
Except if stated otherwise, the mutations here are all detrimental and inherited.\n\nAfter a short description of the model in section 2, we deal in section 3 with the question whether sexual reproduction was better or worse than asexual reproduction hundreds of million years ago when sex appeared, while section 4 tries to explain why today's women live longer than men and have menopause. Section 5 reviews other aspects, and Section 6 gives a short summary.\n\nA more detailed account, but without the results of 1999 and 2000 emphasized here, is given in our book (Moss de Oliveira et al. 1999).\n\n# The Penna model\n\nIn the original asexual version of the Penna model (Penna 1995) the genome of each individual is represented by a computer word (bit-string) of 32 bits (each bit can be zero or one). It is assumed that each bit corresponds to one \"year\" in the individual lifetime, and consequently each individual can live at most for 32 \"years\". *A bit set to one means that the individual will suffer from the effects of a deleterious inherited mutation (genetic disease) in that and all following years*. As an example, an individual with a genome $10100...$ would start to become sick during its first year of life and would become worse during its third year when a new disease appears. In this way the bit-string represents in fact a \"chronological genome\". The biological motivation for such a representation is, for instance, the Alzheimer disease: its effects generally appear at old ages, although the corresponding defective gene is present in the genetic code since birth.\n\nThe extremely short size of the 32 bit-string used in the model would be totally unrealistic if all our genes were related to life-threatening diseases. However, among the average number of $10^8$ units we have in our real genome, only around $10^4$ to $10^5$ units play a functional role. Moreover, only a subgroup of these will give rise to a serious disease at some moment of the individual lifetime. Besides, qualitatively there was no difference when 32, 64 and 128 bits were taken into account (Penna & Stauffer 1996).\n\nOne step of the simulation corresponds to reading one bit of all genomes. Whenever a new bit of a given genome is read, we increase by one the individual's age. The rules for the individual to stay alive are: 1) The number of inherited diseases (bits set to 1) already accumulated until its current age must be lower than a threshold $T$, the same for the whole population. In the example given above, if $T=2$ the individual would live only for 2 years. 2) There is a competition for space and food given by the logistic Verhulst factor $V=1-N(t)\/N_{max}$, where $N_{max}$ is the maximum population size the environment can support and $N(t)$ is the current population size. We usually consider $N_{max}$ ten times larger than the initial population $N(0)$. At each time step and for each individual a random number between zero and one is generated and compared with $V$: if it is greater than $V$, the individual dies independently of its age or genome. The smaller the population size is, the greater is the probability of any individual to escape from this random killing factor.\n\nIf the individual succeeds in staying alive until a minimum reproduction age $R$, it generates $b$ offspring in that and all following years (unless we decide to set also some maximum reproduction age). The offspring genome is a copy of the parent's one, except for $M$ randomly chosen mutations introduced at birth. 
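For readers who prefer code to prose, here is a minimal sketch of this asexual version in Python. The parameter values are arbitrary examples chosen for illustration, real studies use larger populations and longer runs, and the variable names are ours, not those of the original programs.

```python
# Minimal sketch of the asexual Penna model (illustrative parameters only).
import random

GENOME_BITS = 32   # one bit per "year" of life
T = 3              # threshold: die when this many bad mutations are active
R = 8              # minimum reproduction age
B = 2              # offspring per parent per "year"
M = 1              # new deleterious mutations per birth
N_MAX = 10_000     # carrying capacity entering the Verhulst factor

def mutate(genome, n_mutations=M):
    """Set n randomly chosen bits to 1 (a bit that is already 1 stays 1)."""
    for _ in range(n_mutations):
        genome |= 1 << random.randrange(GENOME_BITS)
    return genome

def active_mutations(genome, age):
    """Count the bits set to 1 among the first `age` positions."""
    return bin(genome & ((1 << age) - 1)).count("1")

# an individual is a pair (age, genome); start with perfect genomes
population = [(0, 0) for _ in range(N_MAX // 10)]

for step in range(300):
    verhulst = 1.0 - len(population) / N_MAX
    survivors, babies = [], []
    for age, genome in population:
        age += 1                               # read one more bit of the genome
        if age > GENOME_BITS:                  # no bits left: maximum age reached
            continue
        if active_mutations(genome, age) >= T: # too many active diseases
            continue
        if random.random() > verhulst:         # random Verhulst death
            continue
        survivors.append((age, genome))
        if age >= R:                           # old enough to reproduce
            babies.extend((0, mutate(genome)) for _ in range(B))
    population = survivors + babies

print("final population size:", len(population))
```

Because the whole genome fits into a single computer word, such bit operations keep the simulation fast even for very large populations.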
Although the model allows good and bad mutations, generally we consider only the bad ones. In this case, if a bit 1 is randomly tossed in the parent's genome, it remains 1 in the offspring genome; however, if a bit zero is randomly tossed, it is set to 1 in the mutated offspring genome. In this way, for the asexual reproduction the offspring is always as good as or worse than the parent. Even so, a stable population is obtained, provided the birth rate $b$ is greater than a minimum value, which was analytically obtained by Penna & Moss de Oliveira (1995). In fact, the population is sustained by those cases where no mutation occurs, when a bit already set to 1 in the parent genome is chosen. These cases are enough to avoid mutational meltdown, that is, extinction due to accumulation of deleterious mutation, first considered by Lynch & Gabriel (1990). The reason why we consider only harmful mutations is that they are 100 times more frequent than the backward ones (reverse mutations deleting harmful ones - Pamilo et al. 1987).\n\nThe sexual version of the Penna model was first introduced by Bernardes (1995 and 1996), followed by Stauffer et al. (1996) who adopted a slightly different strategy. We are going to describe and use the second one (see also Moss de Oliveira et al. 1996). Now individuals are diploids, with their genomes represented by two bit-strings that are read in parallel. One of the bit-strings contains the genetic information inherited from the mother, and the other from the father. In order to count the accumulated number of mutations and compare it with the threshold $T$, it is necessary to distinguish between recessive and dominant mutations. A mutation is counted if two bits set to 1 appear at the same position in both bit-strings (inherited from both parents) or if it appears in only one of the bit-strings but at a dominant position (locus). The dominant positions are randomly chosen at the beginning of the simulation and are the same for all individuals.\n\nThe population is now divided into males and females. After reaching the minimum reproduction age $R$, a female randomly chooses a male with age also equal to or greater than $R$ to breed (for sexual fidelity see Sousa & Moss de Oliveira 1999). To construct one offspring genome first the two bit-strings of the mother are cut in a random position (crossing), producing four bit-string pieces. Two complementary pieces are chosen to form the female gamete (recombination). Finally, $m_f$ deleterious mutations are randomly introduced. The same process occurs with the male's genome, producing the male gamete with $m_m$ deleterious mutations. These two resulting bit-strings form the offspring genome. The sex of the baby is randomly chosen, with a probability of $50\\%$ for each one. This whole strategy is repeated $b$ times to produce the $b$ offspring. The Verhulst killing factor already mentioned works in the same way as in the asexual reproduction.\n\nA very important parameter of the Penna model is the minimum reproduction age $R$. According to mutation accumulation-theory, Darwinian selection pressure tries to keep our genomes as clean as possible until reproduction starts. For this reason we age: mutations that appear early in life are not transmitted and disappear from the population, while those that become active late in life when we barely reproduce can accumulate, decreasing our survival probability but without risking the perpetuation of the species. 
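To make the diploid bookkeeping concrete, the following sketch (again in Python, with invented parameter values and helper names) shows how active mutations can be counted in the presence of dominant loci and how one gamete is built by crossing and recombination:

```python
# Illustrative sketch of the diploid genome handling in the sexual version.
import random

GENOME_BITS = 32
# dominant loci: chosen once at the start and shared by the whole population
# (six of them here, an arbitrary illustrative choice)
DOMINANT = frozenset(random.sample(range(GENOME_BITS), 6))

def active_mutations(string_a, string_b, age):
    """Count mutations active up to `age`: homozygous ones always count,
    heterozygous ones only at a dominant position."""
    count = 0
    for pos in range(age):
        bit_a = (string_a >> pos) & 1
        bit_b = (string_b >> pos) & 1
        if (bit_a and bit_b) or ((bit_a or bit_b) and pos in DOMINANT):
            count += 1
    return count

def gamete(parent, n_mutations):
    """Cut both parental bit-strings at a random position, recombine two
    complementary pieces, and add new deleterious mutations."""
    string_a, string_b = parent
    if random.random() < 0.5:          # which string contributes the lower part
        string_a, string_b = string_b, string_a
    cut = random.randrange(1, GENOME_BITS)
    low_mask = (1 << cut) - 1
    full_mask = (1 << GENOME_BITS) - 1
    g = (string_a & low_mask) | (string_b & full_mask & ~low_mask)
    for _ in range(n_mutations):
        g |= 1 << random.randrange(GENOME_BITS)
    return g

def make_offspring(mother, father, m_f=1, m_m=1):
    """mother and father are (string, string) pairs; returns (genome, sex)."""
    child = (gamete(mother, m_f), gamete(father, m_m))
    sex = random.choice("FM")          # 50% female, 50% male
    return child, sex
```

In the full model this diploid bookkeeping is combined with the same Verhulst factor and minimum reproduction age $R$ as before, so that selection again keeps the early bit positions clean while mutations pile up at the late ones.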
One of the most striking examples of such a mechanism is the catastrophic senescence of the Pacific salmon and other species called semelparous: in these species all individuals reproduce only once in life, all at the same age. This can easily be implemented by setting a maximum reproduction age equal to $R$. After many generations, the inherited mutations have accumulated in such a way that as soon as reproduction occurs, individuals die. This explanation was given by Penna et al. (1995), using the Penna model (see also Penna & Moss de Oliveira 1995 and a remark from Tuljapurkar on page 70 in Wachter & Finch 1997).\n\n# Comparison of sexual and asexual reproduction\n\n## Definitions\n\nIn this section we check which way of reproduction is best: sexual, asexual, or something in between. We denote as asexual (AS) and sexual (SX) the simulation methods described in the previous section, that is, cloning of a haploid genome for AS, and crossover for diploid genomes with males and females separated for SX. Intermediate possibilities which will also be compared are apomictic parthenogenesis (AP), meiotic parthenogenesis (MP), hermaphroditism (HA), and mixtures of them. One could also group AS, AP and MP into asexual and HA and SX into sexual reproduction. Parasex, the exchange of haploid genome parts between different bacteria, is not simulated here.\n\nTo find out which way is the most successful one, we simulate each choice separately with the same parameters, in particular with the same $N_{max}$ for the Verhulst factor taking into account the limits of space and food. The choice with the largest equilibrium population, after the initial transient phenomena are overcome, is regarded as the best. We assume it would win in a Darwinian selection (see Stauffer et al. 2000 for some justification) if different populations following these different ways of reproduction were to compete against each other in the same environment, without any symbiosis or predator-prey relation between them.\n\nAS and SX were already defined in the preceding section. For AP the diploid genome is copied without crossover, with mutations only. For MP the diploid genome is crossed over, and one of the two resulting haploid bit-strings is randomly chosen, duplicated and mutated to form the new diploid genome. HA is similar to SX except that there is no separation into males and females; instead all of them can generate offspring and each individual randomly selects a partner from the whole population to exchange genome as in SX. Fig.2 summarizes the four versions schematically.\n\nIn all copying of genomes (bit-strings), point mutations are assumed to happen with the same probability per bit. Thus typically for AS with a genome of 32 bits we assume one mutation per generation for the whole genome, while for the diploid cases AP, MP, HA and SX we assume two. (We assume the same mutation rate for males and females in the Penna model simulations.) The birth rate is also assumed to be the same for all birth-giving individuals. For example, for AS, AP, MP and HA we have four offspring per suitable individual and per year, while for SX we have four offspring per suitable female and per year. Thus the birth rate averaged over males and females is only two instead of four. And we have to find out whether this loss of a factor of two in the average birth rate for SX is overcome by advantages not contained in the other ways of reproduction.\n\n(For HA as simulated by Stauffer et al.
(2000) during one iteration some individuals have already aged by one \"year\", while others have not yet aged. Results do not change much, see topmost data in Fig.3 below, if now in one iteration we first let everybody age one time unit, and only afterwards partners are selected.)\n\n## Comparison without ageing\n\nThe Redfield model (Redfield 1994) is an elegant model requiring much less computer time than the Penna model, but having no age structure. It is not a population dynamics model following the lifetime of each individual, but only simulates their probabilities to survive up to reproduction. The mortality increases exponentially with the number of mutations in the individual. For the sexual variant the number of mutations in the child is determined by a binomial distribution such that on average the child has as its own number of mutations half the number of the father, plus half the number from the mother. At birth, new mutations are added following a Poisson distribution, for both AS and SX. Because of the lack of an explicit genome, the forms AP, MP, HA between AS and SX were not simulated.\n\nThis model triggered many publications in the physics literature since it originally made SX much worse than AS: The average mortality was about 25 percent for AS, and did not change when the simulation switched to SX. But still the males were eating the food away from the females. Actually, the male mutation rate is much higher than the female one, and when that was taken into account the mortality with SX became much higher than for AS (Redfield 1994).\n\nHowever, the picture changed drastically when we took into account (Stauffer et al. 1996) that most hereditary diseases are recessive (acting only when both father and mother had them in their transmitted genome) and not dominant (acting already when only one of the two inherited bit-strings has them). Then the mortality decreased by about an order of magnitude, and SX became much better than AS. The same drastic improvement was found when for SX the females selected only males with few mutations.\n\nHowever, since the males do not give birth, these simulations (Stauffer et al. 1996) required the female birth rate for SX to be twice as high as the AS birth rate. Correcting this factor of two, and still assuming that only 20% of the mutations are dominant diseases, SX lost out for the originally selected mutation rate (Redfield 1994) of 0.3. Increasing the mutation rate to 1 for both AS and SX, SX won (Stauffer 1999) over AS if the male mutation rate was the same as the female one, and AS won over SX when the male mutation rate was three or more times higher than the female one. Thus, as also observed in Nature, sometimes asexual and sometimes sexual reproduction is better.\n\nA more realistic model, involving an explicit genome in the form of bit-strings, was more recently investigated by \u00d6r\u00e7al et al. (2000). It did not involve ageing, however, since all bit positions were treated equally. Instead, \u00d6r\u00e7al et al. (2000) used the Jan et al. (2000) parameter $\\mu$ defined such that only individuals with $\\mu$ and more mutations exchange genome. (The model is then closer to HA than to SX.) Healthy individuals without many mutations reproduce similarly to AS. Five different versions were studied, depending on the number of offspring and on whether individuals with $\\mu$ and more mutations mate only with each other or also with a wider population having less mutations. 
The simulation showed that in none of the five cases did the sexual population die out; in one case it even won completely and drove the AS population to extinction.\n\nThese two models (Redfield 1994 and \u00d6r\u00e7al et al. 2000) thus give the same result: the simpler asexual way of life is not necessarily better than the more complicated way of genome exchange. And this possible justification of sex comes only from intrinsic genetic reasons, not from extrinsic or social reasons like parasites, changing environment, or child protection. On the other hand, there are also cases where asexual reproduction is better. With ageing included to make the simulation more realistic, the next subsection will tell us a different story.\n\n## Comparison in Penna ageing model\n\nMost organisms age, and thus we should compare sexual and asexual reproduction in a model with ageing, where reproduction starts only after a certain age. The Penna model of section 2 is the only one for which we know of computer simulations for ageing and sex. The first comparisons of MP with SX in this model were published by Bernardes (1997). More recently, AS, AP, MP, HA and SX were simulated with it (Stauffer et al. 2000). We enlarge the range of possibilities by incorporating into it the Jan parameter (Jan et al. 2000) $\mu$ such that organisms with $\mu$ and more mutations try to find a partner with whom they exchange genome (HA and SX), while those with fewer than $\mu$ mutations use AP or MP. Only the mutations already active, up to the current age of each individual, are counted in making this choice.\n\nThe simulated mixtures of reproduction were: MP-HA, MP-SX, AP-SX. In the first case, the final population depends non-monotonically on $\mu$, showing a maximum for intermediate $\mu$, while in the two other mixtures the behaviour is monotonic, making the mixtures less interesting. Since $T$ mutations kill an individual, we have $0 \le \mu \le T$, with $\mu = 0$ describing pure HA and $\mu = T$ describing pure MP for the MP-HA mixture. Fig.3 summarizes our main results.\n\nWe see that SX (the small dots in the lower half) is by far the worst, MP (+) and AS (line) give nearly identical results, AP (x) is slightly worse than AS, and finally a mixture of HA and MP (stars) with $\mu = 4$ gives the best results. Why, then, do males exist?\n\nSome people claim men should eat fewer steaks and drink less alcohol. Taken to the extreme, we assume the males eat nothing and are so much smaller than the females that they consume no space. Some animals followed this way of life long before us. Then their contribution to the Verhulst dying probability $(N_m + N_f)\/N_{max}$ mentioned in section 2 becomes negligible, and the dying probability is simplified to $N_f\/N_{max}$. With this change, the population for SX roughly doubles, not surprisingly, and then SX is by far the best solution. The majority of the present authors insist that this result is an artifact of the assumptions and is no guide for how they should regulate their lives. For the evolution of sex hundreds of millions of years ago, it is difficult to imagine that, together with the mutation towards SX, the male body immediately became much smaller and consumed much less food.\n\nPerhaps male aggressiveness plays a useful role in protecting the children while reducing the male survival chances.
Using the algorithm to be described in section 4 for testosteron as an explanation for the lower mortality of women compared to men, and female partner selection described later in this section, Fig.4 shows that now SX is above MP or AS. This child protection is already an effect outside the intrinsic genetic effects discussed before. The following paragraphs discuss environmental effects which can also justify SX over AS, in agreement with reality.\n\nParasites have long been claimed to justify sexual reproduction, since the greater genetic variety of the offspring gives the parasites less chance to adapt to the host. An early computer simulation (Howard & Lively 1994) without ageing already showed sexual reproduction to die out compared with asexual reproduction if no parasites are present (thus similar to Redfield 1994), while in the presence of parasites sex can give the better chance of survival (Howard & Lively 1994). Within the more realistic Penna model, including ageing, the parasite problem was studied more recently by S\u00e1 Martins (2000) with the same result: Parasites justify sex.\n\nFor this purpose, MP hosts, SX hosts and 1000 parasites were simulated together in the same environment (S\u00e1 Martins 2000) represented by $N_{max}$ for the sum of the populations. With probability $1\/10^4$, an individual can switch from MP to SX or back. The parasites are represented by bit-strings. Each female host has contact with 60 parasites, and if one parasite agrees in its bit-string with that of the female, this host loses her ability to procreate. The parasite bit-string, on the other hand, is modified into the female bit-string if the parasite meets the same bit-string for the second time.\n\nStarting with SX, the whole population changes to MP after a few hundred time steps, if no parasites are present. In the presence of parasites, however, starting with MP the whole population switches to SX in an even shorter time (S\u00e1 Martins 2000). Thus the well known greater variety of SX (S\u00e1 Martins & Moss de Oliveira 1998; Dasgupta 1997) compared to MP saves the sexual population from the attacks of parasites.\n\nBoth papers (Howard & Lively 1994; S\u00e1 Martins 2000) were motivated by observations of biologist Lively and his collaborators on snails. They do not discuss if parasites already were plaguing the presumably much smaller organisms nearly $10^9$ years ago when sex appeared.\n\nAnother reason to justify SX compared with MP are rapid changes in the environment, to which natural evolution cannot be fast enough. SX and HA lead across the species to a larger variety in the genome than AS or MP, and after a catastrophe like the meteor killing the dinosaurs, the species with a greater variety has a larger chance to contain a minority of individuals adapted to the changed conditions. Fig.5, adapted from S\u00e1 Martins & Moss de Oliveira (1998), compares MP with SX. First, MP gives the higher population as in Fig.3, but when a sudden change in the environment is introduced into the simulation, the SX population has a higher chance to survive the catastrophe than MP.\n\nAnother reason why genomic exchange between different individuals may be better than AS, AP or MP is partner selection (Redfield 1994; Stauffer et al. 1996). 
In the sexual Penna model, let us first simulate a birth rate of eight children per female and \"year\"; then we reduce the birth rate to four to account for males not getting pregnant; and in the third step we assume that the females select as partners only healthy males with few mutations. Some parameter region could be found, Fig.6, where selection in step 3 gave a strong advantage over no selection (step 2), but this advantage was not enough to overcome the loss of half the births compared with step 1. With the parameters of Fig.3 instead, the SX data would shift upward to 18.5 million if only males with at most one active mutation are selected.\n\nThus, while the simpler models without ageing gave clear intrinsic justifications for the existence of males, the more realistic ageing model required parasites, catastrophes, or child protection for this purpose (partner selection may help somewhat) and intrinsically slightly preferred a HA-MP mixture over haploid asexual cloning. It remains to be seen what SX simulations will give in other models (e.g. Onody 2000).\n\n# Why do women live longer and have menopause?\n\nMen may be useless according to section 3.3, but why do they live shorter lives than women in the developed countries of the 20th century?\n\nMany mutations limit life, and thus the higher mutation rate of males compared with females (Redfield 1994) could be the reason for the higher male mortality. Simulations (Stauffer et al. 1996) showed that this is not the case: the offspring is randomly either male or female and in both cases inherits roughly the same mutations from the parents; for the same reason, women are not killed by mutations after menopause (see below), in contrast to Pacific salmon (Moss de Oliveira et al. 1999; Penna et al. 1995). Somatic mutations, which are not inherited and not passed on to the offspring, raise the male mortality above the female one if the rate of somatic mutations is higher for males than for females (Moss de Oliveira et al. 1996). Alternatively, females could simply be more resistant than males against diseases (Penna & Wolf 1997). Mammalian females have two X chromosomes, while the males have one X and one Y chromosome, such that in males all X mutations act as dominant (Schneider et al. 1998). Except at old age, these last three assumptions all lead to higher male than female mortalities, as in reality (Fig.1), and were reviewed in our book (Moss de Oliveira et al. 1999).\n\nA more recent idea was suggested to us by the medical researcher Klotz and is related to male lifestyle (steaks and alcohol) caused by testosteron (Klotz 1998; Klotz & Hurrelmann 1998; Baulieu 1999). This hormone causes higher male aggressiveness, leading to death, as well as more arteriosclerosis later in life. These bad effects were perhaps counterbalanced earlier in human evolution by helping the males to defend their families against predators or fellow men. This child protection by males can, in some other form, also occur in other animals. Using this child protection assumption together with sexual selection as described below, the above SX results in Fig.4 were produced.
And for today's humans, the testosteron parameters of this child protection model could be chosen such that the male mortality is about twice as high as the female mortality, except at old age, in agreement with reality (Stauffer & Klotz 2000; Stauffer 2000).\n\nPresumably, the true reason for the difference between the mortalities of men and women is a combination of genetic and social effects, as is shown by the variation from country to country within Europe (Gjon\u00e7a et al. 1999). The XX-XY chromosome hypothesis (Schneider et al. 1998) is supported by the observation (Paevskii 1985) that male birds usually live longer than the females: for birds, the females have two different and the males have two identical chromosomes, the opposite of mammals.\n\n(Technical remark: To simulate the effects of the male testosteron level in Fig.4, the Verhulst dying probability was increased, for males only, by an amount proportional to the age-dependent testosteron level $k(a)$ (Stauffer & Klotz 2000; Stauffer 2000). On the other hand, the survival probability of babies was multiplied by a factor min(const$\cdot k(a),2$), such that too low a testosteron level of the father causes his babies to be killed by others. The function $k(a)$ evolved to an optimum shape through small heritable mutations in $k(a)$.)\n\nMenopause for women means an abrupt cessation of their reproductive function, while for men, andropause is a rather smooth decay with age. Similar effects exist for other mammals (though perhaps under different names, which we ignore here). For Pacific salmon, life ends for males and females shortly after the end of reproduction of both (Penna et al. 1995); why does the same effect not occur for women?\n\nPure genetic reasons (Stauffer et al. 1996) in the unmodified Penna model, without child care, already allow women to survive menopause. Conception decides randomly whether the new baby is a boy or a girl, and the genome is the same apart from the difference in X and Y chromosomes. Thus, if all bits above the reproductive age were set equal to one for women (as for Pacific salmon), the men would also die at that same age of menopause. To kill the women earlier than men, mother Nature would need a longevity gene in the Y chromosome, which in reality contains little genetic information. In this way, female survival after menopause is consistent with the mutation-accumulation hypothesis in the Penna model.\n\nThis consistency does not yet explain why menopause evolved. The only simulation we are aware of explaining menopause (Moss de Oliveira et al. 1999a) introduces two new assumptions into the Penna model: a risk of dying at birth for the mother which increases with the number of active mutations and thus with age; and child care in the sense that young children die if their mother dies. Then the maximum age for reproduction was allowed to emerge from the simulation, instead of being fixed at the beginning, by assuming it to be hereditary apart from small mutations up or down. As a result, the distribution of the maximum age of reproduction peaked at about 15 \"years\", whereas without child care its maximum was at 32 years, at the oldest possible bit position.\n\n# Other aspects\n\nGeneticist S. Cebrat (priv. comm.) has criticized our crossover method for the sexual Penna model as published in Moss de Oliveira et al. (1999).
Since we split the bit strings at some randomly selected position and then combine the first part of one bit-string with the second part of the other bit-string, and since bit positions correspond to individual age, we produce correlations between the mutations at consecutive ages. In real DNA, the genes are not stored consecutively in the order in which they become active during life. Thus it is better to select randomly one subset of bits from one bit-string, and the complementary subset from the other bit-string. Simulations indicate no clear difference (Fig.7).\n\nOverfishing (Moss de Oliveira et al. 1995; Penna et al. 2000) and the inheritance of longevity (de Oliveira et al. 1998) were simulated using the asexual version of the Penna model.\n\nAlthough we have centered this work on the Penna model, it is not the only way to go in ageing studies. It is an appropriate tool that allows us to unravel the importance of mutation accumulation effects. The very first models studied by physicists, summarized in Stauffer (1994), were based on antagonistic pleiotropy. They were inspired by the Partridge and Barton review (Partridge & Barton 1993). There, a constraint was proposed for the survival rates from babies to juveniles, $J$, and from juveniles to adults, $A$, namely $J + A^4 =1$. Because this model has only two parameters and three ages, the exponential increase of mortalities cannot be observed. Some attempts to implement antagonistic effects in bit-string models have been made. Bernardes imposed an extra deleterious mutation at advanced ages for a fraction of the population with higher reproduction rates (Bernardes 1996). Sousa and Moss de Oliveira, in a more detailed study, have shown that the combined action of mutation accumulation and antagonistic pleiotropy at defined ages can extend the lifespan of a population (in preparation). Sousa and Penna have introduced a different strategy, where both sides of the antagonism are present: good (bad) mutations at earlier (later) ages (in preparation). The minimum age at reproduction is allowed to vary. The later an individual reaches sexual maturity, the more fertile it is. There is a clear trade-off between postponing maturity (and consequently decreasing the integrated fertility) and being longer exposed to death by competition or by the action of bad mutations. Preliminary results suggest that, except for unrealistic handicaps on the fertility for later maturity, natural selection drives the population to the earliest possible maturity (see also Medeiros et al. 2000).\n\n# Summary\n\nThe computer simulations of biological ageing, mostly using the Penna model, could explain nicely the roughly exponential increase of mortality functions with age, the existence of menopause, and (with less clarity) the existence of other forms of reproduction besides asexual cloning of haploid genomes. We speculate that, starting from this asexual mode, mother Nature may have evolved via apomictic and\/or meiotic parthenogenesis towards hermaphroditism, and only later separated the population into males and females because of external or social reasons, as simulated. Finally, menopause appeared because of the need for child care by the mother and the risks for her associated with giving birth later in life. For practical applications, simulations suggested not to catch young fish, or young and old lobsters, in order to maximize the catch (Moss de Oliveira et al. 1995; Penna et al. 2000).\n\n**REFERENCES**\n\nAzbel MYa. 1994. Universal biological scaling and mortality. *Proc. Natl. Acad. Sci.
USA* **91**: 12453-12457.\n\nBak P. 1997. *How Nature Works: the Science of Self-Organized Criticality*, Oxford University Press.\n\nBaulieu EE. 1999. Le vieillissement est-il soluble dans les hormones? *La Recherche* **322**: 72-74.\n\nBernardes AT. 1995. Mutational meltdown in large sexual populations. *J. Physique* I **5**: 1501-1515.\n\nBernardes AT. 1996. Strategies for reproduction and ageing. *Ann. Physik* **5**: 539-550.\n\nBernardes AT. 1997. Can males contribute to the genetic improvement of the species? *J. Stat. Phys.* **86**: 431-439.\n\nDarwin C. 1859. *On the Origin of Species by Means of Natural Selection*, Murray, London.\n\nDasgupta S. 1997. Genetic crossover vs. cloning by computer simulation. *Int. J. Mod. Phys.* C **8**: 605-608.\n\nde Oliveira PMC. 1991. *Computing Boolean Statistical Models*, World Scientific, Singapore\/London\/New York.\n\nde Oliveira PMC. 2000. Why do evolutionary systems stick to the edge of chaos. *Theory in Biosci.*: in press.\n\nde Oliveira PMC, Moss de Oliveira SM, Bernardes AT & Stauffer D. 1998. *Lancet* **352**: 911-912.\n\nExcoffier L. 1997. Ce que nous dit la genealogie des genes. *La Recherche* **302**: 82-84.\n\nGjon\u00e7a A, Tomassini C & Vaupel JW. 1999. Pourqoi les femmes survivent aux hommes? *La Recherche* **322**: 96-99.\n\nHolbrook NJ, Martin GR & Lockshin RA. 1996. *Cellular Ageing and Death*, Wiley-Liss, New York.\n\nHoward RS & Lively CM. 1994. Parasitism, mutation accumulation and the maintenance of sex. *Nature* **367**: 554-557 and **368**: 358 (Erratum).\n\nJacquard A. 1978. *\u00c9loge de la Diff\u00e9rence: la G\u00e9n\u00e9tique et les Hommes*, \u00c9ditions du Seuil, Paris.\n\nJan N, Moseley L & Stauffer D. 2000. A hypothesis for the evolution of sex. *Theory in Biosci.* **119**: 166-168.\n\nKauffman SA. 1993. *Origins of Order: Self-Organization and Selection in Evolution*, Oxford University Press, New York.\n\nKauffman SA. 1995. *At home in the Universe*, Oxford University Press, New York.\n\nKlotz T. 1998. *Der fr\u00fche Tod des starken Geschlechts*, Cuvillier, G\u00f6ttingen.\n\nKlotz T & Hurrelmann K. 1998. Adapting the health care system to the needs of the aging male. *The Aging Male* **1**: 20-27.\n\nLynch M & Gabriel W. 1990. Mutation load and the survival of small populations. *Evolution* **44**: 1725-1737.\n\nMedeiros G, Idiart MA & de Almeida RMC. 2000. Selection experiments in the Penna model for biological aging. *Int. J. Mod. Phys.* C **11**: No. 7.\n\nMoss de Oliveira, Penna TJP & Stauffer D. 1995. Simulating the vanishing of northern cod fish. *Physica* A **215**, 298-304.\n\nMoss de Oliveira S, de Oliveira PMC & Stauffer D. 1996. Ageing with sexual and asexual reproduction: Monte Carlo simulations of mutation accumulation. *Braz. J. Phys.* **26**: 626-630.\n\nMoss de Oliveira S, de Oliveira PMC & Stauffer D. 1999. *Evolution, Money, War and Computers*, Teubner, Leipzig.\n\nMoss de Oliveira S, Bernardes AT & S\u00e1 Martins JS. 1999a. Self-organisation of female menopause in populations with child care and reproductive risk. *Eur. Phys. J.* B **7**: 501-504.\n\n\u00d6r\u00e7al B, T\u00fczel E, Sevim V, Jan N & Erzan A. 2000. Testing a hypothesis for the evolution of sex. *Int. J. Mod. Phys.* C **11**: 973-986; and also DeCoste C & Jan N, priv. comm.\n\nOnody RN. 2000. The Heumann-H\u00f6tzel Model revisited. Talk O-24 at FACS 2000, Macei\u00f3, Brazil.\n\nPaevskii VA. 1985. *Demography of Birds* (in Russian), Nauka, Moscow.\n\nPamilo P, Nei M & Li WH. 1987. 
Accumulation of mutations in sexual and asexual populations. *Genet. Res., Camb.* **49**: 135-146.\n\nPartridge L & Barton NH. 1993. Optimality, mutation and the evolution of ageing. *Nature* **362**: 305-311.\n\nPenna TJP. 1995. A bit-string model for biological ageing. *J. Stat. Phys.* **78**: 1629-1633.\n\nPenna TJP & Moss de Oliveira S. 1995. Exact results of the bit-string model for catastrophic senescence. *J. Physique* I **5**: 1697-1703.\n\nPenna TJP, Moss de Oliveira S & Stauffer D. 1995. Mutation accumulation and the catastrophic senescence of the Pacific salmon. *Phys. Rev.* E **52**: R3309-R3312.\n\nPenna TJP & Stauffer D. 1996. Bit-string ageing model and German population. *Zeits. Phys.* B **101**: 469-470.\n\nPenna TJP & Wolf D. 1997. Computer simulation of the difference between male and female death rates. *Theory in Biosc.* **116**: 118-124.\n\nPenna TJP, Racco A & Sousa AO. 2000. Can microscopic models for age-structured populations contribute to Ecology? Talk IT-11 at FACS 2000, Macei\u00f3, Brazil (to appear in *Physica A*).\n\nRedfield RJ. 1994. Male mutations and the cost of sex for males. *Nature* **369**: 145-147.\n\nRose MR. 1991. *Evolutionary Biology of Aging*, Oxford University Press, New York.\n\nS\u00e1 Martins JS. 2000. Simulated coevolution in a mutating ecology. *Phys. Rev.* E **61**: 2212-2215.\n\nS\u00e1 Martins JS & Moss de Oliveira S. 1998. Why sex - Monte Carlo simulations of survival after catastrophes. *Int. J. Mod. Phys.* C **9**: 421-432.\n\nSchneider J, Cebrat S & Stauffer D. 1998. Why do women live longer than men? A Monte Carlo Simulation of Penna-type models with X and Y cromossomes. *Int. J. Mod. Phys.* C **9**: 721-725.\n\nSchr\u00f6dinger E. 1944. *What is Life?*, Cambridge University Press, Cambridge.\n\nSousa AO & Moss de Oliveira S. 1999. High reproduction rate versus sexual fidelity. *Eur. Phys. J.* B **10**: 781-785.\n\nStauffer D. 1994. Monte Carlo simulations of biological ageing. *Braz. J. Phys.* **24**: 900-906.\n\nStauffer D. 1999. Why care about sex? Some Monte Carlo justification. *Physica* A **273**: 132-139.\n\nStauffer D. 2000. Self-organisation of testosterone level in the Penna-Klotz ageing model. *Theory in Biosciences*, in press.\n\nStauffer D & Aharony A. 1994. *Introduction to Percolation Theory*, Taylor and Francis, London.\n\nStauffer D, de Oliveira PMC, Moss de Oliveira S & Zorzenon dos Santos RM. 1996. Monte Carlo simulations of sexual reproduction. *Physica* A **231**: 504-514.\n\nStauffer D & Klotz T. 2000. The mathematical point of view: The sex-specific life expectancy and the influence of testosterone in an aging simulation model and its consequences for prevention. Submitted to *The Aging Male*.\n\nStauffer D, S\u00e1 Martins JS & Moss de Oliveira S. 2000. On the uselessness of men - Comparison of sexual and asexual reproduction. *Int. J. Mod. Phys.* C **11**: No. 7.\n\nVaupel JW, Carey JR, Christensen k, Johnson TE, Yashin AI, Holm NV, Iachine IA, Kanisto V, Khazaeli AA, Liedo P, Longo VD, Zeng Y, Manton KG & Curtsinger JW. 1998. Biodemography of longevity. *Science* **280**: 855-860.\n\nWachter KW & Finch CE. 1997. *Between Zeus and the Salmon. The Biodemography of Longevity*, National Academy Press, Washington DC.\n\nWilson KG. 1971. Renormalization group and critical phenomena I. Renormalization group and the Kadanoff scaling picture. *Phys. Rev.* **B4**: 3174-3183.\n\nWilson KG & Kogut J. 1974. The renormalization group and the $\\epsilon$ expansion. *Phys. Rep.* **12C**: 75-200.\n\nWilson KG. 1979. 
Problems in physics with many scales of length. *Sci. Am.* **241**: 140-157.\n\n# Figure Captions\n\nFig.1: Male (+) and female (x) mortality functions in the USA, 1991-1995; from J.R. Wilmoth's Berkeley Mortality Database demog.berkeley.edu\/wilmoth\/. The straight line through the male data indicates the exponential increase with age (Gompertz law). These mortality functions are defined as $-d\\ln S(a)\/da$ where $S(a)$ is the probability to survive up to an age of $a$ years.\n\nFig.2: Schematic representation of the genomic changes for AS, AP, MP (from left to right) in part a and for SX in part b. (The diagram for SX is also valid for HA, except that for HA all individuals can reproduce.)\n\nFig.3: Comparison of populations, versus number of iterations or \"years\", for (from below) SX, AP, MP. The highest data refer to a mixture of HA and MP with $\\mu = 4$; nearly the same results are obtained for $\\mu = 3$ and 5. The data for AS (line) overlap with those of MP (+), while AP(x) fluctuates around slightly lower values. For SX we show the sum of males and females. Threshold $T=9$, 4 births per year and per female above minimum reproduction rate of 8, one mutation per string of 32 bits at birth, $N_{max}$ = 80 million is about four times larger than the actual populations.\n\nFig.4: Comparison of populations, versus number of iterations, for SX with child protection (+,x), and AS (line). (For the data marked by x, females select only male partners with at most one active bad mutation.) $N_{max}$ = 5 million; otherwise parameters as in Fig.3.\n\nFig.5: Comparison of MP (lines) with SX (crosses) population before and after a sudden change in the environment. For SX, the female birth rate of two was the same as for all MP individuals. ($N_{max}= 400000, \\; T=3, \\; d = 5, \\; R = 10$, one mutation per bit-string.)\n\nFig.6: Can partner selection overcome the loss of half the births ? The top data show step 1, the bottom data step 2, and the middle data step 3 (see text): Selection helps, but not enough.\n\nFig.7: Comparison of traditional ordered crossover (x, squares) with better random crossover (+, stars) showing little difference. The Verhulst deaths appear at all ages (+, x) or only at birth (stars, squares).","meta":{"dup_signals":{"dup_doc_count":33,"dup_dump_count":30,"dup_details":{"curated_sources":2,"2020-05":1,"2019-47":1,"2018-09":1,"2017-43":1,"2017-34":1,"2016-50":1,"2016-44":1,"2016-18":1,"2015-48":1,"2015-40":1,"2015-35":1,"2015-32":1,"2015-27":1,"2015-22":1,"2015-14":1,"2014-52":1,"2014-42":1,"2014-41":1,"2014-35":1,"2014-23":2,"2014-15":2,"2020-50":1,"2015-18":1,"2015-11":1,"2015-06":1,"2014-10":1,"2013-48":1,"2013-20":1,"2017-13":1}},"filename":"out\/cond-mat0011524_extract_anais.tex.md"},"subset":"arxiv"} +{"text":"author: Enrico Brehm\ntitle: Black hole essay\n\n# The classical viewpoint\n\nBlack holes are some of the strangest phenomenons in our univers. Here, we first want to discuss them as objects of a classical physical theory, where *classical* means that we forget about all possible quantum effects and their consequences. In the present case the underlying classical theory is Einstein's theory of gravity . It describes space and time in terms of a collection of fields whose behaviour is dictated by the Einstein equations.\n\nOne natural question to ask is how does this theory describe space and time around a massive spherical object like a star? The solution to this was found by Schwarzschild . 
It is valid all around any static round object and, remarkably, only depends on its mass. However, quite strange things can happen when all that mass is packed within a specific radius named after Schwarzschild. Then a so-called event horizon forms at the Schwarzschild radius and we start talking about a black hole.\n\nBefore we shed more light on the strangeness of black holes, let us try to get some intuition for the circumstances under which black holes can appear. The Schwarzschild radius $r_s$ and the mass $M$ of a spherical object have a very easy relation: they are proportional to each other, $r_s = a\\cdot M$, with $a$ being very small when we measure in standard units. Let us for example consider an object with the mass of earth; then its Schwarzschild radius is only about 9 mm! No physical process is known in which all of earth can be compressed that much, and it seems quite unlikely that there are many (if any at all) black holes with the mass of earth. The situation changes when we consider bigger and bigger masses. This is because the volume within the Schwarzschild radius, *i.e.*, the space where we can store all the mass to form a black hole, increases much faster. In fact, if we double the mass we get roughly eight times more volume to store it. The formation of black holes becomes easier the heavier they are. It is known that there are mechanisms at the end of the lifetime of very massive stars that lead to the formation of so-called stellar black holes. Even heavier black holes can for example develop when stellar ones merge.\n\nLet us return to the promised strangeness of black holes. Actually, if we are far away from the black hole, it is not much different from any other stellar object of the same mass. The only big difference is that we do not see any light originating from the black hole. An interesting effect that occurs when we come closer to it is that time for us passes more slowly than for those who stay away from the black hole. However, the effect cannot be noticed in a direct way. Any clock that we carry with us behaves completely normally from our perspective. Only if we return to a place far away from the black hole and compare the elapsed times could we see a difference. This effect appears, in fact, for any massive object that we come close to and is not only a feature of black holes. Remember that outside the spherical object the solution to Einstein's equations, which in particular tells us how time behaves, only depends on its mass! However, for any stellar object that is not a black hole we would at some point reach that object and enter it. Inside of it the solutions do depend on its specifics, and one can show that effects like time dilation cannot grow arbitrarily large there. If we, however, come closer and closer to the black hole, the effect will grow without bound until we reach the previously mentioned event horizon.\n\nPassing the event horizon has some severe consequences. If we take, for example, the above observation of unbounded time dilation seriously, we come to the conclusion that in the moment we spend on the horizon *all* time passes for anything outside the black hole. The end of all things happens in the rest of the universe, and in fact after the moment in which we enter the black hole, there is no way back.
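To make this quantitative, one can use the standard textbook expressions for the Schwarzschild geometry (quoted here only as an illustration): a clock at rest at radius $r$ outside the hole runs slow relative to a distant clock by the factor

$$\\frac{d\\tau}{dt} \\;=\\; \\sqrt{1-\\frac{r_s}{r}}\\;, \\qquad r_s = \\frac{2GM}{c^2}\\;,$$

which tends to zero as $r \\to r_s$, so that a finite stretch of proper time just outside the horizon corresponds to an arbitrarily long time far away. The same constant fixes the proportionality mentioned above, $a = 2G\/c^2$, giving $r_s \\approx 9$ mm for the mass of the earth and $r_s \\approx 3$ km for one solar mass.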
This is also visible in a remarkable and strange feature of Schwarzschild's solution: if we compare it inside and outside of the event horizon, one observes that global time and the radial direction interchange their meaning. In a world outside the black hole everything and everyone has to move forward in time. This is a fundamental feature of Einstein's theory. Inside the event horizon the radial direction takes the place of time, with the drastic consequence that, no matter what, we have to move forward in this direction, where forward means towards the center of the black hole. There really is no way back! Not even the most powerful rocket that we could imagine can prevent us from finally reaching the very center of the black hole, where gravitational forces become immeasurably strong and where, at the latest, the quantum features of black holes and of gravity itself must show their face.\n\nHowever, other quantum aspects of black holes will be visible much earlier. Some of them will be discussed in what follows.\n\n# Contact with the quantum world and information processing\n\nWe have seen before that Schwarzschild's solution only depends on the mass of the object. But what happens with all the information about the object that collapsed into a black hole? It consisted of many different particles, it had a temperature, a matter distribution, a specific spectrum of radiation, and so on. If we only believe in the classical world, then all that information is hidden behind the event horizon after the formation of the black hole. It is then by no means accessible to anyone outside the black hole. In a classical world this is not a big problem. It is at most sad that no one outside the black hole can get the information, but it causes no issues concerning the consistency of the theory.\n\nHowever, we know that our world is in fact not a classical one, and our knowledge about the quantum theories that describe at least ordinary matter in our universe is rather decent. Using this knowledge, Hawking could show that quantum effects near the event horizon of a black hole lead to a constant flow of particles away from it. A black hole radiates and, hence, must lose mass over time. If we wait long enough, a black hole evaporates either completely or until some tiny remnant of it is left.\n\nWhere is all the information after the evaporation? Now that we include some quantumness in the description of black holes, this question becomes very important. Thoughtless processing of quantum information can easily lead to inconsistencies. For example, information has to be spread very fast inside the black hole. Otherwise it would be possible to copy quantum states, which is strictly forbidden in any consistent quantum theory. To be honest, there is no real consensus in the physics community on how the black hole deals with quantum information. One possibility might be that it is hidden in its Hawking radiation. If we wait long enough and collect a sufficient amount of it, we might be able to regain all the information we want. However, there are many more gedankenexperiments concerning similar issues. Finding a convincing and self-consistent description of black holes in contact with the quantum world will probably be an important step towards a quantum description of gravity itself. This might be one of the next big steps in theoretical physics!\n\nFinally, let us try to get some intuition on how important quantum effects of a black hole are.
If we first consider an ordinary stellar black hole of, say, four times the mass of our sun, then its Hawking radiation can be associated with a temperature only roughly a hundred millionth Kelvin[^1] above the absolute zero temperature. It, hence, plays almost no role in describing the everyday physics of that black hole. This is true for any stellar (or even heavier) black hole. Next let us consider a coin of, say, five gram. Quantum effects of that coin do not play any significant role in its everyday physics. It can be described almost perfectly by classical theories. However, if we consider a black hole of the same mass, things look rather different. As a (partially) quantum object it radiates and evaporates within a tiny fraction of a second. All its mass converts into energy which results in an explosion three times stronger than the bomb dropped on Hiroshima. We see that for black holes quantum effects play a significant role much earlier than for matter under ordinary circumstances.\n\n[^1]: Kelvin is the standard measure of temperature in physics. A change of one Kelvin in temperature is the same as the change of one $^\\circ$C.","meta":{"dup_signals":{"dup_doc_count":45,"dup_dump_count":30,"dup_details":{"curated_sources":4,"2023-23":1,"2023-14":1,"2023-06":1,"2022-49":1,"2022-27":1,"2022-21":2,"2022-05":2,"2021-39":1,"2021-31":1,"2021-17":1,"2021-10":1,"2020-45":1,"2020-40":1,"2020-29":1,"2020-16":2,"2020-10":1,"2020-05":2,"2019-51":1,"2019-47":1,"2019-43":3,"2019-35":3,"2019-26":1,"2019-18":4,"2019-09":2,"2023-50":1,"2024-22":1,"2024-18":1,"2024-10":1,"2024-30":1}},"filename":"out\/1901.01045_extract_BlackHoles.tex.md"},"subset":"arxiv"} +{"text":"abstract: This paper proposes Markovian Generative Adversarial Networks (MGANs), a method for training generative neural networks for efficient texture synthesis. While deep neural network approaches have recently demonstrated remarkable results in terms of synthesis quality, they still come at considerable computational costs (minutes of run-time for low-res images). Our paper addresses this efficiency issue. Instead of a numerical deconvolution in previous work, we precompute a feed-forward, strided convolutional network that captures the feature statistics of *Markovian patches* and is able to directly generate outputs of arbitrary dimensions. Such network can directly decode brown noise to realistic texture, or photos to artistic paintings. With adversarial training, we obtain quality comparable to recent neural texture synthesis methods. As no optimization is required any longer at generation time, our run-time performance (0.25M pixel images at 25Hz) surpasses previous neural texture synthesizers by a significant margin (at least 500 times faster). We apply this idea to texture synthesis, style transfer, and video stylization.\nauthor: Chuan Li and Michael Wand\nbibliography: mybib.bib\ntitle: Precomputed Real-Time Texture Synthesis with Markovian Generative Adversarial Networks\n\n# Introduction\n\nImage synthesis is a classical problem in computer graphics and vision\u00a0. The key challenges are to capture the structure of complex classes of images in a concise, learnable model, and to find efficient algorithms for learning such models and synthesizing new image data. 
Most traditional \"*texture synthesis*\" methods address the complexity constraints using Markov random field (MRF) models that characterize images by statistics of local patches of pixels.\n\nRecently, generative models based on deep neural networks have shown exciting new perspectives for image synthesis\u00a0. Deep architectures capture appearance variations in object classes beyond the abilities of pixel-level approaches. However, there are still strong limitations of how much structure can be learned from limited training data. This currently leaves us with two main classes of \"deep\" generative models: 1) *full-image models* that generate whole images\u00a0, and 2) *Markovian models* that also synthesize textures\u00a0.\n\nThe first class, full-image models, are often designed as specially trained auto-encoders\u00a0. Results are impressive but limited to rather small images (typically around 64$\\times$``{=html}64 pixels) with limited fidelity in details. The second class, the deep Markovian models, capture the statistics of local patches only and assemble them to high-resolution images. Consequently, the fidelity of details is good, but additional guidance is required if non-trivial global structure should be reproduced\u00a0. Our paper addresses this second approach of deep Markovian texture synthesis.\n\nPrevious neural methods of this type\u00a0 are built upon a deconvolutional framework\u00a0. This naturally provides blending of patches and permits reusing the intricate, emergent multi-level feature representations of large, discriminatively trained neural networks like the VGG network\u00a0, repurposing them for image synthesis. As a side note, we will later observe that this is actually crucial for high-quality result (Figure\u00a0). Gatys et al.\u00a0 pioneer this approach by modeling patch statistics with a global Gaussian models of the higher-level feature vectors, and Li et al.\u00a0 utilize dictionaries of extended local patches of neural activation, trading-off flexibility for visual realism.\n\nDeep Markovian models are able to produce remarkable visual results, far beyond traditional pixel-level MRF methods. Unfortunately, the run-time costs of the deconvolution approach are still very high, requiring iterative back-propagation in order to estimate a pre-image (pixels) of the feature activations (higher network layer). In the most expensive case of modeling MRFs of higher-level feature patches\u00a0, a high-end GPU needs several minutes to synthesize low-resolution images (such as a 512-by-512 pixels image).\n\nThe objective of our paper is therefore to improve the efficiency of deep Markovian texture synthesis. The key idea is to precompute the inversion of the network by fitting a strided[^1] convolutional network\u00a0 to the inversion process, which operates purely in a feed-forward fashion. Despite being trained on patches of a fixed size, the resulting network can generate continuous images of arbitrary dimension without any additional optimization or blending, yielding a high-quality texture synthesizer of a specific style and high performance[^2].\n\nWe train the convolutional network using adversarial training\u00a0, which permits maintaining image quality similar to the original, expensive optimization approach. As result, we obtain significant speed-up: Our GPU implementation computes $512 \\times 512$ images within 40ms (on an nVidia TitanX). The key limitation, of course, is to precompute the feed-forward convolutional network for each texture style. 
Nonetheless, this is still an attractive trade-off for many potential applications, for example from the area of artistic image or video stylization. We explore some of these applications in our experiments.\n\n# Related Work\n\nDeconvolutional neural networks have been introduced to visualize deep features and object classes. Zeiler et al.\u00a0 back-project neural activations to pixels. Mahendran et al.\u00a0 reconstruct images from the neural encoding in intermediate layers. Recently, efforts have been made to improve the efficiency and accuracy of deep visualization\u00a0. Mordvintsev et al. have raised wide attention by showing how deconvolution of class-specific activations can create hallucinogenic imagery from discriminative networks\u00a0. The astonishing complexity of the obtained visual patterns has immediately spurred hope for new generative models: Gatys et al.\u00a0 drove deconvolution by global covariance statistics of feature vectors on higher network layers, obtaining unprecedented results in artistic style transfer. The statistical model has some limitations: enforcing per-feature-vector statistics permits a mixing of feature patterns that never appear in actual images and limits the plausibility of the learned texture. This can be partially addressed by replacing point-wise feature statistics by statistics of spatial patches of feature activations\u00a0. This permits photo-realistic synthesis in some cases, but also reduces invariance because the simplistic dictionary of patches introduces rigidity. On the theory side, Xie et al.\u00a0 have proved that a generative random field model can be derived from the discriminative networks used, and show applications to unguided texture synthesis.\n\nFull-image methods employ specially trained auto-encoders as generative networks\u00a0. For example, the Generative Adversarial Networks (GANs) use two networks, one as the discriminator and the other as the generator, to iteratively improve the model by playing a minimax game\u00a0. This model is extended to work with a Laplacian pyramid\u00a0, and with a conditional setting\u00a0. Very recently, Radford et al.\u00a0 propose a set of architectural refinements[^3] that stabilized the performance of this model, and show that the generators have vector arithmetic properties. One important strength of adversarial networks is that they offer perceptual metrics\u00a0 that allow auto-encoders to be trained more efficiently. These models can also be augmented with semantic attributes\u00a0, image captions\u00a0, 3D data\u00a0, spatial\/temporal status\u00a0, etc.\n\nVery recently, two concurrent works, by Ulyanov et al.\u00a0 and Johnson et al.\u00a0, proposed fast implementations of Gatys et al.'s approach. Both of their methods employ precomputed decoders trained with a perceptual texture loss and obtain significant run-time benefits (higher decoder complexity reduces their speed-up a bit). The main conceptual difference in our paper is the use of Li et al.'s\u00a0 feature-patch statistics as opposed to learning Gaussian distributions of individual feature vectors, which provides some benefits in terms of reproducing textures more faithfully.\n\n# Model\n\nLet us first conceptually motivate our method. Statistics-based methods\u00a0 match the distributions of source (input photo or noise signal) and target (texture) with a Gaussian model (Figure\u00a0, first). They do not further improve the result once the two distributions match. However, real-world data does not always comply with a Gaussian distribution.
For example, it can follow a complicated non-linear manifold. Adversarial training\u00a0 can recognize such a manifold with its discriminative network (Figure\u00a0, second), and strengthen its generative power with a projection onto the manifold (Figure\u00a0, third). We improve adversarial training with contextually corresponding Markovian patches (Figure\u00a0, fourth). This allows the learning to focus on the mapping between different depictions of the same context, rather than on the mixture of context and depictions.\n\nFigure\u00a0 visualizes our pipeline, which extends the patch-based synthesis algorithm of Li et al.\u00a0. We first replace their patch dictionary (including the iterative nearest-neighbor search) with a continuous discriminative network *D* (green blocks) that learns to distinguish actual feature patches (on VGG_19 layer Relu3_1, purple block) from inappropriately synthesized ones. A second comparison (pipeline below *D*) with a VGG_19 encoding of the same image on the higher, more abstract layer Relu5_1 can optionally be used for guidance. If we run deconvolution on the VGG networks (from the discriminator and optionally from the guidance content), we obtain a deconvolutional image synthesizer, which we call *Markovian Deconvolutional Adversarial Networks* (MDANs).\n\nMDANs are still very slow; therefore, we aim for an additional generative network *G* (blue blocks; a strided convolutional network). It takes a VGG_19 layer Relu4_1 encoding of an image and directly decodes it to pixels of the synthesis image. During all of the training we do not change the *VGG_19* network (gray blocks), and only optimize *D* and *G*. Importantly, both *D* and *G* are trained simultaneously to maximize the quality of *G*; *D* acts here as the adversary to *G*. We denote the overall architecture by *Markovian Generative Adversarial Networks* (MGANs).\n\n## Markovian Deconvolutional Adversarial Networks (MDANs)\n\nOur MDANs synthesize textures with a deconvolutional process that is driven by adversarial training: a discriminative network *D* (green blocks in Figure\u00a0) is trained to distinguish between \"neural patches\" sampled from the synthesized image and patches sampled from the example image. We use regular sampling on the layer *relu3_1* output of *VGG_19* (purple block). It outputs a classification score $s= \\pm1$ for each neural patch, indicating how \"real\" the patch is (with $s = 1$ being real). For each patch sampled from the synthesized image, $1 - s$ is its texture loss to minimize. The deconvolution process back-propagates this loss to pixels. Like Radford et al.\u00a0 we use batch normalization (BN) and leaky ReLU (LReLU) to improve the training of *D*.\n\nFormally, we denote the example texture image by $\\bv{x}_t \\in \\reals^{w_t \\times h_t}$, and the synthesized image by $\\bv{x} \\in \\reals^{w \\times h}$. We initialize $\\bv{x}$ with random noise for un-guided synthesis, or with a content image $\\bv{x}_c \\in \\reals^{w \\times h}$ for guided synthesis. The deconvolution iteratively updates $\\bv{x}$ so that the following energy is minimized: $$\\begin{aligned}\n\\bv{x} &= \\underset{\\bv{x}}{\\operatorname{arg}\\,\\operatorname{min}}\\; E_{t}(\\Phi(\\bv{x}), \\Phi(\\bv{x}_{t})) + \\alpha_{1}E_{c}(\\Phi(\\bv{x}), \\Phi(\\bv{x}_{c})) + \\alpha_{2}\\Upsilon(\\bv{x})\n\\label{eq:MDAN}\n\\end{aligned}$$ Here $E_{t}$ denotes the texture loss, in which $\\Phi(\\bv{x})$ is $\\bv{x}$'s feature map output from layer *relu3_1* of *VGG_19*.
We sample patches from $\\Phi(\\bv{x})$, and compute $E_{t}$ as the Hinge loss with their labels fixed to one: $$E_{t}(\\Phi(\\bv{x}), \\Phi(\\bv{x}_{t})) = \\frac{1}{N}\\sum_{i = 1}^{N}\\max(0, 1 - 1 \\times s_{i})\n\\label{eq:hingeloss}$$ Here $s_i$ denotes the classification score of the $i$-th neural patch, and $N$ is the total number of sampled patches in $\\Phi(\\bv{x})$. The discriminative network is trained on the fly: its parameters are randomly initialized, and then updated after each deconvolution, so it becomes increasingly smart as the synthesis results improve.\n\nThe additional regularizer $\\Upsilon(\\bv{x})$ in Eq.\u00a0 is a smoothness prior for pixels\u00a0. Using $E_{t}$ and $\\Upsilon(\\bv{x})$ alone, we can synthesize random textures (Figure\u00a0). By minimizing an additional content loss $E_{c}$, the network can generate an image that is contextually related to a guidance image $\\bv{x}_{c}$ (Figure\u00a0). This content loss is the Mean Squared Error between the two feature maps $\\Phi(\\bv{x})$ and $\\Phi(\\bv{x}_{c})$. We set the weights to $\\alpha_1 = 1$ and $\\alpha_2 = 0.0001$, and minimize Equation\u00a0 using back-propagation with ADAM\u00a0 (learning rate 0.02, momentum 0.5). Notice that each neural patch receives its own output gradient through the back-propagation of *D*. In order to have a coherent transition between adjacent patches, we blend their output gradients as texture optimization\u00a0 did.\n\n## Markovian Generative Adversarial Networks (MGANs)\n\nMDANs require many iterations and a separate run for each output image. We now train a variational auto-encoder (VAE) that decodes a feature map directly to pixels. The target examples (textured photos) are obtained from the MDANs. Our generator *G* (blue blocks in Figure\u00a0) takes the layer *relu4_1* of *VGG_19* as the input, and decodes a picture through an ordinary convolution followed by a cascade of fractional-strided convolutions (FS Conv). Although trained with fixed-size inputs, the generator naturally extends to arbitrary-size images.\n\nAs Dosovitskiy et al.\u00a0 point out, it is crucially important to find a good metric for training an auto-encoder: using the Euclidean distance between the synthesized image and the target image at the pixel level (Figure\u00a0, pixel VAE) yields an over-smoothed image. Comparing at the neural encoding level improves results (Figure\u00a0, neural VAE), and adversarial training improves the reproduction of the intended style further (Figure\u00a0, MGANs).\n\nOur approach is similar to classical Generative Adversarial Networks (GANs) , with the key difference of not operating on full images, but on neural patches from the *same* image. Doing so utilizes the contextual correspondence between the patches, and makes learning easier and more effective in contrast to learning the distribution of an object class\u00a0 or a mapping between contextually irrelevant data\u00a0. In addition, we replace the Sigmoid function and the binary cross entropy criterion from\u00a0 by a max-margin criterion (Hinge loss). This avoids the vanishing-gradient problem when learning *D*. This problem is more severe in our case than in Radford et al.'s\u00a0 because of the lower diversity of our training data, so the Sigmoid function can easily saturate.\n\nFigure\u00a0 (MGANs) shows the results of a network that is trained to produce paintings in the style of Picasso's \"Self-portrait 1907\".
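Before turning to the training data for that network, here is a minimal PyTorch-style sketch of the patch sampling and hinge loss described above (the tensor names, the stride, and the toy shapes are assumptions made for this illustration, not the released implementation):

```python
import torch

def sample_neural_patches(feature_map, patch_size=8, stride=4):
    # feature_map: (C, H, W) activations, e.g. a relu3_1 encoding of the
    # synthesized image; returns (N, C, patch_size, patch_size) patches
    # sampled on a regular grid.
    c = feature_map.shape[0]
    patches = feature_map.unfold(1, patch_size, stride).unfold(2, patch_size, stride)
    return patches.permute(1, 2, 0, 3, 4).reshape(-1, c, patch_size, patch_size)

def patch_hinge_texture_loss(scores):
    # scores: discriminator outputs s_i for the N sampled patches.
    # E_t = (1/N) * sum_i max(0, 1 - s_i), i.e. every patch is pushed
    # towards the 'real' label +1.
    return torch.clamp(1.0 - scores, min=0.0).mean()

# toy usage: a random stand-in for a 256-channel relu3_1 feature map,
# and random scores standing in for D(patches)
relu3_1 = torch.randn(256, 64, 64)
patches = sample_neural_patches(relu3_1)
scores = torch.randn(patches.shape[0])
loss = patch_hinge_texture_loss(scores)
```

In the full MDANs objective this term would be combined with the content loss and the smoothness prior and minimized with ADAM, as described above.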
For training, we randomly selected 75 face photos from the CelebA data set\u00a0, and in addition 25 non-celebrity photos from the public domain. We resize all photos so that the maximum dimension is 384 pixels. We augmented the training data by generating 9 copies of each photo with different rotations and scales. We regularly sample 128-by-128 subwindows from them for batch processing. In total we have 24,506 training examples, each of which is treated as a training image whose *relu3_1* encoding provides the neural patches used as input to *D*.\n\nFigure\u00a0 (top row, MGANs) shows the decoding result of our generative network for a training photo. The bottom row shows that the network generalizes well to test data. Notice that the MDANs image for the test photo is never used in the training. Nonetheless, direct decoding with *G* produces a very good approximation of it. The main difference between MDANs and MGANs is that MDANs preserve the content of the input photo better, while MGANs produce results that are more stylized. This is because MGANs are trained with many images and hence learn the most frequent features. Another noticeable difference is that MDANs create more natural backgrounds (such as regions with flat color), due to their iterative refinement. Despite such flaws, the MGANs model produces comparable results with a speed that is 25,000 times faster.\n\nFigure\u00a0 shows some intermediate results of MGANs. It is clear that the decoder gets better with more training. After 100 batches, the network is able to learn the overall color and where the regions of strong contrast are. After 300 batches the network starts to produce textures for brush strokes. After 1000 batches it learns how to paint eyes. Further training is able to remove some of the ghosting artifacts in the results. Notice that the model generalizes well to testing data (right).\n\n# Experimental Analysis\n\nWe conduct empirical experiments with our model: we study parameter influence (layers for classification, patch size) and the complexity of the model (number of layers in the network, number of channels in each layer). While there may not be a universal optimal design for all textures, our study sheds some light on how the model behaves in different cases. For fair comparison, we scale the example textures in this study to a fixed size (128-by-128 pixels), and require the synthesis output to be 256-by-256 pixels.\n\n**Visualizing decoder features:** We visualize the learned filters of the decoder *G* in Figure\u00a0. These features are directly decoded from a one-hot input vector. Individual patches are similar to, but do not very faithfully match, the example textures (reconfirming the semi-distributed and non-linear nature of the encoding). Nonetheless, the visual similarity of such artificial responses seems strong enough for synthesizing new images.\n\n**Parameters:** Next, we study the influence of changing the input layers for the discriminative network. To do so we run unguided texture synthesis with the discriminator *D* taking layer *relu2_1*, *relu3_1*, or *relu4_1* of *VGG_19* as its input. We use patch sizes of 16, 8 and 4 respectively for the three options, so they have the same receptive field of 32 image pixels (approximately; ignoring padding). The first three results in Fig.\u00a0 show these three settings. Lower layers (*relu2_1*) produce sharper appearances but at the cost of losing form and structure of the texture.
The higher layer (*relu4_1*) preserves coarse structure better (such as regularity) but at the risk of being too rigid for guided scenarios. Layer *relu3_1* offers a good balance between quality and flexibility. We then show the influence of the patch size: we fix the input layer of *D* to be *relu3_1*, and compare patch sizes of 4 and 16 with the default setting of 8. The last two results in Fig.\u00a0 show that such a change also affects the rigidity of the model: smaller patches increase the flexibility and larger patches preserve structure better.\n\n**Complexity:** We now study the influence of 1) the number of layers in the networks and 2) the number of channels in each layer. We first vary *D* by removing its convolutional layer. Doing so reduces the depth of the network and in consequence the synthesis quality (first column, Fig.\u00a0). Bringing this convolutional layer back produces smoother synthesis (second column, Fig.\u00a0). However, in these examples the quality does not obviously improve with additional layers (third column, Fig.\u00a0).\n\nTesting *D* with 4, 64, and 128 channels for the convolutional layer, we observe in general that decreasing the number of channels leads to worse results (fourth column, Fig.\u00a0), but there is no significant difference between 64 channels and 128 channels (second column vs. fifth column). The complexity requirements also depend on the actual texture. For example, the ivy texture is a rather simple MRF, so the difference between 4 channels and 64 channels is marginal, unlike in the other two cases.\n\nNext, we fix the discriminative network and vary the complexity of the generative network. We notice some quality loss when removing the first convolutional layer from the decoder, or reducing the number of channels for all layers, and only very limited improvement from a more complex design. However, the difference is not very significant. This is likely because the networks are all driven by the same discriminative network, and the lack of further improvement indicates that some non-trivial information from the deconvolutional process cannot be recovered by a feed-forward process. In particular, the fractionally-strided convolutions do not model the nonlinear behaviour of the max-pooling layer and hence often produce aliasing patterns. These become visible in homogeneous, texture-less areas. To avoid artifacts but encourage texture variability, we can optionally add Perlin noise\u00a0 to the input image.\n\n### Initialization\n\nUsually, networks are initialized with random values. However, we found that *D* has a certain generalization ability. Thus, for transferring the same texture to different images with MDANs, a previously trained network can serve as initialization. Figure\u00a0 shows that initialization with a pre-trained discriminative network (one that has already transferred 50 face images) produces a good result after only 50 iterations. In comparison, random initialization does not produce comparable quality even after the first 500 iterations. It is useful to initialize *G* with an auto-encoder that directly decodes the input feature to the original input photo. Doing so essentially approximates the process of inverting *VGG_19*, and lets the whole adversarial network be trained more stably.\n\n**The role of VGG:** We also validate the importance of the pre-trained *VGG_19* network.
As the last two pictures in Figure\u00a0 show, training a discriminative network from scratch (from pixel to class label\u00a0) yields significantly worse results. This has also been observed by Ulyanov et al.\u00a0. Our explanation is that much of the statistical power of VGG_19 stems from building shared feature cascades for a diverse set of images, thereby approaching human visual perception more closely than a network trained with a limited example set.\n\n# Results\n\nThis section shows examples of our MGANs synthesis results. We train each model with 100 randomly selected images from ImageNet, and a single example texture. We first produce 100 transferred images using the MDANs model, then regularly sample 128-by-128 image croppings as training data for MGANs. In total we have around 16k samples for each model. The training takes about 12 min per epoch. Each epoch passes through all samples in mini-batches, in random order. We train each texture for up to five epochs.\n\nFigure\u00a0 compares our synthesis results with previous methods. First, our method has a very different character in comparison to the methods that use global statistics\u00a0: it transfers texture more coherently; for example, the hair of Lena is consistently mapped to dark textures. In contrast, the Gaussian model \u00a0 fails to keep such consistency, and has difficulty in transferring complicated image content. For example, the eyes in\u00a0's result and the entire face in\u00a0's result are not textured. Since these features do not fit a Gaussian distribution, they are difficult to constrain with a Gram matrix. The other local patch-based approach\u00a0 produces the most coherent synthesis, due to the use of non-parametric sampling. However, their method requires patch matching and is therefore significantly slower (generating this 384-by-384 picture takes 110 seconds). Our method and Ulyanov et al.\u00a0 run at the same level of speed; both bring a significant speed improvement over Gatys et al.\u00a0 (500 times faster) and Li et al.\u00a0 (5000 times faster).\n\nFigure\u00a0 further discusses the difference between the Gaussian based method\u00a0 and our method[^4]. In general,\u00a0 produces more faithful color distributions with respect to the style image. It also textures the background better (see the starry night example) due to learning a mapping from noise to a Gaussian distribution. On the other hand, our method produces more coherent texture transfer and does not suffer from the limitations of the Gaussian model in more complex scenarios, such as the facade in both examples. In comparison,\u00a0 produces either too much or too little texture in such complex regions.\n\nFigure\u00a0 shows that unguided texture synthesis is possible by using the trained model to decode noise input. In this case, Perlin noise[^5] images are forwarded through *VGG_19* to generate feature maps for the decoder. To our surprise, the model that was trained with random ImageNet images is able to decode such feature maps to plausible textures. This again shows the generalization ability of our model. Last, Figure\u00a0 shows our video decoding result. As a feed-forward process, our method is not only faster but also more temporally coherent than the deconvolutional methods.\n\nLast but not least, we provide details of the time\/memory usage of our method.
The time measurement is based on a standard benchmark framework\u00a0 (Figure\u00a0): our speed is at the same level as that of the concurrent work by Ulyanov et al.\u00a0, who also use a feed-forward approach; both perform significantly faster than previous deconvolution-based approaches\u00a0. More precisely, both our method and Ulyanov et al.\u00a0 are able to decode 512-by-512 images at 25Hz (Figure\u00a0, left), while\u00a0 leads the race by a very small margin. The time cost of both methods scales linearly with the number of pixels in the image. For example, our method costs 10 ms for a 256-by-256 image, 40 ms for a 512-by-512 image, and 160 ms for a 1024-by-1024 image. Both methods show a very significant improvement in speed over previous deconvolutional methods such as Gatys et al.\u00a0 and Li et al.\u00a0 (Figure\u00a0 right): about 500 times faster than Gatys et al.\u00a0, and 5000 times faster than Li et al.\u00a0. At the same time, our method is also faster than most traditional pixel based texture synthesizers (which rely on expensive nearest-neighbor searching). A possible exception would be a GPU implementation of \"Patch Match\"\u00a0, which could run at comparable speed. Our method, however, provides the quality benefits (better blending, invariance) of a deep-neural-network method (as established in previous work\u00a0).\n\nMemory-wise, our generative model takes 70 Mb of memory for its parameters (including the *VGG* network up to layer Relu4_1). At runtime, the memory required to decode an image depends linearly on the image's size: for a 256-by-256 picture it takes about 600 Mb, and for a 512-by-512 picture it requires about 2.5 Gb of memory. Notice that memory usage can be reduced by subdividing the input photo into blocks and running the decoding in a scanline fashion. However, we do not further explore the optimization of memory usage in this paper.\n\n# Limitation\n\nOur current method works less well with non-texture data. For example, it failed to transfer facial features between two different face photos. This is because facial features cannot be treated as textures, and need semantic understanding (such as expression, pose, gender, etc.). A possible solution is to couple our model with the learning of object classes\u00a0 so that the local statistics are better conditioned. For synthesizing photo-realistic textures, Li et al.\u00a0 often produce better results due to their non-parametric sampling, which prevents data distortion. However, the rigidity of their model restricts its application domain. Our method works better with deformable textures, and runs significantly faster.\n\nOur model has a very different character compared to Gaussian based models\u00a0. By capturing a global feature distribution, these other methods are able to better preserve the global \"look and feel\" of the example texture. In contrast, our model may deviate from the example texture in, for example, the global color distribution. However, such deviation may not always be bad when the content image is expected to play a more important role.\n\nSince our model learns the mapping between different depictions of the same content, it requires highly invariant features. For this reason we use the pre-trained *VGG_19* network. This makes our method weaker in dealing with highly stationary backgrounds (sky, out-of-focus regions, etc.) due to their weak activation in *VGG_19*.
We observed that in general statistics based methods\u00a0 generate better textures for areas that have weak content, while our method works better for areas that consist of recognizable features. We believe it is valuable future work to combine the strengths of both methods.\n\nFinally, we discuss the noticeable difference between the results of MDANs and MGANs. The output of MGANs is often more consistent with the example texture; this shows MGANs' strength in learning from big data. MGANs are weaker in flat regions due to the lack of iterative optimization. More sophisticated architectures such as recurrent neural networks can bring in state information that may improve the results.\n\n# Conclusion\n\nThe key insight of this paper is that adversarial generative networks can be applied in a Markovian setting to learn the mapping between different depictions of the same content. We develop a fully generative model that is trained from a single texture example and randomly selected images from ImageNet. Once trained, our model can decode brown noise into realistic texture, or photos into artworks. We show that our model has certain advantages over the statistics based methods\u00a0 in preserving coherent texture for complex image content. Once trained (which takes about an hour per example), synthesis is extremely fast and offers very attractive invariance for style transfer.\n\nOur method is only one step in the direction of learning generative models for images. An important avenue for future work would be to study the broader framework in a big-data scenario to learn not only Markovian models but also coarse-scale structure models. This additional invariance to image layout could, as a side effect, open up ways to also use more training data for the Markovian model, thus permitting more complex decoders with stronger generalization capability over larger classes. The ultimate goal would be a directly decoding, generative image model of large classes of real-world images.\n\n# Acknowledgments\n\nThis work has been partially supported by the Intel Visual Computing Institute and the Center for Computational Science Mainz. We would like to thank Bertil Schmidt and Christian Hundt for providing additional computational resources.\n\n[^1]: A strided convolutional network hence replaces pooling layers by subsampled convolution filters that learn pooling during training (for example, two-fold mean pooling is equivalent to blurred convolution kernels sampled at half resolution).\n\n[^2]: See supplementary material and code at: https:\/\/github.com\/chuanli11\/MGANs\n\n[^3]: strided convolution, ReLUs, batch normalization, removing fully connected layers\n\n[^4]: Since Ulyanov et al.\u00a0 and Johnson et al.\u00a0 are very similar approaches, in this paper we only compare to one of them\u00a0. The main differences of\u00a0 are: 1) using a residual architecture instead of concatenating the outputs from different layers; 2) no additional noise in the decoding process.\n\n[^5]: We need to use \"brown\" noise with a spectrum that decays towards the higher frequencies because flat \"white\" noise creates an almost flat response in the encoding of the VGG network.
Some lower-frequency structure is required to trigger the feature detectors in the discriminative network.","meta":{"dup_signals":{"dup_doc_count":13,"dup_dump_count":3,"dup_details":{"curated_sources":1,"2024-18":1,"unknown":11}},"filename":"out\/1604.04382_extract_main.tex.md"},"subset":"arxiv"} +{"text":"abstract: After a brief review of the historical development and *CLASSICAL* properties of the BLACK HOLES, we discuss how our present knowledge of some of their *QUANTUM* properties sheds light on the very concept of ELEMENTARY PARTICLE. As an illustration, we discuss in this context the decay of accelerated protons, which may also be relevant to astrophysics.\nauthor: George E. A. Matsas\ndate: 2024-09-30\ntitle: Elementary Particles under the Lens of the Black Holes\n\n# Black Holes: Historical developments\n\nThe first black hole solution was found in 1916 by the German astrophysicist Karl Schwarzschild, a few months after General Relativity was formulated (and shortly before his death on the Russian front). It was a static and spherically symmetric solution of the vacuum Einstein Eqs. described by the line element $$ds^2 = (1-2 M\/r) dt^2 - (1-2 M\/r)^{-1} dr^2 - r^2 d\\Omega\\;,$$ where $M$ is the black hole mass. Nevertheless, it took many decades before the scientific community accepted that black holes were physical solutions which could indeed be realized in nature. In 1939 we can still find A. Einstein stating in the conclusions of an article\u00a0: \"the Schwarzschild singularities do not exist in the physical reality\". This was not what J. Oppenheimer and his student, H. Snyder, concluded in the same year\u00a0, however, after analyzing the collapse of massive stars.\n\nIn 1938, J. Oppenheimer and G. Volkoff found that neutron stars had a limit for their mass beyond which they should collapse\u00a0. In the year after, Oppenheimer and Snyder decided to analyze this collapse in more detail. For technical reasons, they assumed some simplifications: spherical symmetry, constant density, no rotation and no shock waves with emission of matter or radiation. Under these conditions they concluded that the collapse would indeed eventually lead to a black hole, but there remained some unclear features to be understood. In contrast to the description made by observers at rest on the surface of the star, who would witness a continuous collapse towards the singularity, asymptotic observers would see the star surface as if \"frozen\" at the event horizon. These seemingly contradictory descriptions were only reconciled after D. Finkelstein found in 1958 a coordinate system which was able to cover smoothly the internal and external regions of the black hole\u00a0. This conceptual step, together with more precise numerical simulations, which were possible thanks to a better comprehension of nuclear structure, ended up corroborating Oppenheimer and Snyder's conclusion and dispelled most of the skepticism about the possible existence of black holes. J. Wheeler, in particular, evolved from critic to supporter of the black hole idea, and in 1967 he introduced the denomination *black hole* for what was called *collapsed star* in the West and *frozen star* in the East. More than 40 years after the Schwarzschild solution was discovered, black holes were at least treated as a real possibility.\n\nIt is common to consider 1964 as the beginning of the *black hole golden era*. From the theoretical point of view, R.
Penrose introduced topological methods with which he was able to derive some quite general results. For example, he was able to show (under some natural assumptions in the classical realm) that black holes must have a singularity in their interior. Developments in the observational domain also took place. In 1966 I. Novikov and Ya. Zel'dovich raised the possibility that there should exist binary systems formed by an ordinary star and a black hole orbiting around each other. It would thus be natural to expect the combined emission of X-rays and visible light from such systems since, as matter is attracted by the black hole, its gravitational potential would be converted into thermal energy and eventually into X-rays. This turned out to be the most probable explanation for the spectrum associated with Cygnus X-1, as became clear in 1971 with the data collected by the Uhuru satellite. It is worthwhile to notice that while the prediction that star-size black holes could be X-ray sources was confirmed only 5 years after its formulation, the explanation that radio galaxies (observed since the 30's) and quasars (observed since the 60's) were energized by the presence of supermassive black holes had to wait more than 40 years.\n\nEvidence favoring the existence of black holes has been mounting since then, and a direct signal from a black hole event horizon is expected soon. Whether this will come as a shadow disc in images of Sgr A$^*$ (at the center of the Milky Way, where a many-million-solar-mass black hole is believed to exist) \[when small enough wavelength observations become possible\] or in the form of gravitational wave signals to be detected in the 2010's by the LIGO and Virgo Earth-based gravitational wave detectors or in the 2020's by the LISA space gravitational wave detector, we do not know; but what we do know is that *it will be the confirmation of one of the greatest predictions of theoretical physics*.\n\n# Black Holes: CLASSICAL properties \n\nA strongly asymptotically predictable spacetime $$({\cal M}, g)$$ is formally said to contain a black hole $B$ if $$B \equiv {\cal M} - J^- ({\cal J^+})$$ is not empty, i.e., if there is a region from which classical light rays cannot escape to infinity, where $J^-$ denotes the causal past and ${\cal J}^+$ the future null infinity. The event horizon of the black hole is defined as the boundary of $B$: $$H \equiv \dot J^- ({\cal J}^+) \cap {\cal M}\;.$$ The solution discovered by Schwarzschild contains a particular kind of black hole which is static and spherically symmetric, but could other black holes exist with, let us say, more exotic forms and exquisite properties? In 1964, A. Doroshkevich, I. Novikov and Ya. Zel'dovich showed that quasi-spherically symmetric collapsing stars give rise to perfectly spherically symmetric black holes. This was the prelude to a series of far-reaching theorems known as *black hole no-hair theorems*.\n\nIn 1967 W. Israel derived what can be considered the first piece of this series of theorems, namely, *every rotationless black hole should be spherically symmetric*. The natural next step was thus to extend the analysis to rotating black holes. A solution for a rotating black hole was unveiled by R. Kerr in 1963 (but only identified as such in 1965 by R. Boyer and R. Lindquist, B. Carter and R. Penrose). At that time, however, it was not clear whether there existed other vacuum solutions of the Einstein Eqs.
describing black holes with angular momentum. This quest was embraced by B. Carter in 1972 (with a contribution by D. Robinson), who showed that, according to the vacuum Einstein Eqs., the most general black hole solution was the one given by Kerr. The event horizon of a Kerr black hole is more elongated at the equator than at the poles, and the underlying geometry of a rotating black hole is richer than that of a static one, but still its structure remains quite simple since most properties of the original star are lost in the collapse. To put it in R. Price's words: in a stellar collapse with black hole formation, everything that can be radiated (i.e., that does not satisfy some conservation law) will be radiated.\n\nThe most general formulation of the no-hair theorems associated with the electrovacuum solution of the Einstein Eqs. states that black holes are completely characterized by their mass $M$, charge $Q$ and angular momentum $J$, and that their geometry is described by the Kerr-Newman line element. For instance, the black hole horizon area can be written as ($c=G=1$) $$A = 4\pi \left[ 2 M^2 - Q^2 + 2 M \sqrt{M^2 - Q^2 - J^2/M^2\,} \right]\;.$$\n\nThus black holes are probably not only the most exotic structures in the heavens but also among the simplest ones.\n\n# Black Holes: SEMICLASSICAL properties \n\nThe beginning of the black hole semiclassical era took place in 1974. This was the culmination of a number of curious events which actually began in 1971. In that year, S. Hawking showed that the total horizon area for any given set of black holes did not decrease with time. In particular, according to this theorem, black holes were indestructible. In order to derive this theorem, Hawking used some quite reasonable hypotheses (at least in the classical realm). In 1972, in analogy to the second law of thermodynamics, J. Bekenstein associated an entropy to each black hole proportional to the area of its event horizon. Hawking had a strong negative reaction at first, but two years later, as he analyzed the collapse of stars in the context of Quantum Field Theory in Curved Spacetimes (where positive energy conditions normally used in classical theorems are not valid), Hawking showed that black holes should radiate with a thermal spectrum with temperature ($c=G=\hbar=k_B=1$) $$T = {\cal K}/ 2\pi\;,$$ (as measured by asymptotic observers), where $${\cal K} = 4 \pi \sqrt{M^2 - Q^2 - J^2/M^2\,}/A$$ is the surface gravity. Black holes could thus indeed be assigned an entropy proportional to their horizon area $$S = \frac{c^3 A}{4G \hbar}$$ as conjectured by Bekenstein (and precisely calculated by Hawking). This discovery opened a subarea denominated *Black Hole Thermodynamics*, which is presently very active because of some fundamental questions raised in connection with information theory and quantum mechanics, but which will hardly be solved outside the context of a full quantum gravity theory. In Hawking's words: *Holes may be black classically but are gray quantum-mechanically*.\n\nIn order to better understand the Hawking effect, let us make a detour through Quantum Field Theory. It has been clear since the early days of Quantum Mechanics that the no-particle state, i.e. the vacuum, has a very rich structure. Most (if not all) of its exotic properties are connected with the concept of virtual particles. Virtual particles violate the Heisenberg uncertainty principle and, thus, cannot be directly observed.
Notwithstanding, they do have indirect observable consequences. Probably the most paradigmatic example of the physical consequences of virtual particles is given by the Casimir effect.\n\nAccording to the Casimir effect, uncharged parallel metallic plates in the vacuum experience an attractive pressure given by (see Ref. for a comprehensive review and Ref. for a pedagogical introduction) $$|F|/A = \pi^2 \hbar c/ 240 \, d^4 \;,$$ where $d$ is the distance between the plates and we are neglecting any gravitational effects due to the plate masses. We note that this is intrinsically a quantum-relativistic effect which would vanish for $\hbar \to 0$ and leads to nonsensical results in the nonrelativistic limit $c \to \infty$. Roughly speaking, the metal plates play the role of boundaries for the virtual photons, diminishing the total vacuum energy $\langle 0 | \hat H | 0 \rangle$ as the plates get closer to each other, where $\hat H$ is the free Hamiltonian associated with the photon field.\n\nWe already know that virtual photons feel the presence of static metallic plates, but what happens if we consider a (nonuniformly) accelerated metallic plate in the vacuum? The metal plate will transfer energy to the virtual particles, making them real. Indeed, a photon flux will be emitted opposite to the acceleration direction while negative energy fluxes will be emitted in the acceleration direction. This is known as the dynamical Casimir effect (but could be fairly called the Moore effect). This effect is interesting in its own right and also for being a kind of flat-spacetime analog of the Hawking effect. Here the mirror plays the role of the star, the emitted photons correspond to the Hawking radiation, and the inward flux of negative energy is responsible for the black hole evaporation. The main difference is that, contrary to the mirror case, where only photons are radiated, the stellar collapse leads to the emission of all kinds of particles. This is so because, according to the equivalence principle, all particles are coupled to gravity in the same way. What would not be easy to anticipate is that the spectrum of the emitted particles as detected by asymptotic observers can be associated with a black body. In the particular case of a static chargeless black hole, the corresponding temperature is $$T= \hbar c^3 / (8 \pi k_B G M)\;,$$ where $M$ is the black hole mass. Notice the appearance of the four universal constants $c,\hbar,G, k_B$.\n\nThe larger the black hole, the lower the temperature, and only \"small-mass\" particles ($m c^2 \leq k_B T$) will be likely to escape. Large-mass particles will be scattered back to the hole by the scattering potential. Notwithstanding, it is worthwhile to notice that arbitrarily large mass particles could be, in principle, observed as follows. Assuming that the evaporation process is adiabatic, the radiation temperature as measured by static observers at different Schwarzschild radial coordinates $r$ outside the black hole will differ from the one at infinity by a red-shift factor, namely, $$T(r)= T / \sqrt{1-2 GM/(r c^2)}\;.$$ Thus, the closer to the horizon, the higher the temperature and the more likely one is to detect massive particles.
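As a rough numerical illustration of the two temperature formulas above, the short Python sketch below evaluates the asymptotic Hawking temperature and its blue-shifted value near the horizon. The choice of a one-solar-mass black hole and of the sample radius is illustrative only and is not taken from the text.

```python
import math

# Physical constants (SI units)
hbar = 1.054571817e-34   # J s
c    = 2.99792458e8      # m / s
G    = 6.67430e-11       # m^3 / (kg s^2)
k_B  = 1.380649e-23      # J / K
M_sun = 1.989e30         # kg

def hawking_temperature(M):
    """Asymptotic Hawking temperature T = hbar c^3 / (8 pi k_B G M) for mass M in kg."""
    return hbar * c**3 / (8.0 * math.pi * k_B * G * M)

def local_temperature(M, r):
    """Temperature measured by a static observer at Schwarzschild radius r (m),
    T(r) = T / sqrt(1 - 2 G M / (r c^2))."""
    r_s = 2.0 * G * M / c**2          # Schwarzschild radius
    if r <= r_s:
        raise ValueError("the observer must stay outside the horizon")
    return hawking_temperature(M) / math.sqrt(1.0 - r_s / r)

M = M_sun                              # illustrative one-solar-mass black hole
r_s = 2.0 * G * M / c**2
print(f"T at infinity      : {hawking_temperature(M):.2e} K")
print(f"T at r = 1.001 r_s : {local_temperature(M, 1.001 * r_s):.2e} K")
```

The printed asymptotic value is of order 10^-7 K, consistent with the order-of-magnitude estimate for solar-mass holes quoted later in the concluding remarks.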
However, there is no free lunch in nature: in order to probe particles with Planck mass one has to get as close to the horizon as the Planck length.\n\n# ELEMENTARY PARTICLES UNDER THE LENS OF THE BLACK HOLES \n\nThe Hawking effect connects in a nontrivial way Relativity, Quantum Mechanics, Gravity and Thermodynamics, and has raised a number of different questions, some of which are still open. Notwithstanding, it became clear after W. Unruh's work in 1976 that although static observers outside black holes detect a thermal bath of particles, free-falling observers close enough to the horizon would have their detectors basically unexcited. (Here one may think of a usual 2-level Unruh-DeWitt detector.) The explanation for this phenomenon is closely connected with previous works by S. Fulling and P. Davies which called attention to the fact that the particle content of a Quantum Field Theory is observer dependent. This conclusion has far-reaching implications even for Quantum Field Theory in flat spacetime. Indeed, the vacuum state as defined by inertial observers in Minkowski space corresponds to a thermal state of all particles at temperature $$T = \hbar a/(2 \pi c k_B)$$ as detected by observers with constant proper acceleration $a$. It can be said that uniformly accelerated observers see as real those particles which inertial observers regard as virtual.\n\nIt is also possible to envisage the opposite situation, where particles which are unobservable to uniformly accelerated observers are observable to inertial ones. In 1991 A. Higuchi, D. Sudarsky and the author were analyzing the following problem associated with the radiation emitted from uniformly accelerated charges. It is well known that accelerated charges radiate with respect to inertial observers, and the emitted power is given by the Larmor formula (see also Ref. for a deep discussion of the radiation reaction problem) $$W = e^2 a^2 /(6 \pi c^3) \;.$$\n\nIn spite of this, there was a consensus that observers co-accelerated with uniformly accelerated charges, i.e., charges with constant proper acceleration $a$, would not detect any radiation since the corresponding field is static with respect to them. According to Quantum Field Theory, however, the usual classical electromagnetic radiation can be interpreted in terms of photons. So, if the co-accelerated observers did not observe any radiation, \"where had the photons observed by the inertial observers gone\"? The answer to this question is directly related to the fact that the elementary particle concept is observer dependent. Indeed, the emission of a finite-energy photon as seen in the inertial frame corresponds to the emission to, or absorption from, the Fulling-Davies-Unruh (FDU) thermal bath (in which the electron is immersed according to co-accelerated observers) of a *zero-energy* Rindler photon. The emission rate of finite-energy photons as defined by the inertial observers and the combined emission and absorption rate of zero-energy Rindler photons as defined by the co-accelerated observers can both be written as ($c, \hbar = 1$) $$P_{k_\bot} (a) = \frac{e^2}{4 \pi^3 a} | K_1 (k_\bot/a)|^2$$ where $k_\bot$ is the photon transverse momentum (with respect to the acceleration direction).
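A minimal numerical sketch of the two expressions just given: the FDU temperature for a given proper acceleration (in SI units) and the rate per transverse momentum involving the modified Bessel function K_1 (in natural units). The coupling convention e^2 = 4*pi*alpha and the sample values of k_perp and a are illustrative assumptions, not taken from the text.

```python
import math
from scipy.special import kn   # modified Bessel function of the second kind, integer order

# SI constants for the FDU temperature
hbar, c, k_B = 1.054571817e-34, 2.99792458e8, 1.380649e-23

def fdu_temperature(a):
    """FDU temperature T = hbar a / (2 pi c k_B), with a in m/s^2 and T in K."""
    return hbar * a / (2.0 * math.pi * c * k_B)

# Proper acceleration needed for a 1 K thermal bath (about 2.5e22 cm/s^2)
a_for_1K = 2.0 * math.pi * c * k_B / hbar
print(f"a for T = 1 K : {a_for_1K:.2e} m/s^2")

# Emission/absorption rate per transverse momentum, in natural units (hbar = c = 1).
# Assumed convention: e^2 = 4 pi alpha for the electromagnetic coupling.
alpha = 1.0 / 137.035999
e2 = 4.0 * math.pi * alpha

def rindler_photon_rate(k_perp, a):
    """P_{k_perp}(a) = e^2 / (4 pi^3 a) * |K_1(k_perp / a)|^2."""
    return e2 / (4.0 * math.pi**3 * a) * kn(1, k_perp / a) ** 2

print(rindler_photon_rate(k_perp=0.5, a=1.0))   # illustrative values of k_perp and a
```

The first printed number reproduces the acceleration-to-temperature conversion quoted below.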
Zero-energy Rindler photons are perfectly well defined entities since they can carry non-zero transverse momentum, but they cannot be detected by physical observers because they concentrate on the horizon of the uniformly accelerated observers. From an epistemological point of view, zero-energy Rindler photons have much in common with virtual particles since, although they cannot be observed, they are indirectly important as a means of explaining some physical phenomena; in this case, the \"disappearance\" of the photons in the electron's co-accelerated frame. Zero-energy particles are also important in analyzing other problems such as, for instance, the response of static sources interacting with the Hawking radiation of a black hole.\n\nProbably because of its non-intuitiveness, the FDU effect was received with skepticism by part of the scientific community. Although the derivation of the effect is sound and the conclusion indisputable, part of the community took the position that only a \"*direct*\" observation of the effect would be convincing. Notwithstanding, this is not an easy task since no macroscopic body would withstand the typical accelerations $a$ necessary for this purpose: $$T/ (1\,{\rm K}) = a/ (2.5 \times 10^{22}\,{\rm cm/s^2})\;.$$ The strategy had to be otherwise, namely, a gedanken experiment able to make it clear that the FDU effect is necessary for the consistency of Quantum Field Theory itself. This was the strategy followed by D. Vanzella and the author, inspired by previous works.\n\nAccording to the standard model, inertial protons are stable. But this is not so for accelerated ones, because of the work transferred to the proton by the external accelerating agent. As long as the proton proper acceleration satisfies $a \ll m_n + m_e + m_\nu - m_p$ the decay process will be strongly suppressed, but for $a > m_n + m_e + m_\nu - m_p$ the weak decay channel $$p^+ \to n^0 + e^+ + \nu\n\label{pweakdecay}$$ will be favored up to $a \approx m_\pi$, after which the strong-decay channel $$p^+ \to n^0 + \pi^+\n\label{pstrongdecay}$$ will dominate. Recent calculations show that high-energy protons with $E\approx 10^{14}$ eV under the influence of magnetic fields of $B \approx 10^{14}$ G found in some pulsars should decay in a fraction of a second in laboratory time.\n\nThe analysis above, however, is valid for inertial observers. But how can we understand the decay process from the point of view of observers co-moving with a uniformly accelerated proton? According to these observers, in order to decay the proton must remove energy from the particles of the thermal bath in which it is immersed in its rest frame. Thus, according to the co-moving observers, the decay processes will be seen quite differently. Indeed, in the regime where the proton/neutron can be considered as unexcited/excited states of a two-level quantum mechanical system, processes (\ref{pweakdecay}) and (\ref{pstrongdecay}) will be interpreted according to co-accelerated observers as $$\label{pweakdecayac}\np^+ + e^- \to n^0 + \nu\;, \qquad p^+ + \bar \nu \to n^0 + e^+\;, \qquad p^+ + \bar \nu + e^- \to n^0\;,$$ and $$p^+ + \pi^- \to n^0\n\label{pstrongdecayac}$$ respectively. In particular, the correct mean lifetime is predicted in the co-accelerated frame by assuming the processes above in conjunction with the presence of the FDU thermal bath.
Had we not taken into account the FDU thermal bath, the proton would be seemingly stable according to the co-accelerated observers (for the sake of energy conservation), in contradiction with the inertial frame conclusion: *The FDU effect is necessary for the consistency of Quantum Field Theory.*\n\n# Concluding remarks \n\nThe overwhelming difficulty of constructing a quantum gravity theory can be illustrated by the fact that different people will give different answers as to what such a theory should look like. Moreover, there is no reason to believe that the rules of quantum mechanics, which are tested up to scales of $10^{-15}$ cm, would not be drastically modified in the quantum gravity domain. Assuming that $c$, $\hbar$, and $G$ are the only fundamental constants of the quantum gravity theory, we expect that its typical effects will become obvious at the Planck scale, i.e. as soon as we accelerate elementary particles to energies $E > M_p c^2 = \sqrt{\hbar c^5/G}$, are able to probe distances $L < L_p = \sqrt{G \hbar/c^3}$, look at processes with time scales $T < T_p = \sqrt{G \hbar/c^5}$, or observe structures with densities $\rho > \rho_p = c^5/(\hbar G^2)$. In principle, these extreme situations would be likely to be realized only in singular regions, e.g., close to the Big Bang and at black hole singularities. Unfortunately, the Big Bang is mostly screened by a number of effects associated with the primordial plasma (although it may be that gravitational wave detectors open a window to it) and black hole singularities are not naked. Thus, it might seem that we would be hopelessly lost from both sides: theoretically and observationally. But this is not so according to the Semiclassical Gravity theory!\n\nIf one is not allowed to visit the Chinese Imperial city, one had better wait for news just outside its limits. In our case, the Imperial city is the Quantum Gravity realm; it is forbidden to us because we do not fit into the Planck scale; and it is worthwhile to wait for news coming from it because quantum mechanical information should leak through the lock. The Hawking effect is probably the best example of how quantum gravity effects can escape towards the macroscopic domain. It might be difficult to observe the radiation emitted from large black holes, since the associated temperature is very small: $$T/(1\,{\rm K}) = 10^{-7} M_\odot / M\;,$$ but this is not so for the radiation emitted from smaller (primordial?) black holes. Even for large black holes, the situation is not that bad provided we can directly probe the region close to the horizon, where the radiation temperature is very blue-shifted.\n\nWe do not know how far we will be able to go with this semiclassical approach, just as people did not know how far they would get by using semiclassical electromagnetic theory rather than QED in atomic physics; but what we do know is that every step forward in this bottom-up strategy will be (in principle) a long-lasting one because, after all, we are dealing with the safe side of our standard theories. Moreover, because Semiclassical Gravity lies at the interface of General Relativity, Quantum Field Theory, and Thermodynamics, unexpected effects which do not have to do directly with Quantum Gravity are being unveiled. Here we have focused on the contribution of the Semiclassical Gravity Theory to the concept of elementary particle, but other contributions could also be cited.
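Returning to the Planck-scale quantities quoted at the beginning of these concluding remarks, the following quick numerical check (a simple sketch using standard values for the constants; the printed figures are approximate) evaluates the Planck energy, length, time and density:

```python
import math

hbar = 1.054571817e-34   # J s
c    = 2.99792458e8      # m / s
G    = 6.67430e-11       # m^3 / (kg s^2)

E_p   = math.sqrt(hbar * c**5 / G)      # Planck energy  (J)
L_p   = math.sqrt(G * hbar / c**3)      # Planck length  (m)
T_p   = math.sqrt(G * hbar / c**5)      # Planck time    (s)
rho_p = c**5 / (hbar * G**2)            # Planck density (kg / m^3)

print(f"E_p   ~ {E_p:.2e} J  (~ {E_p / 1.602176634e-19 / 1e9:.2e} GeV)")
print(f"L_p   ~ {L_p:.2e} m")
print(f"T_p   ~ {T_p:.2e} s")
print(f"rho_p ~ {rho_p:.2e} kg/m^3")
```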
Recently, Unruh has raised the very interesting possibility of mimicking the Hawking effect through Condensed Matter laboratory experiments. For this purpose it is enough to arrange a compact region in a background medium (think of a spherical region in the middle of a pool) such that inside it the inward velocity of the medium is larger than the sound velocity. In this way, phonons would not be able to escape from this trapped region and we would have a sonic hole. Many (kinematical) classical and semiclassical properties of black holes can be experimentally probed in this way. In particular, Hawking phonon radiation is expected to be observed from sonic holes.\n\nMore embarrassing than not yet having formulated the full quantum gravity theory is being aware of how much we still do not know about those theories which we thought we had mastered long ago. In this vein, quantum gravity can wait; the mysteries hidden in our standard theories cannot. After all, we can always hold on to V. Weisskopf's words: *Is it really the end of theoretical physics to get the world formula? The greatest physicists have always thought that there was one, and that everything else could be derived from it. Einstein believed it, Heisenberg believed it. I am not such a great physicist, I do not believe it... This, I think, is because nature is inexhaustible.*\n\nbibliography: PLoSOne_jbollen09.bib\n\n| | |\n|:---|:---|\n| **Please cite as:** | Bollen J, Van de Sompel H, Hagberg A, Chute R, 2009 A Principal Component Analysis of 39 Scientific Impact Measures. PLoS ONE 4(6): e6022. doi:10.1371/journal.pone.0006022 |\n| URL | `http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0006022` |\n\n# Abstract\n\n**Background**: The impact of scientific publications has traditionally been expressed in terms of citation counts. However, scientific activity has moved online over the past decade. To better capture scientific impact in the digital era, a variety of new impact measures has been proposed on the basis of social network analysis and usage log data. Here we investigate how these new measures relate to each other, and how accurately and completely they express scientific impact.\n\n**Methodology**: We performed a principal component analysis of the rankings produced by 39 existing and proposed measures of scholarly impact that were calculated on the basis of both citation and usage log data.\n\n**Conclusions**: Our results indicate that the notion of scientific impact is a multi-dimensional construct that cannot be adequately measured by any single indicator, although some measures are more suitable than others. The commonly used citation Impact Factor is not positioned at the core of this construct, but at its periphery, and should thus be used with caution.\n\n# Introduction\n\nScience is a *gift economy*; value is defined as the degree to which one's ideas have freely contributed to knowledge and impacted the thinking of others.
Since authors use citations to indicate which publications influenced their work, scientific impact can be measured as a function of the citations that a publication receives. Looking for quantitative measures of scientific impact, administrators and policy makers have thus often turned to citation data. \nA variety of impact measures can be derived from raw citation data. It is however highly common to assess scientific impact in terms of average journal citation rates. In particular, the Thomson Scientific Journal Impact Factor (JIF) which is published yearly as part of the Journal Citation Reports (JCR) is based on this very principle; it is calculated by dividing the total number of citations that a journal receives over a period of 2 years by the number of articles it published in that same period. \nThe JIF has achieved a dominant position among measures of scientific impact for two reasons. First, it is published as part of a well-known, commonly available citation database (Thomson Scientific's JCR). Second, it has a simple and intuitive definition. The JIF is now commonly used to measure the impact of journals and by extension the impact of the articles they have published, and by even further extension the authors of these articles, their departments, their universities and even entire countries. However, the JIF has a number of undesirable properties which have been extensively discussed in the literature . This had led to a situation in which most experts agree that the JIF is a far from perfect measure of scientific impact but it is still generally used because of the lack of accepted alternatives. \nThe shortcomings of the JIF as a simple citation statistic have led to the introduction of other measures of scientific impact. Modifications of the JIF have been proposed to cover longer periods of time and shorter periods of times (JCR's Citation Immediacy Index). Different distribution statistics have been proposed, e.g.\u00a0Rousseau (2005) and the JCR Citation Half-life \n(`http:\/\/scientific.thomson.com\/free\/essays\/citationanalysis\/citationrates\/`). \nThe H-index was originally proposed to rank authors according to their rank-ordered citation distributions, but was extended to journals by Braun (2005) . Randar (2007) and Egghe (2006) propose the g-index as a modification of the H-index. \nIn addition, the success of Google's method of ranking web pages has inspired numerous measures of journal impact that apply social network analysis to citation networks. Pinski (1975) first proposed to rank journals according to their eigenvector centrality in a citation network. Bollen (2006) and Dellavalle (2007) proposed to rank journals according to their citation PageRank (an approximation of Pinski's eigenvector centrality), followed by the launch of `eigenfactor.org` that started publishing journal PageRank rankings in 2006. The Scimago group (`http:\/\/www.scimagojr.com\/`) now publishes the Scimago Journal Rank (SJR) that ranks journals based on a principle similar to that used to calculate citation PageRank. PageRank has also been proposed to rank individual articles . Using another social network measure, Leydesdorff (2007) proposes betweenness centrality as an indicator of a journal's interdisciplinary power. \nSince scientific literature is now mostly published and accessed online, a number of initiatives have attempted to measure scientific impact from *usage log data*. 
The web portals of scientific publishers, aggregator services and institutional library services now consistently record usage at a scale that exceeds the total number of citations in existence. In fact, Elsevier announced 1 billion fulltext downloads in 2006, compared to approximately 600 million citations in the entire Web of Science database. The resulting usage data allows scientific activity to be observed immediately upon publication, rather than to wait for citations to emerge in the published literature and to be included in citation databases such as the JCR; a process that with average publication delays can easily take several years. Shepherd (2007) and Bollen (2008) propose a Usage Impact Factor which consists of average usage rates for the articles published in a journal, similar to the citation-based JIF. Several authors have proposed similar measures based on usage statistics . Parallel to the development of social network measures applied to citation networks, Bollen (2005, 2008) demonstrate the feasibility of a variety of social network measures calculated on the basis of usage networks extracted from the clickstream information contained in usage log data. \nThese developments have led to a plethora of new measures of scientific impact that can be derived from citation or usage log data, and\/or rely on distribution statistics or more sophisticated social network analysis. However, which of these measures is most suitable for the measurement of scientific impact? This question is difficult to answer for two reasons. First, impact measures can be calculated for various citation and usage data sets, and it is thus difficult to distinguish the true characteristics of a measure from the peculiarities of the data set from which it was calculated. Second, we do not have a universally accepted, golden standard of impact to calibrate any new measures to. In fact, we do not even have a workable definition of the notion of \"scientific impact\" itself, unless we revert to the tautology of defining it as the number of citations received by a publication. As most abstract concepts \"scientific impact\" may be understood and measured in many different ways. The issue thus becomes which impact measures best express its various aspects and interpretations. \nHere we report on a Principal Component Analysis (PCA) of the rankings produced by a total of 39 different, yet plausible measures of scholarly impact. 19 measures were calculated from the 2007 JCR citation data and 16 from the MESUR project's log usage data collection (`http:\/\/www.mesur.org\/`). We included 4 measures of impact published by the Scimago (`http:\/\/www.scimagojr.com\/`) group that were calculated from Scopus citation data. The resulting PCA shows the major dimensions along which the abstract notion of scientific impact can be understood and how clusters of measures correspond to similar aspects of scientific impact.\n\n# Methods\n\nThe mentioned 39 scientific impact measures were derived from various sources. Our analysis included several existing measures that are published on a yearly basis by Thomson-Reuters and the Scimago project. Other measures were calculated on the basis of existing citation- and usage data. The following sections discuss the methodology by which each of these impact measures was either extracted or derived from various usage and citation sources.\n\n## Data preparation and collection\n\nAs shown in Fig. 
, the following databases were used in this analysis:\n\nCitation\n\n: The CDROM version of the 2007 Journal Citation Reports (JCR Science and Social Science Editions) published by Thomson-Reuters Scientific (formerly ISI)\n\nUsage\n\n: The MESUR project's reference collection of usage log data: `http:\/\/www.mesur.org\/`: a collection of 346,312,045 user interactions recorded by the web portals operated by Thomson Scientific (Web of Science), Elsevier (Scopus), JSTOR, Ingenta, University of Texas (9 campuses, 6 health institutions), and California State University (23 campuses) between March 1st 2006 and February 1st 2007.\n\nAdditional citation measures\n\n: A set of journal rankings published by the Scimago project that are based on Elsevier Scopus citation data: \n `http:\/\/www.scimagojr.com\/`\n\nIn the following sections we detail the methodology that was used to retrieve and calculate 39 scientific impact measures from these data sets, and the subsequent analysis of the correlations between the rankings they produced. Throughout the article measures are identified by a unique identifier number that is listed in Table . We hope these identifiers will allow readers to more conveniently identify measures in subsequently provided diagrams and tables such as Fig. , and .\n\n```latex\n\\begin{table*}\n\\begin{small}\n\\caption{\\label{measure_loadings} Measure loadings after varimax rotation of first 2 components of PCA analysis of measure correlations (Spearman rank-order). Average Spearman rank-order correlations to all other measures are listed under $\\bar{\\rho}$ (five lowest values indicated by $\\star$).}\n\\begin{center}\n\\begin{tabular}{lllllrrl}\n\\hline\\hline\nID & Type & Measure & Source & Network parameters & PC1 & PC2 & $\\bar{\\rho}$\\\\\\hline\\hline\n1 & Citation & Scimago Journal Rank & Scimago\/Scopus & & -0.974 & -8.296 & 0.556$^\\star$ \\\\\n2 & Citation & Immediacy Index & JCR 2007 & & 1.659 & -7.046 & 0.508$^\\star$ \\\\\n3 & Citation & Closeness Centrality & JCR 2007 & Undirected, weighted & 0.339 & -6.284 & 0.565$^\\star$ \\\\\n4 & Citaton & Cites per doc & Scimago\/Scopus & & -1.311 & -6.192 & 0.588$^\\star$ \\\\\n5 & Citation & Journal Impact Factor & JCR 2007 & & -1.854 & -5.937 & 0.592$^\\star$ \\\\\n6 & Citation & Closeness centrality & JCR 2007 & Undirected, unweighted & -1.388 & -4.827 & 0.619 \\\\\n7 & Citation & Out-degree centrality & JCR 2007 & Directed, weighted & -3.191 & -4.215 & 0.642 \\\\\n8 & Citation & Out-degree centrality & JCR 2007 & Directed, unweighted & -2.703 & -4.015 & 0.640 \\\\\n9 & Citation & Degree Centrality & JCR 2007 & Undirected, weighted & -4.850 & -2.834 & 0.690 \\\\\n10 & Citation & Degree Centrality & JCR 2007 & Undirected, unweighted & -4.398 & -2.643 & 0.691 \\\\\n11 & Citation & H-Index & Scimago\/Scopus & & -3.326 & -2.003 & 0.681 \\\\\n12 & Citation & Scimago Total cites & Scimago\/Scopus & & -4.926 & -1.722 & 0.712 \\\\\n13 & Citation & Journal Cite Probability & JCR 2007 & & -5.389 & -1.647 & 0.710 \\\\\n14 & Citation & In-degree centrality & JCR 2007 & Directed, unweighted & -5.302 & -1.429 & 0.717 \\\\\n15 & Citation & In-degree centrality & JCR 2007 & Directed, weighted & -5.380 & -1.554 & 0.712 \\\\\n16 & Citation & PageRank & JCR 2007 & Directed, unweighted & -4.476 & 0.108 & 0.693 \\\\\n17 & Citation & PageRank & JCR 2007 & Undirected, unweighted & -4.929 & 0.731 & 0.726 \\\\\n18 & Citation & PageRank & JCR 2007 & Undirected, weighted & -4.160 & 0.864 & 0.696 \\\\\n19 & Citation & PageRank & JCR 2007 & 
Directed, weighted & -3.103 & 0.333 & 0.659 \\\\\n20 & Citation & Y-factor & JCR 2007 & Directed, weighted & -2.971 & 0.317 & 0.657 \\\\\n21 & Citation & Betweenness centrality & JCR 2007 & Undirected, weighted & -0.462 & 0.872 & 0.643 \\\\\n22 & Citation & Betweenness centrality & JCR 2007 & Undirected, unweighted & -0.474 & 1.609 & 0.642 \\\\\n23 & {\\it Citation} & {\\it Citation Half-Life} & {\\it JCR 2007} & & \/ & \/ & {\\it 0.037} \\\\\\\n%%\n24 & Usage & Closeness centrality & MESUR 2007 & Undirected, weighted & 3.130 & 2.683 & 0.703 \\\\\n25 & Usage & Closeness centrality & MESUR 2007 & Undirected, unweighted & 3.100 & 3.899 & 0.731 \\\\\n26 & Usage & Degree centrality & MESUR 2007 & Undirected, unweighted & 3.271 & 3.873 & 0.729 \\\\\n27 & Usage & PageRank & MESUR 2007 & Undirected, unweighted & 3.327 & 4.192 & 0.728 \\\\\n28 & Usage & PageRank & MESUR 2007 & Directed, unweighted & 3.463 & 4.336 & 0.727 \\\\\n29 & Usage & In-degree centrality & MESUR 2007 & Directed, unweighted & 3.463 & 4.015 & 0.728 \\\\\n30 & Usage & Out-degree centrality & MESUR 2007 & Directed, unweighted & 3.484 & 3.994 & 0.727 \\\\\n31 & Usage & PageRank & MESUR 2007 & Directed, weighted & 3.780 & 4.217 & 0.710 \\\\\n32 & Usage & PageRank & MESUR 2007 & Undirected, weighted & 3.813 & 4.223 & 0.710 \\\\\n33 & Usage & Betweenness centrality & MESUR 2007 & Undirected, unweighted & 3.988 & 4.271 & 0.699 \\\\\n34 & Usage & Betweenness centrality & MESUR 2007 & Undirected, weighted & 3.957 & 3.698 & 0.693 \\\\\n35 & Usage & Degree centrality & MESUR 2007 & Undirected, weighted & 5.293 & 3.528 & 0.683 \\\\\n36 & Usage & Out-degree centrality & MESUR 2007 & Directed, weighted & 5.302 & 3.518 & 0.683 \\\\\n37 & Usage & In-degree centrality & MESUR 2007 & Directed, weighted & 5.286 & 3.531 & 0.683 \\\\\n38 & Usage & Journal Use Probability & MESUR 2007 & & 8.914 & 1.833 & 0.593 \\\\\n%%\n39 & {\\it Usage} & {\\it Usage Impact Factor} & {\\it MESUR 2007} & & \/ & \/ & {\\it 0.279} \\\\\n\\hline\\hline\n\\end{tabular}\n\\end{center}\n\\end{small}\n\\end{table*}\n```\n\n## Retrieving existing measures\n\nThe 2007 JCR contains a table listing 4 citation-based impact measures for a set of approximately 7,500 selected journals, namely\n\n2007 Immediacy Index\n\n: (Table , ID 2):The same year average citation rate, i.e.\u00a0the average number of times articles that were published in a journal in 2006 were cited in 2006.\n\n2007 Journal Impact Factor\n\n: (Table , ID 5): A 2 year average per-article citation rate of a journal, i.e. the average number of times articles that were published in a journal in 2004 and 2005 were cited in 2006.\n\nCitation Half-life\n\n: (Table , ID 23):The median age of articles cited in a journal in 2006.\n\nIn addition, the Scimago project publishes several impact measures that are based on Elsevier's Scopus citation data. 
We retrieved the following 4 measures from its web site:\n\n2007 Scimago Journal Rank\n\n: (Table , ID 1) The citation PageRank of a journal calculated on the basis of Elsevier Scopus citation data divided by the number of articles published by the journal in the citation period (3 years) (`http://www.scimagojr.com/SCImagoJournalRank.pdf`), i.e. an average per-article journal PageRank.\n\nCites per doc\n\n: (Table , ID 4) The average number of citations received by articles published in a year over a 2-year period in the Scopus database.\n\nH-Index\n\n: (Table , ID 11) Journal citation h-index, i.e. the number $h$ of articles in a journal that received at least $h$ citations in the Scopus database.\n\nScimago Total cites\n\n: (Table , ID 12) The number of citations received by the articles published in a journal during the three previous years according to the Scopus database.\n\nThe Scimago journal rankings were downloaded from their web site in the form of an Excel spreadsheet and loaded into a MySQL database. This added 4 measures of journal impact to our data set, bringing the total number of retrieved existing measures to 7.\n\n## Calculating social network measures of scientific impact\n\nIn previous work we describe methods to rank journals on the basis of various social network measures of centrality, e.g. betweenness centrality calculated from journal citation and usage graphs. These social network measures were shown to elucidate various aspects of a journal's scientific impact on the basis of its connections in citation- or usage-derived networks. In addition, this approach has led to innovative ranking services such as eigenfactor.org. We followed the same approach in this work by extracting citation and usage networks from our data and defining a set of well-studied social network measures on the basis of those networks as shown in Fig. . In the following sections, we therefore first describe the creation of the citation and usage networks, after which we describe the set of social network measures that were calculated on the basis of both.\n\n### Citation network\n\nThe 2007 JCR contains a table that lists the number of citations that point from one journal to another. The number of citations is separated according to the publication year of both the origin and target of the citation. For example, from this table we could infer that 20 citations point from articles published in \"Physical Review A\" in 2006 to articles published in \"Physical Review B\" in 2004 and 2005. Each such data point can thus be described as the n-tuple\n\n$$a \in A=V^2 \times Y_s \times Y_e \times \mathbb{N}^+$$\n\nwhere $V=\{v_1, \cdots, v_n\}$ is the set of $n$ journals for which we have recorded citation data, $Y_s=\{y_0, \cdots, y_m\}$ is the set of $m$ years for which outgoing citations were recorded, $Y_e=\{y_0, \cdots, y_k\}$ is the set of $k$ years for which incoming citations were recorded, and $\mathbb{N}^+$ denotes the set of positive integers including zero that represent the number of counted citations. For example, the journal citation tuple $a=(1, 2, \{2006\}, \{2004,2005\}, 20)$ represents the observation that 20 citations point from articles published in journal 1 in the year 2006 to those published in journal 2 in 2004 and 2005.
\n$A$, the set of citation n-tuples, describes a citation network whose connections indicate the number of times that articles published in one journal cited the articles published in another journal in a particular time period. Such a network can be represented by the citation matrix $C_{Y_s,Y_e}$, of which each entry $c_{i,j}$ represents the number of observed citations that point from articles published in journal $v_i$ in the date range given by $Y_s$ to articles published in journal $v_j$ in the date range $Y_e$. \nWe attempted to ensure that our citation network conformed to the definition of the Journal Impact Factor rankings published in the 2007 JCR. We therefore extracted citations from the JCR that originated in 2006 publications and pointed to 2004 and 2005 publications. The resulting citation network contained 897,608 connections between 7,388 journals, resulting in a network density of 1.6% (ratio of non-zero connections over all possible non-reflexive connections). This citation network was represented as a $7,388 \times 7,388$ matrix labeled $C$ whose entries $c_{i,j}$ were the number of 2006 citations pointing from journal $i$ to the 2004 and 2005 articles of journal $j$.\n\n### Usage network\n\nIn previous work we describe a methodology to derive journal relations from the session clickstreams in large-scale usage data. The same methodology has in this case been used to create a journal usage network, on the basis of which a set of social network measures was calculated. This procedure, related to association rule learning, is described in more detail in Bollen (2006, 2008) with respect to the calculation of usage-based, social-network measures of scientific impact. \nIn short, the MESUR project's reference collection of usage log data consists of log files recorded by a variety of scholarly web portals (including some of the world's most significant publishers and aggregators) which donated their usage log data to the MESUR project in the course of 2006-2007. All MESUR usage log data consisted of a list of temporally sorted \"requests\". For each individual request the following data fields were recorded: (1) date/time of the request, (2) session identifier, (3) article identifier, and (4) request type. The session identifier grouped requests issued by the same (anonymous) user, from the same client, within the same session. This allowed the reconstruction of user \"clickstreams\", i.e. the sequences of requests by individual users within a session. Since each article in this investigation is assumed to be published in a journal, we can derive journal clickstreams from article clickstreams. \nOver all clickstreams we can thus determine the transition probability\n\n$$P(i,j) = \frac{N(v_i,v_j)}{\sum_k N(v_i,v_k)}$$\n\nwhere $N(v_i,v_j)$ denotes the number of times that we observe journal $v_i$ being followed by $v_j$ in the journal clickstreams in MESUR's usage log data. The transition probability $P(i,j)$ thus expresses the probability with which we expect to observe $v_j$ after $v_i$ over all user clickstreams. \nThis analysis was applied to the MESUR reference data set, i.e. 346,312,045 user interactions recorded by the web portals operated by Thomson Scientific (Web of Science), Elsevier (Scopus), JSTOR, Ingenta, University of Texas (9 campuses, 6 health institutions), and California State University (23 campuses) between March 1st 2006 and February 1st 2007.
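Both network constructions can be summarized in a few lines of Python. The sketch below is illustrative only: the journal names, citation records and clickstream sessions are toy data, not part of the JCR or MESUR collections. It builds a citation matrix C from (source, target, count) records and a usage transition matrix U following the transition-probability definition above.

```python
import numpy as np

journals = ["J1", "J2", "J3"]                      # toy journal set
idx = {j: i for i, j in enumerate(journals)}
n = len(journals)

# Citation matrix C: C[i, j] = number of citations from journal i to journal j
citation_records = [("J1", "J2", 20), ("J2", "J3", 5), ("J3", "J1", 7)]
C = np.zeros((n, n))
for src, dst, count in citation_records:
    C[idx[src], idx[dst]] += count

# Usage matrix U: U[i, j] = P(i, j) = N(v_i, v_j) / sum_k N(v_i, v_k),
# where N counts how often journal j directly follows journal i in a session clickstream.
clickstreams = [["J1", "J2", "J1", "J3"], ["J2", "J3", "J3"]]
N = np.zeros((n, n))
for stream in clickstreams:
    for a, b in zip(stream, stream[1:]):
        N[idx[a], idx[b]] += 1
row_totals = N.sum(axis=1, keepdims=True)
U = np.divide(N, row_totals, out=np.zeros_like(N), where=row_totals > 0)

print(C)
print(U.round(2))
```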
To ensure that all subsequent metrics were calculated over the same set of journals, the resulting set of journal transition probabilities were trimmed to $7,575$ journals for which a JIF could be retrieved from the 2007 JCR. All usage transition probabilities combined thus resulted in the $7,575 \\times 7,575$ matrix labeled $U$. Each entry $u_{i,j}$ of matrix $U$ was the transition probability $P(i,j)$ between two journals $i$ and $j$. Matrix $U$ contained 3,617,368 non-zero connections resulting in a network density of 6.3%. This procedure and the resulting usage network is explained in detail in .\n\n### Social network measures\n\nFour classes of social network measures were applied to both the citation and usage network represented respectively by matrix $C$ and matrix $U$, namely:\n\nDegree centrality\n\n: (Table , IDs 7-10, 14, 15, 26, 29, 30, 35-37) Number of connections pointing to or emerging from a journal in the network.\n\nCloseness centrality\n\n: (Table , IDs 3, 6, 24, 25) The average length of the geodesic connecting a specific journal to all other journals in the network.\n\nBetweenness centrality\n\n: (Table , IDs 21, 22, 33, 34) The number of geodesics between all pairs of journals in the network that pass through the specific journal.\n\nPageRank\n\n: (Table , IDs 16-19, 27, 28, 31, 32) As defined by Brin and Page (1998) and applied to citation networks by Bollen (2006) .\n\nThe definitions of each of the measures in these classes were varied according to the following network factors: (1) Weighted vs. unweighted connections, i.e.\u00a0measures can be calculated by assuming that each non-zero connection valued 1 vs.\u00a0taken into account the actual weight of the connection, (2) Directed vs. undirected connections, i.e. some measures can be calculated to take into account the directionality of journal relations or not, and finally (3) Citation vs. usage network data, i.e. any of these measure variations can be calculated for either the citation or the usage network. \nThese factors result in $2^3=8$ variations for each the above listed 4 classes of social network measures, i.e. 32 variants. However, not all permutations make equal sense. For example, in the case of Betweenness Centrality we calculated only two of these variants that both ignored connection directionality (irrelevant for betweenness) but one took into account connection weights (weighted geodesics) and another ignored connections weights (all connections weighted $>0$). Each of these variants were however calculated for the citation and usage-network. 
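As a concrete, hedged illustration of the four classes of measures listed above, the sketch below computes a few of the variants on a small toy network with networkx; the toy weight matrix and the particular weighted/unweighted choices shown are illustrative assumptions, and the remaining permutations follow by switching the graph type or the weight argument.

```python
import numpy as np
import networkx as nx

# Toy weighted, directed journal network (entry [i, j] = strength of the link i -> j)
W = np.array([[0, 20, 5],
              [3,  0, 8],
              [7,  2, 0]], dtype=float)
G_dir = nx.from_numpy_array(W, create_using=nx.DiGraph)   # edges carry a 'weight' attribute
G_und = G_dir.to_undirected()

# Degree centrality: weighted in-/out-strength and undirected unweighted degree
in_strength  = dict(G_dir.in_degree(weight="weight"))
out_strength = dict(G_dir.out_degree(weight="weight"))
degree_unw   = nx.degree_centrality(G_und)

# Closeness and betweenness centrality (unweighted, undirected variants shown)
closeness   = nx.closeness_centrality(G_und)
betweenness = nx.betweenness_centrality(G_und)

# PageRank, directed and weighted (drop weight= or pass G_und for other permutations)
pagerank = nx.pagerank(G_dir, alpha=0.85, weight="weight")

print(in_strength, out_strength)
print(degree_unw, closeness, betweenness, pagerank)
```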
The final list of social network measures thus to some degree reflects our judgment on which of these permutations were meaningful.\n\n## Hybrid Measures\n\nIn addition to the existing measures and the social network measures, we calculated a number of measures that did not fit any of the above outlined classes, namely\n\nY-Factor\n\n: (Table , ID 20) A measure that results from multiplying a journal's Impact Factor by its PageRank, described in Bollen (2006).\n\nJournal Cite Probability\n\n: (Table , ID 13) We calculated the Journal Cite Probability from the citation numbers listed in the 2007 JCR.\n\nJournal Use Probability\n\n: (Table , ID 38) The normalized frequency with which a journal is used according to the MESUR usage log data.\n\nUsage Impact Factor\n\n: (Table , ID 39) Same definition as the JIF, but expressing the 2-year \"usage\" average for articles published in a journal.\n\n## Measures overview\n\nIn total, we calculated 32 citation- and usage-based impact measures: 16 social network measures on the basis of matrix $C$ (citation network) and 16 social network measures on the basis of matrix $U$ (usage network). 4 journal impact measures published by the Scimago group (`http://www.scimagojr.com/`) and 3 pre-calculated impact measures from the 2007 JCR were added, bringing the total to 39 measures. A list of measures is provided in Table along with information on the data they have been derived from and the various network factors that were applied in their calculation. A list of mathematical definitions is provided in Appendix S1. \nThe set of selected measures was intended to capture the major classes of statistics and social network measures presently proposed as alternatives to the JIF. In summary, the set of all measures can be categorized into 4 major classes. First, *citation and usage statistics* such as Citation Probability (number of one journal's citations over total citations), Usage Probability (amount of one journal's usage over total usage), the JIF, the Scimago Cites per Doc, and a Usage Impact Factor (UIF) whose definition follows that of the JIF but is based on usage counts. Second, *citation and usage social network measures* such as Closeness Centrality (the mean length of geodesics between a journal and all other journals), Betweenness Centrality (the number of times that a journal sits on the geodesics between all pairs of journals) and PageRank (cf. Eigenvector Centrality). Third, a set of *citation and usage degree centrality* measures such as Out-Degree Centrality, In-Degree Centrality and Undirected Degree Centrality. Finally, we included a set of recently introduced measures such as the Scimago Journal Rank (SJR), the Y-factor (Bollen, 2007), the Scimago H-index and Scimago Total Cites. \n\n## Analysis\n\nSpearman rank-order correlations were then calculated for each pair of journal rankings. Because $C$, $U$ and the Scimago rankings pertained to slightly different sets of journals, correlation values were only calculated for the intersections of those sets, i.e. N=7,388, N=7,575 or N=6,913 journals. For 39 measures, this resulted in a $39 \times 39$ correlation matrix $R$ of which each entry $r_{i,j} \in [-1,1]$ is the Spearman rank-order correlation between the journal rankings produced by measure $i$ and measure $j$. \nA sample of matrix $R$ for 10 selected measures is shown below. For example, the Spearman rank-order correlation between the Citation H-index and Usage PageRank is 0.66. The IDs listed in Table precede each measure name.
\n$$R_{10\\times10}=\\begin{pmatrix} \n1.00 & 0.71 & 0.77 & 0.52 & 0.79 & 0.55 & 0.69 & 0.63 & 0.60 & 0.18\\\\\n0.71 & 0.99 & 0.52 & 0.69 & 0.79 & 0.85 & 0.49 & 0.44 & 0.49 & 0.22\\\\\n0.77 & 0.52 & 1.00 & 0.62 & 0.63 & 0.39 & 0.70 & 0.73 & 0.68 & 0.20\\\\\n0.52 & 0.69 & 0.62 & 1.00 & 0.68 & 0.78 & 0.49 & 0.56 & 0.65 & 0.06\\\\\n0.79 & 0.79 & 0.63 & 0.68 & 1.00 & 0.82 & 0.66 & 0.62 & 0.66 & 0.15\\\\\n0.55 & 0.85 & 0.39 & 0.78 & 0.82 & 1.00 & 0.40 & 0.40 & 0.50 & 0.13\\\\\n0.69 & 0.49 & 0.70 & 0.49 & 0.66 & 0.40 & 1.00 & 0.89 & 0.85 & 0.53\\\\\n0.63 & 0.44 & 0.73 & 0.56 & 0.62 & 0.40 & 0.89 & 1.00 & 0.97 & 0.45\\\\\n0.60 & 0.49 & 0.68 & 0.65 & 0.66 & 0.50 & 0.85 & 0.97 & 1.00 & 0.42\\\\\n0.18 & 0.22 & 0.20 & 0.06 & 0.15 & 0.13 & 0.53 & 0.45 & 0.42 & 1.00\n\\end{pmatrix}\n\\begin{array}{l}\n\\mbox{19: Citation PageRank}\\\\\n\\mbox{5: Journal Impact Factor}\\\\\n\\mbox{22: Citation Betweenness}\\\\\n\\mbox{6: Citation Closeness}\\\\\n\\mbox{11: Citation H-index}\\\\\n\\mbox{1: Citation Scimago Journal Rank}\\\\\n\\mbox{31: Usage PageRank}\\\\\n\\mbox{34: Usage Betweenness}\\\\\n\\mbox{24: Usage Closeness}\\\\\n\\mbox{39: Usage Impact Factor}\\\\\n\\end{array}$$\n\nNot all pair-wise correlations were statistically significant. Two measures in particular lacked significant correlations ($N=39, p>0.05$) with any of the other measures, namely Citation Half-Life and the UIF. They were for that reason removed from the list of measures under consideration. All other Spearman rank-order correlations were statistically significant ($U$: $N=39, p<0.05$). The reduced $37 \\times 37$ correlation matrix $R$ was subjected to a Principal Component Analysis which by means of an eigenvalue decomposition identified 37 orthogonal components of the original correlation matrix $R$. \nThe resulting PCA components were ranked according to the degree by which they explain the variances in $R$'s values (eigenvalues transformed to component loadings). The component loadings are listed in Table . The first component, PC1, represents 66.1% of the variance in measure correlations, with each successive component representing less variance, i.e.\u00a0PC2 17%, PC3 9% and PC4 4%. Retention of the first 2 components will thus yield a model that covers 83.4% of variance in measure correlations. The addition of the third component will yield a model that covers 92.6% of variation in measure correlations. \n\n```latex\n\\begin{table*}\n\\caption{\\label{PC_loadings} Component loadings of Principal Component Analysis of journal ranking correlations (37 measures).}\n\\begin{center}\n\\begin{tabular}{l||lllll}\n & PC1 & PC2 & PC3 & PC4 & PC5 \\\\\\hline\\hline\nProportion of Variance & 66.1\\% & 17.3\\% & 9.2\\% & 4.8\\% & 0.9\\% \\\\\nCumulative Proportion & 66.1\\% & 83.4\\% & 92.6\\% & 97.4\\% & 98.3\\% \\\\\\hline\\hline\n\\end{tabular}\n\\end{center}\n\\end{table*}\n```\n\nWe projected all measures unto the first two components, PC1 and PC2, to create a 2-dimensional map of measures. A varimax rotation was applied to the measure loadings to arrive at a structure that was more amenable to interpretation. The measure loadings for each component are listed in Table (\"PC1\" and \"PC2\"). The resulting 2-dimensional map of measure similarities is shown in Fig. . Measures are identified in the map by their \"ID\" in Table . Black circles indicate citation-based measures. White circles indicate usage-based measures. The JIF is marked by a blue circle (ID 5). 
The hue of any map location indicates how strongly measures are concentrated in that particular area, i.e. red means highly clustered. \n\nTo cross-validate the PCA results, a hierarchical cluster analysis (single linkage, Euclidean distances over $R$'s row vectors) and a k-means cluster analysis were applied to the measure correlations in $R$ to identify clusters of measures that produce similar journal rankings.\n\n# Results and discussion\n\n## Results\n\nThe map in Fig. reveals a number of clusters. First, we observe a cluster in the top-right quadrant that contains all usage-based measures (IDs 24-37), with the exception of Usage Probability (ID 38). In the upper-left and bottom-left quadrants of the map we find most citation-based measures. The bottom-left quadrant contains the JIF, which is surrounded by, among others, the Scimago Cites per Doc, the Scimago Journal Rank and the JCR Immediacy Index (IDs 1-8), and in the upper section the various permutations of citation degree centrality measures (IDs 9-10, 14-15), a group of Total Cite measures (IDs 12-13) and most prominently the H-index (ID 11). The arrangement of the H-index and Citation Total Cites is quite similar to that found by Leydesdorff (2007). The upper-left quadrant nearly uniquely contains citation PageRank and Betweenness Centrality measures (IDs 16-22). The Y-factor (ID 20) is naturally positioned between the two clusters since it is defined as the product of citation PageRank and the JIF. \nA complete linkage hierarchical cluster analysis based on the Euclidean distances between the measures' row vectors in $R$ confirms these general distinctions. When we cut the dendrogram in Fig. at the $1.1$ distance level, we find 4 main clusters. First, at the top of Fig. we find the first cluster, which contains the JIF, SJR and other related measures that express citations normalized per document. Next, a second cluster contains the Citation Betweenness Centrality and PageRank measures that rely on the graph properties of the citation network. The third cluster contains Total Citation rates, various degree centralities and the H-index, which express various distribution parameters of total citation counts. At the bottom of Fig. , we find the fourth cluster, which contains all usage measures. \nTable lists the results of a 5-cluster k-means analysis of matrix $R$ that further corroborates the clustering observed in the PCA and the hierarchical cluster analysis. \n\n```latex\n\\begin{table*}\n\\caption{\\label{kmeans} Results of a k-means cluster analysis of measures.}\n\\begin{center}\n\\begin{tabular}{c||l|l}\nCluster & Measures & Interpretation \\\\\\hline\\hline\n1 & 38 & Journal Use Probability \\\\\n2 & 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37 & Usage measures \\\\\n3 & 1, 2, 3, 4, 5 & JIF, SJR, Cites per Document measures \\\\\n4 & 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 & Total Citation rates and distributions \\\\\n5 & 16, 17, 18, 19, 20, 21, 22 & Citation Betweenness and PageRank \\\\\\hline\\hline\n\\end{tabular}\n\\end{center}\n\\end{table*}\n```\n\nThe pattern of clusters indicates that some measures express a more distinct aspect of scientific impact and will thus be farther removed from all other measures. Table lists the $\\bar{\\rho}$ values of each measure, defined as the mean Spearman rank-order correlation of a measure to all other 38 measures in $R$.
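The analysis pipeline just described (Spearman rank-order correlations between rankings, a PCA of the resulting correlation matrix, the mean correlation rho-bar of each measure, and the clustering used for cross-validation) can be sketched as follows; the rankings array is a hypothetical journals-by-measures stand-in, and the varimax rotation applied in the paper is omitted for brevity.

```python
import numpy as np
from scipy.stats import spearmanr
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
rankings = rng.random((500, 10))        # toy stand-in: 500 journals x 10 measures

# Spearman rank-order correlation matrix R between measures (columns)
R, _ = spearmanr(rankings)

# Mean correlation of each measure to all the others ("rho-bar")
rho_bar = (R.sum(axis=1) - 1.0) / (R.shape[1] - 1)

# PCA of the correlation matrix: the first components carry most of the variance,
# and the 2-D projection gives map coordinates for every measure
pca = PCA(n_components=2)
coords = pca.fit_transform(R)
print(pca.explained_variance_ratio_)

# Cross-validation of the map: hierarchical and k-means clustering of R's row vectors
tree = linkage(R, method="single", metric="euclidean")
hier_labels = fcluster(tree, t=1.1, criterion="distance")
kmeans_labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(R)
print(rho_bar.round(2), hier_labels, kmeans_labels)
```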
The $\\bar{\\rho}$ of Citation Half-Life (ID 23) and the Usage Impact Factor (ID 39) fell below the significance threshold of $p<0.05$ for $N=39$, further justifying their removal as outliers. Most $\\bar{\\rho}$ values range from 0.6 to 0.7 indicating a moderate but significant congruence in the rankings produced by a majority of measures. However, a cluster of five particular measures has low $\\bar{\\rho}$ values in the range 0.5-0.6. They form a separate, but poorly defined cluster in the lower bottom-left quadrant of Fig. (ID 1-5: SJR, Immediacy Index, Citation Undirected Weighted Closeness Centrality, Scimago Cites per Doc, and the 2007 JIF), indicating they produce rankings removed from the \"mainstream\" in Fig. .\n\n## Discussion\n\nTo interprete the meaning of PC1 and PC2 we need to investigate the distribution of measures along either axis of the map in Fig. . Fig. shows a simplified schema of the distribution of impact measures along the PC1 and PC2 axes. Each of the observed cluster of measures has been given an intuitive \"group\" name to simplify the general pattern. \n\nPC1 clearly separates usage measures from citation measures. On the positive end of PC1, we find a sharply demarcated cluster of all usage measures, with the exception of the Journal Use Probability (ID 38) which sits isolated on the extreme positive end of PC1. On the negative end of PC1, we find most citation measures. Surprisingly, some citation measures are positioned close to the cluster of usage measures in terms of their PC1 coordinates. Citation Closeness (ID 3) and in particular Citation Immediacy Index (ID 2) are located on the positive end of PC1, i.e. closest to the usage measures. Citation Betweenness Centrality (IDs 21 and 22) are also positioned closely to the cluster of usage measures according to PC1. \nThis particular distribution of citation measures along PC1 points to an interesting, alternative interpretation of PC1 simply separating the usage from the citation measures. In the center, we find Citation Immediacy Index (ID 2) positioned close to the cluster of usage measures in terms of its PC1 coordinates. The Citation Immediacy Index is intended to be a \"rapid\" indicator of scientific impact since it is based on same-year citations. Its proximity to the usage measures according to PC1 may thus indicate that the usage measures are equally rapid indicators, if not more so. The assumption that usage measures are \"Rapid\" indicators of scientific impact is furthermore warranted for the following reasons. First, usage log data is generally considered a more \"rapid\" indicator of scientific impact than citation data, since usage log data is nearly immediately affected by changes in scientific habits and interests whereas citation data is subject to extensive publication delays. It has in fact been shown that present usage rates predict future citation rates . Second, our usage log data was recorded slightly more recently (April 2006 through March 2007) than the 2007 JCR citation data (January 2006 through December 2007). It may therefore reflect more recent scientific activity. These observations combined lead to a speculative interpretation of PC1 in terms of \"Rapid\" vs. \"Delayed\" measures of impact. The \"Rapid\" indicators are mostly usage measures due to the nature of the usage log data that they have been calculated for. 
They are however approximated by the Citation Immediacy Index whose definition focuses on same-year citation statistics and two Citation Betweenness Centrality measures (IDs 21 and 22) that may, due to their focus on interdisciplinary power, anticipate emerging scientific activities. \nPC2 separates citation statistics such as Scimago Total Cites (ID12), JIF (Table , ID 5) and Cites per Doc (ID 4) on its negative end from the social network measures such as Citation Betweenness centrality (IDs 21 and 22) and Citation PageRank (ID 16-19) including the Y-factor (ID 20) on its positive end. Measures such as the JIF (ID 5), Scimago Total Cites (ID 12), Journal Cite Probability (ID13), and Journal Use Probability (ID 38) express the rate at which journals indiscriminately receive citations or usage from a variety of sources, i.e.\u00a0their Popularity, whereas the mentioned social network measures rely on network structure to express various facets of journal Prestige or interdisciplinary power . PC2 can thus plausibly be interpreted as separating impact measures according to whether they stress scientific Popularity vs. Prestige. \nConsequently, the PCA results could be interpreted in terms of a separation of measures along two dimensions: \"Rapid\" vs.\"Delayed\" (PC1) and \"Popularity\" vs. \"Prestige\" (PC2). Surprisingly, most usage-based measures would then fall in the \"Rapid, \"Prestige\" quadrant, approximated in this aspect only by two Citation Betweenness Centrality measures. The majority of citation-based measures can then be classified as \"Delayed\", but with the social network measures being indicative of aspects of \"Prestige\" and the normalized citation measures such as the JIF, Scimago Journal Rank (ID 1) and Cites per Doc indicative of journal \"Popularity\". We also note that the Scimago Journal Rank is positioned among measures such as the JIF and Cites per Doc. This indicates it too expresses \"Delayed\" \"Popularity\", in spite of the fact that SJR rankings originate from 2007 citation data and that the SJR has been explicitly defined to \"transfer(s) (of) prestige from a journal to another one\" (). \nAnother interesting aspect of the distribution of measures along PC1 and PC2 relates to the determination of a \"consensus\" view of scientific impact. The $\\bar{\\rho}$ values indicate the average Spearman rank-order correlation of a particular measure to all other measures, i.e.\u00a0the degree to which it approximates the results of all other measures. The measure which best succeeds in approximating the most general sense of scholarly impact will therefore have the highest $\\bar{\\rho}$ and will therefore be the best candidate for a \"consensus\" measure. As shown in Table that measure would be Usage Closeness Centrality (ID: 25) whose $\\bar{\\rho}=0.731$. Conversely, the Citation Scimago Journal Rank (ID1), Citation Immediacy Index (ID 2), Citation Closeness Centrality (ID 3), Citaton Cites per doc (ID 4) and Citation Journal Impact Factor (ID:5) have the lowest $\\bar{\\rho}$ values indicating that they represent the most particular view of scientific impact.\n\n## Future research\n\nThe presented results pertain to what we believe to be the largest and most thorough survey of usage- and citation based measures of scientific impact. Nevertheless, a number of issues need to be addressed in future research efforts. 
\nFirst, although an attempt was made to establish a representative sample of existing and plausible scientific impact measures, several other conceivable impact measures could have been included in this analysis. For example, the HITS algorithm has been successfully applied to web page rankings. Like Google's PageRank it could be calculated for our citation and usage journal networks. Other possible measures that should be considered for inclusion include the Eigenfactor.org measures, and various information-theoretical indexes. The addition of more measures may furthermore enable statistical significance to be achieved on the correlations with now-removed measures such as Citation Half-Life and the Usage Impact Factor, so that they could be included on the generated PCA map of measures. \nSecond, we projected measure correlations onto a space spanned by the 2 highest-ranked components, the first of which seems to make a rather superficial distinction between usage- and citation-derived impact measures and the second of which seems to make a meaningful distinction between \"degree\" and \"quality\" of endorsement. Future analysis should focus on including additional components, different combinations of lower-valued components and even the smallest-valued components to determine whether they reveal additional useful distinctions. In addition, non-linear dimensionality reduction methods could be leveraged to reveal non-linear patterns of measure correlations. \nThird, a significant number of the measures surveyed in this article have been standard tools for decades in social network analysis, but they are not in common use in the domain of scientific impact assessment. To increase the \"face-validity\" of these rankings, all have been made available to the public on the MESUR web site and can be freely explored and interacted with by users at the following URL: . \nFourth, the implemented MESUR services can be enhanced to support the development of novel measures by allowing users to submit their own rankings which can then automatically be placed in the context of existing measures. Such a service could foster the free and open exchange of scientific impact measures by allowing the public to evaluate where any newly proposed measure can be positioned among existing measures. If the measure is deemed to similar to existing measures, it need not be developed. If however, it covers a part of the measure space that was previously unsampled, the new measure may make a significant contribution and could therefore be considered for wider adoption by those involved in scientific assessment.\n\n## Conclusion\n\nOur results indicate that scientific impact is a multi-dimensional construct. The component loadings of a PCA indicate that 92% of the variances between the correlations of journal rankings produced by 37 impact measures can be explained by the first 3 components. To surpass the 95% limit, a 4-component model would have to be adopted. \nA projection of measure correlations onto the first 2 components (83.4%) nevertheless reveals a number of useful distinctions. We found that the most salient distinction is made by PC1 which separates usage from citation measures with the exception of Citation Betweenness centrality and Citation Immediacy. The position of the latter and the time periods for which usage was recorded suggests an interpretation of PC1 as making a distinction between measures that provide a \"rapid\" vs \"delayed\" view of scientific impact. 
\nPC2 seems to separate measures that express Popularity from those that express Prestige. Four general clusters of impact measures can be superimposed on this projection: (1) usage measures, (2) a group of distinctive yet dispersed measures expressing per document citation popularity, (3) measures based on total citation rates and distributions, and (4) finally a set of citation social network measures. These 4 clusters along with the PCA components allows us to quantitatively interpret the landscape of presently available impact measures and determine which aspects of scientific impact they represent. Future research will focus on determining whether these distinctions are stable across a greater variety of measures as well other usage and citation data sets. \nFour more general conclusions can be drawn from these results; each has significant implications for the developing science of scientific assessment. \nFirst, the set of usage measures is more strongly correlated (average Spearman rank-order correlation = 0.93, incl. Usage Probability) than the set of citation measures (average Spearman rank-order correlation = 0.65). This indicates a greater reliability of usage measures calculated from the same usage log data than between citation measures calculated from the same citation data. This effect is possibly caused by the significantly greater density of the usage matrix $U$ in comparison to the citation matrix $C$. As mentioned in the introduction, the amount of usage data that can be collected is much higher than the total amount of citation data in existence because papers can contain only a limited set of citations and once they are published that set is fixed in perpetuity. This limitation may place an upper bound on the reliability that can be achieved with citation measures, but it does not apply to usage measures. \nSecond, if our interpretation of PC2 is correct, usage-based measures are actually *stronger* indicators of scientific Prestige than many presently available citation measures. Contrary to expectations, the IF as well as the SJR most strongly express scientific Popularity. \nThird, some citation measures are more closely related to their usage counterparts than they are to other citation measures such as the JIF. For example, the Spearman rank-order correlation between Citation Betweenness Centrality and Usage Betweenness Centrality is 0.747. In comparison, the Spearman rank-order correlation between the JIF and Citation Betweenness Centrality is only 0.52. This indicates that contrary to what would be expected, usage impact measures can be closer to a \"consensus ranking\" of journals than some common citation measures. \nFourth, and related, when we rank measures according to their average correlation to all other measures $\\bar{\\rho}$, i.e.\u00a0how close they are to all other measures, we find that the JIF and SJR rank 34rd and 38th respectively among 39 measures, indicating their isolated position among the studied set of measures. The JCR Citation Immediacy Index and the Scimago Cites per Doc are in a similar position. On the other hand, Usage Closeness centrality (measure 25) is positioned closest to all other measures (max. $\\bar{\\rho}=0.731$). These results should give pause to those who consider the JIF the \"golden standard\" of scientific impact. Our results indicate that the JIF and SJR express a rather particular aspect of scientific impact that may not be at the core of the notion of scientific \"impact\". 
Usage-based measures such as Usage Closeness centrality may in fact be better \"consensus\" measures.\n\n# Data files\n\nThe ranking data produced to support the discussed Principal Component Analysis is available upon request from the corresponding author with the exception of those that have been obtained under proprietary licenses.","meta":{"dup_signals":{"dup_doc_count":19,"dup_dump_count":10,"dup_details":{"curated_sources":1,"2024-10":2,"2017-13":2,"2015-18":4,"2015-11":2,"2015-06":1,"2014-10":1,"2013-48":1,"2024-22":1,"unknown":4}},"filename":"out\/0902.2183_extract_PLoSOne_jbollen09.tex.md"},"subset":"arxiv"} +{"text":"abstract: Type your abstract here. Type your abstract here. Type your abstract here. Type your abstract here. Type your abstract here. Type your abstract here. Type your abstract here. Type your abstract here. Type your abstract here. Type your abstract here. Type your abstract here. Type your abstract here. Type your abstract here. Type your abstract here. Type your abstract here. Type your abstract here. Type your abstract here. Type your abstract here. Type your abstract here. Type your abstract here. Type your abstract here. Type your abstract here. Type your abstract here. Type your abstract here. Type your abstract here. Type your abstract here. Type your abstract here. Type your abstract here. Type your abstract here. Type your abstract here. Type your abstract here. Type your abstract here. Type your abstract here. Type your abstract here. Type your abstract here. Type your abstract here. Type your abstract here.\n .\n Type your abstract here. Type your abstract here. Type your abstract here. Type your abstract here. Type your abstract here. Type your abstract here. Type your abstract here. Type your abstract here. Type your abstract here. Type your abstract here. Type your abstract here. Type your abstract here. Type your abstract here. Type your abstract here. Type your abstract here.\naddress: Affiliation 1, Address, City and Postal Code, Country; Affiliation 2, Address, City and Postal Code, Country\nauthor: Given-name ; Given-name ; Given-name\nbibliography: refs.bib\ntitle: Type the title of your paper, only capitalize first word and proper nouns\n\n```latex\n\\begin{table*}[!th]\n\n\\begin{minipage}{.9\\textwidth}\n\\baselineskip12pt\n\\ifpreprint\n \\vspace*{1pc}\n\\else\n \\vspace*{-6pc}\n\\fi\n\n\\noindent {\\LARGE\\itshape Computer Vision and Image Understanding}\n\\vskip6pt\n\n\\noindent {\\Large\\bfseries Authorship Confirmation}\n\n\\vskip1pc\n\n\n{\\bf Please save a copy of this file, complete and upload as the \n``Confirmation of Authorship'' file.}\n\n\\vskip1pc\n\nAs corresponding author \nI, \\underline{\\hphantom{\\hspace*{7cm}}}, \nhereby confirm on behalf of all authors that:\n\n\\vskip1pc\n\n\\begin{enumerate}\n\\itemsep=3pt\n\\item This manuscript, or a large part of it, \\underline {has not been\npublished, was not, and is not being submitted to} any other journal. \n\n\\item If \\underline {presented} at or \\underline {submitted} to or\n\\underline {published }at a conference(s), the conference(s) is (are)\nidentified and substantial \\underline {justification for\nre-publication} is presented below. A \\underline {copy of\nconference paper(s) }is(are) uploaded with the manuscript.\n\n\\item If the manuscript appears as a preprint anywhere on the web, e.g.\narXiv, etc., it is identified below. 
The \\underline {preprint should\ninclude a statement that the paper is under consideration at Computer Vision and Image Understanding}.\n\n\\item All text and graphics, except for those marked with sources, are\n\\underline {original works} of the authors, and all necessary\npermissions for publication were secured prior to submission of the\nmanuscript.\n\n\\item All authors each made a significant contribution to the research\nreported and have \\underline {read} and \\underline {approved} the\nsubmitted manuscript. \n\\end{enumerate}\n\nSignature\\underline{\\hphantom{\\hspace*{7cm}}} Date\\underline{\\hphantom{\\hspace*{4cm}}} \n\\vskip1pc\n\n\\rule{\\textwidth}{2pt}\n\\vskip1pc\n\n{\\bf List any pre-prints:}\n\\vskip5pc\n\n\n\\rule{\\textwidth}{2pt}\n\\vskip1pc\n\n{\\bf Relevant Conference publication(s) (submitted, accepted, or\npublished):}\n\\vskip5pc\n\n\n\n{\\bf Justification for re-publication:}\n\n\\end{minipage}\n\\end{table*}\n```\n\n```latex\n\\begin{table*}[!th]\\ifpreprint\\else\\vspace*{-5pc}\\fi\n\n\\section*{Research Highlights (Required)}\n\nTo create your highlights, please type the highlights against each\n\\verb+\\item+ command. \n\n\\vskip1pc\n\n\\fboxsep=6pt\n\\fbox{\n\\begin{minipage}{.95\\textwidth}\nIt should be short collection of bullet points that convey the core\nfindings of the article. It should include 3 to 5 bullet points\n(maximum 85 characters, including spaces, per bullet point.) \n\\vskip1pc\n\\begin{itemize}\n\n \\item \n\n \\item \n\n \\item\n\n \\item\n\n \\item\n\n\\end{itemize}\n\\vskip1pc\n\\end{minipage}\n}\n\n\\end{table*}\n```\n\n# Note\n\nPlease use `elsarticle.cls` for typesetting your paper. Additionally load the package `ycviu.sty` in the preamble using the following command:\n\n \\usepackage{ycviu}\n\nFollowing commands are defined for this journal which are not in `elsarticle.cls`.\n\n \\received{}\n \\finalform{}\n \\accepted{}\n \\availableonline{}\n \\communicated{}\n\nAny instructions relavant to the `elsarticle.cls` are applicable here as well. See the online instruction available on:\n\n http:\/\/support.river-valley.com\/wiki\/\n index.php?title=Elsarticle.cls\n\n http:\/\/support.river-valley.com\/wiki\/index.php?title=Elsarticle.cls\n\n## Entering text\n\n**Please note that Full Length Papers can have 7 pages (plus one page after revision) and Special Issue Papers can have 10 pages (plus one page after revision). The only exception is the review article that is submitted to a Special Issue. These limits include all materials e.g. narrative, figures, tables, references, etc.**<\/span>\n\n# The first page\n\nAvoid using abbreviations in the title. Next, list all authors with their first names or initials and surnames (in that order). Indicate the author for correspondence (see elsarticle documentation).\n\nPresent addresses can be inserted as footnotes. After having listed all authors' names, you should list their respective affiliations. Link authors and affiliations using superscript lower case letters.\n\n## The Abstract\n\nAn Abstract is required for every paper; it should succinctly summarize the reason for the work, the main findings, and the conclusions of the study. The abstract should be no longer than 200 words. Do not include artwork, tables, elaborate equations or references to other parts of the paper or to the reference listing at the end. 
\"Comment\" papers are exceptions, where the commented paper should be referenced in full in the Abstract.\n\nThe reason is that the Abstract should be understandable in itself to be suitable for storage in textual information retrieval systems.\n\n*Example of an abstract: A biometric sample collected in an uncontrolled outdoor environment varies significantly from its indoor version. Sample variations due to outdoor environmental conditions degrade the performance of biometric systems that otherwise perform well with indoor samples. In this study, we quantitatively evaluate such performance degradation in the case of a face and a voice biometric system. We also investigate how elementary combination schemes involving min-max or z normalization followed by the sum or max fusion rule can improve performance of the multi-biometric system. We use commercial biometric systems to collect face and voice samples from the same subjects in an environment that closely mimics the operational scenario. This realistic evaluation on a dataset of 116 subjects shows that the system performance degrades in outdoor scenarios but by multimodal score fusion the performance is enhanced by 20%. We also find that max rule fusion performs better than sum rule fusion on this dataset. More interestingly, we see that by using multiple samples of the same biometric modality, the performance of a unimodal system can approach that of a multimodal system.*\n\n# The main text\n\nPlease divide your article into (numbered) sections (You can find the information about the sections at ). Ensure that all tables, figures and schemes are cited in the text in numerical order. Trade names should have an initial capital letter, and trademark protection should be acknowledged in the standard fashion, using the superscripted characters for trademarks and registered trademarks respectively. All measurements and data should be given in SI units where possible, or other internationally accepted units. Abbreviations should be used consistently throughout the text, and all nonstandard abbreviations should be defined on first usage .\n\n```latex\n\\begin{table*}[!t]\\caption{\\label{tab1}Summary of different works pertaining to face and\nspeech fusion}\n\\centering\n\\begin{tabular}{|p{2.25cm}|p{2cm}|l|p{4cm}|p{3cm}|p{2cm}|}\n\\hline\nStudy & Algorithm used & DB Size & Covariates of interest & \nTop individual performance & Fusion\\newline Performance\\\\\n\\hline\nUK-BWG\n(Mansfield et al.,\n2001) &\nFace, voice:\\newline\nCommercial & 200 & Time: 1--2 month\\newline\nseparation (indoor) & \nTAR$^*$ at 1\\% FAR$^{\\#}$\\newline\nFace: 96.5\\%\\newline\nVoice: 96\\%\n& --\\\\\n\\hline\nBrunelli\n(Brunelli and\nFalavigna, 1995) & \nFace:\\newline\nHierarchical\\newline\ncorrelation\\newline\nVoice:\\newline\nMFCC & \n87 & \nTime: 3 sessions, time\\newline\nunknown (indoor) & \nFace:\\newline\nTAR = 92\\% at\\newline\n4.5\\% FAR\\newline\nVoice:\\newline\nTAR = 63\\% at\\newline\n15\\% FAR\n&\nTAR =98.5\\%\\newline\nat 0.5\\% FAR\\\\\n\\hline\nJain\n(Jain et al., 1999)\n&\nFace:\\newline\nEigenface\\newline\nVoice:\\newline\nCepstrum\\newline\nCoeff. 
Based\n&\n50\n&\nTime: Two weeks (indoor)\n&\nTAR at 1\\% FAR\\newline\nFace: 43\\%\\newline\nVoice: 96.5\\%\\newline\nFingerprint: 96\\%\n& \nFace $+$ Voice $+$\\newline\nFingerprint $=$\\newline\n98.5\\%\\\\\n\\hline\nSanderson\n(Sanderson and\nPaliwal, 2002)\n&\nFace: PCA\\newline\nVoice: MFCC &\n43 \n& Time: 3 sessions (indoor)\\newline\nNoise addition to voice & \nEqual Error Rate\\newline\nFace: 10\\%\\newline\nVoice: 12.41\\%\n&\nEqual Error\\newline\nRate 2.86\\% \\\\\n\\hline\nProposed study & \nFace, voice:\\newline\nCommercial & 116 &\nLocation: Indoor and\\newline\nOutdoor (same day)\\newline\nNoise addition to eye\\newline \ncoordinates \n&\nTARs at 1\\% FAR\\newline\nIndoor-Outdoor\\newline\nFace: 80\\%\\newline\nVoice: 67.5\\%\n&\nTAR = 98\\%\\newline\nat 1\\% FAR\\\\\n\\hline\n\\multicolumn{6}{@{}l}{$^*$TAR--True Acceptance Rate\\qquad \n$^{\\#}$ FAR--False Acceptance Rate}\n\\end{tabular}\n\\end{table*}\n```\n\n## Tables, figures and schemes\n\nGraphics and tables may be positioned as they should appear in the final manuscript. Figures, Schemes, and Tables should be numbered. Structures in schemes should also be numbered consecutively, for ease of discussion and reference in the text. **Figures should be maximum half a page size.**<\/span> All numbers and letters in figures and diagrams should be at least of the same font size as that of the figure caption.\n\nDepending on the amount of detail, you can choose to display artwork in one column (20 pica wide) or across the page (42 pica wide). Scale your artwork in your graphics program before incorporating it in your text. If the artwork turns out to be too large or too small, resize it again in your graphics program and re-import it. The text should not run along the sides of any figure. This is an example for citation .\n\nYou might find positioning your artwork within the text difficult anyway. In that case you may choose to place all artwork at the end of the text and insert a marker in the text at the desired place. In any case, please keep in mind that the placement of artwork may vary somewhat in relation to the page lay-out .\n\nThis can easily be achieved using `endfloat.sty` package. Please refer the following documentation to use this package.\n\n http:\/\/mirrors.ctan.org\/macros\/latex\/contrib\/\n endfloat\/endfloat.pdf\n\n http:\/\/mirrors.ctan.org\/macros\/latex\/contrib\/endfloat\/endfloat.pdf\n\n**You should insert a caption for the figures below the figures and for the tables the caption should be above the tables.**<\/span>\n\nPlease remember that we will always also need highresolution versions of your artwork for printing, submitted as separate files in standard format (i.e. TIFF or EPS), not included in the text document. Before preparing your artwork, please take a look at our Web page: .\n\n## Lists\n\nFor tabular summations that do not deserve to be presented as a table, lists are often used. Lists may be either numbered or bulleted. Below you see examples of both.\n\n1. The first entry in this list\n\n2. The second entry\n\n 1. A subentry\n\n3. The last entry\n\n- A bulleted list item\n\n- Another one\n\n## Equations\n\nConventionally, in mathematical equations, variables and anything that represents a value appear in italics. All equations should be numbered for easy referencing. The number should appear at the right margin. 
$$S_{\\rm pg}'=\\frac{S_{\\rm pg}-\\min(S_{\\rm pG})}\n {\\max(S_{\\rm pG}-\\min(S_{\\rm pG})}$$ In mathematical expressions in running text \"\/\" should be used for division (not a horizontal line).\n\n# Acknowledgments\n\nAcknowledgments should be inserted at the end of the paper, before the references, not as a footnote to the title. Use the unnumbered Acknowledgements Head style for the Acknowledgments heading.\n\n# References\n\nPlease ensure that every reference cited in the text is also present in the reference list (and vice versa).\n\n# *Reference style*\n\nText: All citations in the text should refer to:\n\n1. Single author: the author's name (without initials, unless there is ambiguity) and the year of publication;\n\n2. Two authors: both authors' names and the year of publication;\n\n3. Three or more authors: first author's name followed by 'et al.' and the year of publication.\n\nCitations may be made directly (or parenthetically). Groups of references should be listed first alphabetically, then chronologically.\n\n# Supplementary Material\n\nSupplementary material that may be helpful in the review process should be prepared and provided as a separate electronic file. That file can then be transformed into PDF format and submitted along with the manuscript and graphic files to the appropriate editorial office.","meta":{"dup_signals":{"dup_doc_count":14,"dup_dump_count":2,"dup_details":{"curated_sources":7,"unknown":7}},"filename":"out\/2208.09424_extract_ycviu-template-with-authorship-referees.tex.md"},"subset":"arxiv"} +{"text":"abstract: This article is a brief guide to the field of algorithmic information theory (AIT), its underlying philosophy, and the most important concepts. AIT arises by mixing information theory and computation theory to obtain an objective and absolute notion of information in an individual object, and in so doing gives rise to an objective and robust notion of randomness of individual objects. This is in contrast to classical information theory that is based on random variables and communication, and has no bearing on information and randomness of individual objects. After a brief overview, the major subfields, applications, history, and a map of the field are presented. =-2.5ex\nauthor: **Marcus Hutter** \nRSISE$\\,$@$\\,$ANU and SML$\\,$@$\\,$NICTA \nCanberra, ACT, 0200, Australia \n`email@example.com \u00a0\u00a0www.hutter1.net`\ndate: March 2007\ntitle: ****\n .\n .\n ------------------------------------------------------------------------\n .\n height5pt Algorithmic Information Theory \n \\[ a brief non-technical guide to the field \\]\n .\n .\n ------------------------------------------------------------------------\n .\n height2pt\n\n# Overview\n\nAlgorithmic Information Theory (AIT) is a the [information theory](http:\/\/www.scholarpedia.org\/article\/Information_Theory) of individual objects, using [computer science](http:\/\/www.scholarpedia.org\/article\/Computer_Science), and concerns itself with the relationship between computation, information, and randomness.\n\nThe information content or complexity of an object can be measured by the length of its shortest description. For instance the string \"0101010101010101010101010101010101010101010101010101010101010101\" has the short description \"32 repetitions of '01\"', while \"1100100001100001110111101110110011111010010000100101011110010110\" presumable has no simple description other than writing down the string itself. 
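A quick, informal way to see this difference is to run a general-purpose compressor over the two strings. Compressed length is only a crude upper bound on the description length made precise below, and on inputs this short the compressor's fixed overhead matters, but the gap is already visible; the sketch assumes nothing beyond the Python standard library.

```python
import zlib

regular = '01' * 32    # '0101...01', the first example string above
irregular = '1100100001100001110111101110110011111010010000100101011110010110'

for label, s in [('regular', regular), ('irregular', irregular)]:
    packed = zlib.compress(s.encode('ascii'), 9)
    print(f'{label:9s} raw = {len(s)} bytes, compressed = {len(packed)} bytes')
# The repetitive string compresses to a small fraction of its length, while the
# irregular one compresses noticeably less well: the intuition that algorithmic
# complexity makes precise.
```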
More formally, the [Algorithmic \"Kolmogorov\" Complexity](http:\/\/www.scholarpedia.org\/article\/Algorithmic_Complexity) (AC) of a string $x$ is defined as the length of the shortest program that computes or outputs $x$, where the program is run on some fixed universal computer.\n\nA closely related notion is the probability that a universal computer outputs some string $x$ when fed with a program chosen at random. This [Algorithmic \"Solomonoff\" Probability](http:\/\/www.scholarpedia.org\/article\/Algorithmic_Probability) (AP) is key in addressing the old philosophical problem of [induction](http:\/\/www.scholarpedia.org\/article\/Induction) in a formal way.\n\nThe major drawback of AC and AP are their incomputability. Time-bounded \"Levin\" complexity penalizes a slow program by adding the logarithm of its running time to its length. This leads to computable variants of AC and AP, and [Universal \"Levin\" Search](http:\/\/www.scholarpedia.org\/article\/Universal_Search) (US) that solves all inversion problems in optimal (apart from some huge multiplicative constant) time.\n\nAC and AP also allow a formal and rigorous definition of randomness of individual strings that does not depend on physical or philosophical intuitions about nondeterminism or likelihood. Roughly, a string is [Algorithmically \"Martin-Loef\" Random](http:\/\/www.scholarpedia.org\/article\/Algorithmic_Randomness) (AR) if it is incompressible in the sense that its algorithmic complexity is equal to its length.\n\nAC, AP, US, and AR are the core subdisciplines of AIT, but AIT spans into many other areas. It serves as the foundation of the [Minimum Description Length](http:\/\/www.scholarpedia.org\/article\/Minimum_Description_Length) (MDL) principle, can simplify proofs in computational complexity theory, has been used to define a universal similarity metric between objects, solves the Maxwell demon problem, and many others.\n\n# Algorithmic \"Kolmogorov\" Complexity (AC)\n\n[Algorithmic complexity](http:\/\/www.scholarpedia.org\/article\/Algorithmic_Complexity) formalizes the concept of simplicity and complexity. Intuitively, a string is simple if it can be described in a few words, like \"the string of one million ones\", and is complex if there is no such short description, like for a random string whose shortest description is specifying it bit by bit. Typically one is only interested in descriptions or *codes* that are effective in the sense that decoders are [*algorithms*](http:\/\/www.scholarpedia.org\/article\/Algorithm) on some computer. The universal [Turing machine](http:\/\/www.scholarpedia.org\/article\/Turing_machine) $U$ is the standard abstract model of a general-purpose computer in theoretical computer science. We say that program $p$ is a description of string $x$ if $p$ run on $U$ outputs $x$, and write $U(p)=x$. The length of the shortest description is denoted by $$K(x) := \\min_p\\{\\ell(p): U(p)=x\\}$$ where $\\ell(p)$ is the length of $p$ measured in bits. One can show that this definition is nearly independent of the choice of $U$ in the sense that $K(x)$ changes by at most an additive constant independent of $x$. The statement and proof of this invariance theorem in is often regarded as the birth of algorithmic information theory. This can be termed Kolmogorov's Thesis: the intuitive notion of 'shortest effective code' in its widest sense is captured by the formal notion of Kolmogorov complexity, and no formal mechanism can yield an essentially shorter code. 
Note that the shortest code is one for which there is a general decompressor: the Kolmogorov complexity establishes the ultimate limits to how short a file can be compressed by a general purpose compressor.\n\nThere are many variants, mainly for technical reasons: The historically first \"plain\" complexity, the now more important \"prefix\" complexity, and many others. Most of them coincide within an additive term logarithmic in the length of the string.\n\nIn this article we use $K$ for the prefix complexity variant. A [prefix Turing machine](http:\/\/www.scholarpedia.org\/article\/Prefix_Turing_Machine) has a separate input tape which it reads from left-to-right without backing up, a separate worktape on which the computation takes place, and a separate output tape on which the output is written. We define a [halting program](http:\/\/www.scholarpedia.org\/article\/Halting_Program) as the initial segment of the input that is scanned at the time when the machine halts, and the [output](http:\/\/www.scholarpedia.org\/article\/Output) is the string that has been written to the separate output tape at that time. The conditional prefix complexity $$K(x|y):=\\min_p\\{\\ell(p):U(y, p)=x\\}$$ is the length of the shortest binary program $p\\in\\{0,1\\}^*$ on a universal prefix Turing machine $U$ with output $x$ and input $y$ . For non-string objects (like numbers $n$, pairs of strings $(x,y)$, or computable functions $f$) one can specify some default coding $\\langle\\cdot\\rangle$ and define $K(\\mbox{\\it\nobject}):=K(\\langle\\mbox{\\it object}\\rangle)$. The most important properties are:\n\n- that $K$ is approximable from above in the limit but not computable,\n\n- the upper bounds $K(x|\\ell(x))\\leq\\ell(x)$ and $K(n)\\leq \\log n+2\\log\\log n$,\n\n- Kraft's inequality implies $\\sum_x 2^{-K(x)}\\leq 1$,\n\n- the lower bound $K(x)\\geq\\ell(x)$ for \"most\" $x$ and $K(x)\\to\\infty$ for $\\ell(x)\\to\\infty$,\n\n- extra information bounds $K(x|y)\\leq K(x)\\leq K(x,y)$,\n\n- subadditivity $K(xy)\\leq K(x,y)\\leq K(y)+K(x|y)$,\n\n- symmetry of information $K(x,y)=K(x|y,K(y))+K(y)=K(y,x)$,\n\n- information non-increase $K(f(x))\\leq K(x)+K(f)$ for computable functions $f$,\n\n- and coding relative to a probability distribution (MDL) \n $K(x)\\leq -\\log P(x)+K(P)$ for computable probability distributions $P$,\n\nwhere all (in)equalities hold within an additive constant. Furthermore, it shares many properties with Shannon's entropy (information measure), but $K$ has many advantages. The properties above allow us to draw a schematic graph of $K$ as depicted in Figure .\n\n# Algorithmic \"Solomonoff\" Probability (AP)\n\nSolomonoff (1964) considered the probability that a universal computer outputs some string when fed with a program chosen at random. This Algorithmic \"Solomonoff\" Probability (AP) is key in addressing the old philosophical problem of induction in a formal way. 
It is based on\n\n- [Occam's razor](http:\/\/www.scholarpedia.org\/article\/Occam's_razor) (choose the simplest model consistent with the data),\n\n- Epicurus' principle of multiple explanations (keep all explanations consistent with the data),\n\n- Bayes's Rule (transform the a priori distribution to a posterior distribution according to the evidence, experimentally obtained data),\n\n- (universal) Turing machines (to compute, quantify and assign codes to all quantities of interest), and\n\n- algorithmic complexity (to define what simplicity\/complexity means).\n\nOccam's razor (appropriately interpreted and in compromise with Epicurus' principle of indifference) tells us to assign high\/low a priori plausibility to simple\/complex strings $x$. Using $K$ as the complexity measure, one could choose any monotone decreasing function of $K$, e.g.\u00a0$2^{-K(x)}$. The precise definition of [Algorithmic \"Solomonoff\" Probability](http:\/\/www.scholarpedia.org\/article\/Algorithmic_Probability) (AP), also called universal a priori probability, $M(x)$ is the probability that the output of a (so-called monotone) universal Turing machine $U$ starts with $x$ when provided with fair coin flips on the input tape. Formally, $M$ can be defined as $$M(x) \\;:=\\; \\sum_{p\\;:\\;U(p)=x*} 2^{-\\ell(p)}$$ where the sum is over all (so-called minimal, not necessarily halting, denoted by \\*) programs $p$ for which $U$ outputs a string starting with $x$. Since the shortest programs $p$ dominate the sum, $M(x)$ is roughly $2^{-K(x)}$.\n\n$M$ has similar remarkable properties as $K$. Additionally, the predictive distribution $M(x_{n+1}|x_1...x_n):=M(x_1...x_{n+1})\/M(x_1...x_n)$ converges rapidly to 1 on (hence predicts) any computable sequence $x_1 x_2 x_3 ...$. It can also be shown that $M$ leads to excellent predictions and decisions in general stochastic environments. If married with sequential [decision theory](http:\/\/www.scholarpedia.org\/article\/Decision_Theory), it leads to an optimal reinforcement learning agent embedded in an arbitrary unknown environment , and a formal definition and test of intelligence.\n\nA formally related quantity is the probability that $U$ halts when provided with fair coin flips on the input tape (i.e.\u00a0that a random computer program will eventually halt). This halting probability, also known as Chaitin's constant $\\Omega$, or 'the number of wisdom' has numerous remarkable mathematical properties, and can be used for instance to quantify Goedel's Incompleteness Theorem.\n\n# Universal \"Levin\" Search (US)\n\nConsider a problem to solve for which we have two potential algorithms $A$ and $B$, for instance breadth versus depth first search in a finite (game) tree. Much has been written about which algorithm is better under which circumstances. Consider the following alternative very simple solution to the problem: A meta-algorithm $US$ runs $A$ and $B$ in parallel and waits for the first algorithm to halt with the answer. Since $US$ emulates $A$ and $B$ with half-speed, the running time of $US$ is the minimum of $2\\times$time$(A)$ and $2\\times$time$(B)$, i.e.\u00a0$US$ is as fast as the faster of the two, apart from a factor of 2. Small factors like 2 are often minor compared to potentially much larger difference in running time of $A$ and $B$.\n\n[Universal \"Levin\" Search](http:\/\/www.scholarpedia.org\/article\/Universal_Search) (US) extends this idea from two algorithms to *all* algorithms. 
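The two-algorithm version of this scheme is easy to write down. The sketch below is illustrative only: the two divisor-finding strategies and the `verify` test are hypothetical stand-ins, the interleaving is cooperative (one step of each algorithm per round, hence the factor-of-two slowdown) rather than true parallelism, and real universal search time-shares an enumeration of all programs instead of two hand-picked ones.

```python
def interleave(solvers, verify):
    # Round-robin one step of each candidate algorithm; return the first
    # proposal that passes verification (the role of the answer check in US).
    active = list(solvers)
    while active:
        for gen in list(active):
            try:
                proposal = next(gen)      # advance this algorithm by one step
            except StopIteration:
                active.remove(gen)        # this algorithm halted without success
                continue
            if proposal is not None and verify(proposal):
                return proposal
    return None

# Hypothetical toy problem: find a nontrivial divisor of n with two strategies,
# one searching upward from 2 and one searching downward from n - 1.
def search_up(n):
    for d in range(2, n):
        yield d if n % d == 0 else None

def search_down(n):
    for d in range(n - 1, 1, -1):
        yield d if n % d == 0 else None

n = 91
print(interleave([search_up(n), search_down(n)], verify=lambda d: n % d == 0))
# -> 7: the upward search reaches a divisor first, and the interleaving is at
#    most about a factor of two slower than running the faster strategy alone.
```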
First, since there are infinitely many algorithms, computation time has to be assigned non-uniformly. The optimal way is that $US$ devotes a time fraction of $2^{-\\ell(p)}$ to each (prefix) program $p$. Second, since not all programs solve the problem (some never halt, some just print \"Hello World\", etc.) $US$ has to verify whether the output is really a solution, and if not discard it and continue.\n\nHow does this fit into AIT? A problem of AC $K$ is its incomputability. Time-bounded \"Levin\" complexity penalizes a slow program by adding the logarithm of its running time to its length: $$Kt(x) \\;=\\; \\min_p \\{\\ell(p)+\\log(\\mbox{time}(p)) : U(p)=x \\}$$ It is easy to see that $Kt(x)$ is just the logarithm of the running time (without verification) of $US$, and is therefore computable.\n\nWhile universal search is nice in theory, it is not applicable in this form due to huge hidden multiplicative constants in the running time. Another restriction is that verification needs to be fast. Hutter developed a more general asymptotically fastest algorithm, which removes the multiplicative constant and necessity of verification, unfortunately at the expense of an even larger additive constant. Schmidhuber developed the first practical variants of $US$ by carefully choosing the programming language ($U$), allocating time in $US$ adaptively, designing training sequences of increasing complexity, reusing subroutines from earlier simpler problems, and various other \"tricks\". He also defined the Speed Prior, which is to $Kt$ what AP is to AC.\n\n# Algorithmic \"Martin-Loef\" Randomness (AR)\n\nThe mathematical formalization of the concept of probability or chance has a long intertwined history. The (now) standard axioms of probability, learned by all students, are due to Kolmogorov (1933).\n\nWhile mathematically convincing, the semantics is far from clear. Frequentists interpret probabilities as limits of observed relatives frequencies, objectivists think of them as real aspects of the world, subjectivists regard them as one's degree of belief (often elicited from betting ratios), while Cournot only assigns meaning to events of high probability, namely as happening for sure in our world.\n\nNone of these approaches answers the question of whether some *specific individual* object or observation, like the binary strings above, is random. Kolmogorov's axioms do not allow one to ask such questions.\n\nVon Mises (1919), with refinements to his approach by Wald (1937), and Church (1940) attempted to formalize the intuitive notion of one string looking more random than another (see the example in the introduction) with partial success. For instance, if the relative frequency of 1s in an infinite sequence does not converge to 1\/2 it is clearly non-random, but the reverse is not true: For instance \"0101010101...\" is not random, since the pair \"01\" occurs too often. Pseudo-random sequences, like the digits of $\\pi$, cause the most difficulties. Unfortunately no sequence can satisfy \"all\" randomness tests. The Mises-Wald-Church approach seemed satisfactory untill Ville (1939) showed that some sequences are random according to their definition and yet lack certain properties that are universally agreed to be satisfied by random sequences. 
For example, the relative frequency of '1's in increasingly long initial segments should infinitely often switch from above 1\/2 to below 1\/2 and vice versa.\n\nMartin-Loef (1966), rather than give a definition and check whether it satisfied all requirements, took the approach to formalize the notion of all effectively testable requirements in the form of tests for randomness. The tests are constructive (namely all and only lower semi-computable) ones, which are typically all one ever cares about. Since the tests are constructed from Turing machines, they can be effectively enumerated according to the effective enumeration of the Turing machines they derive from. Since the set of sequences satisfying a test (having the randomness property the test verifies) has measure one, and there are only countably many tests, the set of sequences satisfying \"all\" such tests also has measure one. These are the ones called Algorithmic Random\\|Algorithmically \"Martin-Loef\" Random (AR). The theory is developed for both finite strings and infinite sequences. In the latter case the notion of test is more complicated and we speak of sequential tests.\n\nFor infinite sequences one can show that these are exactly the sequences which are incompressible in the sense that the algorithmic prefix complexity of every initial segment is at least equal to their length. More precisely, the infinite sequence $$x_1 x_2 x_3... \\mbox{ is AR}\n \\quad\\Longleftrightarrow\\quad K(x_1...x_n)\\geq n\\;\\;\n \\mbox{for all suff.\\ large } n$$ an important result due to G.J. Chaitin and C. Schnorr. This notion makes intuitive sense: A string can be compressed \"iff\" there are some regularities in the string \"iff\" the string is non-random.\n\n- ML-random sequences cannot be effectively constructed. Yet we can give a natural example: The [halting probability](http:\/\/en.wikipedia.org\/wiki\/Halting_Probability), $\\Omega$ is a real number between 0 and 1, and the sequence of bits in its binary expansion is an infinite ML-random sequence.\n\n- Randomness of other objects than strings and sequences can also be defined.\n\n- Coupling the theory of AR with recursion theory (Downey and Hirschfeldt 2007), we find a hierarchy of notions of randomness, at least if we leave the realm of computability according to Turing. Many variants can be obtained depending on the precise definition of \"constructive\". In particular \"relative randomness\" based on (halting) oracle machines leads to a rich field connected to recursion theory.\n\n- Finally, the crude binary separation of random versus non-random strings can be refined, roughly by considering strings with $K(x_1...x_n)=\\alpha n$ for some $0<\\alpha<1$. If strings are interpreted as (the expansion of) real numbers, this leads to the notion of constructive or effective Hausdorff (fractal) dimension.\n\n# Applications of AIT\n\nDespite the incomputability of its core concepts, AIT has many, often unexpected, applications.\n\nAIT helps to tackle many philosophical problems in the sense that it allows one to formalize and quantify many intuitive but vague concepts of great importance as we have seen above, and hence allows one to talk about them in a meaningful and rigorous way, thus leading to a deeper understanding than without AIT.\n\nMost importantly, AC formalizes and quantifies the concepts of simplicity and complexity in an essentially unique way. 
A core scientific paradigm is Occam's razor, usually interpreted as \"among two models that describe the data equally well, the simpler one should be preferred.\" Using AC to quantify \"simple\" allowed Solomonoff and others to develop their universal theories of induction and action, in the field of [artificial intelligence](http:\/\/www.scholarpedia.org\/article\/Artificial_Intelligence).\n\nAIT is also useful in the foundations of thermodynamic and its second theorem about entropy increase, and in particular for solving the problem of [Maxwell's demon](http:\/\/en.wikipedia.org\/wiki\/Maxwell's_demon).\n\nBy (often crudely) approximating the \"ideal\" concepts, AIT has been applied to various problems of practical interest, e.g.\u00a0in linguistics and genetics. The principle idea is to replace the universal Turing machine $U$ by more limited \"Turing\" machines, often adapted to the problem at hand. The major problem is that the approximation accuracy is hard to assess and most theorems in AIT break down.\n\nThe universal similarity metric by Vitanyi and others is probably the greatest practical success of AIT: A reasonable definition for the similarity between two objects is how difficult it is to transform them into each other. More formally one could define the similarity between strings $x$ and $y$ as the length of the shortest program that computes $x$ from $y$ (which is $K(x|y)$). Symmetrization and normalization leads to the universal similarity metric. Finally, approximating $K$ by standard compressors like Lempel-Ziv (zip) or bzip(2) leads to the normalized compression distance, which has been used to fully automatically reconstruct language and phytogenetic trees, and many other clustering problems.\n\nSee [Applications of AIT](http:\/\/www.scholarpedia.org\/article\/Applications_of_Algorithmic_Information_Theory) for details and references.\n\nIn science itself, AIT can constructivize other fields: For instance, statements in Shannon information theory and classical probability theory necessarily only hold in expectation or with high probability. Theorems are typically of the form \"there exists a set of measure X for which Y holds\", i.e.\u00a0they are useful for (large) samples. AR on the other hand can construct high-probability sets, and results hold for individual observations\/strings. [Hausdorff dimension](http:\/\/en.wikipedia.org\/wiki\/Hausdorff_Dimension) and real numbers also have constructive counterparts.\n\nNaturally, AIT concepts have also been exploited in theoretical computer science itself: AIT, via the incompressibility method, has resolved many open problems in computational complexity theory and mathematics, simplified many proofs, and is important in understanding (dissipationless) reversible computing. It has found applications in Statistics, Cognitive Sciences, Biology, Physics, and Economics.\n\nAIT can also serve as an umbrella theory for other more practical fields, e.g., in machine learning, the Minimum Description Length (MDL) principle can be regarded as a downscaled practical version of AC.\n\n# History, References, Notation, Nomenclature\n\n[Andrey Kolmogorov](http:\/\/en.wikipedia.org\/wiki\/Andrey_Kolmogorov) suggested to define the information content of an object as the length of the shortest program computing a representation of it. [Ray Solomonoff](http:\/\/en.wikipedia.org\/wiki\/Ray_Solomonoff) invented the closely related universal a priori probability distribution and used it for time series forecasting. 
Together with [Gregory Chaitin](http:\/\/en.wikipedia.org\/wiki\/Gregory_Chaitin) , this initiated the field of algorithmic information theory in the 1960s. [Leonid Levin](http:\/\/en.wikipedia.org\/wiki\/Leonid_Levin) and others significantly contributed to the field in the 1970s (see e.g.\u00a0). In particular the prefix complexity and time-bounded complexity are (mainly) due to him.\n\nLi and Vitanyi is the standard AIT textbook. The book by Calude focusses on AC and AR, Hutter on AP and US, and Downey and Hirschfeldt on AR. The at http:\/\/www.hutter1.net\/ait.htm\n\ncontains further references, a list of active researchers, a mailing list, a list of AIT events, and more.\n\nThere is still no generally agreed upon notation and nomenclature in the field. One reason is that researchers of different background (mathematicians, logicians, and computer scientists) moved into this field. Another is that many definitions are named after their inventors, but if there are many inventors or one definition is a minor variant of another, things become difficult. This article uses descriptive naming with contributors in quotation marks.\n\nNot even the name of the whole field is generally agreed upon. *Algorithmic Information Theory*, coined by Gregory Chaitin, seems most appropriate, since it is descriptive and impersonal, but the field is also often referred to by the more narrow and personal term *Kolmogorov complexity*.\n\n# Map of the Field\n\nThe AIT field may be subdivided into about 4 separate subfields: AC, AP, US, and AR. The fifth item below refers to applications.\n\n- [Algorithmic \"Kolmogorov\" Complexity](http:\/\/www.scholarpedia.org\/article\/Algorithmic_Complexity) (AC)\n\n - Philosophical considerations\n\n - Properties of AC\n\n - Plain (Kolmogorov) complexity\n\n - Prefix complexity\n\n - Resource bounded complexity\n\n - Other complexity variants\n\n- [Algorithmic \"Solomonoff\" Probability](http:\/\/www.scholarpedia.org\/article\/Algorithmic_Probability) (AP)\n\n - Occam's razor and Epicurus' principle\n\n - Discrete algorithmic probability\n\n - Continuous algorithmic probability = a priori semimeasure\n\n - Universal sequence prediction\n\n - The halting probability = Chaitin's Omega = The number of Wisdom\n\n- [Universal \"Levin\" Search](http:\/\/www.scholarpedia.org\/article\/Universal_Search) (US)\n\n - Levin search\n\n - Levin complexity and speed prior\n\n - Adaptive Levin search\n\n - Fastest algorithms for general problems\n\n - Optimal ordered problem solver\n\n - Goedel machines\n\n- [Algorithmic \"Martin-Loef\" Randomness](http:\/\/www.scholarpedia.org\/article\/Algorithmic_Randomness) (AR) \/ Recursion Theory\n\n - Recursion theory\n\n - Effective real numbers\n\n - Randomness of reals\n\n - van Mises-Wald-Church randomness\n\n - Martin-Loef randomness\n\n - More randomness concepts and relative randomness\n\n - Effective Hausdorff Dimension\n\n- [Applications of AIT](http:\/\/www.scholarpedia.org\/article\/Applications_of_AIT)\n\n - Minimum Description\/Message Length\n\n - Machine Learning\n\n - Artificial Intelligence\n\n - Computational Complexity\n\n - The Incompressibility Method\n\n - (Shannon) information theory\n\n - Reversible computing\n\n - Universal similarity metric\n\n - Thermodynamics\n\n - Entropy and Maxwell demon\n\n - Compression in nature\n\nI would like to thank Paul Vit\u00e1nyi for his help on improving the first draft of this 
article.","meta":{"dup_signals":{"dup_doc_count":21,"dup_dump_count":13,"dup_details":{"curated_sources":1,"2024-26":2,"2024-22":1,"2024-18":3,"2024-10":2,"2017-13":3,"2015-18":1,"2015-11":1,"2015-06":1,"2014-10":1,"2013-48":1,"2013-20":2,"unknown":2}},"filename":"out\/cs0703024_extract_ait.tex.md"},"subset":"arxiv"} +{"text":"author: Michele Trenti & Piet Hut\ntitle: N-body Simulations\n\n**Gravitational N-body Simulations**\n\nMichele Trenti[^1]\n\nSpace Telescope Science Institute, Baltimore, MD, 21210, U.S.\n\nAND\n\nPiet Hut\n\nInstitute for Advanced Study, Princeton, NJ, 08540, U.S.\n\n*published in Scholarpedia, 3(5):3930 \u2014 accepted May 20, 2008*\n\n**ABSTRACT**\n\nGravitational *N-body simulations*, that is numerical solutions of the equations of motions for N particles interacting gravitationally, are widely used tools in astrophysics, with applications from few body or solar system like systems all the way up to galactic and cosmological scales. In this article we present a summary review of the field highlighting the main methods for N-body simulations and the astrophysical context in which they are usually applied.\n\n# Introduction\n\nThe underlying dynamics relevant in the astrophysical context for of a system of N particles interacting gravitationally is typically Newton's law plus, in case, an external potential field (see however below for a discussion of N-body simulations in general relativity). The force $\\vec{F}_i$ acting on particle $i$ of mass $m_i$ is:\n\n$$\\label{eq:newton}\n\\vec{F}_i = - \\sum_{j \\ne i} G \\frac{m_i m_j (\\vec{r}_i-\\vec{r}_j)}{|\\vec{r_i}-\\vec{r_j}|^3 } - \\vec{\\nabla} \\cdot \\phi_{ext}(\\vec{r}_i),$$\n\nwhere $G=6.67300 \\cdot 10^{-11}$ $m^{3}$ $kg^{-1}$ $s^{-2}$ is the gravitational constant, and $\\phi_{ext}$ is the external potential. The problem is thus a set of non-linear second order ordinary differential equations relating the acceleration $\\partial^2 \\vec{r_i} \/\n\\partial t^2 = \\vec{F}_i \/m_i$ with the position of all the particles in the system.\n\nOnce a set of initial condition is specified (for example the initial positions $\\vec{r}_i$ and velocities $\\vec{v}_i \\equiv\n\\partial \\vec{r}_i \/ \\partial t$ of all particles) it exists a unique solution, analytical only for up to two bodies, while larger N require numerical integration (e.g. see Press et al. 2007). However special care must be employed to ensure both accuracy and efficiency. In fact, the gravitational force (eq. ) presents a singularity when the distance of two particles approaches 0, which can lead to arbitrarily large relative velocities. In addition, given the non-linear nature of the equations, the singularities are movable, that is they depend on the specific choice of initial conditions. In contrast, all singularities in linear ordinary differential equations are independent of initial conditions and thus easier to treat. Therefore constant timestep methods are unable to guarantee a given accuracy in the case of gravitational dynamics and lead to unphysical accelerations during close encounters, which in turn may create unbound stars. 
A shared adaptive timestep scheme can correctly follow a close encounter, but the price is paid in terms of efficiency as all the other particles of the system are evolved on the timescale of the encounter, which may be several orders of magnitude smaller than the global timescale, resulting essentially in a freezing of the system.\n\nThe singularity may be avoided by introducing a smoothing length in Eq.\u00a0 (e.g. see Aarseth 1963), that is by modifying the gravitational interaction at small scales. For example: $$\\vec{F}_i = - \\sum_{j \\neq i} \\frac{G m_i m_j\n (\\vec{r}_i - \\vec{r}_j)}{(|\\vec{r}_i - \\vec{r}_j|^2 + \\epsilon^2)^{3\/2}} ,$$ where $\\epsilon > 0$ is the softening, or smoothing length, that is a typical distance below which the gravitational interaction is suppressed. To minimize the force errors and the global impact of the softening for distances larger than $\\epsilon$, finite size kernels that ensure continuous derivatives of the force may be employed (e.g., see Dehnen 2001). This strategy effectively suppresses binary formation and strong gravitational interactions, but at the price of altering the dynamics of the system.\n\nThe computational complexity of the numerical solution of a N-body system for a fixed number of timesteps scales as $N^2$, as the evaluation of the force on each particle requires to take into account contributions from all other members of the system. For example, considering a single state of the art cpu core (speed $\\approx 5$ GFlops), a single force evaluation through a direct method would require about 1 second for a system with $N=10^4$ particles (assuming 10 floating point operations per pair of particles) and more than a week for $N=10^7$.\n\nThe arbitrarily large dynamic range in the unsoftened dynamics and the expensive evaluation of the force have led to the development of a wide number of numerical techniques aimed at obtaining a reliable numerical solution with the minimum amount of computational resources, depending on the astrophysical problem of interest. Here we start by discussing the different astrophysical contexts where N-body simulations are routinely employed and we then present the state of the art techniques for these problems.\n\n# Astrophysical domains and timescales\n\nN-body simulations are applied to a wide range of different astrophysical problems so that the most appropriate technique to use depends on the specific context, and in particular on the timescale and collisionality of the problem.\n\n## Timescales, Equilibrium and Collisionality\n\nA system of N particles interacting gravitationally with total mass M and a reference dimension R (for example the radius containing half of the total mass) reaches a dynamic equilibrium state on a timescale comparable to a few times the typical time ($T_{cr}$) needed for a particle to cross the system ($T_{cr} \\approx 1\/\\sqrt{GM\/R^3}$). This is the response time needed to settle down to virial equilibrium, that is $2K\/|W|=1$, where $K$ is the kinetic energy of the system: $K=1\/2\n\\sum_{i=1,N} m_i |\\vec{v}_i|^2$, and W is its potential energy: $W =\n- 1\/2 \\sum_{i \\ne j} G m_i m_j \/|\\vec{r}_i-\\vec{r}_j|$ (assuming no external field). If the system is initially out of equilibrium, this is reached through mixing in phase space due to fluctuations of the gravitational potential, a process called violent relaxation (Lynden-Bell 1967).\n\nOnce the system is in dynamic equilibrium a long term evolution is possible, driven by two-body relaxation. 
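A simple numerical diagnostic of the equilibrium condition just described is the virial ratio $2K/|W|$ computed directly from the particle data; the sketch below implements the definitions of $K$ and $W$ given above (illustrative names, with the double loop kept only for clarity).

```python
import numpy as np
from itertools import combinations

def virial_ratio(pos, vel, mass, G=6.674e-11):
    """Return 2K/|W|; values close to 1 indicate virial equilibrium."""
    K = 0.5 * np.sum(mass * np.einsum('ij,ij->i', vel, vel))
    W = 0.0
    # Summing over unordered pairs is equivalent to the 1/2 sum over i != j.
    for i, j in combinations(range(len(mass)), 2):
        W -= G * mass[i] * mass[j] / np.linalg.norm(pos[i] - pos[j])
    return 2.0 * K / abs(W)
```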
Energy is slowly exchanged between particles and the system tends to evolve toward thermodynamic equilibrium and energy equipartition. The timescale ($T_{rel}$) for this process depends on the number of particles and on the geometry of the system: $T_{rel} \\propto N\/log(0.11 N) T_{cr}$ (e.g. see Spitzer 1987). N-body systems such as galaxies and dark matter halos have a relaxation time much longer than the life of the Universe and are thus considered collisionless systems. Smaller systems, such as globular and open clusters, are instead collisional, as the relaxation time is shorter than their age. Two body relaxation is also suppressed when one particle in the system dominates the gravitational potential, such as in the case of solar system dynamics, where planets are essentially quasi-test particles.\n\nClose encounters between three or more particles not only contribute to energy exchange, but can also lead to the formation of bound subsystems (mainly binaries). The formation and evolution of a binary population is best followed through direct, unsoftened, N-body techniques.\n\nA self-gravitating N-body system made of single particles has a negative specific heat, that is it increases its kinetic energy as a result of energy losses (Lynden-Bell & Wood 1968). This is a consequence of the virial theorem and qualitatively it is analogous to the acceleration of a Earth artificial satellite in presence of atmospheric drag. A negative specific heat system is thermodynamically unstable and over the two body relaxation timescale it evolves toward a gravothermal collapse, creating a core-halo structure, where the core progressively increases its concentration, fueling an overall halo expansion. The collapse is eventually halted once three body interactions lead to the formation of binaries. The so called \"core collapsed globular clusters\" are considered to be formed as a result of this mechanism.\n\n## Mean field approach: the Boltzmann equation\n\nA system of N particles interacting gravitationally defines a 6N+1 dimensional phase space given by the N position and velocity vectors associated to each particle at each time t. The solution of the N-body problem defines a trajectory in this phase space. If the number of particles is large enough, that is if the two body relaxation time is long compared to the time-frame one is interested in, then a statistical description of the problem is possible. This allows us to pass from a 6N+1 to a 6+1 dimension phase space. The idea is to construct a mean field description of the dynamical system in terms of a single particle distribution function $f(\\vec{x},\\vec{v},t)$, where $f(\\vec{x},\\vec{v},t) d^3x d^3v$ is proportional to the probability of finding a particle in a 6D element of volume $d^3x d^3v$ centered around position $\\vec{r}$ and velocity $\\vec{v}$ at time t. Within this simplified framework the knowledge of the distribution function uniquely defines all the properties of the system. 
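A quick way to judge whether such a collisionless, mean-field description is appropriate for a given system is to compare the two-body relaxation time quoted earlier with the system's age. The back-of-the-envelope helper below uses the scaling $T_{rel} \propto N/\log(0.11\,N)\, T_{cr}$ and omits the proportionality constant (of order 0.1), so the numbers should be read as orders of magnitude only.

```python
import math

def relaxation_over_crossing(N):
    """T_rel / T_cr up to an order-0.1 prefactor (Spitzer 1987 scaling)."""
    return N / math.log(0.11 * N)

# A globular cluster (N ~ 1e6) relaxes within ~1e5 crossing times, i.e. well
# inside a Hubble time: collisional.  A galaxy or dark matter halo
# (N ~ 1e11) would need ~4e9 crossings: effectively collisionless.
print(relaxation_over_crossing(1e6), relaxation_over_crossing(1e11))
```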
The dynamic is described by the collisionless Boltzmann equation, which derives essentially from the Liouville theorem: $$\\frac{D f}{D t} = \\frac{\\partial f}{\\partial t} + \\vec{v} \\cdot \\frac{\\partial f}{\\partial \\vec{x}} - \\frac{\\partial \\phi_T}{\\partial \\vec{x}} \\cdot \\frac{\\partial f}{\\partial \\vec{v}}= 0,$$ where the total potential field $\\phi_T = \\phi_{ext}(\\vec{x},t)+\n\\phi(\\vec{x},t)$ is the sum of an external potential plus the self-consistent field $\\phi(\\vec{x},t)$ defined from the distribution function itself through the solution of the Poisson equation: $$\\nabla^2 \\phi(\\vec{x},t) = 4 \\pi G \\rho(\\vec{r},t),$$ where $\\rho(\\vec{r},t) = \\int f(\\vec{x},\\vec{v},t) d^3v$.\n\nGiven its high dimensionality (6+1), the collisionless Boltzmann equation is usually solved by sampling the initial distribution function and then by evolving the resulting N-body system by means of a numerical method that suppresses two body interactions at small scales. The interaction is softened not only for computational convenience to limit the maximum relative velocity during close encounters but especially to prevent artificial formation of binaries. This is because a simulation particle in a collisionless run represents in reality an ensemble of real particles (e.g. galaxies contain $10^{11}$ stars but simulations typically use only $N \\in\n[10^6:10^9]$). Note however that two body relaxation is driven by close as well as by distant encounters, so softening does not suppress it. In principle any numerical method that has a small scale softening is appropriate for following collisionless dynamics.\n\nA mean field description for an N-body system is possible also for collisional systems, that is when the relaxation time is comparable to or shorter than the timeframe of interest. In this case the collisionless Boltzmann equation is modified by the introduction of a collision operator $C[f]$ on its right side:\n\n$$\\frac{D f}{D t} = \\frac{\\partial f}{\\partial t} + \\vec{v} \\cdot \\frac{\\partial f}{\\partial \\vec{x}} - \\frac{\\partial \\phi_T}{\\partial \\vec{x}} \\cdot \\frac{\\partial f}{\\partial \\vec{v}}= C[f].$$\n\nIn this framework the operator $C[f]$ describes the probability for particles to enter\/leave a phase space element as a result of gravitational encounters. The collision operator C is generally constructed assuming that encounters are:\n\n1. Markov processes, that is C depends only on the present state of the system;\n\n2. local, that is only the velocity of the particles are changed and not their positions;\n\n3. weak, that is the typical velocity change is much smaller than the velocity itself.\n\nUnder these assumptions Monte Carlo methods are available to solve the dynamics of the system (see next section). Applications of the collision operator include dynamics of globular clusters and of self-interacting dark matter.\n\n## Mean Field Approach: analogies and differences with fluid dynamics\n\nThe velocity moments of the Boltzmann Equation define a set of equations known as the Jeans Equations (e.g. Binney & Tremaine 2007). The first three equations of the set are formally identical to the Navier-Stokes equations for a self-gravitating gas and, like in the fluid-dynamics analogy, express the conservation of mass, momentum and energy. 
Therefore the numerical algorithms developed to follow the dynamics of N-body systems find a wide application also in the context of fluid-dynamics, with one important example being the Smoothed Particle Hydrodynamics (SPH) method (Gingold & Monaghan 1977). The fundamental difference between the two cases is that the Jeans equations are derived in the limit of a collisionless system, while the Navier-Stokes equations assume a highly collisional system, with the mean free path of a particle approaching zero. For fluids, this leads to the definition of an equation of state, which closes the Navier-Stokes equations. The Jeans Equations are instead an infinite open set, where the *n-th* velocity moment depends on the *n-th+1* moment.\n\n## Astrophysical domains\n\nBased on the previous considerations about collisionality and timescales, four main astrophysical domains for N-body simulations can be identified, each requiring a different numerical technique to guarantee maximum performance and accuracy:\n\n*Celestial mechanics* (solar and extrasolar planetary systems). Here a single body dominates the gravitational field and all the other objects move almost like test particles, subject to reciprocal perturbations. In this framework very high accuracy is required to correctly evaluate the perturbative terms and to avoid being dominated by numerical noise such as time discretization and round-offs errors.\n\n*Dense stellar systems*, such as open clusters and globular clusters. These collisional systems made of components of roughly equal mass present a rich dynamics, with multiple close encounters of stars. The evolution requires to be followed on a relaxation timescale with a correct description of the short range interactions.\n\n*Sphere of influence of a massive BH* at the center of a stellar system. The sphere of influence of a BH is the volume within which the gravity of the BH dominates over that of the other particles. The situation resembles that of solar system dynamics, but here given the very high density of stars two body encounters are frequent, making the problem a difficult hybrid between the two previous cases. In addition, Post Newtonian physics may need to be included if high accuracy is required in the proximity of the BH.\n\n*Galaxy dynamics and cosmology*. Galaxies, and especially dark matter halos, are constituted by a very large number of particles, so that their dynamics can be well described in terms of a mean field. Close encounters are not important and softening is usually employed in these N-body simulations to avoid the unphysical formation of binaries. Within this class, Self-Interacting Dark Matter Particles need a special mention: if dark matter halos are made of Weakly Interacting Massive Particles, then their dynamics can be modified by non-gravitational self-interactions, especially effective at the center of cuspy dark halos. The dynamics of such a system is described by the Collisional Boltzmann Equation, which can be approximately solved using Fokker-Plank methods.\n\n# Newtonian gravity: methods\n\nThe history of N-body simulations starts with a pioneering attempt by Holmberg (1941), who followed the evolution of a 37 particle system, where the force was calculated using lightbulbs and galvanometers (taking advantage of the same $r^{-2}$ scaling of electromagnetic and gravitational interactions). Computer simulations started in the early sixties using up to 100 particles (e.g. 
see von Hoerner 1960 and Aarseth 1963) and had their full bloom in the eighties with the development of fast and efficient algorithms to deal with collisionless systems, such as particle-mesh codes (see Hockney & Eastwood 1988 and references therein) and the tree method (Barnes & Hut 1986). At the same time regularization techniques were developed to deal with close encounters and binary dynamics in the case of direct simulations of a collisional system (e.g. see Aarseth's NBODY-X code series based on KS and chain regularization - Aarseth 2003 and references therein). These algorithm advancements were coupled with tremendous progresses in the hardware, with the cpu speed growing exponentially. In addition to parallelization of serial codes, the field advanced also thanks to special purpose hardware, such as the GRAPE (Makino et al. 1997). Today's (2008) N-body simulations are performed with up to $N=10^5$ (e.g. see Baumgardt & Makino 2003) for direct integration codes over a two-body relaxation timescale and up to $N=10^{10}$ for collisionless dynamics\/cosmology (e.g. see the Millennium Run - Springel et al. 2005). In the context of planetary dynamics, self-gravitating systems of disk\/ring particles with $N\\approx 10^6$ can be followed over hundreds of dynamical times (e.g. Richardson et al. 2000). Major breakthroughs are also expected in the near future thanks both to the next generation GRAPE-DR and to double precision graphic processing units, which provide extremely cost competitive high performance computing capabilities.\n\n## Direct methods\n\nDirect methods do not introduce approximations in the solution of the equations of motions and thus deliver the highest accuracy at the price of the longest computation time, of order $O(N^2)$ per timestep. Integration is performed using adaptive (individual) timesteps and commonly a fourth order Hermite integrator. Close encounters and bound subsystems are treated exactly in terms of Kustaanheimo-Steifel transformations. These essentially consist in transformations of coordinates using a perturbative approach over the analytical two body solution. If more than two particles have a strong mutual interaction, then a chain regularization strategy (Mikkola 1990) can be used, which consists in recasting the problem in terms of a series of separate Kustaanheimo-Steifel interactions. A state of the art, publicly available, serial direct N-body integrator is Aarseth's NBODY6. Even with this specialized software, the number of particles that can be effectively followed for timescales comparable to the Hubble time is limited. For example, if one is interested in the dynamical evolution of globular clusters, currently about $N=20000$ is the practical limit for a serial run, as such a run takes about $1000$ cpu hours. A run with $10^6$ particles carried out for a similar number of relaxation times $T_{rel}$ would require about $10^8$ cpu hours. 
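For orientation, a shared, constant-timestep kick-drift-kick (leapfrog) loop is sketched below. This is only a pedagogical baseline: as noted above, production direct codes such as NBODY6 rely on fourth-order Hermite schemes with individual (block) timesteps and regularization of close pairs. All names, and the choice of a constant `dt`, are illustrative.

```python
import numpy as np

def leapfrog(pos, vel, mass, dt, n_steps, G=1.0, eps=1e-3):
    """Shared-constant-timestep kick-drift-kick integration (pedagogical only)."""
    pos, vel = np.array(pos, dtype=float), np.array(vel, dtype=float)

    def acc(p):
        d = p[:, None, :] - p[None, :, :]              # r_i - r_j
        r2 = np.einsum('ijk,ijk->ij', d, d) + eps**2
        np.fill_diagonal(r2, np.inf)                   # remove self-interactions
        return -G * np.einsum('ij,ijk->ik', mass / r2**1.5, d)

    a = acc(pos)
    for _ in range(n_steps):
        vel += 0.5 * dt * a                            # half kick
        pos += dt * vel                                # drift
        a = acc(pos)
        vel += 0.5 * dt * a                            # half kick
    return pos, vel
```

Note that the pairwise distance array makes the memory cost $O(N^2)$ as well, so this form is only viable for small $N$.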
The algorithm can be parallelized, but in practice load imbalances may saturate the gain in efficiency, so some of the most cpu demanding simulations have been carried out on special purpose hardware, such as the GRAPE, where the chip architecture has been optimized to compute gravitational interactions, thus delivering Teraflops performance.\n\n## Tree codes\n\nThe tree code method (Barnes & Hut 1986) provides a fast, general integrator for collisionless systems, when close encounters are not important and where the force contributions from very distant particles does not need to be computed at very high accuracy. In fact, with a tree code, small scale, strong interactions are typically softened (but see McMillan & Aarseth 1993), while the potentials due to distant groups of particles are approximated by multipole expansions about the group centers of mass. The resulting computation time that scale as $O(N log(N))$ but the approximations introduce some (small) errors. The errors in the long-range component of the gravitational acceleration are controlled by a single parameter (the so called opening angle) that determines how small and distant a group of particles must be to use the approximation. This strategy works well to keep the average force error low, but a worst case scenario analysis highlights that unbound errors can arise for rare, but astrophysically reasonable configurations, such as that of the classic \"exploding galaxy\" (Salmon & Warren 1994). In addition, force errors from the tree code may lead to violation of momentum conservation. Typical implementations of the tree code expand the potentials to quadrupole order and construct a tree hierarchy of particles using a recursive binary splitting algorithm. The tree does not need to be recomputed from scratch at every timestep, saving significant cpu time. Systems with several hundred thousands of collisionless particles can be easily simulated on a GFlops workstation for a Hubble time using this method.\n\n## Fast Multipole Methods\n\nA standard tree code implementation does not take advantage of the fact that nearby particles will be subject to a similar acceleration due to distant groups of particles. The Fast Multipole Method (Greengard & Rokhlin 1987) exploit this idea and uses a multipole expansion to compute the force from a distant source cell within a sink cell. This additional approximation of the gravitational interaction was claimed to reduce the complexity from $O(N log(N))$ to $O(N)$, but the exact scaling seems implementation dependent and has been debated in the literature (e.g. see Dehnen 2000 and references therein). One advantage of the fast multipole method is that the symmetry in the treatment of sink and source cells with respect to the multipole expansion can guarantee an exact conservation of the momentum. Recent successful implementations of fast multipole codes or hybrids with tree code scheme, include Dehnen's Cartesian expansion scheme (the GyrfalcON code- Dehnen 2000) and PKDGRAV (Stadel 2001).\n\n## Particle-mesh codes\n\nThe particle mesh method represents another route to speed up direct force evaluation for collisionless systems. In this case the gravitational potential of the system is constructed over a grid starting from the density field and by solving the associated Poisson equation. Particles do not interact directly between each other but only through a mean field. The method essentially softens the gravitational interactions at small scales, that is below the cell length. 
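In its simplest form, the tree-code acceptance test governed by the opening angle mentioned above reads as follows (a sketch only; actual codes apply the test recursively while walking the tree and often adopt more refined acceptance criteria).

```python
import numpy as np

def accept_cell(cell_size, cell_com, target_pos, theta=0.5):
    """Barnes-Hut criterion: use the cell's multipole expansion if the cell
    of side `cell_size`, with centre of mass `cell_com`, subtends an angle
    smaller than `theta` as seen from `target_pos` (i.e. s/d < theta);
    otherwise the cell must be opened and its children examined."""
    d = np.linalg.norm(np.asarray(target_pos) - np.asarray(cell_com))
    return cell_size < theta * d
```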
The density field is constructed using a kernel to split the mass of the particles to the grid cells around the particle position. The simplest choice is to assign all the mass to a single cell, but this leads to significant force fluctuations, which can be reduced using a cloud in cell (8 points) or a triangular shaped cloud (27 points) kernel. The Poisson equation is typically solved using a Fast Fourier Transform, but other grid methods such as successive overrelaxation can also be used - e.g. see Bodenheimer et al. (2007). The deriving force, defined on the grid, is then assigned back to the particles using the same kernel employed for the density field construction, in order to avoid spurious self forces. The method achieves a linear complexity in the number of particles and ($O(N_g log(N_g)$) in the number of grid cells (this latter scaling is that of the FFT method). The price to pay is in terms of short range accuracy as the force is a poor approximation of Newton's law up to several grid spacing of distance.\n\n## Adaptive Mesh Refinement method\n\nThe dynamic range of particle-mesh codes can be increased by using an adaptive rather than a static grid to solve the Poisson Equation. In the Adaptive Mesh Refinement (AMR) method the grid elements are concentrated where a higher resolution is needed, for example around the highest density regions. One possibility to obtain an adaptive resolution is to first construct a low-resolution solution of the Poisson Equation and then to progressively refine regions where the local truncation error (estimated through the Richardson extrapolation) is highest. A multigrid structure needs to take into account issues such as matching the solution at the grid interfaces. AMR codes are well suited for cosmological simulations (e.g. see the ENZO code, Bryan & Norman 1998).\n\n## Self consistent field methods\n\nA variant over the Particle Mesh code is the expansion of the density and potential of the system in terms of a basis of orthogonal eigenfunctions. Clutton-Brock (1972) was one of the first to apply this idea in stellar dynamics, while a modern implementation is that of Hernquist & Ostriker (1992). This method guarantees at fixed computational resources a higher accuracy than the tree code and the particle mesh algorithms, provided that the set of basis function is appropriately selected. This limits in practice a general application of the method, which remains however very competitive for the study of the dynamical stability of collisionless systems constructed from distributions functions models.\n\n## P3M and PM-Tree codes\n\nIn order to increase the force resolution of particle mesh codes it has been proposed to couple a mean field description on large scales with a direct, softened, treatment of the gravitational interactions on distances of the order of or below a few grid spacing. This method is called $P^3M$ (Hockney & Eastwood 1988): Particle-Particle-Particle-Mesh and efficiently increases the dynamic range of the parent PM algorithm. However in presence of strong clustering a large number of particles will interact directly between each other, slowing down significantly the computation to $O(N^2)$. This problem can be resolved by using adaptive meshes, so that the spatial resolution is refined in regions of high density. Adaptive $P^3M$ codes have a computational cost which scales as $O(N log(N))$, like in a tree code. 
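A minimal sketch of the particle-mesh pipeline described earlier (cloud-in-cell assignment followed by an FFT solution of the Poisson equation on a periodic cube) is given below. The names, the unit choices and the periodic-boundary assumption are illustrative; a real code would also difference the potential on the mesh and interpolate the resulting force back to the particles with the same kernel, as explained above.

```python
import numpy as np

def pm_potential(pos, mass, n_grid, box, G=1.0):
    """Cloud-in-cell density assignment + FFT Poisson solve on a periodic box."""
    h = box / n_grid
    rho = np.zeros((n_grid,) * 3)
    # CIC: each particle's mass is shared among the 8 surrounding cells,
    # with weights linear in the fractional offsets.
    x = (pos / h) % n_grid
    i0 = np.floor(x).astype(int)
    f = x - i0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = (np.where(dx, f[:, 0], 1 - f[:, 0]) *
                     np.where(dy, f[:, 1], 1 - f[:, 1]) *
                     np.where(dz, f[:, 2], 1 - f[:, 2]))
                np.add.at(rho,
                          ((i0[:, 0] + dx) % n_grid,
                           (i0[:, 1] + dy) % n_grid,
                           (i0[:, 2] + dz) % n_grid),
                          mass * w / h**3)
    # Poisson equation in Fourier space: phi_k = -4*pi*G*rho_k / k^2.
    k = 2 * np.pi * np.fft.fftfreq(n_grid, d=h)
    kx, ky, kz = np.meshgrid(k, k, k, indexing='ij')
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                      # avoid dividing by zero (mean mode)
    phi_k = -4 * np.pi * G * np.fft.fftn(rho) / k2
    phi_k[0, 0, 0] = 0.0                   # remove the zero mode
    return np.real(np.fft.ifftn(phi_k))
```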
Finally another possibility is to resort to a tree code for the short range force evaluation leading to a hybrid PM-Tree scheme. These methods are generally extremely well suited for cosmological simulations, for example see Gadget2 (Springel 2005).\n\n## Celestial mechanics codes\n\nComputational Celestial Mechanics refers to a series of methods targeted at studying the dynamics of small N systems ($N \\lesssim 20$). The smallest non trivial N is N=3, that is the three body problem, which has many applications ranging from space flight to planets satellite motions and to binary-single stars encounters. Celestial mechanics requires extremely high precision given the chaotic nature of the N-body problem. Numerical methods are based on the use of local system of coordinates, to fight round-off errors in systems with a wide dynamic range, such as in the study of star-planet-satellite problems, as well as on the variational equations formalism and on perturbation theory to take advantage of the analytical, unperturbed motion of planets in the gravitation field of their star (e.g. see Beutler 2005). In this context symplectic integrators are widely used (e.g. see Wisdom & Holman 1991; Leimkuhler & Reich 2005).\n\n# Mean Field Methods\n\nAs an alternative to particle based N-body methods, the dynamics of a system of particles interacting gravitationally can be followed by solving the time dependent Boltzmann Equation coupled with the self-consistent Poisson equation.\n\n## Grid based solvers for the Collisionless Boltzmann Equation\n\nThis approach can take advantage of standard computational methods developed to solve partial differential equations, such as successive over-relaxation and conjugate gradient methods. However it requires to solve a highly dimensional (6D+time) non-linear system of partial differential equations. In general, the bottleneck is thus the very large amount of memory needed (for example, Terabites just to have a moderate resolution grid with 100 elements in each dimension). However this method is competitive if the astrophysical problem of interest presents symmetries that reduce the number of dimensions needed in the model. For example, in the case of globular cluster dynamics a very good approximation can be obtained via a 3 dimensional model by assuming spherical symmetry in the position space (1D) and radial anisotropy in the velocity space (2D).\n\n## Fokker-Planck and Monte Carlo methods\n\nThese methods solve the collisional Boltzmann equation starting from a given distribution function and by following test particles in the six dimensional position-velocity phase space. At each timestep the velocity of the particles is perturbed by random fluctuations accordingly to the assumed form for the collision operator $C[f]$, which depends on computed cross sections for two, three and four body encounters. The complexity of Monte Carlo codes is linear with the number of particles and thus a realistic number of particles can be used for simulations of collisional systems with $N>10^5$ with a serial code. The method is ideal for exploring grids of initial conditions, after proper validation through comparison with direct integration (e.g. see Heggie et al. 2006).\n\n## Beyond Newton: strong gravitational fields\n\nIn presence of a strong gravitational field, such as that in the proximity of the event horizon of a black hole, N-body simulations cannot be based on Newtonian physics, but must take into account a general relativity framework. 
As a numerical solution of the Einstein equation is extremely challenging, Post-Newtonian approximations are used when the gravitational field does not deviate too much from the Newtonian case. Post-Newtonian corrections are typically good enough to treat most astrophysical problems of the dynamics of stars around a black hole. A full general relativity framework is only required to study the merging and gravitational waves emission of two black-holes (e.g. see Baker et al. 2006).\n\n# Hardware\n\nAn alternative approach to increase the efficiency of numerical solution of the N-body problem is to optimize the hardware. For direct simulations this approach can be very effective, thanks to the fact that the bottle neck of computation is just the evaluation of the gravitational force, which has a very simple expression. Along this route the GRAPE (GRavityPipE) concept has been extremely effective. The basic idea is to optimize a hardware pipeline to compute $(\\vec{r_i}-\\vec{r_j})\/| \\vec{r_i}-\\vec{r_j}|^3$. This special purpose hardware can then be interfaced with a general purpose computer, which takes care of all the other numerical operations required to solve the equations of motions. With the GRAPE-6, the largest simulation on a collisional timescale published to date has N=131028 (Baumgardt & Makino 2003).\n\nAnother recent promising hardware development is the possibility to use Graphic Cards (GPUs) to carry out the cpu intensive force evaluation. The performance of current generation of GPUs appears to be superior (in terms of Flops\/\\$ ratio) to that of the GRAPE6 series (Portegies-Zwart et al. 2007) even if one important limitation of GPUs is that they currently operate in single precision.\n\n# Simulation environments\n\nIn addition to the availability of stand-alone codes, several software environments have been created that contain various tools to set up initial conditions, run simulations, and analyze and visualize their results. Some examples are NEMO, Starlab, ACS and MUSE (see below for links to their web-pages).\n\n# Suggested readings\n\n## Books\n\n- \"Computer Simulation Using Particles\" Hockney, R.W. and Eastwood, J.W. 1988\n\n- \"Gravitational N-Body Simulations: Tools and Algorithms\" Aarseth, S. 2003\n\n- \"The Gravitational MillionBody Problem\" Heggie, D.C. and Hut, P. 2003\n\n- \"Methods of Celestial Mechanics\" Beutler, G. 2005\n\n- \"Numerical Recipes\" Press, W.H., Teukolsky, S.A., Vetterling, W.T. and Flannery B.P. 2007\n\n- \"Numerical Methods in Astrophysics: An Introduction\" Bodenheimer, P., Laughlin, G.P., Rozyczka, M. and Yorke, H.W. 2007\n\n## Review articles\n\n\\* \"Simulations of Structure Formation in the Universe\" Bertschinger, E. 1998, ARA&A, 36, 599\n\n## Web Material\n\n- \"The N-body Constitution\" by Lake, G, Katz, N., Quinn T. and Stadel. J. 
(http:\/\/www-hpcc.astro.washington.edu\/old_content\/siamhtml\/siamhtml.html)\n\n# Open source codes\n\n- Aarseth's direct integration codes: http:\/\/www.ast.cam.ac.uk\/s\u0303verre\/web\/pages\/nbody.htm\n\n- ACS, a collection of tools and introductory texts: http:\/\/www.artcompsci.org\/\n\n- ENZO, a cosmological AMR code: http:\/\/lca.ucsd.edu\/portal\/software\/enzo\n\n- Gadget2, a cosmological PM-tree+SPH code (massively parallel): http:\/\/www.mpa-garching.mpg.de\/gadget\/\n\n- Mercury (a mixed variable symplectic integrator code for planetary dynamics): http:\/\/www.arm.ac.uk\/j\u0303ec\/home.html\n\n- MUSE, a software framework for simulations of dense stellar systems: http:\/\/muse.li\/\n\n- NEMO collection (includes particle-grid and tree codes): http:\/\/bima.astro.umd.edu\/nemo\/\n\n- Starlab (including the direct integration Kira code): http:\/\/www.ids.ias.edu\/s\u0303tarlab\/starlab.html\n\n# Acknowledgments\n\nWe thank Douglas Heggie, Derek Richardson and two anonymous referees for useful comments and suggestions. Further suggestions and comments are very welcome as it is in the spirit of Scholarpedia to keep the articles up-to-date.\n\n# References\n\n1. Aarseth, S. 1963, MNRAS, 126, 223\n\n2. Aarseth, S. 2003, \"Gravitational N-Body Simulations: Tools and Algorithms\", Cambridge University Press\n\n3. Baker, J.G. et al. 2006, ApJ, 653, 93\n\n4. Barnes, J.E. and Hut, P. 1986, Nature, 324, 466\n\n5. Baumgardt, H. and Makino, J. 2003, MNRAS, 340, 227\n\n6. Beutler G. 2005, \"Methods of Celestial Mechanics\", Springer\n\n7. Binney J. & Tremaine S. 1987, \"Galactic Dynamics\", Princeton University Press\n\n8. Bodenheimer, P., Laughlin, G.P., Rozyczka, M. and Yorke, H.W. 2007, \"Numerical Methods in Astrophysics: An Introduction\", Taylor & Francis\n\n9. Bryan G.L. and Norman, M.L. 1998, ApJ, 495, 80\n\n10. Clutton-Brock, M. 1972, Ap&SS, 16, 101\n\n11. Dehnen, W. 2001, MNRAS, 324, 273\n\n12. Dehnen, W. 2000, ApJL, 536, 39\n\n13. Gingold, R.A. and Monaghan, J.J. 1977, MNRAS, 181, 375\n\n14. Greengard, L. & Rokhlin, V. 1987, J. comput. Phys., 73, 325\n\n15. Heggie, D.C., Trenti, M. and Hut, P. 2006, MNRAS, 368, 677\n\n16. Hernquist, L and Barnes, J.E. 1990, ApJ, 349, 562\n\n17. Hockney, R.W. and Eastwood, J.W. 1988, \"Computer Simulation Using Particles\", Taylor & Francis\n\n18. Holmberg, E. 1941, ApJ, 94, 385\n\n19. Leimkuhler, B. and Sebastian R. 2005, \"Simulating Hamiltonian Dynamics\", Cambridge University Press\n\n20. Lynden-Bell, D. 1967, MNRAS, 136, 101\n\n21. Lynden-Bell, D. and Wood, R. 1968, MNRAS, 138, 495\n\n22. Makino, J., Fukushige, T., Koga, M. and Namura, K. 2003, PASJ, 55, 1163\n\n23. Portegies-Zwart, S.F., Belleman, R.G. and Geldof, P.M. 2007, New Astronomy, 12, 641\n\n24. Press, W.H., Teukolsky, S.A., Vetterling, W.T. and Flannery B.P. 2007, \"Numerical Recipes\", Cambridge University Press\n\n25. Richardson, D.C., Quinn, T., Stadel, J. and Lake, G. 2000, Icarus, 143, 45\n\n26. Salmon J.K. and Warren M.S. 1994, J. Comp. Phys., 111, 136.\n\n27. Spitzer, L. 1987, \"Dynamical Evolution of Globular Clusters\", Princeton University Press\n\n28. Springel, V. 2005, MNRAS, 364, 1105\n\n29. Springel, V. et al. 2005, Nature, 435, 629\n\n30. Stadel J. 2001, PhD. Thesis, University of Washington\n\n31. von Hoerner, S. 1960, Z. Astrophys. 50, 184\n\n32. Wisdom, J. and Holman, M. 
1991, AJ, 102, 1528\n\n[^1]: email@example.com","meta":{"dup_signals":{"dup_doc_count":23,"dup_dump_count":12,"dup_details":{"curated_sources":2,"2024-26":1,"2024-18":2,"2024-10":4,"2017-13":3,"2015-18":1,"2015-11":1,"2015-06":1,"2013-48":2,"2013-20":3,"2024-30":1,"unknown":2}},"filename":"out\/0806.3950.tex.md"},"subset":"arxiv"} +{"text":"abstract: Citation networks have been widely used to study the evolution of science through the lenses of the underlying patterns of knowledge flows among academic papers, authors, research sub-fields, and scientific journals. Here we focus on citation networks to cast light on the salience of homophily, namely the principle that similarity breeds connection, for knowledge transfer between papers. To this end, we assess the degree to which citations tend to occur between papers that are concerned with seemingly related topics or research problems. Drawing on a large data set of articles published in the journals of the American Physical Society between 1893 and 2009, we propose a novel method for measuring the similarity between articles through the statistical validation of the overlap between their bibliographies. Results suggest that the probability of a citation made by one article to another is indeed an increasing function of the similarity between the two articles. Our study also enables us to uncover missing citations between pairs of highly related articles, and may thus help identify barriers to effective knowledge flows. By quantifying the proportion of missing citations, we conduct a comparative assessment of distinct journals and research sub-fields in terms of their ability to facilitate or impede the dissemination of knowledge. Findings indicate that Electromagnetism and Interdisciplinary Physics are the two sub-fields in physics with the smallest percentage of missing citations. Moreover, knowledge transfer seems to be more effectively facilitated by journals of wide visibility, such as Physical Review Letters, than by lower-impact ones. Our study has important implications for authors, editors and reviewers of scientific journals, as well as public preprint repositories, as it provides a procedure for recommending relevant yet missing references and properly integrating bibliographies of papers.\nauthor: Valerio Ciotti; Moreno Bonaventura; Vincenzo Nicosia; Pietro Panzarasa; Vito Latora\ntitle: Homophily and missing links in citation networks\n\n# Introduction\n\nAmong the broad category of information networks, including the Word Wide Web , email exchange networks , and phone call networks , the networks of citations between academic papers have been widely investigated to uncover patterns and dynamics of knowledge transfer, sharing, and creation in science . The nodes of citation networks are academic papers, each containing a bibliography with references to previously published work. Typically, a directed link is established from one paper to another if the former cites the latter in its bibliography. Because papers can only cite other papers that have already been published, all directed links in citation networks necessarily point backward in time. 
Citation networks are therefore *directed acyclic graphs*, i.e., they do not contain any closed loops of directed links .\n\nSince the seminal work by Derek de Solla Price on the distribution of citations received by scientific articles , citation networks have extensively been studied to shed light on the mechanisms underpinning the evolution, diffusion, recombination, and sharing of knowledge over time . The reason why citation networks are crucial to understanding and modelling scientific production is clear. Although citations can serve different functions \u2013 for instance, they acknowledge the relevance of previous work, they help the reader of a paper to gather additional information about a specific topic, they point to related work or, sometimes, they can also express disagreement with, or level criticism against, a position endorsed in a paper \u2013 the number of citations received is generally regarded as an indication of the relevance and quality of a paper as well as of its authors' prestige and scientific success . Certainly, citation networks can be used to reconstruct the communication flows among different scientific communities and infer the relation among different research topics and sub-fields . Recent work on citation networks has indeed proposed a new method for highlighting the role of citations as conduits of knowledge. For instance, Clough et al. \u00a0 have proposed reduction methods to filter out the relevant citations preserving the causal structure of the underlying network and of knowledge flows.\n\nIn this paper, we study citations from a different perspective. First, we assess the extent to which the occurrence of a citation between two papers is driven by the similarity between them. Specifically, we investigate empirically a large data set of articles published in the journals of the American Physical Society (APS) , and we measure the similarity between any two articles by drawing on, and extending, a method originally proposed by Tumminello et al. in Ref.\u00a0 that enables us to statistically validate the overlap between the bibliographies of the two articles. Results suggest that the number citations made by one article to another is indeed an increasing function of the similarity between the two articles. Our findings thus indicate that the creation of links in citation networks can be seen as governed by *homophily*, namely the principle that similarity breeds connection .\n\nSecond, we propose a novel method for identifying missing links in citation networks. The gist of our argument is simple. We focus on pairs of articles characterised by high degrees of similarity; if a citation between them is missing, we regard the lack of a directed link as a signature of a relevant yet unrecorded flow of knowledge in the network. By uncovering pairs of published articles with missing citations, we rank the APS journals and topics according to the incidence of missing data on knowledge flows.\n\nOur method has important implications for the analysis not only of published articles, but also of newly posted preprints on online archives, or of manuscripts submitted to scientific journals. Specifically, our method can be used to suggest interesting work and relevant literature that could, in principle, be included in the bibliography of recently posted or submitted preprints. 
As we witness a continuously increasing production of preprints and publication of new articles, it has become particularly difficult for authors to keep abreast of scientific developments and relevant works related to the domain of interest. As a result, lack of knowledge of prior or current related work and missing relevant citations may occur quite often. The method presented in this paper can help the scientific community precisely to address this problem. In particular, it can be used not only by authors to integrate the bibliographies of their work, but also by editors of scientific journals to uncover missing citations and identify the appropriate reviewers for the papers they are considering for publication.\n\nThe paper is organised as follows. In Section\u00a0, we introduce and discuss our method for evaluating similarity between articles based on the statistical significance of the overlap between their respective bibliographies. In Section\u00a0, we apply our method to all articles published in the journals of the APS. We show that citations between articles are positively correlated with their similarity, and we then identify missing links between similar articles published in different fields and in different journals. In Section\u00a0, we summarise our findings and discuss implications, limitations, and avenues for future work. Finally, in Section\u00a0, we describe the data set and the validation technique used in our analysis.\n\n# Quantifying similarity between articles\n\nSimilarity between two articles can be measured in a number of ways. A straightforward, yet labour-intensive way of comparing articles is to semantically analyse their entire texts. Alternatively, similarity can be simply based on the co-occurence of a few relevant concepts or keywords in the titles or abstracts of the articles. Moreover, similarity can be measured through the co-occurrence of classification codes, such as those included in the Physics and Astronomy Classification Scheme (PACS), which help identify the research areas to which each article belongs . Here, we propose an alternative measure of similarity based on the comparison between the bibliographic lists of references included in two articles. Our hypothesis is that, if two articles are concerned with related aspects of the same discipline or research problem, then their bibliographies will exhibit a substantial overlap. We shall therefore introduce a method for assessing the statistical significance of the overlap between the lists of references of two articles, and we shall then use the statistically validated overlap as as measure of the similarity between the two articles.\n\n## Overlap between reference lists as a measure of similarity between articles\n\nA natural way to quantify the overlap between two given sets $Q_i$ and $Q_j$ is the Jaccard index, which is defined as the ratio between the number of common elements in the two sets and the total number of elements in the union of the two sets: $$J_{ij} = \\frac{ |Q_i \\cap Q_j|}{|Q_i \\cup Q_j|}.\n\\label{eq:jacard}$$ Notice that, in general, if two sets share a higher number of elements, then their Jaccard index will increase, and in particular $J_{ij} = 1$ only if $Q_i \\equiv Q_j$, while $J_{ij} = 0$ if the two sets do not share any element. An example of the suitability of the Jaccard index for measuring the similarity between the bibliographies of two papers is provided in Fig.\u00a0(a)-(b). 
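In code the index reduces to a single set operation; the sketch below (reference lists passed as any iterables of identifiers, names illustrative) reproduces the toy cases discussed next.

```python
def jaccard(refs_i, refs_j):
    """Jaccard index between two reference lists."""
    refs_i, refs_j = set(refs_i), set(refs_j)
    if not (refs_i or refs_j):
        return 0.0
    return len(refs_i & refs_j) / len(refs_i | refs_j)

# e.g. jaccard({'a', 'b', 'c'}, {'c', 'd', 'e'}) == 0.2
# (one shared reference out of five in total, as for articles P1 and P2).
```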
Here the two sets $Q_i$ and $Q_j$ represent, respectively, the articles in the two reference lists of the two articles $i$ and $j$. Since article P1 and article P2 share only one reference over a total of five, their Jaccard index is equal to $0.2$. Conversely, the two articles P3 and P4 in panel (b) have a Jaccard index equal to $1.0$, since the overlap between their reference lists is complete.\n\nHowever, the use of the Jaccard index has some drawbacks. First, the value of $J_{ij}$ is always bounded from above by $\\frac{\\min(|Q_i|,\n |Q_j|)}{|Q_i| + |Q_j|}$. This means that if the sizes of the two sets are remarkably different, their similarity is primarily determined by the size of the smallest of the two sets. As a consequence, large sets tend to be characterised by relatively small values of similarities with other smaller sets. In addition to this, the Jaccard index does not distinguish between pairs of identical sets having different sizes. In particular, if we consider two identical sets $(Q_i, Q_j)$ of size $N_1$ and two other identical sets $(Q_m,\nQ_n)$ of size $N_2$, then we have $J_{ij} = J_{mn} = 1$, regardless of the values of their sizes $N_1$ and $N_2$. For instance, the Jaccard index of articles P5 and P6 is equal to $1.0$ and is identical to that of articles P3 and P4, even though P3 and P4 share a larger number of references. In the case of bibliographic references, this degeneracy of the Jaccard index is very important. In fact, if we interpret references as proxies for knowledge flows from cited to citing articles, then it would be reasonable to associate a higher value of similarity to a pair of articles that share a large number of references than to a pair sharing only few references, since the former pair is expected to draw on a more similar scientific background. In particular, we would expect the two articles in panel (b) to be assigned a value of similarity larger than the two articles in panel (c).\n\nAnother drawback of a bare count of the number of common references is that some citations can, in principle, be more important than others. Consider the two cases depicted in Fig.\u00a0(d)-(e). In panel (d), articles P7 and P8 have an identical set of references, consisting in the citation of a single highly-cited article. Also in panel (e), both articles P9 and P10 cite the same article. However, in this case the cited article does not receive any citation from other articles. Now, since our aim is to quantify the similarity between articles, a citation to a highly-cited paper, such as a review article, should be considered less relevant than a citation to a more specialised or less visible article, which is cited only by articles concerned with a certain specific topic. In other words, it would be preferable to associate a higher relevance to the single citation shared by articles P9 and P10 in Fig.\u00a0(e) than to the citation to other highly cited articles shared by articles P7 and P8 in Fig.\u00a0(d), and thus to conclude that articles P9 and P10 are more similar than article P7 and P8.\n\n## Defining statistically significant bibliographic overlaps\n\nThe method we propose here allows us to overcome the drawbacks of the Jaccard index discussed above and illustrated in Fig.\u00a0. The method is based on an extension of the so-called *Statistically Validated Network (SVN)* approach to the case of directed unipartite graphs. 
Statistically Validated Networks were introduced by Tumminello et al.\u00a0 as a method to filter out statistically irrelevant information from bipartite graphs, such as user-item networks deriving from purchase systems or product reviews. In such systems, a set $A$ of nodes (e.g., buyers, users) express preferences over another set $B$ of nodes (e.g., books, movies, services). Those preferences or selections are represented by directed links from nodes in set $A$ to nodes in set $B$. The idea behind SVNs is that the similarity between two nodes $i$ and $j$ in the set $A$ can be expressed in terms of the co-occurrence of their selections of nodes in $B$, and in particular that it is possible to attach a statistical significance, namely a $p$-value, to each set of common selections made by $i$ and $j$.\n\nCitation networks are not bipartite graphs. They are also different from user-item networks because each article in general can only cite other articles that have already been published, and can only receive citations from other articles that will be published after its publication date. Nevertheless, it is possible to draw upon the same idea used to construct bipartite statistically validated networks, and define a similarity between two articles based on the overlap between their reference lists.\n\nLet us consider two sets of nodes, $A$ and $B$. The set $A$ contains all the articles with more than zero outgoing citations, $A = \\{ i \\in\nV \\, | \\, k_i^{\\rm out}>0\\}$, while the set $B$ contains all the articles that have received at least two citations, $B = \\{ i \\in V \\,\n| \\, k_i^{\\rm in}>1\\}$. It is worth noticing that $A\\cap B \\neq\n\\emptyset$, i.e., the two sets may share some articles, since in general each article cites and is cited by other articles. We denote by $N_A = |A|$ and $N_B=|B|$ the cardinality of the two sets. The method associates a statistical significance to the similarity between a pair of nodes $(i,j)$ in $A$ by comparing the number of co-occurrences of citations in their reference lists against the null hypothesis of random co-occurrence of citations to one or more articles in $B$. In this way, the method allows us to identify pairs of nodes in $A$ characterised by overlaps between citations to elements in $B$ which are statistically different from those expected in the null model.\n\nThe method works as follows. For each value $k$ of in-degree observed in the citation network, we consider the set of nodes $S^k = S^k_B\n\\cup S^k_A$, where $S^k_B \\subset B$ contains all $N_B^k = |S^k_B|$ articles with in-degree equal to $k$, and $S^k_A \\subset A$ contains all articles that cite at least one element in $S^k_B$. Notice that the set $S^k$ is, by construction, homogeneous with respect to the in-degree of the elements belonging to the set $B$. Then, for each pair of articles $i,j\\in S_A^{k}$, we indicate by $d_i$ and $d_j$ their respective number of citations directed towards the elements of $S^k_B$. 
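The bookkeeping just described can be made explicit with a short helper that, for a given in-degree $k$, extracts $S^k_B$, $S^k_A$ and the counts $d_i$ from a list of directed citations; all names are illustrative and, for simplicity, the sketch ignores the time ordering of publications.

```python
from collections import defaultdict

def degree_k_sets(citations, k):
    """Return (S_k_B, S_k_A, d) for a given in-degree k.

    citations : iterable of directed (citing, cited) pairs
    S_k_B     : articles whose in-degree equals k
    S_k_A     : articles citing at least one element of S_k_B
    d         : d[i] = number of references of i that fall inside S_k_B
    """
    in_degree = defaultdict(int)
    references = defaultdict(set)
    for citing, cited in citations:
        in_degree[cited] += 1
        references[citing].add(cited)
    S_k_B = {a for a, deg in in_degree.items() if deg == k}
    d = {i: len(refs & S_k_B) for i, refs in references.items() if refs & S_k_B}
    return S_k_B, set(d), d
```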
Under the hypothesis that the articles $i$ and $j$ cite, respectively, $d_i$ and $d_j$ distinct elements uniformly at random from $S^k_B$, the probability that they select the same $X$ articles is given by the hypergeometric probability function:\n\n$$\\mathcal{P}(X \\,| \\,N_B^k,d_i ,d_j) = \\frac{{{d_i}\\choose{X}} {{N_B^k-d_i}\\choose{d_j - X}}}{{{N_B^k}\\choose{d_j}}}.\n\\label{hyper}$$ Thus, we can associate a $p$-value to each pair of nodes $i,j\\in S_A^{k}$:\n\n$$q_{ij}(k) = 1 - \\sum_{X=0}^{N_{ij}^k -1} \\mathcal{P}(X \\, | \\,\nN_B^k,d_i,d_j),\n\\label{eq:p_value}$$ where $N_{ij}^k$ is the measured number of references that $i$ and $j$ have in common in the set $S^k_B$. The $p$-value, $q_{ij}(k)$, is therefore the probability that the number of articles in the set $S^k_B$ that both $i$ and $j$ happen to jointly cite by chance is $N_{ij}^k$ or more. We repeat the procedure for all possible values of in-degree $k$ from $k_{\\rm min}$ to $k_{\\rm max}$, so that each pair of articles $(i,j)$ is, in general, associated with several $p$-values, one for each value of in-degree $k$ of the articles in their reference lists. Once all the $p$-values have been computed, we set a significance threshold $p^*$ and validate all the pairs of nodes that are associated with a $p$-value smaller than the threshold $p^*$. Given a value of the statistical threshold, only the validated pairs of articles are considered similar at that significance level.\n\nHowever, because each pair of articles $(i,j)$ can be associated with multiple $p$-values, it is necessary to perform hypothesis-testing multiple times. In this case, if we choose a confidence level or significance threshold $p^*$, say $1\\%$ confidence level ($p^*=0.01$), the various $p$-values associated with the same pair of nodes are not compared directly with the chosen significance threshold $p^*$, but with a rescaled threshold that appropriately takes the number of tests performed into account. As a method for multiple testing we use the False Discovery Rate (FDR)\u00a0 (see Section\u00a0 for details). Ultimately, we identify the set $\\mathcal{M}(p^*)$ of all pairs of nodes whose similarity is statistically significant at the confidence threshold $p^*$. In what follows, we shall denote by $M(p^{*})=\\left|\\mathcal{M}(p^*)\\right|$ the cardinality of such set. In principle, since each pair of articles $(i,j)$ can belong to different sets $S^k$ (and, as a result, can be associated with several $p$-values $q_{ij}(k)$), it would be possible to define a similarity weight $w_{ij}(p^*)$ for each pair $(i,j)$ as the number of times that the pair is validated at the confidence threshold $p^*$. In other words, $w_{ij}(p^*)$ would be the number of sets $S^{k}$ for which $q_{ij}(k)$ passes the statistical test. However, we do not consider this possibility here, but simply assume that a pair of articles $(i,j)$ belongs to the set $\\mathcal{M}(p^*)$ if at least one of the $p$-values $q_{ij}(k)$ passes the statistical test at the confidence threshold $p^*$.\n\nNotice that the definition of the $p$-value associated with a pair of articles in terms of the hypergeometric null model provided in Eq.\u00a0 does not depend on the order in which two articles are assessed. The resulting symmetric value of similarity between any two papers is rooted in the invariance of the hypergeometric distribution in Eq.\u00a0 under permutation of the pair $i$ and $j$, i.e., of the two quantities $d_i, d_j$. 
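Both ingredients of the validation step just described, the hypergeometric $p$-value and a multiple-testing correction, are straightforward to compute. The sketch below uses `scipy.stats.hypergeom` for the former and, for the latter, the standard Benjamini-Hochberg procedure as one common way of controlling the False Discovery Rate; the exact FDR variant adopted here may differ, so the second function should be read as an assumption.

```python
import numpy as np
from scipy.stats import hypergeom

def p_value(n_common, n_B_k, d_i, d_j):
    """q_ij(k): probability of sharing n_common or more references to
    articles of in-degree k under random, independent selection."""
    # Survival function at n_common - 1 gives P(X >= n_common).
    return hypergeom.sf(n_common - 1, M=n_B_k, n=d_i, N=d_j)

def fdr_mask(p_values, alpha=0.01):
    """Benjamini-Hochberg selection at level alpha: True for validated tests."""
    p = np.asarray(p_values, dtype=float)
    m = len(p)
    order = np.argsort(p)
    passed = p[order] <= alpha * np.arange(1, m + 1) / m
    n_keep = (np.nonzero(passed)[0].max() + 1) if passed.any() else 0
    keep = np.zeros(m, dtype=bool)
    keep[order[:n_keep]] = True
    return keep

# e.g. p_value(4, 50, 5, 5) < p_value(2, 50, 5, 5): with everything else equal,
# a larger overlap is less likely to arise by chance, hence more significant.
```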
Moreover, Eq.\u00a0 rectifies some of the problems of measures of similarity based on a bare count of co-occurrences. In particular, two articles that share a small number $N_{ij}^k$ of citations will be assigned a higher $p$-value (i.e., a smaller statistical significance of their similarity) than two articles sharing a large number of citations. This means that, for instance, the $p$-value $q_{P3, P4}(2)$ associated with the pairs of articles $(P3, P4)$ in Fig.\u00a0(b) will be smaller than the $p$-value $q_{P5,P6}(2)$ associated with the pair of articles $(P5,\nP6)$ in Fig.\u00a0(c), since $P3$ and $P4$ share a larger number of references (namely, four instead of two) to other articles each receiving two citations. Moreover, the $p$-value associated with the pair $(P7, P8)$ will be larger (i.e., the similarity between the pair is less statistically significant) than the $p$-value associated with the pair $(P9, P10)$. The reason lies in the fact that, according to the hypergeometric null-model, the co-occurrence of a reference to a highly-cited article is more likely to take place by chance than the co-occurrence of a reference to an article with a relatively small number of citations.\n\n# Results\n\nWe now show how the proposed method for assigning a statistical significance level to the similarity between any pair of articles based on the statistically validated overlap between the respective bibliographies can indeed turn very useful and help uncover important properties of a citation network.\n\nAs an example of the possible applications of the method, we analyse the citation network among articles published in the journals of the APS during the period between 1893 and 2009. The data set is described in detail in Section\u00a0. We shall start by studying empirically the probability $P_{i\\to j}(p^*)$ of the occurrence of a citation from an article $i$ to an article $j$ validated at a certain statistical threshold $p^*$. We shall then discuss how the method can be used to identify missing and potentially relevant references and also to rank journals and scientific topics based on the relative occurrence of missing citations.\n\n## Homophily in citation patterns\n\nWe start from the observation that if we consider progressively smaller values of the statistical threshold $p^*$, the set $\\mathcal{M}(p^*)$ will shrink and contain only pairs of articles characterised by an overlap between bibliographies that is highly significant, since it has passed a more stringent statistical test. Thus, small values of $p^*$ single out pairs of articles that have a highly significant combination of common cited articles. But if two articles share significantly similar bibliographies, then there is a high probability that they are concerned with the same topic or research problem. As a result, it would be reasonable to expect a citation to occur from the more recently published article to the one published at an earlier date. For each value of the statistical threshold $p^*$, we computed the number of pairs of articles $M(p^*)$ validated at that threshold in the APS citation network, and the number $K(p^*)$ of existing citations between those validated pairs. Then, we define the probability $P_{i\\to j}(p^*)$ that there exists a citation between any two articles whose similarity is validated at the threshold $p^*$ as: $$P_{i\\to j}(p^*) = \\frac{K(p^*)}{M(p^*)}.$$\n\nThe obtained values of $P_{i\\to j}(p^*)$ are reported in Fig.\u00a0 as a function of $p^*$. 
The plot clearly suggests that the probability of finding a citation between two articles characterised by a highly statistically significant overlap between the respective reference lists (i.e., the similarity between that pair of articles is validated at a small value of $p^*$) is higher than the probability of finding a citation between articles whose reference lists are only moderately significantly similar. For instance, a citation between a pair of articles $(i,j)$ whose overlap between reference lists is validated at $p^*=10^{-2}$ occurs only with probability $P_{i\\to j}\\simeq 0.35$, while citations occur within up to $73\\%$ of the pairs of articles validated at $p^*=10^{-7}$. In other words, the probability that an article $i$ cites another article $j$ is an increasing function of the similarity between the two articles.\n\nIn the social sciences, the principle that similarity breeds connection is traditionally referred to as homophily. This principle has been documented in a variety of empirical domains . It is interesting to observe that homophily can also be found to govern citation networks where it plays an important role in shaping the structure and evolution of knowledge transfer between academic papers.\n\n## Suggesting missing references\n\nThe identification of a statistically significant similarity between two articles can be used to uncover potentially missing references. For instance, the implementation of a recommendation procedure based on statistically significant overlaps between bibliographies might be useful to assist the editor of a scientific journal in suggesting a list of possibly relevant (and missing) references to the authors of a submitted paper.\n\nFig.\u00a0 shows a typical problem that could be fruitfully addressed through an appropriate reference recommendation system based on the identification of statistically significant overlaps between bibliographies of papers. We report a subgraph of the APS citation network consisting of several pairs of articles validated at $p^{*}=10^{-7}$. Each article is represented as a node, and validated pairs of nodes are connected through a link. The color of each link indicates whether the older article was (green) or was not (red) cited by the more recent one. Note that there is a prevalence of green links, which is consistent with the fact that, for a significance level $p^*=10^{-7}$, a citation between a validated pair of articles occurs in more than $73\\%$ of the cases (see Fig.\u00a0). However, we notice that article A has a considerable number of missing citations, resulting from the fact that it was not cited by any of the four articles that were published after its publication date and with which it shares a statistically significant portion of its bibliography (namely, nodes C, D, E, F). This could mean that either the authors of articles C-F were not aware of the existence of article A, despite the substantial overlap between their reference lists, or that article A was not particularly relevant to the topics addressed in the other articles.\n\nSurprisingly, a more in-depth analysis of the articles in Fig.\u00a0 suggests that, not only did all of them appear in the same journal (Physical Review E), but indeed they are all concerned with the same topic (electric discharges) and share a relatively large fraction of PACS codes (05.45.-a, 52.80.Hc). The high degree of similarity between topics can also be easily inferred from the abstracts and introductions of these articles. 
Interestingly, we found that articles B-F (yellow nodes) were all co-authored by the same research group $G_1$, while article A (the only blue node) was the result of the work of a different research group $G_2$. The fact that also article A does not cite article B suggests that the researchers in group $G_1$ were likely to be unaware of the work conducted by group $G_2$ in the same research field, and vice-versa.\n\nIn this particular case, the quantification of statistically significant overlaps between bibliographies could have been used to facilitate the flow of knowledge between different research groups. For instance, the editor of Physical Review E or the selected reviewers could have brought article B to the attention of the authors of article A, and similarly, when articles C-F were submitted to the same journal, the editor or the reviewers could have advised the authors of group $G_2$ to include article A in the bibliographies of their submitted papers.\n\n## Ranking journals and disciplines by (lack of) knowledge flows\n\nSo far our analysis has been focused on the whole APS citation network. Physics is a very broad disciplinary area, including sub-fields as diverse as atomic physics, astronomy, particle physics, statistical mechanics, just to mention a few . It is therefore reasonable to perform our analysis of the probability $P_{i\\to j}(p^*)$ at the level of sub-fields. Specifically, we argue that the percentage $P_{i\\to\n j}(p^*)$ of citations occurring between pairs of articles associated with a similarity that is validated at the statistical threshold $p^*$ can serve as a proxy for the knowledge flows taking place within a sub-field. In what follows we restrict our analysis to the six citation sub-graphs induced by the articles published in each of the six research journals published by APS (in order to quantify the ability of each journal to facilitate or impede the dissemination of knowledge), and to the ten sub-graphs associated with the highest levels in the PACS taxonomy (which could shed light on the typical patterns of knowledge dissemination in different sub-fields). The lack of knowledge flows within a journal or a sub-field at a certain confidence level $p^*$ can be quantified by the fraction of missing links:\n\n$$U(p^*) = 1 - \\frac{K(p^*)}{M(p^{*})} = 1 - P_{i\\to j}(p^*).$$\n\nIn general, the lower the value of $U(p^*)$, the more likely it is that a citation occurs between a pair of articles characterised by a similarity validated at the statistical threshold $p^*$. Fig.\u00a0(a)-(b) shows how $U(p^*)$ behaves as a function of $p^*$, respectively, for all articles whose main PACS code is either in group 40 (Electromagnetism) or in group 50 (Gases and Plasmas), and for all the articles published in Physical Review Letters and in Physical Review C. The figure clearly shows that, even though in all cases $U(p^*)$ decreases when $p^*\\to 0$, different journals and different sub-fields tend to be characterised by slightly different profiles of $U(p^*)$, namely by different propensities to obstruct knowledge flows between similar academic papers. A comparative assessment of journals and sub-fields according to their typical ability to facilitate the dissemination of knowledge would, of course, be based on $\\frac{K(p^*)}{M(p^{*})}$. 
Moreover, the ranking will in general depend on the chosen value of the statistical threshold $p^*$.\n\nFrom a theoretical point of view, a suitable approach to the ranking would be to compute the quantity:\n\n$$U_{0} = \\lim_{p^*\\to 0} U(p^*),$$ namely the limiting value of $U(p^*)$ when we let the statistical threshold $p^*$ go to zero. However, this quantity cannot be computed accurately for a finite network, since for a certain value $p^*>0$ the number $M(p^*)$ of validated pairs at $p^*$ will be equal to $0$, and the ratio $\\frac{K(p^*)}{M(p^*)}$ would therefore be undetermined. Here we employ a simple workaround, namely we consider the tangent at the curve $U(p^*)$ at the smallest value of $p^*$ for which the number of validated pairs is still large enough for the construction of a network of a reasonable size (we found that $10^{-7}$ is an appropriate choice in our case), and we compute the intercept at which this tangent crosses the vertical axis. The value obtained is denoted as $\\widetilde{U}_{0}$, and is used as an approximation of $U_{0}$. The procedure used to determine $\\widetilde{U}_{0}$ is sketched in Fig.\u00a0(c).\n\nIn Fig.\u00a0(d)-(e) we report the ranking induced by $\\widetilde{U}_{0}$ respectively for the ten high-level families of PACS codes (panel d) and for the journals published by APS (panel e). It is worth noticing that Electromagnetism and Interdisciplinary Physics are the two sub-fields with the smallest percentage of missing links, i.e., those in which knowledge flows effectively among articles (and authors), as would be expected if the occurrence of citations were driven by overlaps between topics or research problems. Interestingly, the rate of occurrence of missing citations in Physical Review C ($\\widetilde{U}_0\\simeq 0.27$) is almost nine times as large as the one observed in Physical Review Letters ($\\widetilde{U}_0\\simeq 0.03$), which is the APS journal with the widest visibility and largest impact.\n\n# Conclusions\n\nIn our study we have proposed a novel method for quantifying the similarity between papers based on their bibliographies. The identification of a statistically significant similarity between papers can be used to uncover potentially interesting or relevant references that are missing from their bibliographies. Our method can thus assist the authors of scientific papers in compiling a list of relevant references, or the editors and reviewers of scientific journals in suggesting otherwise neglected references to the authors of manuscripts submitted for publication. Moreover, public preprint repositories, such as arXiv.org, could automatically quantify the similarity between the bibliography of a newly posted paper and the bibliographies of all other papers in their data set, and then propose a list of papers that the authors might find relevant to their work. The implementation of a recommendation procedure based on statistically significant overlaps between bibliographies might also facilitate the dissemination of scientific results within a scientific field. Problems such as the one shown in Fig.\u00a0 can be aptly overcome through the use of our method that enables missing and relevant references to be promptly identified.\n\nSince our analysis was based on the APS data set, the evaluation of the similarity between any two articles was restricted to the overlap between the citations the two papers made only to other papers published in the APS journals. 
The assessment of similarity could not therefore reflect the entire bibliographies of the two articles. This limitation can be easily overcome through further analysis of other citation networks extracted from different data sets, such as ISI Web of Science or arXiv.org. Moreover, our framework can be extended beyond the domain of citations between academic papers, and used for uncovering missing and potentially relevant links in other citation networks, such as those between patents or between US Supreme Court verdicts .\n\n# Materials and Methods\n\n## The APS data set\n\nThe APS data set includes bibliographic information on all the articles published by the American Physical Society between 1893 and 2009 . The citation graph $G=(V,E)$ includes $|V| = 450,084$ articles, and $|E| = 4,710,547$ directed links. The citations refer only to articles that have been published in APS journals. For each article we extracted the publication date, the main research subject (according to the PACS taxonomy), and its bibliography. Each article belongs to a specific journal. We restrict the analysis to the six major journals, namely Physical Review A, B, C, D, E and Physical Review Letters, which are specialised in different sub-fields of physics.\n\nWe performed our analysis at three levels, namely the entire citation network, the sub-graphs of the citation network induced by articles in each of the ten main sub-fields of physics, as identified by the highest levels of the PACS hierarchy, and the six sub-graphs induced by articles published in Physical Review Letters and in Physical Review A-E. In our analysis, we discarded articles that appeared in Reviews of Modern Physics, which publishes almost exclusively review articles. In Table\u00a0 we report the description of the ten main categories in the PACS taxonomy and the topics covered by each of the six journals here considered.\n\n```latex\n\\begin{table*}[h!]\\caption{The scientific domains associated with the PACS codes and journals}\n \\begin{tabularx}{\\textwidth}{c|X}\n \\hline\n \\\\\n \\textbf{PACS code} & \\textbf{Domain} \\\\\n \\\\ \n \\hline\n \\\\\n 00 & General \\\\\n 10 & The Physics of Elementary Particles and Fields\\\\\n 20 & Nuclear Physics\\\\\n 30 & Atomic and Molecular Physics\\\\\n 40 & Electromagnetism, Optics, Acoustics, Heat Transfer, Classical Mechanics, and Fluid Dynamics \\\\\n 50 & Physics of Gases, Plasmas, and Electric Discharges\\\\\n 60 & Condensed Matter: Structural, Mechanical and Thermal Properties\\\\\n 70 & Condensed Matter: Electronic Structure, Electrical, Magnetic, and Optical Properties \\\\\n 80 & Interdisciplinary Physics and Related Areas of Science and Technology\\\\\n 90 & Geophysics, Astronomy, and Astrophysics\\\\\n \\\\\n \\hline\n \\\\\n \\textbf{Journal} & \\textbf{Domain}\\\\\n \\\\ \n \\hline\n \\\\\n Physical Review A & Atomic, molecular, and optical physics\\\\ \n Physical Review B & Condensed matter and materials physics\\\\ \n Physical Review C & Nuclear physics\\\\ \n Physical Review D & Particles, fields, gravitation, and cosmology\\\\ \n Physical Review E & Statistical, non-linear, and soft matter physics\\\\\n Physical Review Letters & Moving physics forward\\\\\n \\\\ \n \\hline \n \\end{tabularx}\n \\label{MacroPacs}\n\\end{table*}\n```\n\n## False Discovery Rate (FDR) statistical test\n\nThe validation of a given pair $(i,j)$ in the FDR method is performed as follows\u00a0.
We set a statistical threshold $p^*$ and we assume that there are in total $N_t$ tests. Then, the $p$-values of different tests are first arranged in increasing order $(q_1 < q_2\n<...< q_{N_t})$, and the rescaled threshold is obtained by finding the largest $t_{max}$ such that $$q_{t_{max}} < \\frac{p^* t_{max}}{N_t},$$ where $N_t$ is the number of tests. In this specific case, $N_t$ is the number of distinct pairs of papers that are tested over all the sets $S^{k}$ of in-degree classes in the citation network. Then we compare each $p$-value $q_{ij}(k)$ with the rescaled threshold, and we validate the pair $(i,j)$ if $q_{ij}(k) < p^*\n\\,t_{max} \/N_t$.","meta":{"dup_signals":{"dup_doc_count":17,"dup_dump_count":16,"dup_details":{"curated_sources":2,"2023-14":1,"2022-49":1,"2022-33":1,"2022-05":1,"2021-39":1,"2021-17":1,"2021-04":1,"2020-45":1,"2020-34":1,"2020-16":1,"2020-10":1,"2019-47":1,"2019-39":1,"2023-40":1,"2024-22":1}},"filename":"out\/1511.07643_extract_homophily_missing_links.tex.md"},"subset":"arxiv"} +{"text":"author: Arseni\u00a0Goussev$^1$, Rodolfo\u00a0A.\u00a0Jalabert$^2$, Horacio\u00a0M.\u00a0Pastawski$^3$, and Diego\u00a0Wisniacki$^4$ \n \n$^1$Max Planck Institute for the Physics of Complex Systems, N\u00f6thnitzer Stra\u00dfe 38, D-01187 Dresden, Germany \n$^2$Institut de Physique et Chimie des Mat\u00e9riaux de Strasbourg, UMR 7504, CNRS-UdS, \n23 rue du Loess, BP 43, 67034 Strasbourg Cedex 2, France \n$^3$Instituto de F\u00edsica Enrique Gaviola (CONICET-UNC) and Facultad de Matem\u00e1tica Astronom\u00eda y F\u00edsica, \nUniversidad Nacional de C\u00f3rdoba, Ciudad Universitaria, C\u00f3rdoba 5000, Argentina \n$^4$Departamento de F\u00edsica, FCEyN, UBA, Ciudad Universitaria, Buenos Aires C1428EGA Argentina\ndate: 2024-09-30\ntitle: Loschmidt Echo\n\nThe Loschmidt echo is a measure of the revival occurring when an imperfect time-reversal procedure is applied to a complex quantum system. It allows to quantify the sensitivity of quantum evolution to perturbations. An initial quantum state $| \\psi_0 \\rangle$ evolves during a time $t$ under a Hamiltonian $H_1$ reaching the state $|\n\\psi_t \\rangle$. Aiming to recover the initial state $| \\psi_0\n\\rangle$ a new Hamiltonian $-H_2$ is applied between $t$ and $2t$. Perfect recover of $| \\psi_0 \\rangle$ would be achieved by choosing $H_2$ to be equal to $H_1$. This is not possible in realistic setups, and there always appears a difference between $H_2$ and $H_1$, leading to a non-perfect recovery of the initial state. The forward evolution between $t$ and $2t$ under the Hamiltonian $-H_2$ is equivalent to a backward evolution from $t$ to 0 under $H_2$, which embodies the notion of time-reversal. The Loschmidt echo studies are focused on the cases where the dynamics induced by the Hamiltonians $H_1$ and $H_2$ are non-trivial, or sufficiently complex (like that of a classically chaotic one-particle system or a many-body system).\n\n# Introduction\n\nThe concept of time-reversal has captured the imagination of physicists for centuries, leading to numerous vivid discussions. An emblematic example of these was the controversy around the second law of thermodynamics between Ludwig Boltzmann and Joseph Loschmidt. When Boltzmann was trying to develop the microscopic theory of the second law of thermodynamics, Loschmidt raised an objection that had profound influence on the subsequent development of the theory. 
He argued that, due to the time-reversal invariance of classical mechanics, evolutions in which the entropy decreases must exist. The corresponding states could be reached by reversing the velocities of all the molecules of the system. After such a reversal, the entropy would no longer grow but decrease, seemingly violating the second law of thermodynamics. As a response, Boltzmann argued that such a time-reversal experiment would be impossible and put forward a statistical interpretation of the second law. Undoubtedly, Boltzmann's argument is a valid approach to the problem of the \"arrow of time\" for generic macroscopic systems. However, in quantum systems with few degrees of freedom, today's technological advances make it meaningful to address time-reversal experiments.\n\n## Pioneering experiments\n\n**Refs.\u00a0**\n\nThe first controlled experimental implementation of time-reversal was achieved in the fifties with the inversion of nuclear spins precessing around a magnetic field. Such a reversal was able to compensate for the local-field inhomogeneities responsible for the decay of the free induction signal. The time-reversal was implemented through a radio-frequency pulse, leading to the formation of a revival in the induction signal, known as the spin echo or Hahn echo. Early on, Erwin Hahn recognized that his procedure, which he viewed as a change in the sign of the system Hamiltonian, provided a quantum implementation of the Loschmidt proposal. Interactions between spins, not reversed in Hahn's procedure, were the main cause of the decay of the echo, within a time scale known as $T_2$. This experiment was followed by a number of variants, both within magnetic resonance and in other time-dependent spectroscopies. In particular, the dynamical decoupling techniques implemented in quantum registers to isolate them from their environment are variants of Hahn's original experiment.\n\nThe next level of complexity was reached with the reversal of the many-body interactions. No general recipe is available in this case, but the sign of the truncated spin-spin dipolar interaction, which is associated with the cosine of the angle between the inter-spin vector and the quantizing magnetic field, can be reversed by rotating the spins into a new quantization axis. This was the proposal of the Magic Echo procedure, implemented by Won-Kyu Rhim, Alex Pines and John Stewart Waugh in the seventies. Substantially simpler is the Polarization Echo sequence, implemented by Richard Ernst and collaborators in the nineties. In this last case the initial state has a local nature, as it is labeled by the presence of a rare $^{13}$C which plays the role of a local probe to inject and later on detect the polarization of a nearby $^1$H immersed in a $^1$H network.\n\nThis idea was further exploited by Patricia Levstein, Horacio Pastawski and Gonzalo Usaj to test the stability of many-body dynamics.
They suggested that the inefficiency of the time-reversal procedure found in all the previous experiments has a connection with quantum chaos and the inherent dynamical complexity of the many-body spin system.\n\n## Definition\n\n**Refs.\u00a0**\n\nThe Loschmidt echo is defined as $$M(t) = \\left| \\langle \\psi_0 | e^{i H_2 t \/ \\hbar} e^{-i H_1 t \/\n \\hbar} | \\psi_0 \\rangle \\right|^2\n\\label{LE_definition}$$ where\n\n- $| \\psi_0 \\rangle$ is the state of the system at time 0\n\n- $H_1$ is the Hamiltonian governing the forward evolution\n\n- $H_2$ is the Hamiltonian governing the backward evolution\n\n- $t$ is the instant at which the reversal takes place\n\nThe time evolution appearing in Eq.\u00a0() is schematically represented in Fig.\u00a0(a), where the Loschmidt echo quantifies the degree of irreversibility. Alternatively, Eq.\u00a0() can be interpreted as the overlap at time $t$ of two states evolved from $| \\psi_0 \\rangle$ under the action of the Hamiltonian operators $H_1$ and $H_2$. In this case the Loschmidt echo is a measure of the sensitivity of quantum evolution to perturbations. The quantity $M(t)$ interpreted in this manner is illustrated in Fig.\u00a0(b) and is usually referred to as fidelity. The equivalence between Loschmidt echo and fidelity is displayed in Fig.\u00a0 for the example of an initially localized wave-packet in a Lorentz gas. The probability density of the evolved state under $H_1$ ($H_2$) is plotted in Fig.\u00a0 (b) (c), while Fig.\u00a0(d) presents the state resulting from the combined evolution $H_1$ and then $-H_2$ both for a time $t$. The overlap squared between the states (a) and (d) defines the Loschmidt echo, while the overlap squared between the states (b) and (c) defines the fidelity. It is important to remark that even though the probability density distributions in figures (b) and (c) are seemingly identical, the phase randomization due to the difference between the two Hamiltonians leads to a weak Loschmidt echo.\n\nThe definition given by Eq.\u00a0() assumes that the Hamiltonian operators $H_1$ and $H_2$ are independent of time. A generalization to the case of time dependent Hamiltonians is straightforward. The case of non-Hermitian operators $H_1$ and $H_2$ can also be considered, but in such a case the equivalence between the Loschmidt echo and fidelity would not hold.\n\n## Loschmidt echo and decoherence\n\n**Refs.\u00a0**\n\nIn an isolated quantum system the evolution is unitary; initially pure states remain pure. When the system is connected to the external world the purity of an initial state is generically lost in the course of time evolution. This ubiquitous process is referred to as decoherence. The coupling to the environment degrees of freedom, or alternatively the lack of complete knowledge of the system Hamiltonian, typically wash the specific interference properties that would be observed within a unitary evolution. Since interference is one of the most important signatures of quantum mechanics, decoherence has both, fundamental and practical interest. Concerning the former aspect, decoherence has been proposed as a road towards the classical behavior observed in macroscopic systems. On the second aspect, it is clear that decoherence represents a limitation in the implementation of quantum computers or nanoscopic devices based on quantum effects, specially when the number of qubits is scaled up, increasing the complexity of the system. 
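As a concrete, deliberately minimal illustration of the definition of M(t) given above (a sketch only: the random-matrix Hamiltonians, the perturbation and all parameter values are assumptions made here for illustration, not part of the original studies), one can evaluate the echo by direct matrix exponentiation:

```python
# Minimal numerical illustration of the Loschmidt echo definition (a sketch; the GOE-like
# Hamiltonian, the perturbation and all parameters are illustrative assumptions).
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
N, kappa, hbar = 200, 0.05, 1.0

def goe_like(n):
    """Real symmetric random matrix, standing in for a complex Hamiltonian."""
    a = rng.normal(size=(n, n))
    return (a + a.T) / 2.0

H1 = goe_like(N)
H2 = H1 + kappa * goe_like(N)        # imperfect reversal: H2 differs from H1 by kappa*Sigma

psi0 = rng.normal(size=N) + 1j * rng.normal(size=N)
psi0 /= np.linalg.norm(psi0)         # normalized initial state

def loschmidt_echo(t):
    forward = expm(-1j * H1 * t / hbar) @ psi0   # evolve with H1 from 0 to t
    back = expm(1j * H2 * t / hbar) @ forward    # imperfect backward evolution with H2
    return abs(np.vdot(psi0, back)) ** 2         # overlap with the initial state, squared

# echo_curve = [loschmidt_echo(t) for t in np.linspace(0.0, 5.0, 51)]
```

Averaging such curves over realizations of the uncontrolled part of the Hamiltonian gives the ensemble-averaged echo discussed in the following sections.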
The Loschmidt echo, by including a non-controlled part of the Hamiltonian, or the noisy effects of the environmental degrees of freedom, gives a way to quantify decoherence effect. By attempting the time-reversal of the controlled part of the Hamiltonian, one can quantify at which rate the neighborhood of the system (either a unitary part of larger system Hamiltonian or though the effect of infinite uncontrolled environmental degrees of freedom) acts as decoherence. Moreover, other quantifiers of decoherence, such as for instance the purity, can be shown, under certain assumptions, to be characterized by the same dependence on the underlying classical dynamics as the Loschmidt echo.\n\n## Broad interest of the Loschmidt echo\n\nSince the beginning of this century, the Loschmidt echo has been a subject of intensive studies by researchers from different scientific communities. A list of problems, in which the concept of the Loschmidt echo appears naturally, includes\n\n- Quantum chaos, or the quantum theory of classically chaotic systems\n\n- Decoherence or the emergence of \"classical world\" (open quantum systems)\n\n- Quantum computation and quantum information\n\n- Spin echo in Nuclear Magnetic Resonance\n\n- Linear waves (Elastic waves, microwaves, time-reversal mirrors, etc.)\n\n- Nonlinear waves (Bose-Einstein condensates, Loschmidt cooling, etc.)\n\n- Statistical mechanics of small systems\n\n- Quantum chemistry and molecular dynamics\n\n- Quantum phase transitions and quantum criticality\n\n- Mathematical aspects of the Loschmidt echo\n\nSuch broad interest stems from the fact that the Loschmidt echo is a measurable quantity exhibiting, in certain regimes, a robust and universal behavior. It is by means of the forward and backward time-evolutions that the Loschmidt echo filters out some uninteresting effects, while amplifying other, more important physical processes taking place in complex quantum systems.\n\n# Loschmidt echo: techniques and main results\n\nThe Loschmidt echo is typically a decreasing function of $t$. It is therefore of foremost importance to determine the form of the decay and the associated characteristic times in various physical situations as a function of the characteristic quantities of the problem. Even though the Loschmidt echo has been addressed in a variety of scenarios, the main effort has been directed towards the simplest set-up of one-body dynamics. Well established results for the case of one-body systems, together with techniques adapted from the field of quantum chaos, are summarized in this section.\n\n## Characteristic quantities\n\nA number of physical quantities, describing characteristic time and energy scales of a system under consideration, prove to be especially important in the theoretical studies of the Loschmidt echo. These quantities include\n\n- $g$, *mean density of states.* \u2013 By definition, $g(E) =\n \\frac{1}{(2\\pi\\hbar)^{d}} \\frac{{\\rm d}V(E)}{{\\rm d}E}$, where $V(E)$ is the volume of the phase-space enclosed by a surface of constant energy $E$. The mean level spacing $\\Delta$ is given by an inverse of the density of states, $\\Delta = 1\/g$. Note that $g$ is the smooth part (or average) of the full density of states, $g_{\\mathrm{full}} (E) = \\sum\\limits_n \\delta(E-E_n)$, where $\\{E_n\\}$ are the eigenenergies of the Hamiltonian.\n\n- $g_{\\mathrm{local}}$, *local density of states.* \u2013 Let $|\n \\psi \\rangle$ be a reference quantum state. 
The local density of states with respect to $| \\psi \\rangle$ is defined as $g_{\\mathrm{local}} (E, | \\psi \\rangle) = \\sum\\limits_n \\big|\n \\langle n | \\psi \\rangle \\big|^2 \\delta (E-E_n)$, where $\\{|n\\rangle\\}$ and $\\{E_n\\}$ are respectively the eigenstates and eigenenergies of the Hamiltonian in question.\n\n- $N$, *effective size of the Hilbert space for a quantum state.* \u2013 An initial quantum state $| \\psi_0 \\rangle$ of a closed quantum system can be viewed as a linear combination of $N$ eigenstates of the Hamiltonian. In a $d$-dimensional system, $N$ is given by the volume of the phase space, accessible to $| \\psi_0\n \\rangle$ in the course of its time evolution, divided by the volume of the Planck's cell, $(2\\pi\\hbar)^d$.\n\n- $\\lambda$, *mean Lyapunov exponent.* \u2013 In classical systems with chaotic dynamics, a distance between two typical trajectories, initiating at infinitesimally close points in phase-space, grows exponentially with time. The rate of this growth, averaged over all trajectory pairs at a given energy, is the mean Lyapunov exponent $\\lambda$. The latter characterizes dynamical instability of a classical system with respect to perturbations of its initial conditions.\n\n- $t_{\\mathrm{E}}$, *Ehrenfest time.* \u2013 The center of an initially localized wave packet stays close to the phase-space trajectory of the corresponding classical particle for times shorter than the so-called Ehrenfest time, $t_{\\mathrm{E}}$. The latter, in chaotic systems, can be estimated as $t_{\\mathrm{E}} =\n \\frac{1}{\\lambda} \\ln \\frac{L}{\\sigma}$, where $\\lambda$ is the mean Lyapunov exponent of the classical system, $L$ is the characteristic linear size of the system, and $\\sigma$ is the initial dispersion of the wave packet.\n\n- $t_{\\mathrm{H}}$, *Heisenberg time.* \u2013 The discreteness of the energy spectrum of a closed quantum system becomes dynamically important at times comparable to (and longer than) the so-called Heisenberg time, $t_{\\mathrm{H}} = \\hbar\/\\Delta = \\hbar g$, where $\\Delta$ and $g$ are the mean energy level spacing and the mean density of states, respectively. Semiclassical (short-wavelength) approximations to quantum dynamics are known to break down beyond the Heisenberg time.\n\n## Calculational techniques\n\n### Semiclassics\n\n**Refs.\u00a0**\n\nIn this context, semiclassics stands for the short-wavelength approximation to the quantum evolution in terms of classical trajectories. The key ingredient is the Van Vleck-Gutzwiller approximation to the propagator (matrix element of the evolution operator in the position representation) $$\\langle {\\bf q}' | e^{-i H t \/ \\hbar} | {\\bf q} \\rangle = \\left(\n\\frac{1}{2\\pi i \\hbar} \\right)^{d\/2} \\sum_{\\gamma ({\\bf q} \\rightarrow\n {\\bf q}', t)} C_\\gamma^{1\/2} \\exp\\left( \\frac{i}{\\hbar} R_\\gamma -\n\\frac{i \\pi}{2} \\nu_\\gamma \\right)\n\\label{VanVleck-Gutzwiller}$$\n\n- $\\gamma$ stands for the classical trajectories going from ${\\bf\n q}$ to ${\\bf q}'$ in time $t$.\n\n- $R_\\gamma = \\int_0^t d\\tau \\mathcal{L}_{\\gamma}$ is the Hamilton's principal function for the trajectory $\\gamma$. 
Here, $\\mathcal{L}_{\\gamma}$ denotes the Lagrangian along $\\gamma$.\n\n- $C_\\gamma = |\\det (-\\nabla_{{\\bf q}'} \\nabla_{{\\bf q}} R_\\gamma)|$ is the stability factor of $\\gamma$.\n\n- $\\nu_\\gamma$ is the number of conjugate points along $\\gamma$.\n\n- $d$ is the number of dimensions of the position space.\n\nA combination of Eqs.\u00a0() and () leads to the semiclassical approximation to the Loschmidt echo, $$\\includegraphics[width=4in]{fig-LE-semiclassics}\n\\label{LE_semiclassics}$$ Here, the summand is a product of terms with the following pictorial representation:\n\n- A red circle at a point ${\\bf q}_j$ corresponds to $\\langle {\\bf q}_j | \\psi \\rangle$.\n\n- A blue circle at a point ${\\bf q}_j$ corresponds to $\\langle \\psi | {\\bf q}_j \\rangle$. That is, every blue circle is a complex conjugate of the corresponding red circle.\n\n- A solid curve leading from ${\\bf q}_j$ to ${\\bf q}_k$ along a trajectory $\\gamma$ corresponds to $(2\\pi i \\hbar)^{-d\/2} \\, C_{\\gamma}^{1\/2} \\exp \\big( i R_{\\gamma} \/ \\hbar - i \\pi \\nu_{\\gamma} \/ 2 \\big)$. The Hamiltonian $H_1$ is used if $\\gamma$ equals $\\alpha_1$, and the Hamiltonian $H_2$ is used if $\\gamma$ equals $\\beta_2$.\n\n- A dashed curve leading from ${\\bf q}_j$ to ${\\bf q}_k$ along a trajectory $\\gamma$ corresponds to $(- 2\\pi i \\hbar)^{-d\/2} \\, C_{\\gamma}^{1\/2} \\exp \\big( -i R_{\\gamma} \/ \\hbar + i \\pi \\nu_{\\gamma} \/ 2 \\big)$. The Hamiltonian $H_1$ is used if $\\gamma$ equals $\\alpha_2$, and the Hamiltonian $H_2$ is used if $\\gamma$ equals $\\beta_1$. In other words, every dashed curve is a complex conjugate of the corresponding solid curve.\n\nIn Loschmidt echo studies one is typically interested in the case of $H_2$ being close to $H_1$. It is common to describe the perturbation by an operator defined as $$\\kappa \\Sigma = H_2 - H_1 \\,,\n\\label{perturbation}$$ where $\\kappa$ parametrizes its strength.\n\nThe expression in Eq.\u00a0() is, in general, extremely complicated to calculate, as it involves six spatial integrals and a sum over four classical trajectories. Semiclassical evaluations of the Loschmidt echo deal with different approximations to Eq.\u00a0(). The following steps are usually implemented in the semiclassical calculations in order to obtain meaningful analytical results:\n\n- If the initial state is spatially localized around a point ${\\bf q}_0$, the diagram of Eq.\u00a0() can be approximated by a simpler diagram in which the points ${\\bf q}_1$, ${\\bf q}_2$, ${\\bf q}_3$, and ${\\bf q}_4$ merge into ${\\bf q}_0$. Within this approximation there are only two spatial integrals to be done (over ${\\bf q}_5$ and ${\\bf q}_6$).\n\n- Assuming that the perturbation $\\kappa \\Sigma$ is classically small but quantum mechanically significant (which means that the perturbation does not change the topology of the trajectories but introduces a phase difference), trajectories $\\alpha_j$ and $\\beta_j$ are identified for $j=1,2$. This is the so-called **diagonal approximation**. The shadowing theorem ensures that the identification of classical trajectories $\\alpha$ and $\\beta$ is always possible and that the resulting action difference can be obtained from the accumulated phase of the perturbation along one of the trajectories.
For instance, in the case where $\\Sigma$ only depends on position coordinates $$\\label{eq:DeltaRgamma}\n \\Delta R_{\\gamma} = - \\kappa \\int_0^t {\\rm d}\\tau \\ \\Sigma({\\bf\n q}_{\\gamma}(\\tau)) \\ ,$$ for $\\gamma=\\alpha_1,\\alpha_2$. The diagonal approximation reduces Eq.\u00a0() to a sum over only two trajectories, $\\alpha_1$ and $\\alpha_2$, of the unperturbed system.\n\n- A key step that allows to make further progress in the semiclassical calculation is to regroup the resulting pair of trajectories $(\\alpha_1,\\alpha_2)$ into two different families:\n\n - Trajectories where ${\\bf q}_5$ is near ${\\bf q}_6$ and $\\alpha_1$ evolves close to $\\alpha_2$.\n\n - The rest of pairs, where the two trajectories are uncorrelated.\n\nThe semiclassical calculation usually proceeds by estimating the accumulated phases along the trajectories for the two kinds of resulting pairs, and then performing some kind of average of $M(t)$ (over the perturbation, the initial conditions or the evolution of the classical trajectories). In the case of complex classical dynamics these averages are justified and allow to obtain the mean value $\\overline{M(t)}$ of the Loschmidt echo in terms of the main parameters of problem.\n\nAn alternative semiclassical formulation for fidelity amplitude $m(t)$ (defined throught $M(t)=|m(t)|^2$) which avoids the usual trajectory-search problem of the standard semiclassics, is the so called dephasing representation $$\\label{eq:odr}\n m(t)=\\int {\\rm d}{\\bf q} \\, {\\rm d}{\\bf p} \\, \\exp\\left(-\\frac{i}{\\hbar}\n \\Delta R({\\bf q},{\\bf p},t)\\right) \\, W_0({\\bf q},{\\bf p})$$ where $$\\label{eq:defWF}\n W_0({\\bf q},{\\bf p}) = \\frac{1}{(2\\pi\\hbar)^d}\\int {\\rm d}\n (\\delta\\mathbf{q}) \\, \\exp\\left(-\\frac{i}{\\hbar} \\mathbf{p} \\cdot\n \\delta\\mathbf{q}\\right) \\, \\langle {\\bf\n q}+\\frac{\\delta\\mathbf{q}}{2}|\\psi_0\\rangle \\langle \\psi_0| {\\bf\n q}-\\frac{\\delta\\mathbf{q}}{2}\\rangle$$ is the Wigner function of the initial state $|\\psi_0\\rangle$ and $$\\label{eq:DeltaRdeph}\n \\Delta R({\\bf q},{\\bf p},t)=\\int_0^t {\\rm d}\\tau [\\mathcal{L}_2({\\bar\n {\\bf q}}(\\tau),{\\bar {\\bf p}}(\\tau))-\\mathcal{L}_1({\\bar {\\bf\n q}}(\\tau), {\\bar {\\bf p}}(\\tau))]$$ is the action difference evaluated along the phase-space trajectory $\\big({\\bar {\\bf q}}(\\tau), {\\bar {\\bf p}}(\\tau) \\big)$ evolved from $\\big({\\bf q},{\\bf p}\\big)$ under the unperturbed Hamiltonian $H_1$. In this way the decay can be attributed to the dephasing produced by the perturbation of the actions \u2013thus the name dephasing representation. In the case where $\\Sigma$ only depends on position coordinates (as in Eq.\u00a0()) the phase difference () reads $$\\Delta R({\\bf q},{\\bf p},t) = - \\kappa \\int_0^t {\\rm d}\\tau\n \\ \\Sigma({\\bar {\\bf q}}(\\tau)) \\ .$$ For generic chaotic systems and initially localized states, the calculation of $M(t)$ using the dephasing representation follows the same lines of the standard semiclassics. The standard semiclassical approach becomes an initial-value problem once the integration over the final position is traded by the integration over the initial momentum using the stability factor $C_{\\gamma}$ as the Jacobian of the transformation. However, Eq.\u00a0() bears the advantage that it regards $m(t)$ as the solution of an initial-value problem, as opposed to a boundary-value problem. 
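A Monte Carlo sketch of the dephasing representation for a one-dimensional system may clarify how the formulas above are used in practice. Everything specific below (the anharmonic potential, the quadratic perturbation, the integrator and the parameter values) is an assumption chosen purely for illustration:

```python
# Monte Carlo sketch of the dephasing representation (illustrative choices throughout):
# phase-space points are sampled from the Gaussian Wigner function of a localized packet,
# propagated classically under the unperturbed Hamiltonian H1 = p^2/2 + V(q), and the
# fidelity amplitude m(t) is the average of exp(-i*DeltaR/hbar) with
# DeltaR = -kappa * integral of Sigma(q(tau)) dtau along each trajectory.
import numpy as np

hbar, kappa = 1.0, 0.01
q0, p0, sigma = 0.0, 1.0, 0.1

V_prime = lambda q: q + q ** 3        # force of an (assumed) anharmonic potential
Sigma = lambda q: q ** 2              # assumed position-dependent perturbation

def fidelity_amplitude(t, n_traj=20000, dt=1e-3):
    rng = np.random.default_rng(1)
    # Wigner function of a Gaussian packet: Gaussian in q (variance sigma^2/2)
    # and in p (variance hbar^2 / (2 sigma^2))
    q = rng.normal(q0, sigma / np.sqrt(2.0), n_traj)
    p = rng.normal(p0, hbar / (np.sqrt(2.0) * sigma), n_traj)
    delta_R = np.zeros(n_traj)
    for _ in range(int(t / dt)):      # symplectic Euler steps under H1
        p -= V_prime(q) * dt
        q += p * dt
        delta_R -= kappa * Sigma(q) * dt
    return np.mean(np.exp(-1j * delta_R / hbar))

# M_t = abs(fidelity_amplitude(2.0)) ** 2
```

The initial-value character is evident in the sketch: trajectories are launched from sampled phase-space points and never need to be matched to prescribed end points.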
This proves especially convenient in situations when the sums over classical trajectories are evaluated explicitly.\n\n### Random matrix theory\n\n**Refs.\u00a0**\n\nRandom matrix theory is a powerful technique to understand the statistical behavior of quantum complex systems. Among the latter quantum systems exhibiting chaotic dynamics in the classical limit are particularly important, as their spectral properties are well described by averages taken over appropriate ensembles of matrices that satisfy certain general symmetry restrictions. The simplest Hamiltonian ensemble is that of real symmetric $N \\times N$ matrices (appropriate for cases with time-reversal symmetries) usually referred to as Gaussian Orthogonal Ensemble.\n\nThe invariance of the probability distribution under orthogonal transformations and the statistical independence of the matrix elements lead to a joint probability distribution of the matrix elements of the Hamiltonian, $$P_N(H)=K_N \\exp\\left(-\\frac{\\mathop{\\mathrm{tr}}{\\{H^2\\}}}{4 v}\\right),$$ where $v$ defines the scale of the matrix elements and $K_N$ is a normalization constant. Each of the $N(N+1)\/2$ independent matrix element $H_{ij}$ is a zero-centered Gaussian such that $$\\begin{aligned}\n\\overline{H_{ij}}=& 0 & i \\leq j, \\nonumber \\\\\n\\overline{H^2_{ij}}=& (1+\\delta_{i j}) v^2 & i \\leq j \\nonumber .\n\\end{aligned}$$\n\nThe averages are taken over the matrix ensemble. The Loschmidt echo in complex systems can be addressed by imposing the Random Matrix hypothesis for the forward evolution Hamiltonian $H_1$, for the perturbation $\\kappa \\Sigma$ representing the imperfection of the time-reversal, or for both. Generically, the same results are obtained, independently of the choice of the matrix considered to be random. Averages of the Loschmidt echo over matrix ensembles are justified for ergodic classical dynamics. The absence of finite classical time scales, like the inverse Lyapunov exponent or the escape rate, in the Random matrix theory hinders the description of the Loschmidt echo decays that depend on those quantities.\n\nRecently, using the Random-Matrix-Theory approach, Kohler and coworkers have established relations between the averaged fidelity decay and the so-called cross-form factor, characterizing parametric correlations of the energy spectra of the unperturbed and perturbed systems. Notably, these relations exist not only in the case of fully chaotic unperturbed and perturbed systems, but also in the case of a regular system perturbed by a chaotic perturbation.\n\n### Numerical simulations\n\n**Refs.\u00a0**\n\nA numerical simulation of the time-evolution of initially localized wave-packets under slightly different Hamiltonians, $H_1$ and $H_2$, is a valuable tool in understanding the behavior of the Loschmidt echo in different systems and various regimes. Even though there exist numerous approaches to solving the time-dependent Schr\u00f6dinger equation on a computer, the majority of numerical studies of the Loschmidt echo have been concerned with the following two methods. Both methods provide accurate, efficient, and stable approximations to the evolution operator $e^{-i A \\tau}$, where the operator $A$ corresponds a (properly rescaled) Hamiltonian, and $\\tau$ denotes a sufficiently short propagation time-step.\n\n- *Trotter-Suzuki algorithm.* \u2013 This method involves three implementation stages. 
First, one decomposes $A$ into a finite (and practically small) number of components, $A = A_1 + A_2 +\n \\ldots + A_n$, such that the operator $e^{-i A_j t}$ can be constructed analytically for all $j = 1,2,\\ldots,n$. Second, one defines the symmetric operator $U_2(\\tau) = e^{-i A_n \\tau\/2}\n \\ldots e^{-i A_2 \\tau\/2} e^{-i A_1 \\tau} e^{-i A_2 \\tau\/2} \\ldots\n e^{-i A_n \\tau\/2}$. Third, one constructs the operator $U_4(\\tau)\n = U_2(p\\tau) U_2(p\\tau) U_2((1-4p)\\tau) U_2(p\\tau) U_2(p\\tau)$ with $p = 1\/(4-4^{1\/3})$. This provides a unitary approximation to the original propagator accurate up to the 4th order in $\\tau$, i.e., $e^{-i A \\tau} = U_4(\\tau) + \\mathcal{O}(\\tau^5)$.\n\n- *Chebyshev-polynomial expansion.* \u2013 This approach requires that the operator $A$ is normalized in such a way that all its eigenvalues lie in the interval between -1 and 1. The method involves constructing the operator $S_N (\\tau) = J_0(\\tau)\n + 2 \\sum_{n=1}^N (-i)^n J_n(\\tau) T_n(A)$, where $\\{ J_n \\}$ and $\\{ T_n \\}$ are, respectively, the Bessel functions of the first kind and the Chebyshev polynomials of the first kind. In actual computations, one makes use of the recurrence relation $T_{n+1}(A)\n = 2 A T_n(A) - T_{n-1}(A)$, with $T_0(A) = 1$ and $T_1(A) = A$, to calculate $S_N(\\tau)$. Then, for large values of $n$ and fixed $\\tau$, the Bessel function $J_n(\\tau)$ rapidly decays as a function of $n$. In fact, $|J_n(\\tau)| \\sim (\\tau\/2)^n \/ n!$ as $n\n \\rightarrow \\infty$. This is why $S_N(\\tau)$, with a sufficiently large $N$, provides an extremely accurate approximation to the propagator in question, i.e., $e^{-i A \\tau} = S_N(\\tau) +\n \\mathcal{O}\\big( (\\tau\/2)^N \/ N! \\big)$.\n\n## Decay of the Loschmidt echo: different regimes\n\nThe decay of the Loschmidt echo mainly depends upon the underlying classical dynamics of the system, the initial state, and the nature and strength of the perturbation $\\kappa \\Sigma$.\n\nThe behavior of the Loschmidt echo is fairly well understood for single-particle quantum systems whose dynamics is fully chaotic in the classical limit. Some progress has been done towards the theory of the Loschmidt echo in systems with regular and mixed phase space.\n\n### Chaotic dynamics\n\n**Refs.\u00a0**\n\nThe time-decay of the Loschmidt echo, averaged over an ensemble of initial states, Hamilton operators, or perturbations, typically exhibits three consecutive stages (see Fig. ):\n\n- **Short-time parabolic decay**, $\\overline{M(t)} \\simeq 1 -\n (\\eta t\/\\hbar)^2$. It is a short, initial stage of the Loschmidt echo decay in all quantum systems. Here, $\\eta$ is an average dispersion of the perturbation operator evaluated with respect to the initial state, i.e., $(\\eta \/ \\kappa)^2 = \\overline{\\langle\n \\psi | \\Sigma^2 | \\psi \\rangle} - \\overline{\\langle \\psi | \\Sigma\n | \\psi \\rangle^2}$. The parabolic decay holds for times $t$ short enough for the propagators of the unperturbed and perturbed systems, $\\exp\\left( -i H_j t \/\\hbar \\right)$ with $j=1,2$, to be reliably approximated by their second order Taylor expansions, $1 - i H_j t \/\n \\hbar - (H_j t)^2 \/ (2 \\hbar^2)$.\n\n- **Intermediate-time asymptotic decay**. The short-time parabolic decay is typically followed by an \"asymptotic\" decay regime, whose functional form depends on the strength of the perturbation.\n\n - **Perturbative\/Gaussian regime**, $\\overline{M(t)} \\simeq\n \\exp(-(\\eta t\/\\hbar)^2)$. 
This decay holds for \"weak\" Hamiltonian perturbations, such that the absolute value of a characteristic matrix element of the perturbation operator is small compared to the mean energy level-spacing of the unperturbed Hamiltonian. Since $\\eta \\sim \\kappa$, the decay rate is quadratic in the perturbation strength.\n\n - **Non-perturbative\/Exponential regime**, $\\overline{M(t)}\n \\simeq \\exp(-\\Gamma t)$. This non-perturbative decay regime is typically observed for stronger Hamiltonian perturbations, i.e., for perturbations large on the scale of the mean level spacing of the unperturbed Hamiltonian. The functional form of the dependence of the decay rate $\\Gamma$ on the perturbation strength $\\kappa$ is different for \"global\" and \"local\" Hamiltonian perturbations (see below).\n\n- **Long-time saturation**, $\\overline{M(t)} \\sim N^{-1}$. At long times, the asymptotic decay of the Loschmidt echo is followed by a saturation (or freeze) at a value inversely proportional to the size $N$ of the effective Hilbert space of the system. $N$ is given by the volume of the phase space, that is available to the state of the system in the course of its time evolution, divided by the volume of the Planck's cell, $(2\\pi\\hbar)^d$. The value, at which the Loschmidt echo saturates, is independent of the perturbation strength.\n\n An explicit expression is available for the saturation value of the Loschmidt echo in two-dimensional quantum billiards, with strongly chaotic dynamics in the classical limit. Thus, if the initial state of the particle is given by the Gaussian wave function $\\psi({\\bf\n q}) = (\\pi \\sigma^2)^{-1\/2} \\exp\\big(i {\\bf p}_0 ({\\bf q}-{\\bf\n q}_0) \/ \\hbar - ({\\bf q}-{\\bf q}_0)^2\/(2 \\sigma^2) \\big)$ with ${\\bf q}_0$, ${\\bf p}_0$, and $\\sigma$ denoting respectively the average position, momentum, and position uncertainty of the particle, and the area of the billiard is $A$, then the long-time saturation of the average Loschmidt echo is given by $\\overline{M(t)} \\simeq \\sqrt{2\\pi} \\, \\hbar \\sigma \/ (|{\\bf p}_0|\n A)$. This formula holds in the semiclassical regime, such that $\\hbar\/|{\\bf p}_0| \\ll \\sigma \\ll \\sqrt{A}$.\n\nThe above classification is schematically illustrated in Fig.\u00a0.\n\nA Hamiltonian perturbation is said to be \"global\" if it affects all (or a dominant part) of the phase-space that is accessible to the system in the course of its time-evolution. The accessible phase-space depends on the initial state of the system.\n\nFor global perturbations, the dependence of the decay rate $\\Gamma$ on the perturbation strength $\\kappa$ can be approximated by a piecewise-continuous function, shown schematically in Fig.\u00a0. More precisely, the semiclassical analysis of the exponential decay regime of the Loschmidt yields $$\\label{eq:dec_LE}\n\\overline{M(t)}= e^{-c \\kappa^2 t} + b(\\kappa,t) \\ e^{-\\lambda t} \\ .$$ The first term in Eq.\u00a0() stems from uncorrelated pairs $(\\alpha_1,\\alpha_2)$ of contributing trajectories, while the for second term is a contribution of correlated trajectory pairs, such that $\\alpha_1$ evolves close to $\\alpha_2$ (see Sec.\u00a0). Unlike the constant $c$, the prefactor $b$ generally exhibits an explicit dependence on both $t$ and $\\kappa$. 
However, this dependence is sub-exponential, and, as a consequence, the decay rate $\\Gamma$ of the Loschmidt echo is dominated by the minimum of $c \\kappa^2$ and $\\lambda$, leading to a commonly accepted (but oversimplified) interpretation of the exponential decay (see Fig.\u00a0).\n\nFor perturbation strengths $\\kappa$ that are weak compared to a critical strength $\\kappa_{\\mathrm{c}}$, the dependence is parabolic, $\\Gamma = c \\kappa^2$. The semiclassical calculations give the expression of $c$ in terms of correlation functions of the perturbation, and the random matrix approach leads to the general dependence $c \\kappa^2=2 \\pi \\eta^2\/(\\hbar \\Delta)$, where $\\eta$ is given by the mean value of the width of the local density of states with respect to the initial state $|\\psi_0\\rangle$. Such a behavior is commonly referred to as the **Fermi-golden-rule regime**.\n\nFor perturbations that are stronger than $\\kappa_{\\mathrm{c}}$, but weaker than a certain bound value $\\kappa_{\\mathrm{b}}$, the decay rate $\\Gamma$ is independent of $\\kappa$ and equals the average Lyapunov exponent $\\lambda$ of the underlying classical system. This decay is known as the **perturbation-independent** or **Lyapunov regime**. This regime holds for perturbations up to the breakdown strength $\\kappa_{\\mathrm{b}}$, beyond which it is no longer possible to regard every trajectory of the perturbed system as a result of a continuous deformation of the corresponding unperturbed trajectory, so that bifurcations have to be taken into account. The Lyapunov regime is especially remarkable, as the decay rate is totally independent of the strength of the perturbation which is at the origin of the decay.\n\nThe exponential decay of the Loschmidt echo is generally followed in time by a saturation at a value on the order of $N^{-1}$, where $N$ is the size of the effective Hilbert space. This implies that, in the non-perturbative regime, the saturation occurs at time $t_{\\mathrm{s}}\n\\simeq \\Gamma^{-1} \\ln N$. In the case of a two-dimensional chaotic billiard (introduced above), the saturation time is given by $t_{\\mathrm{s}} \\simeq \\Gamma^{-1} \\ln\\big( |{\\bf p}_0| A \/ (\\hbar\n\\sigma) \\big) - (2\\Gamma)^{-1} \\ln(2\\pi)$. It is interesting to note that the saturation time can, in principle, be arbitrarily long. In particular, $t_{\\mathrm{s}}$ can exceed other important time scales of quantum dynamics, such as the Heisenberg time $t_{\\mathrm{H}} \\equiv\n\\hbar g(E)$ with $g(E)$ denoting the density of states. Indeed, in two-dimensional billiards the saturation time $t_{\\mathrm{s}}\n\\rightarrow \\infty$ as $|{\\bf p}_0|\\rightarrow \\infty$, while the Heisenberg time $t_{\\mathrm{H}}$ is independent of the particle's momentum.\n\nThe exponential decay, $\\overline{M(t)} \\simeq \\exp(-\\Gamma t)$, only holds for perturbations weaker than $\\kappa_{\\mathrm{b}}$. Beyond this threshold, the exponential regime breaks down and gives way to another regime, in which the Loschmidt echo exhibits a Gaussian dependence on time.\n\nSome of the mentioned regimes can be obtained using alternative approaches relating the Loschmidt echo with the two-point time auto-correlation function of the generator of the perturbation.\n\nA Hamiltonian perturbation is said to be \"local\" if it is concentrated in a small region of the phase space accessible to the system. 
In the case of chaotic dynamics, the phase space extent of a local perturbation can be characterized by a rate $\\gamma$, known as the \"escape\" rate, that is defined as the rate at which trajectories of the corresponding classical system visit the perturbation region. For local perturbations, the escape rate is small compared to the characteristic rate at which a typical trajectory \"explores\" the dynamically available phase space, and also small compared to the average Lyapunov exponent of the system.\n\nFigure\u00a0 illustrates the characteristic dependence of the decay rate $\\Gamma$ on the perturbation strength $\\kappa$. The function $\\Gamma(\\kappa)$ exhibits a number of distinctive features. Thus, for sufficiently weak perturbations, the decay rate grows quadratically with the perturbations strength, $\\Gamma \\sim \\kappa^2$, demonstrating the **Fermi-golden-rule regime**. In the limit of strong perturbations, $\\kappa \\rightarrow\n\\infty$, the decay rate $\\Gamma$ saturates at a perturbation-independent value $2\\gamma$. This saturation is known as the **escape-rate regime**. The crossover from the Fermi-golden-rule to the escape rate regime is non-monotonic, and $\\Gamma(\\kappa)$ generally exhibits well-pronounced **oscillations**. The amplitude and frequency of these oscillations depend on the nature and physical properties of the particular Hamiltonian perturbation. An important point is that the function $\\Gamma(\\kappa)$ is given for all strengths $\\kappa$ by the width of the local density of states.\n\nThe exponential decay, $\\overline{M(t)} \\simeq \\exp(-\\Gamma t)$ with the non-monotonic function $\\Gamma(\\kappa)$ described above and illustrated in Fig.\u00a0, has been obtained by semiclassical analysis. (Its existence has now been observed in numerical and laboratory experiments). The key assumption of any semiclassical reasoning is that the de Broglie wavelength corresponds to the shortest length scale of the system. This assumption is obviously violated for systems, in which the spatial linear size of the Hamiltonian perturbation is small compared to the de Broglie wavelength. Generally, the decay of the Loschmidt echo in such systems is different from the exponential decay. For instance, two-dimensional chaotic systems in the limit of point-like Hamiltonian perturbations, for which $\\gamma \\rightarrow 0$, exhibit the algebraic, inverse-quadratic decay of the Loschmidt echo, $\\overline{M(t)} \\sim\nt^{-2}$.\n\n### Regular and mixed dynamics\n\nTime dependence of the Loschmidt echo in quantum system with regular or mixed classical dynamics is typically more complex than that in fully chaotic systems. As a result of this complexity, a comprehensive classification of decay regimes is still lacking. Nevertheless, a number of important results have been established for the Loschmidt echo decay in systems whose phase space is predominantly regular in the classical limit.\n\nIn the limit of weak Hamiltonian perturbations, $\\kappa \\rightarrow\n0$, the average Loschmidt echo generally exhibits a **Gaussian decay**. The duration $t_{\\mathrm{G}}$ of this decay is inversely proportional to the perturbation strength, $t_{\\mathrm{G}} \\sim\n\\kappa^{-1}$. 
The existence of the Gaussian decay requires the perturbation to be sufficiently weak such that the decay time $t_{\\mathrm{G}}$ is long in comparison with any relaxation (averaging) time scale of the system.\n\nIf the Hamiltonian perturbation is sufficiently strong and varies rapidly (or is almost \"random\") along a typical classical trajectory of the unperturbed Hamiltonian, then the average Loschmidt echo is known to exhibit the **algebraic decay** $\\overline{M(t)} \\sim\nt^{-3d\/2}$, with $d$ being the dimensionality of the system. This power-law decay is faster than the decay of the overlap of the corresponding classical phase-space densities, $t^{-d}$.\n\nNumerous case studies of the unaveraged (individual-realization) Loschmidt echo $M(t)$ have revealed that the functional form of the decay depends strongly upon the location of the initial quantum state with respect to the phase space of the underlying, unperturbed and perturbed, classical systems. In addition, the decay is sensitive to certain properties of the Hamiltonian perturbation. For instance, the Loschmidt echo decays differently depending on whether the perturbation is global or local in phase space, whether it vanishes under averaging over time, or whether it preserves or destroys integrability of the underlying classical system. A number of different regimes, such as the **exponential decay** and power-law decays with varies decay exponents, have been observed in numerical simulations. Interesting phenomena of **quantum revivals**, closely related to (quasi-)periodicity of the underlying classical motion, and temporary **quantum freeze** of the Loschmidt echo have been reported in various studies. However, a comprehensive classification of all decay regimes of the Loschmidt echo in regular systems still remains a challenge in the field.\n\n### Numerical observation of various decay regimes\n\n**Refs.\u00a0**\n\nNumerical simulations have been of foremost importance for establishing the results summarized in sections and . The main body of numerical work has been devoted to one-particle systems. Various chaotic dynamical systems, globally perturbed, have been studied numerically; among them the Lorentz gas, two-dimensional hard-wall and soft-wall billiards. As a prominent example, Fig. shows the crossover from the perturbative Fermi-golden-rule regime to the Lyapunov regime of the decay rate of the Loschmidt echo as a function of the perturbation strength in the Lorentz gas. The predictions for local perturbations, described in Sec.\u00a0, were also tested in a chaotic billiard system. In this case, a deformation of a small region of the boundary was considered as the perturbation (see Fig.\u00a0).\n\nOther enlightening numerical studies were carried in quantum maps. These systems possess all the essential ingredients of the chaotic dynamics and are, at the same time, extremely simple from a numerical point of view, both at the classical and at the quantum levels. The predictions for locally perturbed chaotic systems were also observed in the paradigmatic cat map. Figure\u00a0 shows the decay rate of the Loschmidt echo as a function of the perturbation strength $\\kappa$ in the cat map under the action of a local perturbation. 
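In numerical studies of this kind, the decay rate that is plotted against the perturbation strength is commonly extracted by fitting the intermediate-time portion of the simulated echo curves. A minimal sketch of such an extraction (assuming the curves are already available as arrays, with a user-chosen fitting window) is:

```python
# Sketch of a decay-rate extraction from simulated echo curves (an assumed post-processing
# step, not code from the cited works): fit log M(t) linearly inside the intermediate-time
# window, i.e. after the initial parabolic decay and before the long-time saturation.
import numpy as np

def decay_rate(t, M, t_min, t_max):
    """Gamma from a least-squares fit of log M(t) on the window [t_min, t_max]."""
    window = (t >= t_min) & (t <= t_max) & (M > 0)
    slope, _intercept = np.polyfit(t[window], np.log(M[window]), 1)
    return -slope

# Repeating the fit for a set of perturbation strengths kappa yields the Gamma(kappa)
# curves whose Fermi-golden-rule and Lyapunov portions are discussed in this section.
```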
When the map was perturbed in all its phase space, three decay regimes (instead of two) were observed depending on the perturbation strength: (i) the Fermi-golden-rule regime for weak perturbations, (ii) a regime of an oscillatory dependence of the decay rate on $\\kappa$ at intermediate perturbation strengths, and (iii) the Lyapunov regime for strong perturbations. The complete understanding of the crossover between the last two regimes remains an open problem.\n\n### Beyond the \"standard\" picture: fluctuations, disorder, many-body systems\n\n**Refs.\u00a0**\n\nDecay regimes of the average Loschmidt echo, discussed above, are the ones most commonly observed, and generally regarded as \"standard\", in low-dimensional quantum systems. However, when analyzing the Loschmidt echo beyond a simple averaging over the initial state or perturbation, or in a more complex setup, one often encounters departures from the standard picture.\n\nIt was fist pointed out by Silvestrov and coworkers that, for sufficiently strong perturbations and on time scales short compared to the Ehrenfest time, statistical fluctuations generally play an important role in the problem of the Loschmidt echo decay. In particular, the average Loschmidt echo, $\\overline{M(t)}$, is typically dominated by rare fluctuations, characteristic of only a small fraction of the chaotic phase space. In the main part of the phase space, however, the unaveraged Loschmidt echo, $M(t)$, decays much faster than $\\overline{M(t)}$: this decay can be as fast as double-exponential, $M(t) \\sim \\exp(-\\mathrm{constant} \\times\ne^{2\\lambda t})$. It is only after the Ehrenfest time that $M(t)$ follows $\\overline{M(t)}$ in the main part of the phase space.\n\nThe variance of the Loschmidt echo, $\\overline{M^2} - \\overline{M}^2$, was addressed by Petitjean and Jacquod within both a semiclassical and a Random Matrix Theory approach. The variance was shown to exhibit a rich nonmonotonous dependence on time, characterized by an algebraic growth at short times followed by an exponential decay at long times.\n\nIn disordered systems, the Loschmidt echo may exhibit even richer decay than that in low-dimensional dynamical systems, since the elastic mean-free path and the disorder correlation length enter as relevant scales. For long-range disorder (small-angle scattering) an asymptotic regime governed by the classical Lyapunov exponent emerges. A Josephson flux qubit operated at high energies leads to a chaotic dynamics, and the Lyapunov regime can be obtained when a Loschmidt setup is considered in such a system.\n\nThe Loschmidt echo studies in many-body systems are comparatively less developped than those in one-body cases. The numerical calculation of the many-body Loschmidt echo is a highly demanding computational task, and the standard approximations are in difficulty for describing the small difference obtained after the forward and backward time-evolutions. For this reason, the most reliable results of Loschmidt echo in many-body systems are those of a one dimensional spin chain or in an Ising model with transverse field, where the influence of criticality in the fidelity decay has been established. 
Trapped Bose-Einstein condensates have been treated as a many-body Loschmidt echo setup using mean-field approaches.\n\n## Phase-space representation and classical fidelity\n\n**Refs.\u00a0**\n\nAn intuitively appealing representation of the Loschmidt echo in phase space is obtained by expressing Eq.\u00a0() as $$M(t) =\n\\frac{1}{(2\\pi\\hbar)^d}\\int\\mathrm{d}\\mathbf{q}\\int\\mathrm{d}\\mathbf{p}\n\\ W_{H_1}(\\mathbf{q},\\mathbf{p};t)\\ W_{H_2}(\\mathbf{q},\\mathbf{p};t)\n\\ , \\label{eq-LEWigner}$$ where $W_{H_1}$ and $W_{H_2}$ are the Wigner functions resulting from the evolution of $W_0(\\mathbf{q},\\mathbf{p})$, introduced in Eq.\u00a0(), under the action of the Hamiltonian operators $H_1$ and $H_2$ respectively. This approach is naturally connected to the dephasing representation of Sec.\u00a0 and provides a particularly useful framework in the study of decoherence, as the Wigner function is a privileged tool to understand the connection between quantum and classical dynamics.\n\nAn initial Gaussian wave-packet is associated with a Gaussian Wigner function that will develop non-positive structures in phase space and be deformed under the evolution of $H_1$ and $H_2$. The sensitivity of the overlap () to the non-positive parts and to the deformations of the Wigner function can be related to the different decay regimes discussed in Sec.\u00a0.\n\nThe form () of the quantum fidelity allows one to define a classical fidelity by using, in a classical problem, the Liouville distributions $L_{H_1}$ and $L_{H_2}$ instead of $W_{H_1}$ and $W_{H_2}$ respectively. This definition follows from the logical representation of the Liouville distribution as the classical limit of the Wigner function. However, the definition by an overlap of distributions does not convey the sense of classical reversibility: a trajectory evolving forward in time with $H_1$ and backwards with $H_2$ would give a considerable contribution to the overlap not only if it ends up close to the initial point, but whenever the final point is in the neighborhood of any point of the initial distribution. The labeling of trajectories and particles, characteristic of classical mechanics, is then not taken into account in this definition of classical fidelity.\n\nThe correspondence principle dictates that the quantum fidelity follows its classical counterpart up to the Ehrenfest time. However, such a correspondence does not imply that, in the intermediate-time asymptotic decay of globally perturbed chaotic systems, the Lyapunov regime is only present up to this characteristic time scale. In fact, the Lyapunov regime is not simply an effect of the classical-quantum correspondence. For instance, in a two-dimensional billiard, the saturation time $t_{\\rm S}$ (see Sec.\u00a0) that would signal the end of the asymptotic decay (and so the end of the Lyapunov decay if one is in the appropriate regime) can be much larger than the Ehrenfest time.\n\n# Experiments\n\n## Nuclear Magnetic Resonance\n\n**Ref.\u00a0 (see particularly Sec.\u00a08.8 and App.\u00a0E)**\n\nNuclear magnetic resonance has been the main tool to study echoes generated by different time-reversal procedures since the fifties. After the time-reversal of *individual spin* precession implemented by Erwin Hahn in the **spin echo**, one had to wait until the seventies, when Rhim and Kessemeir achieved the reversal of the spin-spin interactions. The resulting reversal of the macroscopic polarization is known as **Magic Echo** and was thoroughly studied by Rhim, Pines and Waugh.
In the nineties Ernst et al. implemented a strategy to address a localized spin excitation and Levstein et al. realized that such study was optimal for a theoretical description. They used crystal structures with abundant of interacting $^{1}$H spins, where the dipolar coupling produces the \"diffusion\" of the initial polarization. The time-reversal procedure is then able to refocus the excitation generating a **Polarization Echo**. These last experiments have a direct connection with the concepts discussed in this review.\n\n### Basic concepts on spin dynamics\n\n**Refs.\u00a0**\n\nA typical experiment starts with a sample which constitutes a network with about $10^{23}$ interacting spins in thermal equilibrium in presence of an external magnetic field $B_{0}$. This many-spin state is denoted as $|\\Psi_{\\mathrm{eq}}\\rangle$. In this state every spin has an almost equal probability of being up or down since the thermal energy, $k_{B}T$, is much higher than any other relevant energy scale of the system. At time $t=0$ a sequence of radiofrequency pulses ensures that spin at site $0$th is oriented along $z$ direction. This is represented by the action of the spin operator $S_{0}^{+}$ on $|\\psi_{\\rm eq}\\rangle$. In a system with $m+1$ spins, we represent this initially excited state as $$|\\Psi_{0}\\rangle=\\frac{S_{0}^{+}|\\Psi_{\\mathrm{eq}}\\rangle}{\\left\\vert\n \\langle\\Psi_{\\mathrm{eq}}|\\left. S_{0}^{-}S_{0}^{+}\\right\\vert\n \\Psi_{\\mathrm{eq}}\\rangle\\right\\vert ^{1\/2}}=\\sum_{r=1}^{2^{m}}%\n\\frac{e^{\\mathrm{i}\\phi_{r}}}{2^{m\/2}}\\left\\vert\n\\uparrow_{0}\\right\\rangle \\otimes\\left\\vert \\beta_{r}\\right\\rangle\n,\\label{initial-spin-state}$$ where the denominator $\\left\\vert \\langle\\Psi_{\\mathrm{eq}}|\\left.\nS_{0}^{-} S_{0}^{+}\\right\\vert \\Psi_{\\mathrm{eq}}\\rangle\\right\\vert\n^{1\/2}$ ensures that the initially excited state has a proper normalization and $\\phi_{r}$ is a random phase that describe a mixture of states of the form $$\\left\\vert \\beta_{r}\\right\\rangle =\\left\\vert s_{1}\\right\\rangle\n\\otimes\\left\\vert s_{2}\\right\\rangle \\otimes\\left\\vert\ns_{3}\\right\\rangle \\otimes...\\otimes\\left\\vert s_{m}\\right\\rangle\n\\ \\text{\\ with }\\left\\vert s_{k}\\right\\rangle \\in\\left\\{ \\left\\vert\n\\uparrow\\right\\rangle ,\\left\\vert \\downarrow\\right\\rangle \\right\\}\n.\\label{mixture-terms}$$\n\nThe preparation of this state involves the use of a $^{13}$C nucleus as a \"spy\"\u00a0to inject and detect polarization at the directly bonded $^{1}$H spin (0th). Then, the system evolves under mutual many-body interaction described by $H_{1}$ for a period $t_{1}$. In the polarization echo experiment, this is described by effective dipolar interaction Hamiltonian $H_{1}$, truncated by the Zeeman field, $$H_{1}\\underset{%\n%TCIMACRO{\\QATOP{\\text{Polarization}}{\\text{Echo Experiment}}}%\n%BeginExpansion\n\\genfrac{}{}{0pt}{}{\\text{Polarization}}{\\text{Echo Experiment}}%\n%EndExpansion\n}{\\equiv}%\n%TCIMACRO{\\dsum \\limits_{i,j}}%\n%BeginExpansion\n{\\displaystyle\\sum\\limits_{i,j}}\n%EndExpansion\n\\left[\n 2S_{i}^{z}S_{j}^{z}-\\frac{1}{2}(S_{i}^{+}S_{j}^{-}+S_{i}^{-}S_{j}^{+})\\right]\n.\\label{Hdipolar}$$ $H_{1}$ contains flip-flop or XY operators of the form $S_{i}^{+}S_{j}^{-}+S_{i}^{-}S_{j}^{+}$, as well as Ising terms of the form $S_{i}^{z}S_{j}^{z}$. The dipolar interaction $d_{i,j}$ constant decay with the third power of the distance between sites $i$ and $j$. 
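For readers who wish to experiment with this Hamiltonian, the sketch below builds the truncated dipolar Hamiltonian of Eq. (Hdipolar) for a small spin cluster by Kronecker products of spin-1/2 operators. It assumes, as the text implies, that the coupling constant $d_{i,j}$ multiplies the bracket for every pair of sites; the cluster size and the $1/r^3$ couplings of the usage example are placeholders, not a model of a real crystal.

```python
import numpy as np
from functools import reduce

# Spin-1/2 operators (hbar = 1)
sz = np.array([[0.5, 0.0], [0.0, -0.5]])
sp = np.array([[0.0, 1.0], [0.0, 0.0]])   # S^+
sm = sp.T                                  # S^-
I2 = np.eye(2)

def site_op(op, i, n):
    """Embed a single-spin operator at site i of an n-spin cluster."""
    return reduce(np.kron, [op if k == i else I2 for k in range(n)])

def dipolar_hamiltonian(d, n):
    """Truncated dipolar Hamiltonian of Eq. (Hdipolar), with d[i, j]
    multiplying the bracket for every pair i < j."""
    dim = 2 ** n
    H = np.zeros((dim, dim))
    for i in range(n):
        for j in range(i + 1, n):
            H += d[i, j] * (
                2 * site_op(sz, i, n) @ site_op(sz, j, n)
                - 0.5 * (site_op(sp, i, n) @ site_op(sm, j, n)
                         + site_op(sm, i, n) @ site_op(sp, j, n)))
    return H

# Example: a four-spin chain with couplings decaying as 1 / r^3
n = 4
pos = np.arange(n, dtype=float)
r = np.abs(pos[:, None] - pos[None, :]) + np.eye(n)   # avoid zero on the diagonal
H1 = dipolar_hamiltonian(1.0 / r ** 3, n)
```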
$H_{1}$ produces the spread of the initially localized excitation. Since total polarization is conserved under $H_{1}$, such process is commonly known as \"spin diffusion\". \u00a0Then, a new pulse sequence rotates all the spins 90 degrees and the irradiation with r.f. field produces a further truncation of the dipolar interaction along the rotating field. The resulting effective Hamiltonian $-H_{2}=-(H_{1}+\\Sigma)$ acts for another period $t_{2}$. Observe that the reversal is not perfect but $\\Sigma$ is a small term which is due to truncations required to form the effective Hamiltonian. This constitutes the \"backwards\"\u00a0evolution period after which the local polarization at site $0$th is measured. We notice that $\\Sigma$ \u00a0acts as a self-energy operator that may account for interactions within the same Hilbert space, and thus is just an Hermitian effective potential. However, it also may describe the interaction with a different subsystem that constitutes the \"environment\". In the last case $\\Sigma=\\Delta-\\mathrm{i}\\Gamma$ would also have an imaginary (non-Hermitian) component. In this last case, evolution of observables should be described by two coupled Lindblad or Keldysh equations for the density matrix requiring a self-consistent evaluation of dynamics of the system and the environment. This last situation would make the analysis less straightforward and might hinder the essential physics. Thus, we first focus on perturbations that ensure unitary evolution. An example discussed by Zangara et al. of this situation is a second frozen spin chain interacting with the first through an Ising interaction. In these cases, a polarization echo is formed when $t_{2}=t_{1}$, i.e. after a total evolution time $2t=t_{2}+t_{1}$. The normalized amplitude is given by the spin autocorrelation function that accounts for the observed local polarization, $$M_{\\mathrm{PE}}(t)=\\frac{\\langle\\Psi_{\\mathrm{eq}}|S_{0}^{-}(2t)~S_{0}^{+}\n |\\Psi_{\\mathrm{eq}}\\rangle}{\\langle\\Psi_{\\mathrm{eq}}|S_{0}^{-}S_{0}^{+}\n |\\Psi_{\\mathrm{eq}}\\rangle} \\,, \\label{PolEcho_definition}$$ where $S_{0}^{-}(2t)=e^{iH_{1}t\/\\hbar}e^{-iH_{2}t\/\\hbar}S_{0}^{-}\ne^{iH_{2}t\/\\hbar}e^{-iH_{1}t\/\\hbar}\\,$\u00a0is the local spin operator, expressed in the Heisenberg representation respect to the acting Hamiltonian $H(\\tau)=H_{1}\\theta(t-\\tau)-H_{2}\\theta(\\tau-t)$. An equivalent definition of $M_{\\mathrm{PE}}(t)$ in terms of $S_{0}^{z}(2t)~S_{0}^{z}$ \u00a0was also proved useful.\n\nThe connection of this experimental observable with the concepts discussed so far are already hinted but not yet clarified by Eq.\u00a0(). Thus, we consider that the $m+1$ spins are arranged in a linear chain or in an odd sized ring where spin-spin interaction is restricted to XY terms acting on nearest-neighbors. In this case the Wigner-Jordan spin-fermion mapping allows to describe the spins system as a gas of non-interacting fermions in a 1-$d$ lattice. In that problem, the initial excitation propagates as a single spinless fermion. 
In turn, the observed local polarization can be written back in terms of the evolution of an initially localized \"spin wave\": $$|\\psi_{0}\\rangle=\\left\\vert \\uparrow_{0}\\right\\rangle\n\\otimes\\left\\vert \\downarrow_{1}\\right\\rangle \\otimes\\left\\vert\n\\downarrow_{2}\\right\\rangle \\otimes\\left\\vert\n\\downarrow_{3}\\right\\rangle \\otimes...\\otimes\\left\\vert\n\\downarrow_{m}\\right\\rangle \\ \\text{,}\\label{chain-state}$$ where $\\left\\vert \\uparrow_{n}\\right\\rangle$ ($\\left\\vert \\downarrow_{n}\n\\right\\rangle$) means that the $n$-th site is occupied (unoccupied). Thus, it is clear that the excitation at site $0$-th can spread along the 1-$d$ chain through the nearest neighbor flip-flop interaction. This correspondence between many-body dynamics and spin wave behavior was worked out in detail by Danieli et al. This procedure was originally employed by Pastawski et al. in the context of time-reversal experiments to predict the presence of Poincar\u00e9 recurrences (Mesoscopic Echoes) which were clearly observed by M\u00e1di et al. at the laboratory of Richard Ernst in Zurich. In this condition, the polarization detected after the time-reversal procedure results in: $$M_{\\mathrm{PE}}(t)\\underset{%\n%TCIMACRO{\\QATOP{1d~\\text{system}}{\\text{with }XY\\text{ interaction.}}}%\n%BeginExpansion\n\\genfrac{}{}{0pt}{}{1d~\\text{system}}{\\text{with }XY\\text{ interaction}}%\n%EndExpansion\n%\\text{. }\n}{\\equiv}M(t)=\\left\\vert \\langle\\psi_{0}|e^{iH_{2}t\/\\hbar}\ne^{-iH_{1}t\/\\hbar}|\\psi_{0}\\rangle\\right\\vert ^{2}.\\label{LE-chain}$$ Notice that here $|\\psi_{0}\\rangle$ denotes a single-particle wave function, there are no operators other than the propagators, and the modulus square makes the positivity of $M_{\\mathrm{PE}}(t)$ explicit. These features were not obvious in Eq.\u00a0() and make it agree with the definition of the Loschmidt echo of Eq.\u00a0().\n\nMany other effective many-body Hamiltonians can be experimentally reversed to obtain the revival of different observables that reduce to a detectable polarization. Thus it is now a common practice to loosely call **Loschmidt Echo** the polarization resulting from any of these possible spin dynamics reversal procedures, regardless of the applicability of the single-particle correspondence, with an explicit mention of the observable used.\n\n### Time-reversal of the dipolar many-spin interaction\n\n**Refs.\u00a0**\n\nThe experiments have been carried out in different molecular crystals of the cyclopentadienyl (C$_{5}$H$_{5}$) family. There, $^{1}$H nuclei are arranged in fivefold rings with strong intra-ring dipolar interactions which nevertheless are not confined to the molecule but also yield appreciable intermolecular couplings. Polarization can be injected on those $^{1}$H which are close to a \"spy\" $^{13}$C nucleus that acts as a local probe used to inject and eventually detect local polarization as represented in Fig.\u00a0.\n\nSpreading of the initially localized spin excitation proceeds through a dipolar interaction which is truncated either in the basis of the external magnetic field (Laboratory frame) or in the rotating frame of a radio frequency field. This choice is crucial in order to achieve the change of sign and the eventual scaling down of the effective Hamiltonian. Thus a spin dynamics with $H_{1}$ in the Laboratory frame is reversed by a dynamics of $H_{2}$ in the rotating frame or vice versa.
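Coming back to the one-dimensional XY limit of Eq. (LE-chain), the echo reduces to a single-particle return probability that can be evaluated directly. The sketch below is a toy illustration in which the imperfect reversal $\Sigma$ is modelled as weak random bond disorder; the ring size `m`, the coupling `J`, and the disorder strength `delta` are arbitrary choices, and no attempt is made to model the actual experimental pulse sequences.

```python
import numpy as np
from scipy.linalg import expm

m = 20                      # number of sites in the ring (illustrative)
J = 1.0                     # nearest-neighbour XY coupling
delta = 0.1                 # strength of the imperfect reversal (Sigma)

rng = np.random.default_rng(1)

def ring_hopping(couplings):
    """Single-particle hopping matrix of an XY ring after Jordan-Wigner."""
    h = np.zeros((m, m))
    for i in range(m):
        j = (i + 1) % m
        h[i, j] = h[j, i] = couplings[i]
    return h

H1 = ring_hopping(np.full(m, J))
H2 = H1 + ring_hopping(delta * rng.normal(size=m))   # H2 = H1 + Sigma

psi0 = np.zeros(m, dtype=complex)
psi0[0] = 1.0                                        # excitation at site 0

for t in np.linspace(0.0, 30.0, 7):
    echo = np.abs(psi0.conj() @ expm(1j * H2 * t) @ expm(-1j * H1 * t) @ psi0) ** 2
    print(f"t = {t:5.1f}   M(t) = {echo:.3f}")
```

With `delta = 0`, the reversal is perfect and the same loop exhibits the Poincaré recurrences (Mesoscopic Echoes) mentioned above.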
The role of the environment might be played by paramagnetic Co(II) or quadrupolar Mn nuclei that replace the Fe and are thus absent in pure ferrocene crystals. Besides, in pure ferrocene truncation in the rotating frame gives rise to non-inverted non-secular terms which, being suppressed by the corresponding Zeeman energy, are at most of the order of a few percent of the matrix elements of $H_{2}$. This would yield a Hermitian $\\Sigma$. The strength of these terms is inversely proportional to $B_{1}$, the strength of the r.f. pulse, and thus one can experimentally reduce their importance by increasing the r.f. power.\n\nThe results showing the buildup of the Loschmidt echoes, under $H_{2}$ after different periods of evolution with $H_{1}$, are presented in Fig.\u00a0.\n\nThe universal features of the Loschmidt echo appear when one studies the maximum recovered polarization (Loschmidt Echo intensity) as a function of $t$, the evolution time before reversal. In Ferrocene crystals, once one subtracts the background noise it is clear that the Loschmidt echo follows a Gaussian law, $$M(t)=\\exp\\left[-\\frac{1}{2}\\left(\\frac{t}{T_{3}}\\right)^{2}\\right] \\,,\n\\label{Loschmidt-Gaussian}$$ for over two orders of magnitude, which serves as a definition of the characteristic time $T_3$. The same decay law is observed when, using a specially tailored pulse sequence, the Hamiltonians are scaled down by a factor $n=1,2,8$ and $16$ respectively, i.e. the characteristic decay time is increased by the factor $n$. The scaling of $1\/T_{3}$ with the Hamiltonian strength is also observed when $H_{1}$ and $H_{2}$ are scaled down by using different crystal orientations.\n\nIn contrast to Ferrocene, in Cobaltocene crystals one starts with a Gaussian decay and, as the dipolar Hamiltonian is scaled down, an exponential decay of the Loschmidt echo develops and becomes stable with a characteristic time $\\tau_{SE}$.\n\nThus, the overall behavior of the recovered local polarization is $$M(t)=\\exp\\left[-\\frac{1}{2}\\left( \\frac{t}{nT_{3}} \\right)^{2} -\n \\frac{t}{\\tau_{SE}} \\right] \\,.\n\\label{PolarizationEchoDecay}$$ One may assign this asymptotic exponential decay rate $1\/\\tau_{SE}$ to a Fermi-golden-rule decoherence rate induced by the paramagnetic nature of the Co(II) nuclei, which act as an environment. Further experiments by Pastawski et al. in crystals free of magnetic impurities confirmed the Gaussian decay with a rate $1\/T_{3}$ that depends only weakly on the r.f. power and, when the truncation terms become too small, saturates to an intrinsic value that scales with the strength of the reverted Hamiltonian.\n\nIn summary, these experiments hinted that there is an intrinsic decoherence rate which is fixed by the inverted Hamiltonian. Thus, other than the difference between the Gaussian and exponential decay, one might say that $1\/T_{3}$ plays a role similar to the Lyapunov exponent in the reversal of chaotic one-body systems. This surprising finding suggested that, even in the absence of any important perturbation, in the experimental limit of $m\\rightarrow\\infty$, even very small residual terms, in the presence of the dynamical instability of a very complex many-body dynamics, are efficient enough to set the Loschmidt Echo decay into a perturbation-independent decay regime.
This plays the role of an \"intrinsic decoherence\" with a time scale determined by the inverted Hamiltonian.\n\nIt was precisely the hypothesis of an \"intrinsic decoherence rate\" that triggered the theoretical analysis of time-reversal in chaotic systems. Indeed, while the dynamics of one-body systems in 1D, described by Eq.\u00a0(), cannot be chaotic, one deems that disorder and the more complex many-body dynamics present in higher-dimensional systems have mixing properties which make them assimilable to chaotic systems. This observation led G. Usaj et al. to propose that quantum chaos contains the underlying physics of a time-reversal experiment in a many-body system. It was also argued that $M_{\\mathrm{PE}}(t_{R})$ constitutes an entropy measure. These ideas finally boiled down into the model proposed by Jalabert and Pastawski. While Gaussian decays appear quite naturally as a consequence of the large-number statistics of many-spin states that are progressively incorporated into the dynamics, the perturbation-independent decay has not yet found a straightforward explanation in this context.\n\n### Further time-reversal Experiments\n\n**Refs.\u00a0**\n\nWhile time-reversal of different interactions is almost an unavoidable tool in experimental NMR techniques, there are not so many studies devoted to grasping the origins of the (in)efficiency of such procedures. In particular, time-reversal was implemented using different procedures and systems that range from 3$d$ and quasi-1$d$ crystals to molecules in oriented liquid crystalline phases. In these cases, different initial states and effective Hamiltonians were also studied.\n\nOf particular interest are the double quantum ($DQ$) Hamiltonians, $$H_{1}\\underset{%\n%TCIMACRO{\\QATOP{\\text{Loschmidt}}{\\text{Echo for }DQ}}%\n%BeginExpansion\n\\genfrac{}{}{0pt}{}{\\text{Loschmidt}}{\\text{Echo for }DQ}%\n%EndExpansion\n\\text{ }}{\\equiv}H_{DQ}=%\n%TCIMACRO{\\dsum \\limits_{i,j}}%\n%BeginExpansion\n{\\displaystyle\\sum\\limits_{i,j}}\n%EndExpansion\n\\widetilde{d}_{i,j}\\left[ S_{i}^{+}S_{j}^{+}+S_{i}^{-}S_{j}^{-}\\right] \\,.\n\\label{HDQ}$$ The notation $\\widetilde{d}$ for the interaction strength recalls that it is an effective Hamiltonian built up from the dipole-dipole interaction by suitable pulse sequences whose effect is described with the help of average Hamiltonian theory. Since $H_{DQ}$ does not commute with the polarization described by $\\sum_{i}S_{i}^{z}$, it produces more mixing, as the dynamics explores different subspaces of the Hilbert space, inducing multiple quantum coherences. Besides $H_{DQ}$, there are higher-order terms that act as an environment whose instantaneous decoherence could be described by a Fermi Golden Rule. One experimental observation is that the larger the number of subspaces correlated by the interaction (coherences of high order), the faster the decoherence.\n\nOne of the interesting properties of 1$d$ systems is that the dynamics induced by $H_{DQ}$ has a precise correspondence with an XY chain. Thus, the Loschmidt Echo is given by Eq.\u00a0(). In real systems, such as Hydroxyapatite (HAp), inter-chain interactions are significant; however, a quantum Zeno effect induced by the strong dynamics along the chain prevents the development of coherences beyond second order and Eq.\u00a0() remains valid. This explains why the Loschmidt echo observed by Rufeil-Fiori et al.
for the double quantum Hamiltonian shows an exponential decay described by the Fermi golden rule.\n\nThis decay contrasts with results for an Adamantane crystal, a highly connected 3$d$ system. There, a Fermi-function-like decay is indicative of two distinct time scales. Each molecule has 16 spins which do not have direct interactions, and thus remain independent for short times until they interact through neighbor molecules. In this case decoherence remains weak while the intermolecular correlation builds up. Once neighboring molecules become fully coupled, the density of directly connected states becomes very high and the Fermi-golden-rule controlled exponential decay takes over.\n\nSpin dynamics in various liquid crystals was studied in a series of papers whose aim was to reduce the number of interacting spins to those within each molecule. However, the experiments have shown that a number of residual interactions remain significant, and this strongly compromises the effectiveness of the Loschmidt echo sequences.\n\n## Microwave billiards\n\n**Refs.\u00a0**\n\nMicrowave-frequency electromagnetic waves in quasi-two-dimensional cavities are effectively governed by the Helmholtz equation, which is equivalent to the stationary Schr\u00f6dinger equation. This equivalence allows one to address the Loschmidt echo in laboratory experiments with microwave billiards.\n\nQuantities directly measured in microwave experiments are frequency-dependent scattering matrix elements, $S_{ab}(\\nu)$ and $S'_{ab}(\\nu)$, corresponding to the unperturbed and perturbed chaotic billiard systems, respectively. Here, the indices $a$ and $b$ refer to the antennae (or scattering channels) involved in the experiment. The change from the frequency domain to the time domain is achieved by the Fourier transforms, $\\hat{S}_{ab}(t) = \\int d\\nu \\, e^{-2\\pi i \\nu t}\nS_{ab}(\\nu)$ and $\\hat{S}'_{ab}(t) = \\int d\\nu \\, e^{-2\\pi i \\nu t}\nS'_{ab}(\\nu)$, performed over an appropriate frequency window. The sensitivity of a scattering process to system perturbations is then quantified by the *scattering fidelity amplitude*\n\n$$f_{ab}(t) = \\frac{\\langle \\hat{S}_{ab}^*(t) \\hat{S}'_{ab}(t)\n \\rangle}{\\sqrt{\\langle |\\hat{S}_{ab}(t)|^2 \\rangle \\langle\n |\\hat{S}'_{ab}(t)|^2 \\rangle}} \\,,\n\\label{eq:scat_fid_ampl}$$ where the asterisk stands for the complex conjugation, and the angular brackets denote an ensemble average over different realizations of the experiment (e.g., different positions of scatterers, antennae, etc.). The *scattering fidelity* itself is a real-valued quantity defined as $F_{ab}(t) = |f_{ab}(t)|^2$. The numerator on the right-hand side of Eq.\u00a0() quantifies correlations between scattering matrix elements of the unperturbed and perturbed system. The decay of the numerator, however, is dominated by the decay of the autocorrelations, so the denominator is introduced to compensate for the autocorrelation contributions to the correlation decay.\n\nIn chaotic systems and in the case of a weak coupling of the measuring antennae to the system, the scattering fidelity is known to approach the Loschmidt echo for a random initial state. This makes microwave billiards well suited for experimental studies in the field.
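In practice, the scattering fidelity of Eq. (eq:scat_fid_ampl) is estimated from measured spectra by a discrete Fourier transform over the chosen frequency window followed by an ensemble average. The sketch below shows this post-processing step only; the arrays `S` and `Sp` are synthetic stand-ins for the measured $S_{ab}(\nu)$ and $S'_{ab}(\nu)$ and are not meant to reproduce any experiment.

```python
import numpy as np

# Stand-ins for measured spectra: shape (n_realisations, n_frequencies).
rng = np.random.default_rng(2)
n_real, n_freq = 50, 1024
S = rng.normal(size=(n_real, n_freq)) + 1j * rng.normal(size=(n_real, n_freq))
Sp = S + 0.1 * (rng.normal(size=S.shape) + 1j * rng.normal(size=S.shape))

# Change from the frequency window to the time domain
St = np.fft.ifft(S, axis=1)
Spt = np.fft.ifft(Sp, axis=1)

# Scattering fidelity amplitude: ensemble-averaged cross-correlation,
# normalised by the autocorrelations of the unperturbed and perturbed signals.
num = np.mean(np.conj(St) * Spt, axis=0)
den = np.sqrt(np.mean(np.abs(St) ** 2, axis=0) * np.mean(np.abs(Spt) ** 2, axis=0))
f = num / den
F = np.abs(f) ** 2          # scattering fidelity F_ab(t)
```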
In particular, microwave studies provided compelling experimental evidence for the validity of some of the known decay regimes of the Loschmidt echo for global and local Hamiltonian perturbations.\n\n## Elastic waves\n\n**Refs.\u00a0**\n\nSound waves that travel through an elastic medium are multiply scattered by its inhomogeneities, generating a slowly decaying diffuse wave field. If no change occurs in the medium over time, the so-called coda waves are highly repeatable, that is, for identical excitations the waveforms are indistinguishable. If instead the medium is perturbed, the change in the multiply scattered waves results in an observable change in the coda waves. Lobkis and Weaver have measured the sensitivity to temperature changes of elastic coda waves in aluminum alloy blocks. They used the cross-correlation function between two signals obtained at different temperatures $T_1$ and $T_2$, $$X(\\varepsilon)= \\frac{\\int{\\rm d}t\\; S_{T_1}(t)\\; S_{T_2}(t(1+\\varepsilon))}\n {\\sqrt{\\int{\\rm d}t\\; S_{T_1}^2(t)\\; \\int{\\rm d}t\\; S_{T_2}^2(t(1+\\varepsilon))}} \\; ,\n\\label{Xeq}$$ to evaluate the distortion, defined as $D(t) = -\\ln (X_{\\rm\n max})$, where the time dependence is given by the \"age\" of the signal. If formulated as a scattering process, it can be shown that $D(t)\n= -\\ln[f(t)]$, where $f(t)$ is the scattering fidelity amplitude that was introduced to analyze the fidelity decay in microwave billiards. For sufficiently chaotic dynamics in systems weakly coupled to decay channels, the scattering fidelity approaches the standard fidelity amplitude (the Loschmidt echo is the absolute square of the fidelity amplitude).\n\nThe results obtained for acoustic signals traveling in aluminum blocks were understood using the random matrix predictions for the standard fidelity (see Fig.\u00a0). A surprising and unexpected finding is that the scattering fidelity decays for blocks with chaotic and with regular classical dynamics are explained by the same random matrix expressions.\n\n## Cold atoms\n\n**Refs.\u00a0**\n\nThe Loschmidt echo, or rather a quantity closely related to the Loschmidt echo, has been carefully studied in laboratory experiments with cold atoms trapped inside optical cavities. These experiments offer an atom-optics realization of the echo in two-dimensional quantum billiards with underlying chaotic or mixed classical dynamics. The basic idea underlying the experiments with atom-optics billiards is as follows. An effectively two-level atom, with the internal states denoted by $|1\\rangle$ and $|2\\rangle$, is initially prepared in a state $|1\\rangle \\otimes |\\psi\\rangle$, where $|\\psi\\rangle$ stands for the spatial component of the initial state. A laboratory observation of the echo involves first exposing the atom to a sequence of microwave-frequency electromagnetic pulses and then measuring the probability $P_2$ of finding the atom in the internal state $|2\\rangle$.\n\nIn the protocol targeting the Loschmidt echo, $\\big| \\langle \\psi |\ne^{i H_2 t \/ \\hbar} e^{-i H_1 t \/ \\hbar} | \\psi \\rangle \\big|^2$, an atom is first irradiated with a so-called $\\pi\/2$ pulse, which changes the atomic state into an equiprobable superposition of $|1\\rangle\n\\otimes |\\psi\\rangle$ and $|2\\rangle \\otimes |\\psi\\rangle$. The pulse is practically instantaneous and introduces almost no change to the spatial component of the state.
Then, the atom is left to evolve in an optical trap for a time $t$, during which components $|1\\rangle$ and $|2\\rangle$ of the atomic state propagate under Hamiltonians $H_1$ and $H_2$ respectively. The difference between $H_1$ and $H_2$ originates from a difference in dipole interaction potentials exerted upon the states $|1\\rangle$ and $|2\\rangle$ by the optical trap. Finally, another $\\pi\/2$ pulse is applied to the atom and the probability $P_2$ of finding the atom in the internal state $|2\\rangle$ is measured; $P_2$ turns out to be a quantity closely related to the Loschmidt echo.\n\nRealistic experiments however deal with thermal incoherent mixtures of initial states, rather than with pure states. As a result, the echo amplitude, due to an individual state of this thermal mixture, contributes to $P_2$ with an effectively random phase, smearing out the echo signal. In order to overcome this difficulty, Andersen, Davidson, Gr\u00fcnzweig, and Kaplan addressed a new echo measure, $\\big| \\langle \\psi | e^{i H_2 t \/ 2\\hbar} e^{i H_1 t \/ 2\\hbar} e^{-i\n H_2 t \/ 2\\hbar} e^{-i H_1 t \/ 2\\hbar} | \\psi \\rangle \\big|^2$, closely related to the Loschmidt echo. In their experiments, \"two-level\" atoms, initially prepared in the state $|1\\rangle\n\\otimes |\\psi\\rangle$, were successively exposed to (i) a $\\pi\/2$ pulse, (ii) evolution for a time $t\/2$, (iii) a $\\pi$ pulse swapping the population of the internal states $|1\\rangle$ and $|2\\rangle$, (iv) another evolution for a time $t\/2$, (v) a $\\pi\/2$ pulse, and finally (vi) a measurement of $P_2$. Under this protocol, each state of the thermal mixture contributed to the probability $P_2$ with the same phase. This allowed for a reliable measurement of the echo even for ensembles of more than a million of thermally populated states.\n\nMore recently, atom interferometry has also been used to investigate several aspects of time-reversal in quantum kicked rotor systems. Wu, Tonyushkin, and Prentiss studied the dynamics of laser-cooled rubidium atoms subjected to periodically pulsed optical standing waves, as an atom-optics realization of the kicked rotor. Their experiments have demonstrated that quantum fidelity of a system, that is chaotic in the classical limit, can survive strong perturbations over long time without decay. In a similar setup, Ullah and Hoogerland have performed an experiment that demonstrated the possibility of using time-reversal evolution for cooling atomic matter waves. The problem of fidelity decay in kicked atom-optics systems has recently drawn considerable interest among theoreticians.\n\n## Time-reversal mirrors\n\n**Refs.\u00a0**\n\nAnother time-reversal attempt conceptually close to the Loschmidt echo is that of time-reversal mirrors, developed in the last twenty years. Such a procedure has been successfully implemented in various setups where classical waves propagate through a complex media (going from acoustic to electromagnetic waves). In the time-reversal mirror protocol an initially localized pulse is recorded by a collection of receiver-emitter transducers during a time interval where the wave suffers multiple scattering. 
The subsequent re-emission of the time-reversed signal by each transducer, during an interval of time equal to the recording one, leads to the refocusing of the signal in the region of the original excitation.\n\nTwo features of this protocol have led to considerable surprise among the practitioners:\n\n- even one transducer is enough to obtain a good reproduction of the original signal\n\n- the quality of the refocusing is improved by the complexity of the medium yielding the multiple scattering of the waves.\n\nA semiclassical theory of time-reversal focusing can be built up in terms of propagators and classical trajectories. The two previous features can be accounted for by the semiclassical theory.\n\nTime-reversal mirrors and Loschmidt echoes differ since the former aims at the refocusing of a wave which is localized in space and time, while the latter attempts to time-reverse a quantum state. A common aspect between both protocols is the fact that the Hamiltonian for the forward and backward evolutions can differ due to modifications of the environment during the process.\n\nTime-reversal mirrors are not only conceptually important, but also have important technological applications, for example in brain therapy, lithotripsy, nondestructive testing, and telecommunications. Recently, a sensor of perturbations combining the ideas of the Loschmidt echo and the time-reversal mirror was proposed and demonstrated with classical waves.\n\n# Past, present, and future of the Loschmidt echo studies\n\nSince the early discussions between Boltzmann and Loschmidt on irreversibility and time-reversal, it has been clearly established that chaos is the source of the irreversibility in statistical mechanics. However, when considering the sources of irreversibility in a quantum system with a few degrees of freedom, Quantum Mechanics, which is the fundamental theory of the microscopic world, does not allow for chaotic behavior in the sense in which the latter appears in Classical Mechanics. That is, there is no exponential separation between states with nearby initial conditions since quantum evolution is unitary. Trying to understand the origin of irreversibility in quantum mechanics, Asher Peres proposed in 1984, as an alternative, to study the stability of quantum motion with respect to perturbations of the Hamiltonian. In his seminal paper, Peres considered $M(t)$, later called the Loschmidt echo, as a measure of the sensitivity and reversibility of quantum evolutions. He aimed to distinguish classically chaotic or integrable dynamics according to the speed at which the fidelity decays. Peres reached the conclusion that the long-time behavior of the fidelity (or saturation, following the terminology of Sec.\u00a0) in classically chaotic systems is characterized by smaller values and smaller fluctuations as compared to the case of regular dynamics. In view of the numerous different behaviors that are allowed in regular systems, such a conclusion does not always hold.\n\nIt is interesting to notice that Peres' article appeared almost at the same time as two other seminal works studying the relevance of the underlying classical dynamics for quantum properties. These are the random-matrix description of the statistical properties of classically chaotic systems by Oriol Bohigas and collaborators; and the understanding of the spectral rigidity from a semiclassical analysis of periodic orbits proposed by Michael Berry.
The field of Quantum Chaos developed building upon these founding ideas, and most of the subsequent works were concerned with the spectral properties of classically chaotic systems. Comparatively little work was done along the lines of Peres' proposal in the decade following it.\n\nMotivated by the puzzles posed by echo experiments in extended dipolar-coupled nuclear spin systems, Rodolfo Jalabert and Horacio Pastawski studied in 2001 the behavior of the Loschmidt echo for classically chaotic systems. They found that, depending on the perturbation strength, the decay of the Loschmidt echo exhibits mainly two different behaviors. For weak perturbations, the decay is exponential, with a rate that depends on the perturbation strength and is given by the width of the local density of states (usually called the Fermi-golden-rule regime). For stronger perturbations, however, there is a crossover to a perturbation-independent regime, characterized by an exponential decay with a rate equal to the average Lyapunov exponent of the underlying classical system.\n\nThe connection of the quantum Loschmidt echo with classical chaos generated great activity in the field. Researchers from different fields, such as quantum chaos, solid state physics, acoustics, and cold atom physics, have made important contributions towards understanding various aspects of the Loschmidt echo. During the first years of the last decade the studies were mainly focused on one-body aspects of the problem and on the connection between the Loschmidt echo and decoherence; many interesting experiments were also performed.\n\nIn recent years, the interest has primarily been focused on various many-body aspects of the Loschmidt echo. For example, there are studies that consider the decay of the Loschmidt echo as a signature of a quantum phase transition, or that concentrate on the relation between the Loschmidt echo and the statistics of the work done by a quantum critical system when a control parameter is quenched. In view of recent improvements in experimental many-body system techniques, the issue of the Loschmidt echo decay has become more concrete.\n\nSome of the more important open questions in the field are:\n\n- The Lyapunov (perturbation-independent) decay regime has been one of the most influential breakthroughs in the theory of the Loschmidt echo. However, an experimental observation of the regime in simple, well-controlled systems is still lacking.\n\n- The first experimental measurement of the Loschmidt echo was done in a many-body spin system using NMR techniques. However, theoretically little is known about the behavior of the Loschmidt echo in many-body systems. Very interesting experiments are now being carried out using NMR.\n\n- The semiclassical theory of the Loschmidt echo has proved to be a powerful tool to understand many aspects of its behavior. But this approach has some well-recognized difficulties, such as the root search problem or the exponentially growing number of classical orbits needed for the semiclassical expansion. Recently, a simple semiclassical dephasing representation of the Loschmidt echo was proposed that does not suffer from the usual problems of the semiclassical theories. Its range of validity is however unknown.\n\n- There is a vast amount of work characterizing the decay regimes of the Loschmidt echo as universal. For chaotic systems perturbed with global or local perturbations, the behavior of the decay rate of the Loschmidt echo is well established (see Sec.\u00a0).
However, recent works on quantum maps that are perturbed in all of phase space have reported non-universal oscillatory behavior in the decay rate of the Loschmidt echo as a function of the perturbation strength. Deviations from the perturbation independence are usually found in the form of oscillations around the Lyapunov exponent. Moreover, there are cases where the deviations are considerably large, rendering the Lyapunov regime non-existent. The full understanding of this non-universal behavior, which was also observed in a Josephson flux qubit, is still lacking.\n\n- The semiclassical theory of the Lyapunov decay of the Loschmidt echo is based on highly localized initial states and the diagonal approximation of Eq. (3) for $M(t)$. But recent results using the semiclassical dephasing representation have shown a Lyapunov regime in the mean value of the fidelity amplitude $\\langle m(t) \\rangle$.\n\n- Systems with regular or fully chaotic dynamics are rather exceptional in nature. A generic system has a mixed phase space consisting of integrable islands immersed in a chaotic sea. Few general results are known about the Loschmidt echo in this scenario.\n\n# Recommended reading\n\n- R.\u00a0A.\u00a0Jalabert and H.\u00a0M.\u00a0Pastawski. \"The semiclassical tool in complex physical systems: Mesoscopics and decoherence\". *Advances in Solid State Physics*, Vol. 41, p. 483, 2001.\n\n- H.\u00a0M.\u00a0Pastawski, G.\u00a0Usaj, and P.\u00a0R.\u00a0Levstein. \"Quantum Chaos: an answer to the Boltzmann-Loschmidt controversy? An experimental approach\". *Contemporary Problems of Condensed Matter Physics*, pp. 223\u2013258, S.\u00a0J.\u00a0Vlaev, L.\u00a0M.\u00a0Gaggero\u00a0Sager, and V.\u00a0V.\u00a0Dvoeglazov, Eds., NOVA Scientific Publishers, New York, 2001. URL: \n\n- T.\u00a0Gorin, T.\u00a0Prosen, T.\u00a0H.\u00a0Seligman, and M.\u00a0Znidaric. \"Dynamics of Loschmidt echoes and fidelity decay\". *Physics Reports*, Vol. 435, pp. 33\u2013156, 2006.\n\n- Ph.\u00a0Jacquod and C.\u00a0Petitjean. \"Decoherence, entanglement and irreversibility in quantum dynamical systems with few degrees of freedom\". *Advances in Physics*, Vol. 58, pp. 67\u2013196, 2009.\n\n- A.\u00a0Peres. \"Quantum Theory: Concepts and Methods (Fundamental Theories of Physics)\". Springer, 1995.\n\n- A.\u00a0M.\u00a0Ozorio\u00a0de\u00a0Almeida. \"Hamiltonian Systems, Chaos and Quantization\". Cambridge University Press, Cambridge, 1988.\n\n- F.\u00a0Haake. \"Quantum Signatures of Chaos\". Springer-Verlag, Berlin-Heidelberg, 2001.\n\n- H.-J.\u00a0St\u00f6ckmann. \"Quantum Chaos: An Introduction\". 
Cambridge University Press, 1999.\n\n# Internal references\n\n- Y.\u00a0Fyodorov; *Random Matrix Theory*, Scholarpedia 6(3):9886(2011).\n\n- M.\u00a0Gutzwiller; *Quantum chaos*, Scholarpedia 2(12):3146(2007).\n\n- M.\u00a0Raizen and D.\u00a0A.\u00a0Steck; *Cold atom experiments in quantum chaos*, Scholarpedia 6(11):10468(2011).\n\n- H.-J.\u00a0St\u00f6ckmann; *Microwave billiards and quantum chaos*, Scholarpedia 5(10):10243(2010).\n\n# External links\n\n- The Loschmidt Echo homepage","meta":{"dup_signals":{"dup_doc_count":46,"dup_dump_count":36,"dup_details":{"curated_sources":2,"2023-40":1,"2023-23":1,"2023-14":1,"2022-49":2,"2022-27":1,"2022-21":1,"2022-05":1,"2021-43":1,"2021-17":1,"2021-04":2,"2020-40":1,"2020-34":1,"2020-24":1,"2020-05":2,"2019-47":2,"2019-39":2,"2019-30":1,"2019-22":2,"2019-13":3,"2019-04":1,"2018-51":1,"2018-47":1,"2018-39":2,"2018-30":1,"2018-26":1,"2018-17":1,"2018-09":1,"2018-05":1,"2017-47":1,"2023-50":1,"2024-18":1,"2024-10":1,"2013-48":1,"2013-20":1,"2024-26":1}},"filename":"out\/1206.6348_extract_le_review-v17-arXiv.tex.md"},"subset":"arxiv"} +{"text":"abstract: The creation of social ties is largely determined by the entangled effects of people's similarities in terms of individual characters and friends. However, feature and structural characters of people usually appear to be correlated, making it difficult to determine which has greater responsibility in the formation of the emergent network structure. We propose *AN2VEC*, a node embedding method which ultimately aims at disentangling the information shared by the structure of a network and the features of its nodes. Building on the recent developments of Graph Convolutional Networks (GCN), we develop a multitask GCN Variational Autoencoder where different dimensions of the generated embeddings can be dedicated to encoding feature information, network structure, and shared feature-network information. We explore the interaction between these disentangled characters by comparing the embedding reconstruction performance to a baseline case where no shared information is extracted. We use synthetic datasets with different levels of interdependency between feature and network characters and show (i) that shallow embeddings relying on shared information perform better than the corresponding reference with unshared information, (ii) that this performance gap increases with the correlation between network and feature structure, and (iii) that our embedding is able to capture joint information of structure and features. Our method can be relevant for the analysis and prediction of any featured network structure ranging from online social systems to network medicine.\nauthor: S\u00e9bastien Lerique[^1]; Jacob Levy Abitbol; M\u00e1rton Karsai\nbibliography: an2vec.bib\ntitle: Joint embedding of structure and features via graph convolutional networks\n\n# Introduction\n\nAlthough it is relatively easy to obtain the proxy social network and various individual features for users of online social platforms, the combined characterisation of these types of information is still challenging our methodology. While current approaches have been able to approximate the observed marginal distributions of node and network features separately, their combined consideration was usually done via summary network statistics merged with otherwise independently built feature sets of nodes. 
However, the entanglement between structural patterns and feature similarities appears to be fundamental to a deeper understanding of network formation and dynamics. The value of this joint information then calls for the development of statistical tools for the learning of combined representation of network and feature information and their dependencies.\n\nThe formation of ties and mesoscopic structures in online social networks is arguably determined by several competing factors. Considering only network information, neighbour similarities between people are thought to explain network communities, where triadic closure mechanisms\u00a0 induce ties between peers with larger fractions of common friends\u00a0. Meanwhile, random bridges\u00a0 are built via focal closure mechanisms, optimising the structure for global connectedness and information dissemination. At the same time, people in the network can be characterised by various individual features such as their socio-demographic background\u00a0, linguistic characters\u00a0, or the distributions of their topics of interests\u00a0, to mention just a few. Such features generate homophilic tie creation preferences\u00a0, which induce links with higher probability between similar individuals, whom in turn form feature communities of shared interest, age, gender, or socio-economic status, and so on\u00a0. Though these mechanisms are not independent and lead to correlations between feature and network communities, it is difficult to define the causal relationship between the two: first, because simultaneously characterising similarities between multiple features and a complex network structure is not an easy task; second, because it is difficult to determine, which of the two types of information, features or structure, is driving network formation to a greater extent. Indeed, we do not know what fraction of similar people initially get connected through homophilic tie creation, versus the fraction that first get connected due to structural similarities before influencing each other to become more similar\u00a0.\n\nOver the last decade popular methods have been developed to characterise structural and feature similarities and to identify these two notions of communities. The detection of network communities has been a major challenge in network science with various concepts proposed\u00a0 to solve it as an unsupervised learning task\u00a0. Commonly, these algorithms rely solely on network information, and their output is difficult to cross-examine without additional meta-data, which is usually disregarded in their description. On the other hand, methods grouping similar people into feature communities typically ignore network information, and exclusively rely on individual features to solve the problem as a data clustering challenge\u00a0. Some semi-supervised learning tasks, such as link prediction, may take feature and structural information simultaneously into account, but only by enriching individual feature vectors with node-level network characteristics such as degree or local clustering\u00a0. Methods that would take higher order network correlations and multivariate feature information into account at the same time are still to be defined. Their development would offer huge potential in understanding the relation between individuals' characters, their social relationships, the content they are engaged with, and the larger communities they belong to. 
This would not only provide us with deeper insight about social behaviour, it would give us predictive tools for the emergence of network structure, individual interests and behavioural patterns.\n\nIn this paper we propose a contribution to solve this problem by developing a joint feature-network embedding built on multitask Graph Convolutional Networks\u00a0 and Variational Autoencoders (GCN-VAE)\u00a0, which we call the Attributed Node to Vector method (AN2VEC). In our model, different dimensions of the generated embeddings can be dedicated to encode feature information, network structure, or shared feature-network information separately. Unlike previous embedding methods dealing with features\u00a0, this interaction model\u00a0 allows us to explore the dependencies between the disentangled network and feature information by comparing the embedding reconstruction performance to a baseline case where no shared information is extracted. Using this method, we can identify an optimal reduced embedding, which indicates whether combined information coming from the structure and features is important, or whether their non-interacting combination is sufficient for reconstructing the featured network.\n\nIn practice, as this method solves a reconstruction problem, it may give important insights about the combination of feature- and structure-driven mechanisms which determine the formation of a given network. As an embedding, it is useful to identify people sharing similar individual and structural characters. And finally, by measuring the optimal overlap between feature\u2013 and network-associated dimensions, it can be used to verify network community detection methods to see how well they identify communities explained by feature similarities.\n\nIn what follows, after summarising the relevant literature, we introduce our method and demonstrate its performance on synthetic featured networks, for which we control the structural and feature communities as well as the correlations between the two. As a result, we will show that our embeddings, when relying on shared information, outperform the corresponding reference without shared information, and that this performance gap increases with the correlation between network and feature structure since the method can capture the increased joint information. Next, we extensively explore the behaviour of our model on link prediction and node classification on standard benchmark datasets, comparing it to well-known embedding methods. Finally, we close our paper with a short summary and a discussion about potential future directions for our method.\n\n# Related Work\n\nThe advent of increasing computational power coupled with the continuous release and ubiquity of large graph-structured datasets has triggered a surge of research in the field of network embeddings. The main motivation behind this trend is to be able to convert a graph into a low-dimensional space where its structural information and properties are maximally preserved\u00a0. The aim is to extract unseen or hard to obtain properties of the network, either directly or by feeding the learned representations to a downstream inference pipeline.\n\n## Graph embedding survey: from matrix factorisation to deep learning\n\nIn early work, low-dimensional node embeddings were learned for graphs constructed from non-relational data by relying on matrix factorisation techniques. 
By assuming that the input data lies on a low dimensional manifold, such methods sought to reduce the dimensionality of the data while preserving its structure, and did so by factorising graph Laplacian eigenmaps\u00a0 or node proximity matrices\u00a0.\n\nMore recent work has attempted to develop embedding architectures that can use deep learning techniques to compute node representations. DeepWalk\u00a0, for instance, computes node co-occurrence statistics by sampling the input graph via truncated random walks, and adopts a SkipGram neural language model to maximise the probability of observing the neighbourhood of a node given its embedding. By doing so the learned embedding space preserves second order proximity in the original graph. However, this technique and the ones that followed\u00a0 present generalisation caveats, as unobserved nodes during training cannot be meaningfully embedded in the representation space, and the embedding space itself cannot be generalised between graphs. Instead of relying on random walk-based sampling of graphs to feed deep learning architectures, other approaches have used the whole network as input to autoencoders in order to learn, at the bottleneck layer, an efficient representation able to recover proximity information\u00a0. However, the techniques developed herein remained limited due to the fact that successful deep learning models such as convolutional neural networks require an underlying euclidean structure in order to be applicable.\n\n## Geometric deep learning survey: defining convolutional layers on non-euclidean domains\n\nThis restriction has been gradually overcome by the development of graph convolutions or Graph Convolutional Networks (GCN). By relying on the definition of convolutions in the spectral domain, Bruna et al.\u00a0 defined spectral convolution layers based on the spectrum of the graph Laplacian. Several modifications and additions followed and were progressively added to ensure the feasibility of learning on large networks, as well as the spatial localisation of the learned filters\u00a0. A key step is made by\u00a0 with the use of Chebychev polynomials of the Laplacian, in order to avoid having to work in the spectral domain. These polynomials, of order up to $r$, generate localised filters that behave as a diffusion operator limited to $r$ hops around each vertex. This construction is then further simplified by Kipf and Welling by assuming among others that $r\\approx2$\u00a0.\n\nRecently, these approaches have been extended into more flexible and scalable frameworks. For instance, Hamilton et al.\u00a0 extended the original GCN framework by enabling the inductive embedding of individual nodes, training a set of functions that learn to aggregate feature information from a node's local neighborhood. In doing so, every node defines a computational graph whose parameters are shared for all the graphs nodes.\n\nMore broadly, the combination of GCN with autoencoder architectures has proved fertile for creating new embedding methods. The introduction of probabilistic node embeddings, for instance, has appeared naturally from the application of variational autoencoders to graph data\u00a0, and has since led to explorations of the uncertainty of embeddings\u00a0, of appropriate levels of disentanglement and overlap\u00a0, and of better representation spaces for measuring pairwise embedding distances (see in particular recent applications of the Wasserstein distance between probabilistic embeddings ). 
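As a concrete illustration of the last point, the snippet below evaluates the closed-form 2-Wasserstein distance between two Gaussian node embeddings with diagonal covariances, the kind of pairwise distance used in the probabilistic-embedding works mentioned above; the embedding vectors are arbitrary examples, and the simple formula only holds in the diagonal-covariance case.

```python
import numpy as np

def wasserstein2_diag(mu1, sigma1, mu2, sigma2):
    """Squared 2-Wasserstein distance between N(mu1, diag(sigma1^2))
    and N(mu2, diag(sigma2^2))."""
    return np.sum((mu1 - mu2) ** 2) + np.sum((sigma1 - sigma2) ** 2)

# Two hypothetical F-dimensional node embeddings
mu_i, sigma_i = np.array([0.1, -0.3, 0.8]), np.array([0.5, 0.2, 0.4])
mu_j, sigma_j = np.array([0.0, -0.1, 1.0]), np.array([0.6, 0.2, 0.3])
print(wasserstein2_diag(mu_i, sigma_i, mu_j, sigma_j))
```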
Such models consistently outperform earlier techniques on different benchmarks and have opened several interesting lines of research in fields ranging from drug design\u00a0 to particle physics\u00a0. Most of the more recent approaches mentioned above can incorporate node features (either because they rely on them centrally, or as an add-on). However, with the exception of DANE\u00a0, they mostly do so by assuming that node features are an additional source of information, which is congruent with the network structure (e.g. multi-task learning with shared weights\u00a0, or fusing both information types together\u00a0). That assumption may not hold in many complex datasets, and it seems important to explore what type of embeddings can be constructed when we lift it, considering different levels of congruence between a network and the features of its nodes.\n\nWe therefore set out to make a change to the initial GCN-VAE in order to: (i) create embeddings that are explicitly trained to encode both node features and network structure; (ii) make it so that these embeddings can separate the information that is shared between network and features, from the (possibly non-congruent) information that is specific to either network or features; and (iii) be able to tune the importance that is given to each type of information in the embeddings.\n\n# Methods\n\nIn this section we present the architecture of the neural network model we use to generate shared feature-structure node embeddings[^2]. We take a featured network as input, with structure represented as an adjacency matrix and node features represented as vectors (see below for a formal definition). Our starting point is a GCN-VAE, and our first goal is a multitask reconstruction of both node features and network adjacency matrix. Then, as a second goal, we tune the architecture to be able to scale the number of embedding dimensions dedicated to feature-only reconstruction, adjacency-only reconstruction, or shared feature-adjacency information, while keeping the number of trainable parameters in the model constant.\n\n## Multitask graph convolutional autoencoder\n\nWe begin with the graph-convolutional variational autoencoder developed by , which stacks graph-convolutional (GC) layers in the encoder part of a variational autoencoder to obtain a lower dimensional embedding of the input structure. This embedding is then used for the reconstruction of the original graph (and in our case, also of the features) in the decoding part of the model. Similarly to\u00a0, we use two GC layers in our encoder and generate Gaussian-distributed node embeddings at the bottleneck layer of the autoencoder. We now introduce each phase of our embedding method in formal terms.\n\n### Encoder\n\nWe are given an undirected unweighted feautured graph $\\mathcal{G} = (\\mathcal{V}, \\mathcal{E})$, with $N = |\\mathcal{V}|$ nodes, each node having a $D$-dimensional feature vector. Loosely following the notations of , we note $\\mathbf{A}$ the graph's $N \\times N$ adjacency matrix (diagonal elements set to $0$), $\\mathbf{X}$ the $N \\times D$ matrix of node features, and $\\mathbf{X}_i$ the D-dimensional feature vector of a node $i$.\n\nThe encoder part of our model is where $F$-dimensional node embeddings are generated. 
It computes $\\bm{\\mu}$ and $\\bm{\\sigma}$, two $N \\times F$ matrices, which parametrise a stochastic embedding of each node: $$\\bm{\\mu} = \\mathop{\\mathrm{GCN}}_{\\bm{\\mu}}(\\mathbf{X}, \\mathbf{A})\n\\quad \\text{and} \\quad\n\\log\\bm{\\sigma} = \\mathop{\\mathrm{GCN}}_{\\bm{\\sigma}}(\\mathbf{X}, \\mathbf{A}).$$ Here we use two graph-convolutional layers for each parameter set, with shared weights at the first layer and parameter-specific weights at the second layer: $$\\mathop{\\mathrm{GCN}}_{\\alpha} (\\mathbf{X}, \\mathbf{A}) = \\hat{\\mathbf{A}} \\mathop{\\mathrm{ReLU}}( \\hat{\\mathbf{A}} \\mathbf{X} \\mathbf{W}^{enc}_0 ) \\mathbf{W}^{enc}_{1, \\alpha}$$ In this equation, $W^{enc}_0$ and $W^{enc}_{1,\\alpha}$ are the weight matrices for the linear transformations of each layer's input; $\\mathop{\\mathrm{ReLU}}$ refers to a rectified linear unit\u00a0; and following the formalism introduced in\u00a0, $\\hat{\\mathbf{A}}$ is the standard normalised adjacency matrix with added self-connections, defined as: $$\\begin{aligned}\n\\hat{\\mathbf{A}} &{} = \\tilde{\\mathbf{D}}^{-\\frac{1}{2}} \\tilde{\\mathbf{A}} \\tilde{\\mathbf{D}}^{-\\frac{1}{2}} \\\\\n\\tilde{\\mathbf{A}} &{} = \\mathbf{A} + \\mathbf{I}_N \\\\\n\\tilde{D}_{ii} &{} = \\sum_j \\tilde{A}_{ij}\n\\end{aligned}$$ where $\\mathbf{I}_N$ is the $N \\times N$ identity matrix.\n\n### Embedding\n\nThe parameters $\\bm{\\mu}$ and $\\bm{\\sigma}$ produced by the encoder define the distribution of an $F$-dimensional stochastic embedding $\\bm{\\xi}_i$ for each node $i$, defined as: $$\\bm{\\xi}_i | \\mathbf{A}, \\mathbf{X} \\sim \\mathcal{N}(\\bm{\\mu}_i, \\mathop{\\mathrm{diag}}(\\bm{\\sigma}^2_i)).$$ Thus, for all the nodes we can write a probability density function over a given set of embeddings $\\bm{\\xi}$, in the form of an $N \\times F$ matrix: $$q(\\bm{\\xi} | \\mathbf{X}, \\mathbf{A}) = \\prod_{i=1}^N q(\\bm{\\xi}_i | \\mathbf{A}, \\mathbf{X}).$$\n\n### Decoder\n\nThe decoder part of our model aims to reconstruct both the input node features and the input adjacency matrix by producing parameters of a generative model for each of the inputs. On one hand, the adjacency matrix $\\mathbf{A}$ is modelled as a set of independent Bernoulli random variables, whose parameters come from a bi-linear form applied to the output of a single dense layer: $$\\begin{aligned}\nA_{ij} | \\bm{\\xi}_i, \\bm{\\xi}_j &{} \\sim \\mathop{\\mathrm{Ber}}(\\mathop{\\mathrm{MLB}}(\\bm{\\xi})_{ij}) \\\\\n\\mathop{\\mathrm{MLB}}(\\bm{\\xi}) &{} = \\mathop{\\mathrm{sigmoid}}(\\bm{\\gamma}^T \\mathbf{W}^{dec}_{\\mathbf{A}, 1} \\bm{\\gamma}) \\\\\n\\bm{\\gamma} &{} = \\mathop{\\mathrm{ReLU}}(\\bm{\\xi} \\mathbf{W}^{dec}_{\\mathbf{A}, 0}).`\n\\end{aligned}$$ Similarly to above, $W^{dec}_{\\mathbf{A},0}$ is the weight matrix for the first adjacency matrix decoder layer, and $W^{dec}_{\\mathbf{A},1}$ is the weight matrix for the bilinear form which follows.\n\nOn the other hand, features can be modelled in a variety of ways, depending on whether they are binary or continuous, and if their norm is constrained or not. Features in our experiments are one-hot encodings, so we model the reconstruction of the feature matrix $\\mathbf{X}$ by using $N$ single-draw $D$-categories multinomial random variables. 
The parameters of those multinomial variables are computed from the embeddings with a two-layer perceptron:[^3] $$\\begin{aligned}\n\\mathbf{X}_i | \\bm{\\xi}_i &{} \\sim \\mathop{\\mathrm{Multinomial}}(1, \\mathop{\\mathrm{MLP}}(\\bm{\\xi})_i) \\\\\n\\mathop{\\mathrm{MLP}}(\\bm{\\xi}) &{} = \\mathop{\\mathrm{softmax}}(\\mathop{\\mathrm{ReLU}}(\\bm{\\xi} \\mathbf{W}^{dec}_{\\mathbf{X}, 0}) \\mathbf{W}^{dec}_{\\mathbf{X}, 1})\n\\end{aligned}$$ In the above equations, $\\mathop{\\mathrm{sigmoid}}(z) = \\frac{1}{1 + e^{-z}}$ refers to the logistic function applied element-wise on vectors or matrices, and $\\mathop{\\mathrm{softmax}}(\\mathbf{z})_i = \\frac{e^{z_i}}{\\sum_j e^{z_j}}$ refers to the normalised exponential function, also applied element-wise, with $j$ running along the rows of matrices (and along the indices of vectors).\n\nThus we can write the probability density for a given reconstruction as: $$\\begin{aligned}\np(\\mathbf{X}, \\mathbf{A} | \\bm{\\xi}) &{} = p(\\mathbf{A} | \\bm{\\xi}) p(\\mathbf{X} | \\bm{\\xi}) \\\\\np(\\mathbf{A} | \\bm{\\xi}) &{} = \\prod_{i, j = 1}^N \\mathop{\\mathrm{MLB}}(\\bm{\\xi})_{ij}^{A_{ij}} (1 - \\mathop{\\mathrm{MLB}}(\\bm{\\xi})_{ij})^{1 - A_{ij}} \\\\\np(\\mathbf{X} | \\bm{\\xi}) &{} = \\prod_{i=1}^N \\prod_{j=1}^D \\mathop{\\mathrm{MLP}}(\\bm{\\xi})_{ij}^{X_{ij}}\n\\end{aligned}$$\n\n### Learning\n\nThe variational autoencoder is trained by minimising an upper bound to the marginal likelihood-based loss\u00a0 defined as: $$\\begin{aligned}\n- \\log p(\\mathbf{A}, \\mathbf{X}) &{} \\leq \\mathcal{L}(\\mathbf{A}, \\mathbf{X}) \\\\\n&{} = D_{KL}(q(\\bm{\\xi} | \\mathbf{A}, \\mathbf{X}) || \\mathcal{N}(0, \\mathbf{I}_F)) \\\\\n&{} \\quad - \\mathds{E}_{q(\\bm{\\xi} | \\mathbf{A}, \\mathbf{X})}[\\log (p(\\mathbf{A}, \\mathbf{X} | \\bm{\\xi}, \\bm{\\theta}) p(\\bm{\\theta})) ] \\\\\n&{} = \\mathcal{L}_{KL} + \\mathcal{L}_{\\mathbf{A}} + \\mathcal{L}_{\\mathbf{X}} + \\mathcal{L}_{\\bm{\\theta}}\n\\end{aligned}$$ Here $\\mathcal{L}_{KL}$ is the Kullback-Leibler divergence between the distribution of the embeddings and a Gaussian Prior, and $\\bm{\\theta}$ is the vector of decoder parameters whose associated loss $\\mathcal{L}_{\\bm{\\theta}}$ acts as a regulariser for the decoder layers.[^4] Computing the adjacency and feature reconstruction losses by using their exact formulas is computationally not tractable, and the standard practice is instead to estimate those losses by using an empirical mean. We generate $K$ samples of the embeddings by using the distribution $q(\\bm{\\xi} | \\mathbf{A}, \\mathbf{X})$ given by the encoder, and average the losses of each of those samples[^5] : $$\\begin{aligned}\n\\mathcal{L}_{\\mathbf{A}} &{} = - \\mathds{E}_{q(\\bm{\\xi} | \\mathbf{A}, \\mathbf{X})}[\\log p(\\mathbf{A} | \\bm{\\xi}, \\bm{\\theta}) ] \\\\\n&{} \\simeq - \\frac{1}{K} \\sum_{k=1}^K \\sum_{i, j = 1}^N \\left[ A_{ij} \\log( \\mathop{\\mathrm{MLB}}(\\bm{\\xi}^{(k)})_{ij}) \\right. \\\\\n&{} \\qquad \\qquad + \\left. 
(1 - A_{ij}) \\log(1 - \\mathop{\\mathrm{MLB}}(\\bm{\\xi}^{(k)})_{ij}) \\right] \\\\\n\\mathcal{L}_{\\mathbf{X}} &{} = - \\mathds{E}_{q(\\bm{\\xi} | \\mathbf{A}, \\mathbf{X})}[\\log p(\\mathbf{X} | \\bm{\\xi}, \\bm{\\theta}) ] \\\\\n&{} \\simeq - \\frac{1}{K} \\sum_{k=1}^K \\sum_{i=1}^N \\sum_{j=1}^D X_{ij} \\log(\\mathop{\\mathrm{MLP}}(\\bm{\\xi^{(k)}})_{ij})\n\\end{aligned}$$\n\nFinally, for diagonal Gaussian embeddings such as the ones we use, $\\mathcal{L}_{KL}$ can be expressed directly : $$\\mathcal{L}_{KL} = \\frac{1}{2} \\sum_{i=1}^N \\sum_{j=1}^F \\mu_{ij}^2 + \\sigma_{ij}^2 - 2 \\log \\sigma_{ij} - 1$$\n\n### Loss adjustments\n\nIn practice, to obtain useful results a few adjustments are necessary to this loss function. First, given the high sparsity of real-world graphs, the $A_{ij}$ and $1 - A_{ij}$ terms in the adjacency loss must be scaled respectively up and down in order to avoid globally near-zero link reconstruction probabilities. Instead of penalising reconstruction proportionally to the overall number of errors in edge prediction, we want false negatives ($A_{ij}$ terms) and false positives ($1 - A_{ij}$ terms) to contribute equally to the reconstruction loss, independent of graph sparsity. Formally, let $d = \\frac{\\sum_{ij} A_{ij}}{N^2}$ denote the density of the graph's adjacency matrix ($d = \\frac{N-1}{N} \\times \\mathop{\\mathrm{density}}(\\mathcal{G})$); then we replace $\\mathcal{L}_{\\mathbf{A}}$ by the following re-scaled estimated loss (the so-called \"balanced cross-entropy\"): $$\\begin{aligned}\n\\tilde{\\mathcal{L}}_{\\mathbf{A}} &{} = - \\frac{1}{K} \\sum_{k=1}^K \\sum_{i, j = 1}^N \\frac{1}{2} \\left[ \\frac{A_{ij}}{d} \\log( \\mathop{\\mathrm{MLB}}(\\bm{\\xi}^{(k)})_{ij}) \\right. \\\\\n&{} \\qquad + \\left. \\frac{1 - A_{ij}}{1 - d} \\log(1 - \\mathop{\\mathrm{MLB}}(\\bm{\\xi}^{(k)})_{ij}) \\right]\n\\end{aligned}$$\n\nSecond, we correct each component loss for its change of scale when the shapes of the inputs and the model parameters change: $\\mathcal{L}_{KL}$ is linear in $N$ and $F$, $\\tilde{\\mathcal{L}}_{\\mathbf{A}}$ is quadratic in $N$, and $\\mathcal{L}_{\\mathbf{X}}$ is linear in $N$ (but not in $F$, remember that $\\sum_{j} X_{ij} = 1$ since each $\\mathbf{X}_i$ is a single-draw multinomial).\n\nBeyond dimension scaling, we also wish to keep the values of $\\tilde{\\mathcal{L}}_{\\mathbf{A}}$ and $\\mathcal{L}_{\\mathbf{X}}$ comparable and, doing so, maintain a certain balance between the difficulty of each task. 
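For concreteness, the loss terms defined above can be evaluated with a minimal NumPy sketch such as the one below (ours, for illustration only; the reference implementation of the model is in Julia/Flux). It computes the balanced adjacency cross-entropy, the multinomial feature loss and the closed-form KL term for a single embedding sample, taking the decoder outputs $\mathop{\mathrm{MLB}}(\bm{\xi})$ and $\mathop{\mathrm{MLP}}(\bm{\xi})$ as given probability matrices; function and variable names are our own, and the averaging over $K$ samples and the final rescaling are omitted here.

```python
import numpy as np

# Illustrative sketch only; not the authors' implementation.

def balanced_adjacency_loss(A, A_prob, eps=1e-10):
    """Balanced cross-entropy estimate of the adjacency loss for one sample.

    A      : (N, N) binary adjacency matrix.
    A_prob : (N, N) Bernoulli parameters MLB(xi) produced by the decoder.
    """
    d = A.sum() / A.size                                   # density of the adjacency matrix
    pos = (A / d) * np.log(A_prob + eps)                   # false-negative terms, scaled up
    neg = ((1 - A) / (1 - d)) * np.log(1 - A_prob + eps)   # false-positive terms, scaled down
    return -0.5 * (pos + neg).sum()

def feature_loss(X, X_prob, eps=1e-10):
    """Multinomial feature reconstruction loss for one sample.

    X      : (N, D) one-hot feature matrix (one draw per node).
    X_prob : (N, D) softmax outputs MLP(xi) of the feature decoder.
    """
    return -(X * np.log(X_prob + eps)).sum()

def kl_loss(mu, log_sigma):
    """Closed-form KL divergence between N(mu, diag(sigma^2)) and N(0, I)."""
    return 0.5 * np.sum(mu ** 2 + np.exp(2 * log_sigma) - 2 * log_sigma - 1)
```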
As a first approximation to the solution, and in order to avoid more elaborate schemes which would increase the complexity of our architecture (such as ), we divide both loss components by their values at maximum uncertainty[^6], respectively $\\log 2$ and $\\log D$.\n\nFinally, we make sure that the regulariser terms in the loss do not overpower the actual learning terms (which are now down-scaled close to 1) by adjusting $\\kappa_{\\bm{\\theta}}$ and an additional factor, $\\kappa_{KL}$, which scales the Kullback-Leibler term.[^7] These adjustments lead us to the final total loss the model is trained for: $$\\begin{aligned}\n\\mathcal{L} = \\frac{ \\tilde{\\mathcal{L}}_{\\mathbf{A}}}{N^2 \\log 2} + \\frac{\\mathcal{L}_{\\mathbf{X}}}{N \\log D} + \\frac{\\mathcal{L}_{KL}}{N F \\kappa_{KL}} + \\frac{||\\bm{\\theta}||_2^2}{2 \\kappa_{\\bm{\\theta}}}\n\\label{eq:base-loss}\n\\end{aligned}$$ where we have removed constant terms with respect to trainable model parameters.\n\n## Scaling shared information allocation\n\nThe model we just presented uses all dimensions of the embeddings indiscriminately to reconstruct the adjacency matrix and the node features. While this can be useful in some cases, it cannot adapt to different interdependencies between graph structure and node features; in cases where the two are not strongly correlated, the embeddings would lose information by conflating features and graph structure. Therefore our second aim is to adjust the dimensions of the embeddings used exclusively for feature reconstruction, or for adjacency reconstruction, or used for both.\n\nIn a first step, we restrict which part of a node's embedding is used for each task. Let $F_{\\mathbf{A}}$ be the number of embedding dimensions we will allocate to adjacency matrix reconstruction only, $F_{\\mathbf{X}}$ the number of dimensions allocated to feature reconstruction only, and $F_{\\mathbf{AX}}$ the number of dimensions allocated to both. We have $F_{\\mathbf{A}} + F_{\\mathbf{AX}} + F_{\\mathbf{X}} = F$. We further introduce the following notation for the restriction of the embedding of node $i$ to a set of dedicated dimensions $\\{a, \\dots, b\\}$[^8]: $$\\begin{aligned}\n\\bm{\\xi}_{i, a:b} &= (\\xi_{ij})_{j \\in \\{a, \\dots, b\\}}\n\\end{aligned}$$ This extends to the full matrix of embeddings similarly: $$\\begin{aligned}\n\\bm{\\xi}_{a:b} &= (\\xi_{ij})_{i \\in \\{1, \\dots, N\\}, j \\in \\{a, \\dots, b\\}}\n\\end{aligned}$$ Using these notations we adapt the decoder to reconstruct adjacency and features as follows: $$\\begin{aligned}\n&{}A_{ij} | \\bm{\\xi}_{i, 1:F_\\mathbf{A}+F_\\mathbf{AX}}, \\bm{\\xi}_{j, 1:F_\\mathbf{A}+F_\\mathbf{AX}} \\sim \\mathop{\\mathrm{Ber}}(\\mathop{\\mathrm{MLB}}(\\bm{\\xi}_{1:F_\\mathbf{A}+F_\\mathbf{AX}})_{ij})&\\\\\n&{} \\mathbf{X}_i | \\bm{\\xi}_{i, F_\\mathbf{A}+1:F} \\sim \\mathop{\\mathrm{Multinomial}}(1, \\mathop{\\mathrm{MLP}}(\\bm{\\xi}_{F_\\mathbf{A}+1:F})_i)&\n\\end{aligned}$$ In other words, adjacency matrix reconstruction relies on $F_{\\mathbf{A}} + F_{\\mathbf{AX}}$ embedding dimensions, feature reconstruction relies on $F_{\\mathbf{X}} + F_{\\mathbf{AX}}$ dimensions, and $F_{\\mathbf{AX}}$ overlapping dimensions are shared between the two. 
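To make the dimension-sharing scheme concrete, the encoder and the two restricted decoders described above can be summarised by the following NumPy forward-pass sketch. This is our own illustrative code with placeholder weight matrices (the actual model is implemented in Julia/Flux), and it omits the encoder averaging trick, introduced below, that keeps the number of trainable parameters constant across overlap values.

```python
import numpy as np

def normalise_adjacency(A):
    """A_hat = D~^{-1/2} (A + I) D~^{-1/2}, the normalised adjacency with self-loops."""
    A_tilde = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    return A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def encode(A_hat, X, W0, W1_mu, W1_sigma):
    """Two graph-convolutional layers: shared first layer, parameter-specific second layers."""
    H = A_hat @ relu(A_hat @ X @ W0)
    return H @ W1_mu, H @ W1_sigma            # mu and log sigma, each N x F

def sample(mu, log_sigma, rng):
    """Reparameterised draw of the stochastic node embeddings xi."""
    return mu + np.exp(log_sigma) * rng.standard_normal(mu.shape)

def decode(xi, F_A, F_AX, W_A0, W_A1, W_X0, W_X1):
    """Adjacency uses the first F_A + F_AX dimensions, features the last F_X + F_AX."""
    xi_A = xi[:, :F_A + F_AX]                 # structure-only + shared dimensions
    xi_X = xi[:, F_A:]                        # shared + feature-only dimensions
    gamma = relu(xi_A @ W_A0)
    A_prob = sigmoid(gamma @ W_A1 @ gamma.T)  # bilinear form MLB(xi)
    X_prob = softmax(relu(xi_X @ W_X0) @ W_X1)  # two-layer perceptron MLP(xi)
    return A_prob, X_prob
```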
Our reasoning is that for datasets where the dependency between features and network structure is strong, shallow models with higher overlap value will perform better than models with the same total embedding dimensions $F$ and less overlap, or will perform on par with models that have more total embedding dimensions and less overlap. Indeed, the overlapping model should be able to extract the information shared between features and network structure and store it in the overlapping dimensions, while keeping the feature-specific and structure-specific information in their respective embedding dimensions. This is to compare to the non-overlapping case, where shared network-feature information is stored redundantly, both in feature- and structure-specific embeddings, at the expense of a larger number of distinct dimensions.\n\nTherefore, to evaluate the performance gains of this architecture, one of our measures is to compare the final loss for different hyperparameter sets, keeping $F_{\\mathbf{A}} + F_{\\mathbf{AX}}$ and $F_{\\mathbf{X}} + F_{\\mathbf{AX}}$ fixed and varying the overlap size $F_{\\mathbf{AX}}$. Now, to make sure the training losses for different hyperparameter sets are comparable, we must maintain the overall number of trainable parameters in the model fixed. The decoder already has a constant number of trainable parameters, since it only depends on the number of dimensions used for decoding features ($F_{\\mathbf{X}} + F_{\\mathbf{AX}}$) and adjacency matrix ($F_{\\mathbf{A}} + F_{\\mathbf{AX}}$), which are themselves fixed.\n\nOn the other hand, the encoder requires an additional change. We maintain the dimensions of the encoder-generated $\\bm{\\mu}$ and $\\bm{\\sigma}$ parameters fixed at $F_{\\mathbf{A}} + 2 F_{\\mathbf{AX}} + F_{\\mathbf{X}}$ (independently from $F_{\\mathbf{AX}}$, given the constraints above), and reduce those outputs to $F_{\\mathbf{A}} + F_{\\mathbf{AX}} + F_{\\mathbf{X}}$ dimensions by averaging dimensions $\\{F_{\\mathbf{A}} + 1, \\dots, F_{\\mathbf{A}} + F_{\\mathbf{AX}}\\}$ and $\\{F_{\\mathbf{A}} + F_{\\mathbf{AX}} + 1, \\dots, F_{\\mathbf{A}} + 2 F_{\\mathbf{AX}}\\}$ together.[^9] In turn, this model maintains a constant number of trainable parameters, while allowing us to adjust the number of dimensions $F_{\\mathbf{AX}}$ shared by feature and adjacency reconstruction (keeping $F_{\\mathbf{A}} + F_{\\mathbf{AX}}$ and $F_{\\mathbf{X}} + F_{\\mathbf{AX}}$ constant). Figure\u00a0 schematically represents this architecture.\n\n# Results\n\nWe are interested in measuring two main effects: first, the variation in model performance as we increase the overlap in the embeddings, and second, the capacity of the embeddings with overlap (versus no overlap) to capture and benefit from dependencies between graph structure and node features. To that end, we train overlapping and non-overlapping models on synthetic data with different degrees of correlation between network structure and node features.\n\n## Synthetic featured networks\n\nWe use a Stochastic Block Model\u00a0 to generate synthetic featured networks, each with $M$ communities of $n=10$ nodes, with intra-cluster connection probabilities of $0.25$, and with inter-cluster connection probabilities of $0.01$. Each node is initially assigned a colour which encodes its feature community; we shuffle the colours of a fraction $1 - \\alpha$ of the nodes, randomly sampled. 
This procedure maintains constant the overall count of each colour, and lets us control the correlation between the graph structure and node features by moving $\\alpha$ from 0 (no correlation) to 1 (full correlation).\n\nNode features are represented by a one-hot encoding of their colour (therefore, in all our scenarios, the node features have dimension $M = N \/ n$). However, since in this case all the nodes inside a community have exactly the same feature value, the model can have difficulties differentiating nodes from one another. We therefore add a small Gaussian noise ($\\sigma = .1$) to make sure that nodes in the same community can be distinguished from one another.\n\nNote that the feature matrix has less degrees of freedom than the adjacency matrix in this setup, a fact that will be reflected in the plots below. However, opting for this minimal generative model lets us avoid the parameter exploration of more complex schemes for feature generation, while still demonstrating the effectiveness of our model.\n\n## Comparison setup\n\nTo evaluate the efficiency of our model in terms of capturing meaningful correlations between network and features, we compare overlapping and non-overlapping models as follows. For a given maximum number of embedding dimensions $F_{max}$, the overlapping models keep constant the number of dimensions used for adjacency matrix reconstruction and the number of dimensions used for feature reconstruction, with the same amount allocated to each task: $F^{ov}_{\\mathbf{A}} + F^{ov}_{\\mathbf{AX}} = F^{ov}_{\\mathbf{X}} + F^{ov}_{\\mathbf{AX}} = \\frac{1}{2} F_{max}$. However they vary the overlap $F^{ov}_{\\mathbf{AX}}$ from 0 to $\\frac{1}{2} F_{max}$ by steps of 2. Thus the total number of embedding dimensions $F$ varies from $F_{max}$ to $\\frac{1}{2} F_{max}$, and as $F$ decreases, $F^{ov}_{\\mathbf{AX}}$ increases. We call one such model $\\mathcal{M}^{ov}_F$.\n\nNow for a given overlapping model $\\mathcal{M}^{ov}_F$, we define a reference model $\\mathcal{M}^{ref}_F$, which has the same total number of embedding dimensions, but without overlap: $F^{ref}_{\\mathbf{AX}} = 0$, and $F^{ref}_{\\mathbf{A}} = F^{ref}_{\\mathbf{X}} = \\frac{1}{2} F$ (explaining why we vary $F$ with steps of 2). Note that while the reference model has the same information bottleneck as the overlapping model, it has less trainable parameters in the decoder, since $F^{ref}_{\\mathbf{A}} + F^{ref}_{\\mathbf{AX}} = F^{ref}_{\\mathbf{X}} + F^{ref}_{\\mathbf{AX}} = \\frac{1}{2} F$ will decrease as $F$ decreases. Nevertheless, this will not be a problem for our measures, since we will be mainly looking at the behaviour of a given model for different values of $\\alpha$ (i.e. the feature-network correlation parameter).\n\nFor our calculations (if not noted otherwise) we use synthetic networks of $N = 1000$ nodes (i.e. 100 clusters), and set the maximum embedding dimensions $F_{max}$ to 20. For all models, we set the intermediate layer in the encoder and the two intermediate layers in the decoder to an output dimension of $50$, and the internal number of samples for loss estimation at $K = 5$. We train our models for 1000 epochs using the Adam optimiser with a learning rate of $0.01$ (following ), after initialising weights following . 
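As an illustration of the synthetic data and training setup described in this section, the featured networks can be generated with a short routine such as the following (a sketch of our own, not the actual experiment code): it draws a Stochastic Block Model adjacency matrix, assigns one-hot colour features, shuffles a fraction $1 - \alpha$ of the colours while keeping the overall colour counts constant, and adds the small Gaussian noise mentioned above.

```python
import numpy as np

def sbm_featured_network(M, n=10, p_in=0.25, p_out=0.01, alpha=1.0,
                         noise=0.1, rng=None):
    """Stochastic Block Model with one-hot 'colour' features (illustrative sketch).

    M     : number of communities of n nodes each, so N = M * n.
    alpha : network-feature correlation; a fraction 1 - alpha of the
            node colours is shuffled at random.
    """
    rng = rng or np.random.default_rng()
    N = M * n
    labels = np.repeat(np.arange(M), n)            # structural community of each node

    # Adjacency matrix: p_in within communities, p_out across them.
    same = labels[:, None] == labels[None, :]
    probs = np.where(same, p_in, p_out)
    upper = np.triu(rng.random((N, N)) < probs, k=1)
    A = (upper | upper.T).astype(float)            # symmetric, zero diagonal

    # Colours start equal to the structural community; shuffling a random
    # subset keeps the overall count of each colour constant.
    colours = labels.copy()
    idx = rng.choice(N, size=int(round((1 - alpha) * N)), replace=False)
    colours[idx] = rng.permutation(colours[idx])

    X = np.eye(M)[colours] + noise * rng.standard_normal((N, M))  # one-hot + noise
    return A, X
```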
For each combination of $F$ and $\\alpha$, the training of the overlapping and reference models is repeated $20$ times on independent featured networks.\n\nSince the size of our synthetic data is constant, and we average training results over independently sampled data sets, we can meaningfully compare the averaged training losses of models with different parameters. We therefore take the average best training loss of a model to be our main measure, indicating the capacity to reconstruct an input data set for a given information bottleneck and embedding overlap.\n\n## Advantages of overlap\n\n### Absolute loss values\n\nFigure shows the variation of the best training loss (total loss, adjacency reconstruction loss, and feature reconstruction loss) for both overlapping and reference models, with $\\alpha$ ranging from 0 to 1 and $F$ decreasing from 20 to 10 by steps of 2. One curve in these plots represents the variation in losses of a model with fixed $F$ for data sets with increasing correlation between network and features; each point aggregates 20 independent trainings, used to bootstrap 95% confidence intervals.\n\nWe first see that all losses, whether for overlapping model or reference, decrease as we move from the uncorrelated scenario to the correlated scenario. This is true despite the fact that the total loss is dominated by the adjacency reconstruction loss, as feature reconstruction is an easier task overall. Second, recall that the decoder in a reference model has less parameters than its corresponding overlapping model of the same $F$ dimensions (except for zero overlap), such that the reference is less powerful and produces higher training losses. The absolute values of the losses for overlap and reference models are therefore not directly comparable. However, the changes in slopes are meaningful. Indeed, we note that the curve slopes are steeper for models with higher overlap (lower $F$) than for lower overlap (higher $F$), whereas they seem relatively independent for the reference models of different $F$. In other words, as we increase the overlap, our models seem to benefit more from an increase in network-feature correlation than what a reference model benefits.\n\n### Relative loss disadvantage\n\nIn order to assess this trend more reliably, we examine losses relative to the maximum embedding models. Figure plots the loss disadvantage that overlap and reference models have compared to their corresponding model with $F = F_{max}$, that is, $\\frac{\\mathcal{L}_{\\mathcal{M}_{F}} - \\mathcal{L}_{\\mathcal{M}_{F_{max}}}}{\\mathcal{L}_{\\mathcal{M}_{F_{max}}}}$. We call this the *relative loss disadvantage* of a model. In this plot, the height of a curve thus represents the magnitude of the decrease in performance of a model $\\mathcal{M}^{ov|ref}_F$ relative to the model with maximum embedding size, $\\mathcal{M}^{ov|ref}_{F_{max}}$. Note that for both the overlap model and the reference model, moving along one of the curves does not change the number of trainable parameters in the model.\n\nAs the correlation between network and features increases, we see that the relative loss disadvantage decreases in overlap models, and that the effect is stronger for higher overlaps. In other words, when the network and features are correlated, the overlap captures this joint information and compensates for the lower total number of dimensions (compared to $\\mathcal{M}^{ov|ref}_{F_{max}}$): the model achieves a better performance than when network and features are more independent. 
Strikingly, for the reference model these curves are flat, thus indicating no variation in relative loss disadvantage with varying network-feature correlations in these cases. This confirms that the new measure successfully controls for the baseline decrease of absolute loss values when the network-features correlation increases, as observed in Figure . Our architecture is therefore capable of capturing and taking advantage of some of the correlation by leveraging the overlap dimensions of the embeddings.\n\nFinally note that for high overlaps, the feature reconstruction loss value actually increases a little when $\\alpha$ grows. The behaviour is consistent with the fact that the total loss is dominated by the adjacency matrix loss (the hardest task). In this case it seems that the total loss is improved more by exploiting the gain of optimising for adjacency matrix reconstruction, and paying the small cost of a lesser feature reconstruction, than decreasing both adjacency matrix and feature losses together. If wanted, this strategy could be controlled using a gradient normalisation scheme such as .\n\n## Standard benchmarks\n\nFinally we compare the performance of our architecture to other well-known embedding methods, namely spectral clustering (SC) , DeepWalk (DW) , the vanilla non-variational and variational Graph Auto-Encoders (GAE and VGAE) , and GraphSAGE which we look at in more detail. We do so on two tasks: (i) the link prediction task introduced by and (ii) a node classification task, both on the Cora, CiteSeer and PubMed datasets, which are regularly used as citation network benchmarks in the literature . Note that neither SC nor DW support feature information as an input.\n\nThe Cora and CiteSeer datasets are citation networks made of respectively 2708 and 3312 machine learning articles, each assigned to a small number of document classes (7 for Cora, 6 for CiteSeer), with a bag-of-words feature vector for each article (respectively 1433 and 3703 words). The PubMed network is made of 19717 diabetes-related articles from the PubMed database, each assigned to one of three classes, with article feature vectors containing *term frequency-inverse document frequency* (TF\/IDF) scores for 500 words.\n\n### Link prediction\n\nThe link prediction task consists in training a model on a version of the datasets where part of the edges has been removed, while node features are left intact. A test set is formed by randomly sampling 15% of the edges combined with the same number of random disconnected pairs (non-edges). 
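For illustration, the edge hold-out just described can be sketched as follows (our own helper with hypothetical names; the evaluation code actually used may differ): a fraction of the edges is removed from the training adjacency matrix and paired with an equal number of sampled non-edges to form the labelled test set.

```python
import numpy as np

def link_prediction_split(A, test_frac=0.15, rng=None):
    """Hold out a fraction of the edges plus an equal number of non-edges.

    Returns the training adjacency matrix (held-out edges removed) and the
    test pairs with their 0/1 labels. Illustrative sketch only.
    """
    rng = rng or np.random.default_rng()
    N = A.shape[0]
    edges = np.column_stack(np.triu(A, k=1).nonzero())
    n_test = int(round(test_frac * len(edges)))

    test_idx = rng.choice(len(edges), size=n_test, replace=False)
    test_edges = edges[test_idx]

    # Sample the same number of disconnected pairs (non-edges);
    # possible duplicate pairs are ignored for brevity.
    non_edges = []
    while len(non_edges) < n_test:
        i, j = rng.integers(0, N, size=2)
        if i != j and A[i, j] == 0:
            non_edges.append((i, j))
    non_edges = np.array(non_edges)

    A_train = A.copy()
    A_train[test_edges[:, 0], test_edges[:, 1]] = 0
    A_train[test_edges[:, 1], test_edges[:, 0]] = 0

    pairs = np.vstack([test_edges, non_edges])
    labels = np.concatenate([np.ones(n_test), np.zeros(n_test)])
    return A_train, pairs, labels
```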
Subsequently the model is trained on the remaining dataset where 15% of the real edges are missing.\n\n```latex\n\\begin{table*}[h!]\\centering\n\\resizebox{1.0\\textwidth}{!}{%\n\\begin{tabular}{lcccccc}\n\\hline\n\\multirow{2}{*}{\\textbf{Method}} & \\multicolumn{2}{c}{\\textbf{Cora}} & \\multicolumn{2}{c}{\\textbf{CiteSeer}} & \\multicolumn{2}{c}{\\textbf{PubMed}}\\\\\n& AUC & AP & AUC & AP & AUC & AP \\\\\n\\hline\nSC & 84.6 $\\pm$ 0.01 & 88.5 $\\pm$ 0.00 & 80.5 $\\pm$ 0.01 & 85.0 $\\pm$ 0.01 & 84.2 $\\pm$ 0.02 & 87.8 $\\pm$ 0.01 \\\\\nDW & 83.1 $\\pm$ 0.01 & 85.0 $\\pm$ 0.00 & 80.5 $\\pm$ 0.02 & 83.6 $\\pm$ 0.01 & 84.4 $\\pm$ 0.00 & 84.1 $\\pm$ 0.00 \\\\\nGAE & 91.0 $\\pm$ 0.02 & 92.0 $\\pm$ 0.03 & 89.5 $\\pm$ 0.04 & 89.9 $\\pm$ 0.05 & \\textbf{96.4} $\\pm$ 0.00 & \\textbf{96.5} $\\pm$ 0.00 \\\\\nVGAE & 91.4 $\\pm$ 0.01 & 92.6 $\\pm$ 0.01 & 90.8 $\\pm$ 0.02 & 92.0 $\\pm$ 0.02 & 94.4 $\\pm$ 0.02 & 94.7 $\\pm$ 0.02 \\\\\n\\hline\nAN2VEC-0 & 89.5 $\\pm$ 0.01 & 90.6 $\\pm$ 0.01 & 91.2 $\\pm$ 0.01 & 91.5 $\\pm$ 0.02 & 91.8 $\\pm$ 0.01 & 93.2 $\\pm$ 0.01 \\\\\nAN2VEC-16 & 89.4 $\\pm$ 0.01 & 90.2 $\\pm$ 0.01 & 91.1 $\\pm$ 0.01 & 91.3 $\\pm$ 0.02 & 92.1 $\\pm$ 0.01 & 92.8 $\\pm$ 0.01 \\\\\nAN2VEC-S-0 & 92.9 $\\pm$ 0.01 & 93.4 $\\pm$ 0.01 & 94.3 $\\pm$ 0.01 & 94.8 $\\pm$ 0.01 & 95.1 $\\pm$ 0.01 & 95.4 $\\pm$ 0.01 \\\\\nAN2VEC-S-16 & \\textbf{93.0} $\\pm$ 0.01 & \\textbf{93.5} $\\pm$ 0.00 & \\textbf{94.9} $\\pm$ 0.00 & \\textbf{95.1} $\\pm$ 0.00 & 93.1 $\\pm$ 0.01 & 93.1 $\\pm$ 0.01 \\\\\n\\hline\n\\end{tabular}\n}\n\\caption{Link prediction task in citation networks. SC, DW, GAE and VGAE values are from \\cite{kipf_variational_2016}. Error values indicate the sample standard deviation.}\n\\label{tbl:edges-many-embs}\n\\end{table*}\n```\n\nWe pick hyperparameters such that the restriction of our model to VGAE would match the hyperparameters used by . That is a 32-dimensions intermediate layer in the encoder and the two intermediate layers in the decoder, and 16 embedding dimensions for each reconstruction task ($F_{\\mathbf{A}} + F_{\\mathbf{AX}} = F_{\\mathbf{X}} + F_{\\mathbf{AX}} = 16$). We call the zero-overlap and the full-overlap versions of this model AN2VEC-0 and AN2VEC-16 respectively. In addition, we test a variant of these models with a shallow adjacency matrix decoder, consisting of a direct inner product between node embeddings, while keeping the two dense layers for feature decoding. Formally: $A_{ij} | \\bm{\\xi}_i, \\bm{\\xi}_j \\sim \\mathop{\\mathrm{Ber}}(\\bm{\\xi}^T_i \\bm{\\xi}_j)$. This modified overlapping architecture can be seen as simply adding the feature decoding and embedding overlap mechanics to the vanilla VGAE. Consistently, we call the zero-overlap and full-overlap versions AN2VEC-S-0 and AN2VEC-S-16.\n\nWe follow the test procedure laid out by : we train for $200$ epochs using the Adam optimiser with a learning rate of $.01$, initialise weights following , and repeat each condition $10$ times. The $\\bm{\\mu}$ parameter of each node's embedding is then used for link prediction (i.e. the parameter is put through the decoder directly without sampling), for which we report *area under the ROC curve* and *average precision* scores in Table .[^10]\n\nWe argue that AN2VEC-0 and AN2VEC-16 should have somewhat poorer performance than VGAE. These models are required to reconstruct an additional output, which is not directly used to the link prediction task at hand. First results confirmed our intuition. 
However, we found that the shallow decoder models AN2VEC-S-0 and AN2VEC-S-16 perform consistently better than the vanilla VGAE for Cora and CiteSeer while their deep counterparts (AN2VEC-0 and AN2VEC-16) outperforms VGAE for all datasets. As neither AN2VEC-0 nor AN2VEC-16 exhibited over-fitting, this behaviour is surprising and calls for further explorations which are beyond the scope of this paper (in particular, this may be specific to the link prediction task). Nonetheless, the higher performance of AN2VEC-S-0 and AN2VEC-S-16 over the vanilla VGAE on Cora and CiteSeer confirms that including feature reconstruction in the constraints of node embeddings is capable of increasing link prediction performance when feature and structure are not independent (consistent with\u00a0). An illustration of the embeddings produced by AN2VEC-S-16 on Cora is shown in Figure\u00a0.\n\nOn the other hand, performance of AN2VEC-S-0 on PubMed is comparable with GAE and VGAE, while AN2VEC-S-16 has slightly lower performance. The fact that lower overlap models perform better on this dataset indicates that features and structure are less congruent here than in Cora or CiteSeer (again consistent with the comparisons found in\u00a0). Despite this, an advantage of the embeddings produced by the AN2VEC-S-16 model is that they encode *both* the network structure and the node features, and can therefore be used for downstream tasks involving both types of information.\n\nWe further explore the behaviour of the model for different sizes of the training set, ranging from 10% to 90% of the edges in each dataset (reducing the training set accordingly), and compare the behaviour of AN2VEC to GraphSAGE. To make the comparison meaningful we train two variants of the two-layer GraphSAGE model with mean aggregators and no bias vectors: one with an intermediate layer of 32 dimensions and an embedding layer of 16 dimensions (roughly equivalent in dimensions to the full overlap AN2VEC models), the second with an intermediate layer of 64 dimensions and an embedding layer of 32 dimensions (roughly equivalent to no overlap in AN2VEC). Both layers use neighbourhood sampling, 10 neighbours for the first layer and 5 for the second. Similarly to the shallow AN2VEC decoder, each pair of node embeddings is reduced by inner product and a sigmoid activation, yielding a scalar prediction between 0 and 1 for each possible edge. The model is trained on minibatches of 50 edges and non-edges (edges generated with random walks of length 5), learning rate 0.001, and 4 total epochs. Note that on Cora, one epoch represents about 542 minibatches,[^11] such that 4 epochs represent about 2166 gradient updates; thus with a learning rate of 0.001, we remain comparable to the 200 full batches with learning rate 0.01 used to train AN2VEC.\n\nFigure\u00a0 plots the AUC produced by AN2VEC and GraphSAGE for different training set sizes and different embedding sizes (and overlaps, for AN2VEC), for each dataset. As expected, the performance of both models decreases as the size of the test set increases, though less so for AN2VEC. For Cora and CiteSeer, similarly to Table\u00a0, higher overlaps and a shallow decoder in AN2VEC give better performance. Notably, the shallow decoder version of AN2VEC with full overlap is still around .75 for a test size of 90%, whereas both GraphSAGE variants are well below .65. For PubMed, as in Table\u00a0, the behaviour is different to the first two datasets, as overlaps 0 and 16 yield the best results. 
As for Cora and CiteSeer, the approach taken by AN2VEC gives good results: with a test size of 90%, all AN2VEC deep decoder variants are still above .75 (and shallow decoders above .70), whereas both GraphSAGE variants are below .50.\n\n### Node classification\n\nSince the embeddings produced also to encode feature information, we then evaluate the model's performance on a node classification task. Here the models are trained on a version of the dataset where a portion of the nodes (randomly selected) have been removed; next, a logistic classifier[^12] is trained on the embeddings to classify training nodes into their classes; finally, embeddings are produced for the removed nodes, for which we show the F1 scores of the classifier.\n\nFigure\u00a0 shows the results for AN2VEC and GraphSAGE on all datasets. The scale of the reduction in performance as the test size increases is similar for both models (and similar to the behaviour for link prediction), though overlap and shallow versus deep decoding seem to have less effect. Still, the deep decoder is less affected by the change in test size than the shallow decoder; and contrary to the link prediction case, the 0 overlap models perform best (on all datasets). Overall, the performance levels of GraphSAGE and AN2VEC on this task are quite similar, with slightly better results of AN2VEC on Cora, slightly stronger performance for GraphSAGE on CiteSeer, and mixed behaviour on PubMed (AN2VEC is better for small test sizes and worse for large test sizes).\n\n### Variable embedding size\n\nWe also explore the behaviour of AN2VEC for different embedding sizes. We train models with $F_{\\mathbf{A}} = F_{\\mathbf{X}} \\in \\{8, 16, 24, 32\\}$ and overlaps 0, 8, 16, 24, 32 (whenever there are enough dimensions to do so), with variable test size. Figure\u00a0 shows the AUC scores for link prediction, and Figure\u00a0 shows the F1-micro scores for node classification, both on CiteSeer (the behaviour is similar on Cora, though less salient). For link prediction, beyond confirming trends already observed previously, we see that models with less total embedding dimensions perform slightly better than models with more total dimensions. More interestingly, all models seem to reach a plateau at overlap 8, and then exhibit a slightly fluctuating behaviour as overlap continues to increase (in models that have enough dimensions to do so). This is valid for all test sizes, and suggests (i) that at most 8 dimensions are necessary to capture the commonalities between network and features in CiteSeer, and (ii) that having more dimensions to capture either shared or non-shared information is not necessarily useful. In other words, 8 overlapping dimensions seem to capture most of what can be captured by AN2VEC on the CiteSeer dataset, and further increase in dimensions (either overlapping or not) would capture redundant information.\n\nNode classification, on the other hand, does not exhibit any consistent behaviour beyond the reduction in performance as the test size increases. Models with less total dimensions seems to perform slightly better at 0 overlap (though this behaviour is reversed on Cora), but neither the ordering of models by total dimensions nor the effect of increasing overlap are consistent across all conditions. 
This suggests, similarly to Figure\u00a0, that overlap is less relevant to this particular node classification scheme than it is to link prediction.\n\n### Memory usage and time complexity\n\nFinally, we evaluate the resources used by our implementation of the method in terms of training time and memory usage. We use AN2VEC with 100-dimensions intermediate layers in the encoder and the (deep) decoder with 16 embedding dimensions for each reconstruction task ($F_{\\mathbf{A}} + F_{\\mathbf{AX}} = F_{\\mathbf{X}} + F_{\\mathbf{AX}} = 16$), and overlap $F_{\\mathbf{AX}} \\in \\{0, 8, 16\\}$. We train that model on synthetic networks generated as in section *Synthetic featured networks* (setting $\\alpha = 0.8$, and without adding any other noise on the features), with $M \\in \\{50, 100, 200, 500, 1000, 2000, 5000\\}$ communities of size $n = 10$ nodes.\n\nOnly CPUs were used for the computations, running on a 4\u00a0$\\times$\u00a0Intel Xeon CPU E7-8890\u00a0v4 server with 1.5 TB of memory. Using 8 parallel threads for training,[^13] we record the peak memory usage,[^14] training time, and full job time[^15] for each network size, averaged over the three overlap levels. Results are shown in Figure . Note that in a production setting, multiplying the number of threads by $n$ will divide compute times by nearly $n$, since the process is aggressively parallelised. A further reduced memory footprint can also be achieved by using sparse encoding for all matrices.\n\n# Conclusions\n\nIn this work, we proposed an attributed network embedding method based on the combination of Graph Convolutional Networks and Variational Autoencoders. Beyond the novelty of this architecture, it is able to consider jointly network information and node attributes for the embedding of nodes. We further introduced a control parameter able to regulate the amount of information allocated to the reconstruction of the network, the features, or both. In doing so, we showed how shallow versions of the proposed model outperform the corresponding non-interacting reference embeddings on given benchmarks, and demonstrated how this overlap parameter consistently captures joint network-feature information when they are correlated.\n\nOur method opens several new lines of research and applications in fields where attributed networks are relevant. As an example one can take a social network with the task of predicting future social ties, or reconstruct existing but invisible social ties. Solutions to this problem can rely on network similarities in terms of overlapping sets of common friends, or on feature similarities in terms of common interest, professional or cultural background, and so on. While considering these types of information separately would provide us with a clear performance gain in the prediction, these similarities are not independent. For example, common friends may belong to the same community. By exploiting these dependencies our method can provide us with an edge in terms of predictive performance and could indicate which similarities, structural, feature-related, or both, better explain why a social tie exists at all. Another setting where we believe our framework might yield noteworthy insights is when applied to the prediction of side effects of drug pairs (polypharmacy). This problem has recently been approached by Zitnik et al.\u00a0 by extending GraphSAGE for multirelational link prediction in multimodal networks. 
In doing so, the authors were able to generate multiple novel candidates of drug pairs susceptible to induce side effects when taken together. Beyond using drug feature vectors to generate polypharmacy edge probabilities, our overlapping encoder units would enable a detailed view on how these side effects occur due to confounding effects of particular drug attributes. It would pinpoint the feature pairs that interacting drugs might share (or not), further informing the drug design process. Furthermore, we expect that our method will help yield a deeper understanding between node features and structure, to better predict network evolution and ongoing dynamical phenomena. In particular, it should help to identify nodes with special roles in the network by clarifying whether their importance has structural or feature origin.\n\nIn this paper our aim was to ground our method and demonstrate its usefulness on small but controllable featured networks. Its evaluation on more complex synthetic datasets, in particular with richer generation schemes, as well as its application to larger real datasets, are therefore our immediate goals in the future.\n\n# List of abbreviations\n\n# Availability of data and material\n\nThe synthetic datasets generated for this work are stochastically created by our implementation, available at [github.com\/ixxi-dante\/an2vec](https:\/\/github.com\/ixxi-dante\/an2vec).\n\nThe datasets used for standard benchmarking (Cora, CiteSeer, and PubMed) are available at [linqs.soe.ucsc.edu\/data](https:\/\/linqs.soe.ucsc.edu\/data).\n\nOur implementation of AN2VEC is made using the Julia programming language, and particularly making heavy use of Flux\u00a0. Parallel computations were run using GNU Parallel\u00a0. Finally, we used StellarGraph\u00a0 for the GraphSAGE implementation.\n\n# Competing interests\n\nThe authors declare that they have no competing interests.\n\n# Funding\n\nThis project was supported by the LIAISON Inria-PRE project, the SoSweet ANR project (ANR-15-CE38-0011), and the ACADEMICS project financed by IDEX LYON.\n\n# Author's contributions\n\nMK, JLA and SL participated equally in designing and developing the project, and in writing the paper. SL implemented the model and experiments. SL and JLA developed and implemented the analysis of the results.\n\n# Acknowledgements\n\nWe thank E. Fleury, J-Ph. Magu\u00e9, D. Seddah, and E. De La Clergerie for constructive discussions and for their advice on data management and analysis. Some computations for this work were made using the experimental GPU platform at the Centre Blaise Pascal of ENS Lyon, relying on the SIDUS infrastructure provided by E. Quemener.\n\n[^1]: Corresponding author: `email@example.com`.\n\n[^2]: The implementation of our model is available online at [github.com\/ixxi-dante\/an2vec](https:\/\/github.com\/ixxi-dante\/an2vec).\n\n[^3]: Other types of node features are modelled according to their constraints and domain. Binary features are modelled as independent Bernoulli random variables. 
Continuous-range features are modelled as Gaussian random variables in a similar way to the embeddings themselves.\n\n[^4]: Indeed, following we assume $\\bm{\\theta} \\sim \\mathcal{N}(0, \\kappa_{\\bm{\\theta}} \\mathbf{I})$, such that $\\mathcal{L}_{\\bm{\\theta}} = - \\log p(\\bm{\\theta}) = \\frac{1}{2} \\dim(\\bm{\\theta}) \\log(2 \\pi \\kappa_{\\bm{\\theta}}) + \\frac{1}{2 \\kappa_{\\bm{\\theta}}} ||\\bm{\\theta}||_2^2$.\n\n[^5]: In practice, $K = 1$ is often enough.\n\n[^6]: That is, $p(A_{ij} | \\bm{\\xi}, \\bm{\\theta}) = \\frac{1}{2} \\quad \\forall i, j$, and $p(X_{ij} | \\bm{\\xi}, \\bm{\\theta}) = \\frac{1}{D} \\quad \\forall i, j$.\n\n[^7]: We use $\\kappa_{KL} = 2\\kappa_{\\bm{\\theta}} = 10^3$.\n\n[^8]: Note that the order of the indices does not change the training results, as the model has no notion of ordering inside its layers. What follows is valid for any permutation of the dimensions, and the actual indices only matter to downstream interpretation of the embeddings after training.\n\n[^9]: Formally: $$\\begin{aligned}\n \\tilde{\\bm{\\mu}} = \\bm{\\mu}_{1:F_{\\mathbf{A}}} &{} \\mathop{\\mathrm{\\scalebox{1}[1.5]{$\\parallel$}}}\\frac{1}{2} (\\bm{\\mu}_{F_{\\mathbf{A}}+1:F_{\\mathbf{A}} + F_{\\mathbf{AX}}} + \\bm{\\mu}_{F_{\\mathbf{A}} + F_{\\mathbf{AX}}+1:F_{\\mathbf{A}} + 2F_{\\mathbf{AX}}}) \\\\\n &{} \\mathop{\\mathrm{\\scalebox{1}[1.5]{$\\parallel$}}}\\bm{\\mu}_{F_{\\mathbf{A}} + 2F_{\\mathbf{AX}} + 1:F_{\\mathbf{A}} + 2F_{\\mathbf{AX}} + F_{\\mathbf{X}}} \\\\\n \\log\\tilde{\\bm{\\sigma}} = \\log\\bm{\\sigma}_{1:F_{\\mathbf{A}}} &{} \\mathop{\\mathrm{\\scalebox{1}[1.5]{$\\parallel$}}}\\frac{1}{2} (\\log\\bm{\\sigma}_{F_{\\mathbf{A}}+1:F_{\\mathbf{A}} + F_{\\mathbf{AX}}} + \\log\\bm{\\sigma}_{F_{\\mathbf{A}} + F_{\\mathbf{AX}}+1:F_{\\mathbf{A}} + 2F_{\\mathbf{AX}}}) \\\\\n &{} \\mathop{\\mathrm{\\scalebox{1}[1.5]{$\\parallel$}}}\\log\\bm{\\sigma}_{F_{\\mathbf{A}} + 2F_{\\mathbf{AX}} + 1:F_{\\mathbf{A}} + 2F_{\\mathbf{AX}} + F_{\\mathbf{X}}}\n \\end{aligned}$$ where $\\mathop{\\mathrm{\\scalebox{1}[1.5]{$\\parallel$}}}$ denotes concatenation along the columns of the matrices.\n\n[^10]: Note that in , the training set is also 85% of the full dataset, and test and validation sets are formed with the remaining edges, respectively 10% and 5% (and the same amount of non-edges). Here, since we use the same hyperparameters as we do not need a validation set. We therefore chose to use the full 15% remaining edges (with added non-edges) as a test set, as explained above.\n\n[^11]: One epoch is 2708 nodes $\\times$ 5 edges per node $\\times$ 2 (for non-edges) = 27080 training edges or non-edges; divided by 50, this makes 541.6 minibatches per epoch.\n\n[^12]: Using Scikit-learn's\u00a0 interface to the liblinear library, with one-vs-rest classes.\n\n[^13]: There are actually two levels of threading: the number of threads used in our code for computing losses, and the number of threads used by the BLAS routines for matrix multiplication. 
We set both to 8, and since both computations alternate this leads to an effective 8 compute threads, with some fluctuations at times.\n\n[^14]: Using the `top` utility program.\n\n[^15]: As reported by our scripts and by GNU Parallel.","meta":{"dup_signals":{"dup_doc_count":16,"dup_dump_count":15,"dup_details":{"curated_sources":1,"2023-23":1,"2023-14":1,"2022-40":1,"2022-21":1,"2021-43":1,"2021-31":1,"2021-21":2,"2021-10":1,"2020-50":1,"2020-40":1,"2020-29":1,"2020-16":1,"2020-05":1,"2023-50":1}},"filename":"out\/1905.08636_extract_an2vec.tex.md"},"subset":"arxiv"} +{"text":"abstract: This template is provided to demonstrate options for using REVTeX4-1 in preparing manuscripts for submission to JOSA A, JOSA B, *Optics Letters*, and *Applied Optics*. REVTeX4-1 support for OSA journals was added September 2012 as a BETA and updated in April 2013. Users should obtain the REVTeX4-1 package () and the OSA REVTeX4-1 style files (on the Author page of any OSA journal site). Authors in need of a length estimate because of page charge concerns can use the OSA REVTeX4-1 template. The template will not yield an exact estimate but should provide a good approximation of the length of the page proof. Figures, large tables, and complex display math may still affect the estimate. Note that the two-column format is acceptable for submission and will meet the needs of OSA peer review and production.\nauthor: Joe Richardson; Chris Videll; Jennifer Mayfield\ntitle: Preparing a REVTeX4-1 manuscript for the OSA journals \n JOSA A, JOSA B, *Optics Letters*, and *Applied Optics*\n\n# Introduction\n\nThe OSA REVTeX4-1 template is designed to assist authors with creating a two-column manuscript that can be submitted to JOSA A, JOSA B, *Optics Letters*, and *Applied Optics* (separate templates are available for other OSA journals, which have not yet migrated to REVTeX).\n\n- See the \"REVTeX 4.1 Author's Guide\" for an overview of REVTeX4-1's features .\n\n- See the \"OSA Author Style Guide\" for points on OSA journal style , such as use of OCIS codes.\n\n# Preparing a REVTeX Manuscript for Submission to an OSA Journal\n\n1. ***Optics Letters*** **authors**: Be aware of two major changes for OL papers. Authors now have a firm **four**-page limit for their papers, and submitted papers must contain a **fifth** informational page, with complete references.\n\n2. **Preamble** Use the following preamble to invoke REVTeX4.1 for OSA journals. Use the \"10\u00a0pt\" option for JOSA\u00a0A, JOSA\u00a0B, and OL; use the \"11\u00a0pt\" option for *Applied Optics*:\n\n \\documentclass[osajnl,twocolumn,showpacs,\n superscriptaddress,10pt]{revtex4-1}\n\n3. **Citations** in REVTeX use the `natbib` package. The `osajnl4-1.rtx` package will enforce appropriate style for citations callouts: \"$\\ldots$ in this study \\[1,2\\].\" Use the `\\cite{}` command for all callouts.\n\n4. **BibTeX** may be used to create a file containing the references, whose contents (i.e., contents of .bbl file) can then be pasted into the bibliography section of the .tex file. References must be contained within the .tex file, not a separate BibTeX file.\n\n5. **Compress** your .tex manuscript file and all figures (which should be in EPS format, or PDF format if you are using PDFLaTeX) in a ZIP, TAR or TAR-GZIP package. PRISM will process in LaTeX mode by default but will use PDF LaTeX if PDF figure files are detected. Note: TAR or TAR-GZIP is no longer required. 
All files must be referenced at the root level (e.g., file figure-1.eps, not \/myfigs\/figure-1.eps).\n\n6. **Figures and Tables** It is *not* necessary to place figures and tables at the back of the manuscript. Figures and tables should be sized for display as you wish them to appear in the final article. Do not include a separate listing figure captions and table titles.\n\n# Sample Equation, Figure, and Table\n\n## Equation\n\nSample equation. When a two-column equation is needed, use the `\\begin{widetext}\\end{widetext}` environment. See the \"REVTeX 4.1 Author's Guide\" .\n\n$$\\Delta x \\Delta p \\approx h.$$\n\n## Figure\n\nSample figure environment. Use the `graphicsx` package. For two-column figures, use an asterisk in the figure environment: `\\begin{figure*}\\end{figure*}`.\n\n## Table\n\nSample table environment. For long tables, see the \"REVTeX 4.1 Author's Guide\" .\n\n| A | B | C | D | E | F | G | H | I |\n|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|\n| 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |\n| 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 |\n| 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 |\n\nA simple table","meta":{"dup_signals":{"dup_doc_count":14,"dup_dump_count":2,"dup_details":{"curated_sources":7,"unknown":7}},"filename":"out\/1502.01819_extract_osa-revtex4-1.tex.md"},"subset":"arxiv"} +{"text":"abstract: Electron-electron interactions are at the origin of many exotic electronic properties of materials which have emerged from recent experimental observations. The main important phenomena discovered are related with electronic magnetic properties, which have been quite accessible to Nuclear Magnetic Resonance techniques. Those specifically permit to distinguish the orbitals or electronic bands responsible for the magnetism, the metallic properties and superconductivity and to reveal the physical properties which are distinct from expectations in an independent electron scheme. The description of some selected experimental cases permits us to underline the importance of the technique and to reveal altogether to the reader a wide range of novel phenomena specific to correlated electron physics.\nauthor: Henri Alloul [^1] \nLPS - CNRS\/Universite Paris Sud, Orsay, France\ntitle: **NMR in strongly correlated materials**\n\n# Introduction\n\nIn non interacting electronic systems, one considers energy levels with spin degeneracy and fills them with two electrons par level, without any consideration of U, the local coulomb repulsion on atomic orbitals. But as soon as one considers a solid which displays magnetic properties the latter has to be considered, as U is responsible for atomic and solid state magnetism. An introduction to these aspects has been given in a previous Scholarpedia article on the electronic properties of *Strongly Correlated Electron Systems*, which will be quoted as **SCES** throughout this article.\n\nIf one starts with a completely free electron gas, the first incidence of weak correlations can be expressed in a Fermi liquid approach, that is the electronic states at the Fermi level are not single particle states but rather quasiparticle states in which the electron is dressed by an electronic cloud which involves the electronic correlations. Those quasiparticles are populated in a same way as free electron cases, except that the population jump at the Fermi level is smaller than unity. Correspondingly these quasiparticles have effective masses $m^\\star$ which differ from the electron mass. 
This is seen for instance in the specific heat and the Pauli susceptibility.\n\nWith increasing electron correlations one reaches situations where electron states are in an intermediate regime between independent extended electronic states and local states. Those intermediate electronic states are at the basis of the correlated electron physics which gives exotic properties to the materials and various competing low $T$ states which are far from being understood at this time.\n\nHere we shall take advantage of a series of NMR experimental investigations done on correlated electron systems, to introduce various specific effects which have been highlighted in such systems. The principal useful NMR parameters and technical details have been introduced in a previous Scholarpedia article *NMR studies of electronic properties of solids* that we shall quote as **NMREPS** from now on. The good knowledge of the NMR characteristics in solids for which non interacting electron theories apply quite well, naturally permitted in the initial experiments to detect the unexpected modifications of electronic properties which occur in the presence of strong electronic correlations. This appears as an advantage of the NMR technique, with respect to most recent experimental probes which have been developed specifically to study strongly correlated electron systems.\n\nThis article will be organised as follows. We shall recall first in **section\u00a0** the relatively simple case of the NMR studies on the magnetic properties of *3d* impurities in metallic *sp* systems, which has been highlighted as the Kondo effect. This has been the earliest correlated electron physics case which has been understood. It has opened the way to the study of Heavy Fermions and Kondo lattices which will be touched in **section\u00a0**.\n\nThe High $T_c$ cuprates is of course the family of compounds which has attracted a large interest on correlated electron physics especially in the low doping part of the phase diagram where NMR experiments have permitted to reveal the occurrence of a pseudogap as is detailed in **section\u00a0**. The original properties induced by electron interactions in 1D systems, that is Luttinger liquids will be briefly mentioned then in **section\u00a0**. We detail in **section\u00a0** the original behavior of impurities in correlated electron systems, spin chains, cuprates, which have been important to reveal some of the physical properties which were difficult to probe by distinct approaches. This altogether has induced large efforts to clarify the incidence of disorder on the properties of correlated electron systems. The study of exotic superconductivities and the capability of NMR to give some hints on the SC order parameter symmetry is illustrated in **section\u00a0**. An important tendency towards charge ordering situations has been proposed to dominate the electronic properties of correlated electron systems. We shall illustrate in **section\u00a0**, in the particular case of Na cobaltates, that NMR ideally permits to unravel such situations in great detail. Of course purely magnetic insulating states are essential cases of correlated electron physics. Those imply a large variety of magnetic states, from ordered spin lattices to disordered spin glasses and spin liquid states which are highlighted when frustration effects occur in an ordered lattice. Some of the specific information which can be brought by NMR experiments on such magnetic states are discussed in **section\u00a0**. 
Finally we illustrate in **section\u00a0** how NMR techniques permitted to study recently the insulator to metal transition induced by pressure in undoped half filled systems, that is the actual Mott transition. This has been made possible by the recent discovery of quasi 2D organic and 3D alkali fulleride compounds, which display quasi ideal 2D or 3D Mott transitions. Let us point out that throughout this article we restrict ourselves to a presentation of some robust experimental evidences on these correlated electron systems. We avoid as much as possible entering into the theoretical debates which are natural in a vivid research area and are not solved so far.\n\n# Magnetic impurities and Kondo effect\n\nOne of the first correlated electron physics problem which has been fully solved has been revealed by studies of $3d$ impurities substituted on the atomic sites of regular $sp$ metals. One usually assumed that a local moment $S$ resides on the $3d$ sites and interacts with the free electron spin $s$ by an exchange interaction $$\\label{exchange} \nH=-J\\ S.\\ s\\ \\delta (r)$$ The Kondo problem arose with the discovery by J.Kondo that perturbation theory of this Hamiltonian resulted in a $-lnT$ term in the resistivity of the alloys, which was indeed observed experimentally. It was understood that the conduction electron interaction with the local moment induced a crossover of the impurity electronic state towards a low $T$ ground state quite different from the quasi-free local moment and that the crossover temperature defines an energy scale\n\n$$\\label{Kondo temp} \nk_{B}T_{K}=E_{F}\\exp\\left[ \\frac{1}{J\\rho (E_{F})}\\right]$$\n\nThis expression for the Kondo temperature $T_{K}$ bears some analogy with that of $T_{c}$ and the energy gap variation with electron-phonon coupling for superconductivity. It has been harder to qualify the actual properties of the Kondo ground state, but from the observed transport and thermodynamic properties associated with the impurity degrees of freedom, it has been accepted rather soon that the impurity properties experimentally appear to evolve from a high $T$ magnetic state to a non-magnetic like behavior below $T_{K}$. In other words, the weak coupling regime where the impurity moment can be treated in a perturbation scheme evolves at low $T$ in a strong coupling regime where the impurity and conduction electrons are bound into the ground state. The basic picture which was initially accepted is that the conduction electrons might form a singlet state with the impurity and compensate its magnetization. If such a spatially extended state occurs, one would then expect to see its experimental signature on local magnetic measurements in the corresponding spatial range around the impurity, so that NMR experiments appeared as the ideal probe to view such effects.From the study of the macroscopic properties of impurities in noble metal hosts, it was established that the crossover temperature $T_{K}$ was highly dependent on the impurity. This was of course quite compatible with the exponential expression of Eq.2. Values of $T_{K}$ could be estimated from the maximum in the impurity contribution to the specific heat, or from the Weiss contribution to the spin susceptibility measured at high enough temperature, etc. This permitted to establish that $T_{K}$ was below 10 mK for Cu-Mn, $\\sim1$ K for Cu-Cr, $\\sim30$ K for Cu-Fe, $\\sim300$ K for Au-V, etc. 
It was harder to consider Al-Mn along the same lines, as all temperature variations were very weak in this case, so that this crossover could only occur above 1000 K, at which the alloy would have melted. Anyway, if one wanted to study experimentally the change from the magnetic state to the non-magnetic state, one needed to consider, as a priority, a system in which one can explore both regimes $T>>T_{K}$ and $T<<T_{K}$.\n\n[^1]: Henri Alloul (2014), *NMR studies of electronic properties of solids*, Scholarpedia, 9(9):32069.","meta":{"dup_signals":{"dup_doc_count":46,"dup_dump_count":35,"dup_details":{"curated_sources":2,"2023-40":1,"2023-23":2,"2023-06":2,"2022-40":1,"2022-27":1,"2022-21":1,"2021-43":1,"2021-17":1,"2021-04":2,"2020-45":1,"2020-40":1,"2020-29":1,"2020-16":1,"2020-05":2,"2019-43":2,"2019-35":2,"2019-26":2,"2019-18":2,"2019-09":2,"2018-51":2,"2018-43":1,"2018-39":1,"2018-34":1,"2018-26":1,"2018-22":1,"2018-13":1,"2018-09":1,"2017-47":1,"2023-50":1,"2024-26":1,"2024-22":1,"2024-18":1,"2024-10":1,"2024-30":1}},"filename":"out\/1504.06992_extract_Henri_Alloul_NMR_studies_of_electronic_properties_of__solids.tex.md"},"subset":"arxiv"} +{"text":"author: J. D. Bailey; J. D. Landstreet; S. Bagnulo\nbibliography: abund-evol.bib\ndate: Accepted 26 November 2013\ntitle: Discovery of secular variations in the atmospheric abundances of magnetic Ap stars[^1]\n\nWe want to establish whether abundance peculiarities change as stars evolve on the main sequence, and provide observational constraints to diffusion theory. We have performed spectral analysis of 15 magnetic Bp stars that are members of open clusters (and thus have well-known ages), with masses between about 3 and 4\u00a0M$_{\\odot}$. For each star, we measured the abundances of He, O, Mg, Si, Ti, Cr, Fe, Pr and Nd. We have discovered the systematic time evolution of trace elements through the main-sequence lifetime of magnetic chemically peculiar stars as their atmospheres cool and evolve toward lower gravity. During the main sequence lifetime, we observe clear and systematic variations in the atmospheric abundances of He, Ti, Cr, Fe, Pr and Nd. For all these elements, except He, the atmospheric abundances decrease with age. The abundances of Fe-peak elements converge toward solar values, while the rare-earth elements converge toward values at least 100 times more abundant than in the Sun. Helium is always underabundant compared to the Sun, evolving from about 1% up to 10% of the solar He abundance. We have attempted to interpret the observed abundance variations in the context of radiatively driven diffusion theory, which appears to provide a framework to understand some, but not all, of the anomalous abundance levels and variations that we observe.\n\n# Introduction\n\nOur empirical knowledge of the chemical history of the galaxy is based on the assumption that the atmospheres of the main sequence stars that trace its evolution have chemical compositions that reflect that of the interstellar gas at the time when they formed. 
This is not always the case, and it is essential to understand situations in which stellar surface chemistry does not reflect the bulk or initial chemical composition of the star.\n\nThere are physical processes, such as diffusional gravitational settling of trace elements, and levitation of specific ions by radiative acceleration, that can lead to substantial evolution of atmospheric chemistry during the main sequence lifetime of a star. In cool stars (with atmospheric effective temperatures below about $T_{\\rm eff} \\le 7000$\u00a0K), these separation processes are overwhelmed by deep convective mixing of the outer envelope of the star, which maintains chemical homogeneity, and nearly the initial chemistry. In hot stars (with $T_{\\rm eff} \\ge 20\\,000$\u00a0K), the intense outward radiation flux drives a strong stellar wind, stripping material off the stellar surface so rapidly that separation processes act too slowly to be able to compete.\n\nBetween these two effective temperature ranges lie the A\u2013and late B\u2013type main sequence stars. Such stars do not have any single powerful process acting to force the surface layers to retain essentially their initial chemical composition. In such stars, the atmospheric chemistry may evolve under the influence of loss of heavy trace ions downward by diffusive gravitational settling. Chemical evolution may also occur if the atmosphere acquires ions that are driven upwards from the invisible subsurface layers by radiative acceleration due to the outflowing radiative energy flux. Atoms that are driven into the atmosphere by this effect may also be driven out into space and thus lost .\n\nOther processes interact with or modify the effects of diffusion in the A and B stars. For example, a magnetic field can greatly impede the flow of material through the upper atmosphere of the star into interstellar space. Large-scale circulation currents in the stellar interior can lead to shear-induced turbulence that mixes subsurface layers. This mixing is strongly dependent on the stellar rotation rate, and is probably ineffective in stars with rotation periods of more than a few days. Accretion from dense interstellar clouds, through which the star passes from time to time, may also alter the surface chemistry . Finally, the star may be a member of a double system, and may acquire fresh surface material that is ejected by the evolution of its companion.\n\nAs a result, many A and B stars show chemical abundance ratios $\\log N_{\\rm X}\/N_{\\rm H}$ that are remarkably different from those measured in the Sun. Among A and B stars there are several families of distinctive compositional patterns that reflect different sets of physical conditions. The compositional peculiarities vary rather strongly with effective temperature (and thus with the momentum carried by the outflowing radiation that levitates some trace elements). The peculiarities are very different, depending on whether the star has a strong magnetic field or not. There is also a family of peculiarities due to recent accretion from an interstellar cloud. These chemical peculiarities provide powerful probes of invisible processes occurring beneath the visible layers of all kinds of star, particularly of upward and downward diffusion . 
It is therefore of obvious importance to observe and characterise the nature and time variations of these phenomena.\n\nUntil now, the study of atmospheric abundances of magnetic A and B (Ap and Bp) stars has been restricted mainly to individual studies of specific stars. The detail of these studies vary from coarse models that roughly describe the magnetic field geometry and abundance variations over the stellar surface to more detailed maps of both the magnetic field structure and abundance distributions using spectroscopic observations in all four Stokes parameters . In this paper, we present the results of a large project to study the time evolution of average surface characteristics in the Ap and Bp stars during the $10^8 - 10^9$\u00a0yrs of their main sequence phase.\n\nA major difficulty of determining how the observed chemical signatures vary with time has been that the ages of isolated (field) Ap and Bp stars can be estimated only very roughly. To solve this problem we have studied surface abundance evolution in a sample of magnetic peculiar A and B stars that are open cluster members. The ages of cluster members can be determined with much better accuracy than those of isolated stars, and the cluster age applies to all its members, which are presumed to have formed essentially contemporaneously .\n\nThe following section discusses the observations. Sect.\u00a03 describes the modelling technique. Sect.\u00a04 details the observational results. Sect.\u00a05 explores possible mechanisms to explain the observed trends and Sect.\u00a06 summarises the main conclusions.\n\n# Observations\n\n| Star | Instrument | R | $\\lambda$ (\u00c5) |\n|:------------|-------------:|-------:|--------------:|\n| HD 45583 | FEROS (2) | 48000 | 3528-9217 |\n| HD 61045 | ESPaDOnS (2) | 65000 | 3690-10481 |\n| HD\u00a063401 | FEROS (2) | 48000 | 3528-9217 |\n| HD 74535 | FEROS (2) | 48000 | 3528-9217 |\n| HD\u00a0133652 | ESPaDOnS (1) | 65000 | 3690-10481 |\n| | FEROS (1) | 48000 | 3528-9217 |\n| HD 133880 | ESPaDOnS (2) | 65000 | 3690-10481 |\n| HD 147010 | ESPaDOnS (2) | 65000 | 3690-10481 |\n| HD\u00a0162576 | ESPaDOnS (2) | 65000 | 3690-10481 |\n| HD\u00a0162725 | ESPaDOnS (2) | 65000 | 3690-10481 |\n| HD 304842 | FEROS (2) | 48000 | 3528-9217 |\n| NGC\u00a02169\u00a012 | UVES (1) | 110000 | 3070-10398 |\n| BD+00 1659 | ESPaDOnS (1) | 65000 | 3690-10481 |\n| BD-19 5044L | ESPaDOnS (2) | 65000 | 3690-10481 |\n| BD+49 3789 | ESPaDOnS (2) | 65000 | 3690-10481 |\n| HIP\u00a0109911 | ESPaDOnS (2) | 65000 | 3690-10481 |\n| | | | |\n\nStars analysed in this study. Listed are the star designations, instrument used (with the number of spectra indicated in parentheses), spectral resolution, and spectral range.\n\nOur study used high dispersion spectra of 15 stars, listed in Table\u00a0. The majority of spectra were acquired using the ESPaDOnS spectropolarimeter and the FEROS spectrograph located at the Canada-France-Hawaii-Telescope (CFHT) and the European Southern Observatory's (ESO) La Silla Observatory, respectively. One spectrum was acquired using the UVES spectrograph at ESO's Paranal Observatory.\n\nWe are studying a sample of stars with a limited range of masses (between 3 and 4\u00a0M$_{\\odot}$) because the physical effects driving atmospheric abundance changes, and the main sequence time-scale over which these effects can act, are expected to depend on mass. 
Our sample is selected to allow us to reconstruct the atmospheric abundance evolution of a magnetic Bp star of about $3.5 M_\\odot$.\n\nThe stars of our sample are all definite or probable members of open clusters or associations. The age of each cluster was determined by fitting a theoretical isochrone to the cluster members in an HR diagram ($\\log (L\/L_{\\odot})$ versus $T_{\\rm eff}$). For each cluster, the brightest members (those closest to turn-off from the main sequence) are used as the main indicators of cluster age. It has been found that fitting isochrones to the observed stars in the theoretical HR diagram provides a more precise cluster age than doing this fitting in an observational HR diagram, such as $V$ versus $B-V$. This is because in the theoretical HR diagram the isochrones have a strong hook near the cluster turn-off, while in the observational HR diagram the isochrones tend to a vertical line that does not discriminate clearly between various ages.\n\nFor each star, accurate values of effective temperature ($T_{\\rm eff}$), gravity ($\\log g$), evolutionary mass, magnetic field strength and age were adopted from the literature. When values of $T_{\\rm eff}$ and $\\log g$ were not available, we derived them from available Geneva and $uvby\\beta$ photometry. For the Geneva photometry the fortran program described by was used. For the Str\u00f6mgren $uvby\\beta$ photometry we used a version of the fortran program \"UVBYBETANEW\" that corrects the $T_{\\mathrm{eff}}$\u00a0of the magnetic Bp stars to the appropriate temperature scale. As per the discussions of and , the uncertainties in $T_{\\mathrm{eff}}$\u00a0and $\\log g$\u00a0were taken to be about $\\pm$500\u00a0K and 0.2\u00a0dex, respectively. Table\u00a0 lists the properties of each star, including its designation, associated cluster, age, mass, $T_{\\rm eff}$, $\\log g$, and $v \\sin i$. The final column shows the root-mean-square magnetic field strength ($B_{\\rm rms}$) as computed from whatever modern measurements of mean longitudinal magnetic field $B_{\\rm z}$ are available; this quantity is the most useful value we have to characterise the magnetic field of each star. Where applicable, the appropriate reference is given for each parameter. Unreferenced parameters were derived for this study. 
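For reference, the root-mean-square field strength $B_{\\rm rms}$ quoted for each star can be obtained from its individual $\\langle B_z \\rangle$ measurements as a simple quadratic mean. The sketch below assumes that standard definition; the numerical values in the example are placeholders for illustration only, not measurements from this study.

```python
import numpy as np

def b_rms(bz_values):
    # Root-mean-square of a set of mean longitudinal field values <B_z>, in gauss.
    bz = np.asarray(bz_values, dtype=float)
    return np.sqrt(np.mean(bz ** 2))

# Placeholder <B_z> values (G), shown only to illustrate the calculation:
print(round(b_rms([-350.0, 120.0, 480.0, -210.0]), 1))  # ~320.7 G
```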
We note that the photometrically determined values of $T_{\\rm eff}$ and $\\log g$ vary systematically with increasing age from about 13500\u00a0K and 4.4 to 10000\u00a0K and 3.5, as expected for the evolution of a single star of about 3.5\u00a0M$_{\\odot}$ from ZAMS to TAMS.\n\n# Modelling technique\n\n```latex\n\\begin{table*}\n\\caption{Physical properties of the stars studied.}\n\\begin{tabular}{lrlrrrrr}\n\\hline\\hline\nStar & Cluster & $\\log t$ & M\/M$_{\\odot}$ & \\ensuremath{T_{\\mathrm{eff}}}\\ (K) & $\\log g$\\ & \\ensuremath{v \\sin i}\\ (km s$^{-1}$) & B$_{\\rm rms}$ (G)\\\\ \n\\hline\nHD~147010 & Upper Sco & 6.70 $\\pm$ 0.10$^{1}$ & 3.15 $\\pm$ 0.20$^{1}$ & 13000 $\\pm$ 500$^{2}$ & 4.40 $\\pm$ 0.20$^{2}$ & 15 $\\pm$ 2$^{2}$ & 4825$^{1}$ \\\\\nNGC~2169~12 & NGC~2169 & 6.97 $\\pm$ 0.10$^{1}$ & 3.65 $\\pm$ 0.15$^{1}$ & 13800 $\\pm$ 500$^{1}$ & 4.30 $\\pm$ 0.20 & 56 $\\pm$ 5 & 3410$^{1}$\\\\\nHD~133652 & Upper Cen Lup & 7.20 $\\pm$ 0.10$^{1}$ & 3.35 $\\pm$ 0.15$^{1}$ & 13000 $\\pm$ 500 & 4.30 $\\pm$ 0.20 & 48 $\\pm$ 2 & 1120$^{1}$\\\\\nHD~133880 & & & 3.20 $\\pm$ 0.15$^{1}$ & 13000 $\\pm$ 600$^{3}$ & 4.34 $\\pm$ 0.16$^{3}$ & 103 $\\pm$ 10$^{3}$ & 2300$^{1}$ \\\\\nHD~45583 & NGC~2232 & 7.55 $\\pm$ 0.10$^{1}$ & 3.30 $\\pm$ 0.15$^{1}$ & 12700 $\\pm$ 500$^{2}$ & 4.20 $\\pm$ 0.20$^{2}$ & 70 $\\pm$ 6$^{2}$ & 2730$^{1}$\\\\\nHD~63401 & NGC~2451 & 7.70 $\\pm$ 0.10$^{1}$ & 3.70 $\\pm$ 0.20$^{1}$ & 13500 $\\pm$ 500$^{1}$ & 4.20 $\\pm$ 0.20 & 52 $\\pm$ 4 & 365$^{1}$ \\\\\nHD~74535 & IC~2391 & 7.70 $\\pm$ 0.15$^{1}$ & 3.85 $\\pm$ 0.15$^{1}$ & 13600 $\\pm$ 500$^{1}$ & 4.30 $\\pm$ 0.20 & 45 $\\pm$ 4 & 95$^{1}$\\\\\nBD-19~5044L & IC~4725 & 8.02 $\\pm$ 0.08$^{1}$ & 3.55 $\\pm$ 0.15$^{1}$ & 12800 $\\pm$ 500$^{2}$ & 4.50 $\\pm$ 0.20$^{2}$ & 15 $\\pm$ 3$^{2}$ & 235$^{1}$\\\\\nBD+49~3789 & NGC~7243 & 8.06 $\\pm$ 0.10$^{*}$ & 3.55 $\\pm$ 0.15$^{*}$ & 12900 $\\pm$ 500$^{2}$ & 4.20 $\\pm$ 0.20$^{2}$ & 85 $\\pm$ 5$^{2}$ & 561$^{*}$\\\\\nHIP~109911 & & & 3.65 $\\pm$ 0.15$^{*}$ & 13000 $\\pm$ 500 & 4.30 $\\pm$ 0.20 & 60 $\\pm$ 2 & 348$^{*}$\\\\ \nHD~61045 & NGC~2422 & 8.08 $\\pm$ 0.11$^{1}$ & 3.85 $\\pm$ 0.20$^{1}$ & 13000 $\\pm$ 500$^{2}$ & 4.10 $\\pm$ 0.20$^{2}$ & 64 $\\pm$ 3$^{2}$ & 430$^{1}$\\\\\nHD~304842 & NGC~3114 & 8.13 $\\pm$ 0.15$^{1}$ & 3.55 $\\pm$ 0.15$^{1}$ & 12500 $\\pm$ 500$^{2}$ & 3.90 $\\pm$ 0.20$^{2}$ & 65 $\\pm$ 5$^{2}$ & 20$^{1}$\\\\\nBD+00~1659 & NGC~2301 & 8.22 $\\pm$ 0.10$^{*}$ & 3.65 $\\pm$ 0.15$^{*}$ & 12500 $\\pm$ 500$^{2}$ & 4.00 $\\pm$ 0.20$^{2}$ & 7.0 $\\pm$ 1$^{2}$ & 394$^{*}$\\\\\nHD~162576 & NGC~6475 & 8.41 $\\pm$ 0.13$^{1}$ & 3.10 $\\pm$ 0.15$^{*}$ & 10300 $\\pm$ 500 & 3.70 $\\pm$ 0.20 & 28 $\\pm$ 3 & 15$^{*}$\\\\\nHD~162725 & & & 3.30 $\\pm$ 0.20$^{1}$ & 10000 $\\pm$ 500 & 3.50 $\\pm$ 0.20 & 31 $\\pm$ 3 & 69$^{*}$\\\\\n\\hline\n\\multicolumn{8}{p{0.85\\textwidth}}{{\\sc references} -- (1) \\citet{paper2}; (2) \\citet{BL2013}; (3) \\citet{Bailey2012}; (*) This work} \\\\\n\\label{properties}\n\\end{tabular}\n\\end{table*}\n```\n\nTo determine the atmospheric abundances of the magnetic Bp stars, the fortran program zeeman was used . zeeman is a spectrum synthesis program for stars having magnetic fields. It assumes a magnetic field geometry which is modelled as a simple co-linear multipole expansion, with the strength of the multipole components, the inclination $i$ of the rotation axis, and the obliquity $\\beta$ of the magnetic field axis to the rotation axis specified. 
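As an illustration of the kind of field parameterisation just described, the sketch below collects the quantities of a simple co-linear multipole model into one structure. This is only a schematic of our own, with our own field and angle names; it is not zeeman's actual input format, and the numbers in the example are placeholders.

```python
from dataclasses import dataclass

@dataclass
class ColinearMultipoleField:
    """Schematic parameters of a co-linear multipole field model (our naming).

    Field strengths are in gauss; angles are in degrees.
    """
    b_dipole: float      # polar strength of the dipole component
    b_quadrupole: float  # strength of the co-linear quadrupole component
    b_octupole: float    # strength of the co-linear octupole component
    inclination: float   # i, inclination of the rotation axis to the line of sight
    obliquity: float     # beta, obliquity of the magnetic axis to the rotation axis

# Example: a purely dipolar geometry viewed along the magnetic axis, of the
# kind adopted later in the text when no detailed field model is available.
simple_dipole = ColinearMultipoleField(b_dipole=1500.0, b_quadrupole=0.0,
                                       b_octupole=0.0, inclination=0.0,
                                       obliquity=0.0)
```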
zeeman interpolates an appropriate stellar atmospheric structure from a pre-tabulated grid of ATLAS 9 atmospheric models, based on the $T_{\\rm eff}$ and $\\log g$ assumed. The atomic data for individual spectral lines are taken from the Vienna Atomic Line Database . For all stars, a uniform atmospheric abundance distribution vertically and over the stellar surface was assumed. The microturbulence parameter has been set to zero for all abundance determinations for two reasons. Firstly, it has been found that even non-magnetic stars show no evidence of microturbulence between about 11000 and 14000\u00a0K ; and secondly, it is very probable that the magnetic field is able to suppress convective motions in the atmosphere because of the large energy density in the field.\n\nzeeman searches for an optimal fit between the synthetic and observed spectra by means of a reduced $\\chi^{2}$ fit of the computed spectrum to the observed one. Multiple spectral windows can be synthesised simultaneously and zeeman automatically provides as output the best values for the radial velocity $v_{\\rm R}$ and $v \\sin i$. The abundance ($\\log N_{\\rm X}\/N_{\\rm H}$) of one element at a time is optimised by identifying unblended lines of that element and then fitting, as well as possible, the observed spectral lines of that element. The stars modelled vary in $T_{\\rm eff}$ from about 10000 to 14000\u00a0K and therefore share many spectral lines in common. For consistency, we endeavoured to deduce elemental abundances from the same sets of lines for all stars. Table\u00a0 lists the lines used for modelling all the stars. As far as possible, lines with a range of strengths were used to deduce the final abundances. For each element, we adopted an uncertainty consistent with the change in abundance from the best-fit model that was necessary to produce an unsatisfactory fit in the spectral window (determined by visual inspection). zeeman cannot model simultaneously spectral lines that are widely separated. In such instances, at least two lines (in different spectral windows) were fit separately and the average abundance deduced from the lines was adopted. The uncertainty in this case was estimated from the observed scatter between the computed values of the different spectral lines.\n\n| Element | $\\lambda$ (\u00c5) | Element | $\\lambda$ (\u00c5) |\n|:--------|--------------:|:--------|--------------:|\n| He\u00a0i | 4437.551 | Si\u00a0ii | 5055.984 |\n| | 4713.139 | | 5056.317 |\n| | 5015.678 | Ti\u00a0ii | 4533.960 |\n| | 5047.738 | | 4563.757 |\n| | 5875.599 | | 4571.968 |\n| | 5875.614 | Cr\u00a0ii | 4558.650 |\n| | 5875.615 | | 4565.739 |\n| | 5875.625 | | 4588.199 |\n| | 5875.640 | | 4592.049 |\n| | 5875.966 | Fe\u00a0ii | 4541.524 |\n| O\u00a0i | 6155.961 | | 4555.893 |\n| | 6155.966 | | 4556.392 |\n| | 6155.986 | | 4583.837 |\n| | 6156.736 | | 4583.999 |\n| | 6156.755 | | 5029.097 |\n| | 6156.776 | | 5030.630 |\n| | 6158.146 | | 5032.712 |\n| | 6158.172 | | 5035.708 |\n| | 7771.941 | Pr\u00a0iii | 6160.233 |\n| | 7774.161 | | 6161.194 |\n| | 7775.388 | | 7781.983 |\n| Mg\u00a0ii | 4481.126 | Nd\u00a0iii | 4911.653 |\n| | 4481.150 | | 4912.944 |\n| | 4481.325 | | 4914.094 |\n| Si\u00a0ii | 4621.418 | | 5050.695 |\n| | 4621.722 | | 6145.068 |\n| | 5041.024 | | |\n| | | | |\n\nList of spectral lines modelled.\n\nAll of our stars have measurements of the line-of-sight magnetic field $\\langle B_z \\rangle$, either previously published or from our own unpublished results. 
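The single-element optimisation described above can be pictured as a one-dimensional minimisation of the reduced $\\chi^{2}$. The toy sketch below is our own illustration of that structure, not zeeman's implementation; `synthesize_spectrum` is a hypothetical stand-in for the full polarised spectrum synthesis at fixed $T_{\\rm eff}$, $\\log g$, field geometry and abundances of the other elements.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def reduced_chi2(model, observed, sigma, n_free=1):
    # Reduced chi-squared of a model spectrum against an observed one.
    resid = (observed - model) / sigma
    return np.sum(resid ** 2) / (resid.size - n_free)

def fit_abundance(wavelengths, observed, sigma, synthesize_spectrum,
                  bounds=(-8.0, -2.0)):
    # Optimise log(N_X/N_H) for a single element by minimising reduced chi^2.
    # `synthesize_spectrum(wavelengths, log_abundance)` is a hypothetical
    # callable returning the synthetic spectrum for the chosen atmosphere.
    result = minimize_scalar(
        lambda a: reduced_chi2(synthesize_spectrum(wavelengths, a),
                               observed, sigma),
        bounds=bounds, method='bounded')
    return result.x  # best-fit log(N_X/N_H)
```

In practice several spectral windows are treated, and for widely separated lines the abundances obtained from separate windows are averaged, as described above.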
For two of the stars in our sample, magnetic field models are available from detailed studies: HD\u00a0133880 and HD\u00a0147010 . In these cases, we adopted the published geometries. When no detailed magnetic field model was available, we simply assumed a dipolar magnetic field that was approximately three times $B_{\\rm rms}$ and computed the spectrum with the line of sight parallel to the magnetic axis. For the majority of the stars in our sample we have multiple observations. In the cases where more than two spectra were available, we chose the two that exhibited the greatest differences. In all cases, the derived abundances are the average of results from the two modelled spectra, with the associated uncertainties propagated accordingly.\n\n## Abundance analysis\n\nAtmospheric abundances of He, O, Mg, Si, Ti, Cr, Fe, Pr and Nd were determined for the 15 cluster magnetic Bp stars in our 3.5\u00a0M$_{\\odot}$ sample. In Table\u00a0, we tabulate the mean abundances **$\\log (N_{\\rm X}\/N_{\\rm H})$** for each star. The first three columns recall the star designation, age ($\\log t$) and $T_{\\rm eff}$ from Table\u00a0. The subsequent columns list the average abundance values, with their associated uncertainties, for He, O, Mg, Si, Ti, Cr, Fe, Pr and Nd. For reference, the solar abundance ratios are also shown . Note that the abundance scale we use can be converted to the common scale, having the logarithmic abundance of H as +12, by adding 12 to all our values. Figure\u00a0 provides an example of the quality of fits we achieved for each star.\n\nWe know that serious discrepancies (of the order of 1\u00a0dex) are found between abundances derived from lines of Si\u00a0ii compared to those derived using lines of Si\u00a0iii, and that this situation exists in most, or all, magnetic stars within the $T_{\\rm eff}$ range of this study . In general, for magnetic Bp stars the abundance of Si deduced from lines of Si\u00a0iii is significantly larger than the value found using lines of Si\u00a0ii. The study by suggests that this discrepancy is mostly due to strong vertical stratification of Si, similar to the stratification profiles computed by for stellar atmospheres with $T_{\\rm eff} = 8000$ and 12000\u00a0K, and shown in their Figure 9. We have determined Si abundances using only lines of Si\u00a0ii.\n\nStratification has been directly deduced from observed spectra in the vertical distribution of Fe and other elements , mainly in cool magnetic Ap stars. We expect this phenomenon to occur in the $T_{\\rm eff}$ range discussed in this paper as well. Because both observation and theory suggest that the vertical abundance variations may be of the order of 1\u00a0dex or more, this phenomenon raises quite serious questions about the precise meaning of atmospheric chemical abundances derived assuming that the elements are uniformly distributed in the vertical direction. This is similar to the ambiguous meaning of abundances derived for (horizontally) patchy stars using models that assume uniform horizontal abundance.\n\nAlthough these complications make the meaning of abundance determinations rather inexact, such simplified abundance measurements nevertheless form a useful \"instrumental\" system, which reveals differences between different stars satisfactorily. 
The typical symptom of vertical abundance stratification is that when we find a homogeneous model spectrum that fits lines of an element of intermediate strength, that same model predicts lines that are stronger than the strongest observed lines, and weaker than the weakest observed lines (see Figs.\u00a02 of ). Thus, by simply fitting on average the spectral lines of the element under study, we hypothesise that we are deriving a fairly stable mean abundance which has a similar meaning in stars that do not differ greatly in $T_{\\mathrm{eff}}$. However, the ambiguity due to probable vertical stratification and horizontal inhomogeneities should be kept in mind in evaluating our results.\n\n```latex\n\\begin{sidewaystable*}\n\\vspace{20cm}\n\\caption{Average derived abundances for the stars studied.}\n\\centering\n\\begin{tabular}{llllllllllll}\n\\hline\\hline\nStar & $\\log t$ & \\ensuremath{T_{\\mathrm{eff}}}\\ (K) & $\\log$(He\/H) & $\\log$(O\/H) & $\\log$(Mg\/H) & $\\log$(Si~{\\sc ii}\/H) & $\\log$(Ti\/H) & $\\log$(Cr\/H) & $\\log$(Fe\/H) & $\\log$(Pr\/H) & $\\log$(Nd\/H) \\\\ \n\\hline\nHD~147010 & 6.70 $\\pm$ 0.10$^{1}$ & 13000 $\\pm$ 500$^{2}$ & $-3.11 \\pm 0.28$ & $-4.49 \\pm 0.28$ & $-5.67 \\pm 0.14$ & $-3.72 \\pm 0.14$ & $-5.68 \\pm 0.21$ & $-4.04 \\pm 0.14$ & $-3.33 \\pm 0.14$ & $-6.15 \\pm 0.28$ & $-6.50 \\pm 0.28$ \\\\\nNGC~2169~12 & 6.97 $\\pm$ 0.10$^{1}$ & 13800 $\\pm$ 500$^{1}$ & $-3.09 \\pm 0.30$ & $-3.88 \\pm 0.15$ & $-5.56 \\pm 0.15$ & $-3.87 \\pm 0.15$ & $-5.90 \\pm 0.20$ & $-4.80 \\pm 0.20$ & $-3.76 \\pm 0.10$ & $-7.31 \\pm 0.20$ & $-7.21 \\pm 0.20$ \\\\\nHD~133652 & 7.20 $\\pm$ 0.10$^{1}$ & 13000 $\\pm$ 500 & $-3.16 \\pm 0.28$ & $-3.87 \\pm 0.28$ & $-5.18 \\pm 0.14$ & $-3.36 \\pm 0.28$ & $-5.01 \\pm 0.21$ & $-4.17 \\pm 0.21$ & $-2.90 \\pm 0.14$ & $-6.98 \\pm 0.28$ & $-6.42 \\pm 0.28$ \\\\\nHD~133880 & & 13000 $\\pm$ 600$^{3}$ & $\\leq-2.00$ & $-2.90 \\pm 0.42$ & $-4.12 \\pm 0.28$ & $-2.94 \\pm 0.28$ & $-5.44 \\pm 0.28$ & $-4.65 \\pm 0.21$ & $-3.56 \\pm 0.14$ & $-6.96 \\pm 0.28$ & $-7.04 \\pm 0.42$ \\\\\nHD~45583 & 7.55 $\\pm$ 0.10$^{1}$ & 12700 $\\pm$ 500$^{2}$ & $\\leq-2.40$ & $-3.61 \\pm 0.42$ & $-4.19 \\pm 0.28$ & $-3.34 \\pm 0.28$ & $-5.68 \\pm 0.28$ & $-4.74 \\pm 0.21$ & $-3.43 \\pm 0.21$ & $-7.49 \\pm 0.42$ & $-7.04 \\pm 0.42$ \\\\\nHD~63401 & 7.70 $\\pm$ 0.10$^{1}$ & 13500 $\\pm$ 500$^{1}$ & $-2.64 \\pm 0.28$ & $-3.73 \\pm 0.21$ & $-5.86 \\pm 0.14$ & $-3.96 \\pm 0.14$ & $-6.44 \\pm 0.28$ & $-5.32 \\pm 0.28$ & $-3.84 \\pm 0.14$ & $-7.14 \\pm 0.28$ & $-7.44 \\pm 0.28$ \\\\\nHD~74535 & 7.70 $\\pm$ 0.15$^{1}$ & 13600 $\\pm$ 500$^{1}$ & $-2.52 \\pm 0.28$ & $-4.26 \\pm 0.28$ & $-5.44 \\pm 0.28$ & $-4.28 \\pm 0.14$ & $-6.29 \\pm 0.28$ & $-5.65 \\pm 0.14$ & $-4.02 \\pm 0.28$ & $-8.15 \\pm 0.21$ & $-7.52 \\pm 0.28$ \\\\\n%7.80 $\\pm$ 0.15$^{1}$ & HD~318107 & 11800 $\\pm$ 500$^{1}$ & $\\leq-2.50$ & $-3.65 \\pm 0.20$ & $-5.70 \\pm 0.20$ & $-3.65 \\pm 0.15$ & $-5.05 \\pm 0.10$ & $-4.50 \\pm 0.15$ & $-3.00 \\pm 0.20$ & $-6.35 \\pm 0.20$ & $-6.20 \\pm 0.15$ \\\\\nBD-19~5044L & 8.02 $\\pm$ 0.08$^{1}$ & 12800 $\\pm$ 500$^{2}$ & $-2.16 \\pm 0.21$ & $-3.99 \\pm 0.28$ & $-5.44 \\pm 0.21$ & $-3.94 \\pm 0.28$ & $-6.82 \\pm 0.28$ & $-5.88 \\pm 0.28$ & $-4.24 \\pm 0.28$ & $-8.12 \\pm 0.28$ & $-7.88 \\pm 0.42$ \\\\\nBD+49~3789 & 8.06 $\\pm$ 0.10$^{*}$ & 12900 $\\pm$ 500$^{2}$ & $-2.52 \\pm 0.35$ & $-3.60 \\pm 0.22$ & $-5.25 \\pm 0.25$ & $-3.54 \\pm 0.14$ & $-6.33 \\pm 0.35$ & $-5.44 \\pm 0.28$ & $-4.06 \\pm 0.28$ & $-7.72 \\pm 0.28$ & $-7.72 \\pm 0.25$ \\\\\nHIP~109911 & & 13000 $\\pm$ 
600 & $-2.34 \\pm 0.28$ & $-3.69 \\pm 0.21$ & $-5.38 \\pm 0.14$ & $-3.49 \\pm 0.14$ & $-5.90 \\pm 0.28$ & $-5.35 \\pm 0.28$ & $-3.59 \\pm 0.28$ & $-7.70 \\pm 0.28$ & $-7.37 \\pm 0.28$ \\\\ \nHD~61045 & 8.08 $\\pm$ 0.11$^{1}$ & 13000 $\\pm$ 500$^{2}$ & $-2.06 \\pm 0.28$ & $-3.72 \\pm 0.28$ & $-5.10 \\pm 0.14$ & $-3.94 \\pm 0.28$ & $-6.24 \\pm 0.28$ & $-5.47 \\pm 0.28$ & $-3.98 \\pm 0.28$ & $-7.74 \\pm 0.28$ & $-7.77 \\pm 0.42$ \\\\\nHD~304842 & 8.13 $\\pm$ 0.15$^{1}$ & 12500 $\\pm$ 500$^{2}$ & $-1.54 \\pm 0.28$ & $-3.78 \\pm 0.25$ & $-5.76 \\pm 0.21$ & $-3.64 \\pm 0.28$ & $-6.83 \\pm 0.14$ & $-6.05 \\pm 0.32$ & $-4.35 \\pm 0.32$ & $-7.65 \\pm 0.36$ & $-7.04 \\pm 0.28$ \\\\\nBD+00~1659 & 8.22 $\\pm$ 0.10$^{*}$ & 12500 $\\pm$ 500$^{2}$ & $-2.34 \\pm 0.15$ & $-3.58 \\pm 0.10$ & $-5.62 \\pm 0.10$ & $-3.53 \\pm 0.20$ & $-6.69 \\pm 0.10$ & $-5.02 \\pm 0.10$ & $-3.79 \\pm 0.10$ & $-8.11 \\pm 0.20$ & $-7.34 \\pm 0.20$ \\\\\nHD~162576 & 8.41 $\\pm$ 0.13$^{1}$ & 10300 $\\pm$ 500 & $-1.51 \\pm 0.28$ & $-3.77 \\pm 0.28$ & $-5.34 \\pm 0.14$ & $-4.44 \\pm 0.14$ & $-7.72 \\pm 0.14$ & $-5.12 \\pm 0.14$ & $-3.95 \\pm 0.14$ & $-10.15 \\pm 0.42$ & $-9.10 \\pm 0.42$ \\\\\nHD~162725 & & 10000 $\\pm$ 500 & $-1.74 \\pm 0.36$ & $-3.77 \\pm 0.28$ & $-5.23 \\pm 0.14$ & $-3.54 \\pm 0.28$ & $-7.30 \\pm 0.28$ & $-4.64 \\pm 0.22$ & $-3.66 \\pm 0.28$ & $-8.32 \\pm 0.42$ & $-7.44 \\pm 0.28$ \\\\\nSun & & & $-1.07$ & $-3.31$ & $-4.40$ & $-4.49$ & $-7.05$ & $-6.36$ & $-4.50$ & $-11.28$ & $-10.58$\\\\\n\\hline\n\\multicolumn{12}{p{0.95\\textwidth}}{{\\sc references} -- (1) \\citet{paper2}; (2) \\citet{BL2013}; (3) \\citet{Bailey2012}; (*) This work}\\\\\n\\label{abundances}\n\\end{tabular}\n\\end{sidewaystable*}\n```\n\n# Results\n\n| y-axis | x-axis | Slope | $\\sigma_{D}$ |\n|:-----------------|:-------------------|-------------------:|-------------:|\n| $\\log(\\rm He\/H)$ | $\\log t$ | 0.76 $\\pm$ 0.18 | 4.2 |\n| | $\\log B_{\\rm rms}$ | $-0.48$ $\\pm$ 0.11 | 4.4 |\n| $\\log(\\rm O\/H)$ | $\\log t$ | 0.13 $\\pm$ 0.17 | 0.76 |\n| | $\\log B_{\\rm rms}$ | 0.04 $\\pm$ 0.13 | 0.31 |\n| $\\log(\\rm Mg\/H)$ | $\\log t$ | $-0.15$ $\\pm$ 0.26 | 0.58 |\n| | $\\log B_{\\rm rms}$ | 0.20 $\\pm$ 0.18 | 1.1 |\n| $\\log(\\rm Si\/H)$ | $\\log t$ | $-0.13$ $\\pm$ 0.20 | 0.65 |\n| | $\\log B_{\\rm rms}$ | 0.19 $\\pm$ 0.13 | 1.5 |\n| $\\log(\\rm Ti\/H)$ | $\\log t$ | $-1.07$ $\\pm$ 0.27 | 4.0 |\n| | $\\log B_{\\rm rms}$ | 0.69 $\\pm$ 0.12 | 5.8 |\n| $\\log(\\rm Cr\/H)$ | $\\log t$ | $-0.97$ $\\pm$ 0.24 | 4.0 |\n| | $\\log B_{\\rm rms}$ | 0.40 $\\pm$ 0.16 | 2.5 |\n| $\\log(\\rm Fe\/H)$ | $\\log t$ | $-0.41$ $\\pm$ 0.16 | 2.6 |\n| | $\\log B_{\\rm rms}$ | 0.24 $\\pm$ 0.08 | 3.0 |\n| $\\log(\\rm Pr\/H)$ | $\\log t$ | $-1.29$ $\\pm$ 0.29 | 4.4 |\n| | $\\log B_{\\rm rms}$ | 0.88 $\\pm$ 0.20 | 4.4 |\n| $\\log(\\rm Nd\/H)$ | $\\log t$ | $-0.81$ $\\pm$ 0.24 | 3.4 |\n| | $\\log B_{\\rm rms}$ | 0.46 $\\pm$ 0.17 | 2.7 |\n| | | | |\n\nThe linear fit parameters for the abundance of each element versus age and magnetic field strength. Shown are the slopes with their respective uncertainties and the significance of the slope, $\\sigma_{D}$\n\n## Abundance variations with time\n\nFigure\u00a0 shows nine plots, one for each of the nine elements studied in this paper. In every panel we show the mean abundance found for each of the 15 stars of our sample as a function of the age of that star, $\\log t$. Uncertainties in age are taken from Table\u00a0, and in abundance from Table\u00a0. 
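The slopes, their uncertainties, and the significances $\\sigma_{D}$ listed in the table above are consistent with $\\sigma_{D}$ being the ratio of the slope to its standard error. The sketch below shows a standard error-weighted straight-line fit of that kind; it is our own illustration of the method, since the exact fitting procedure is not spelled out here, and it ignores the uncertainties on the x coordinate.

```python
import numpy as np

def weighted_line_fit(x, y, sigma_y):
    # Error-weighted least-squares fit of y = a + b*x with known y uncertainties.
    # Returns the slope b, its standard error, and the significance |b|/sigma_b.
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    w = 1.0 / np.asarray(sigma_y, dtype=float) ** 2
    S, Sx, Sy = w.sum(), (w * x).sum(), (w * y).sum()
    Sxx, Sxy = (w * x * x).sum(), (w * x * y).sum()
    delta = S * Sxx - Sx ** 2
    slope = (S * Sxy - Sx * Sy) / delta
    sigma_slope = np.sqrt(S / delta)
    return slope, sigma_slope, abs(slope) / sigma_slope
```

Applied to the abundances of one element against $\\log t$ (or $\\log B_{\\rm rms}$), a fit of this kind returns numbers directly comparable to the tabulated slope and $\\sigma_{D}$.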
Although the error bars are large, in six of the nine panels a trend is clearly present. We can immediately see that on average several of the elements studied decrease or increase in abundance as our average $3.5 M_\\odot$ star ages. It is essentially because we have good age resolution that we are able to detect these variations.\n\nFor each panel we show the best-fit linear regression to $\\log N_{\\rm X}\/ N_{\\rm H}$ versus $\\log t$ (dashed blue line). Table\u00a0 tabulates the slope of the best-fit line with its uncertainty, as well as the significance of the slope, $\\sigma_{D}$. This Table confirms the visual impression that the slopes are significantly (or nearly significantly) different from zero for He, the Fe-peak elements, and the rare earths, but not for the light elements: these elements evolve substantially in abundance through the main sequence life of magnetic Bp stars of about $3.5 M_\\odot$.\n\nWe also show for comparison the solar abundance of each element studied (solid horizontal red line). It is clear that overabundance or underabundance relative to solar abundance is systematically present for all the elements studied. We discuss individual elements below.\n\n### Helium\n\nOne of the defining properties of magnetic Bp stars is the fact that they are visibly underabundant in He compared to normal stars of the same $T_{\\mathrm{eff}}$\u00a0values. This is true of all of the magnetic Bp stars in our sample. Unexpectedly, we observe a clear evolutionary *increase* in the abundance of He with stellar age. Substantial underabundances (of the order of 2\u00a0dex compared to the Sun) are found in stars near the ZAMS and those with ages up to about $\\log t \\sim 8$. Older stars in our sample (nearer the TAMS) show only modest underabundances of He, about 1 \u2013 0.5\u00a0dex smaller than the solar ratio.\n\n### The light elements: oxygen, magnesium and silicon\n\nOxygen, magnesium and silicon are systematically different from solar abundance ratios by values up to about 1\u00a0dex. No apparent trend in abundance with age is seen in any of these three elements. Typically, both oxygen and magnesium are underabundant compared to the solar abundance ratios with a few exceptions. For oxygen, only HD\u00a0133880 is overabundant compared to the Sun; this anomalous behaviour is well-documented by . HD\u00a0133880 and HD\u00a045583 have nearly solar abundances of magnesium, nearly 0.5\u00a0dex larger than the other stars of the sample. It is not clear why these two (very similar) Bp stars depart so far from the general behaviour of other stars in our sample. In general, silicon appears to be overabundant compared to the solar abundance ratio throughout the main sequence evolution. Moderate Si overabundance is common for magnetic Bp stars between effective temperatures of about 10000 and 15000\u00a0K.\n\nAlthough there are no clear trends in abundance for any of these three elements, all show substantial star-to-star variations around the mean.\n\n### The Fe-peak elements: Ti, Cr and Fe\n\nAll three Fe-peak elements studied here, Ti, Cr and Fe, are quite overabundant compared to their solar ratios early in the main sequence phase. However, the level of excess abundance is larger for the lower abundance elements Ti and Cr than for Fe. In fact, Cr reaches almost the same level of total abundance as Fe close to the ZAMS. 
For these elements, definite decreases in atmospheric abundance are observed with time, and by the end of the main sequence all three elements are not much more abundant than the solar values.\n\n### The rare earth elements: Pr and Nd\n\nClose to the ZAMS, both these elements appear very overabundant (more than 4\u00a0dex) relative to the Sun. There is a clear decrease in the abundances of each element with age, although even at the TAMS both elements are still typically 2\u20133\u00a0dex overabundant. The apparently smaller scatter around the regression lines compared to other elements is produced mainly by the very large range seen in the y-axis of these plots.\n\nThe single outlier with low abundance near the TAMS is HD\u00a0162576. This star also has the smallest Si abundances.\n\n## Abundance variations with magnetic field strength\n\ndiscovered that magnetic field strength decreases during the main sequence lifetime of magnetic Bp stars. Since we see trends in abundance with time for He, Ti, Cr, Fe, Pr and Nd, we expect that there will be correlations between abundances and stellar magnetic field strength. Fig.\u00a0 plots abundances versus $\\log B_{\\rm rms}$ in the same manner as for $\\log t$. In all the elements for which a trend was observed versus time (He, Ti, Cr, Fe, Pr and Nd) a correlation is also found for abundance versus $\\log B_{\\rm rms}$, but in the opposite sense: helium abundance decreases with increasing magnetic field strength and the Fe-peak and rare-earth elements increase in abundance with increasing magnetic field strength. It is clear that the correlations seen in Figure\u00a0 are those expected from the known decline of $\\langle B_z \\rangle$\u00a0with age together with the newly discovered variations of abundance with age. However, the observed correlations may also contain some information about how the time evolution of atmospheric abundance is modified by magnetic fields of various strengths. This may become clear with more detailed modelling.\n\nWe note that the results shown in Figure\u00a0 seem to be the explanation of the correlation between the Geneva photometric measurements of the 5200\u00a0\u00c5\u00a0depression (using the Z index) and field strength, as discussed by . This effect has also been exploited recently by the Special Astrophysical Observatory (SAO) group to search for particularly large magnetic fields in Ap\/Bp stars.\n\n## Cr and Fe abundances versus effective temperature\n\nOur results can be compared to previous studies of magnetic Bp stars. presented a comprehensive comparison of Cr and Fe abundances in roAp (rapidly oscillating Ap) stars versus effective temperature. However, their study was mostly restricted to stars at or below around 10000\u00a0K. added stars between about 10000 and 15000\u00a0K and our current study, as well as a previous study by , adds 23 stars in that same temperature range. The results are shown in Fig.\u00a0. Our observations confirm the tendency, already visible in the data of Ryabchikova, for the stars with $T_{\\mathrm{eff}}$\u00a0above 8000\u00a0K to exhibit substantially larger abundance dispersion than is shown by cooler stars. Our new results suggest that this extra scatter might be due to mixing stars of quite different ages at a given value of $T_{\\mathrm{eff}}$, although we do not yet have enough abundance measurements of magnetic stars with both known mass and known age to confirm or reject this idea. 
It appears that for Cr the largest observed abundances peak at around 10500\u00a0K and decline with increasing effective temperature. A similar trend is seen with Fe, but to a lesser extent. There is only a broad flat peak in the abundance of Fe with $T_{\\mathrm{eff}}$. These results support the conclusions of that the maximum abundance for Cr and Fe both approach the same value of between $-3$ and $-4$\u00a0dex.\n\n# Discussion\n\nIt is well known that during its evolution on the main sequence, a star undergoes an extensive process of climate change. The atmospheric effective temperature decreases by about 30\u2006%, the radius increases by a factor of order three, and the gravity decreases by a factor of ten. In effect, by studying the changes in average atmospheric chemical composition of stars in our sample, with its limited mass range, as a function of cluster age, we are observing the time evolution of the atmospheric chemistry of a $3.5 M_\\odot$ star through its main sequence life. Figure\u00a0 shows a Hertzsprung-Russell (HR) diagram for the stars in this study, compared to theoretical stellar evolutionary models .\n\nWhat we have observed, for the first time, is a clear evidence of the secular evolution of the atmospheric chemistry of the magnetic peculiar B-type (Bp) stars during the main sequence phase. That is, we have observed how the consequences of diffusion processes are modified by time and by stellar climate change. Figure\u00a0 shows that the abundances of He, Ti, Cr, Fe, Pr, and Nd change monotonically with time. In contrast, the light metals O, Mg and Si also have peculiar abundances, but show no significant trends with age.\n\nIt is generally believed that the peculiar chemical abundances found in the atmospheres of magnetic Ap and Bp stars are the result of microscopic diffusion, in competition with other processes such as turbulent diffusion, convection, meridional circulation, well-mixed mass loss, accretion, and the effects of a magnetic field.\n\nThe basic ideas of how diffusion can lead to anomalous atmospheric abundances have been discussed for some years . Essentially, in a quiescent stellar plasma, the gravitational field tends to cause trace atoms to diffuse downward relative to the dominant hydrogen gas, but the outward flow of radiation from centre to surface is responsible for a force which tends to levitate some trace atoms up through the atmosphere, if they absorb at many wavelengths and are not too abundant. In slowly rotating middle main sequence stars, which have relatively weak mixing near the surface, this phenomenon can lead to anomalous atmospheric chemical abundance.\n\nDetailed and quantitative modelling of diffusion is actually quite complicated. The average upward acceleration per ion of a particular species depends on the intensity of the radiation (which increases with effective temperature $T_{\\mathrm{eff}}$), on the specific atomic transitions that the ion can undergo (especially those arising from low-lying energy levels), and on the number density of the trace ion. In general, as the fractional number density of the trace ion increases, the acceleration per ion decreases. For many trace ions of low abundance (having, say, number density less than about $10^{-6}$ that of H) in stars of $T_{\\mathrm{eff}}$\u00a0above about 7000\u00a0K, the upward acceleration $g_{\\rm R}$ due to the radiation field is larger than the local acceleration of gravity $g$, and the ion diffuses upwards rather than downward. 
In slowly rotating middle main sequence stars, which have relatively weak mixing near the surface, this phenomenon can lead to anomalous chemical abundances in the stellar atmospheres.\n\nThe actual vertical variation of radiative acceleration for various ions in specific stellar models has been calculated by a number of groups. The calculations fall into two general types. Because the acceleration per ion decreases as the fractional abundance of the ion increases (due to saturation of the spectral lines that the ion absorbs), an ion which is levitated at low abundance can become locally abundant enough that $g_{\\rm R} = g$, a situation in which the tendency of that trace ion to diffuse upwards or downwards vanishes. It is possible to determine the fractional abundance for a specific atom as a function of position in a region (e.g. with height in the stellar atmosphere) for which this condition is satisfied everywhere in the region, and diffusion of this atom ceases. The run of abundance through the region satisfying this condition is known as the \"equilibrium abundance distribution\". Some recent computations of equilibrium abundance distributions in the atmospheres of the late B stars are reported by , and .\n\nRecent work in this field has focussed on re-computing the structure of atmospheres in which equilibrium abundance distribution computations suggest the presence of strong vertical variations in abundance, in order to have the atmospheric structure be consistent with the abundance stratification. It is found that when large variations in abundances of abundant elements with altitude are present, the atmospheric structure is substantially perturbed.\n\nHowever, an equilibrium abundance distribution may not be achievable in a region if not enough atoms of the species are available from below. In this second case, atoms may diffuse slowly up into the region from below and at the same time be removed from the top. This situation could settle into a stationary state in which the flux of atoms through the region is constant with height, and the abundance distribution is unchanging but the actual abundance is smaller than the value leading to equilibrium. In principle, if a large enough volume of the star is considered, the evolution of abundance of an atom with radius, including through the atmosphere, could be computed as a function of time. Because the time-scale for evolution varies strongly with density, this is a stiff problem. Solutions on the scale of a stellar envelope, to try to explain the chemical abundances in the atmospheres of metallic-line (Am) stars, including either a (mixed) stellar wind or deep, but weak, turbulent mixing, have been reported by and .\n\nThe time variation with height of abundance of a very unabundant element diffusing up through a stellar atmosphere has recently been studied by , who find that a stationary state of constant particle flux with height is generally achieved after a few hundred years, provided that the atoms continue to diffuse out of the top of the atmosphere. The actual value of the flux, and the run of abundance with height, is set by the abundance of the atom supplied at the bottom of the atmosphere from the reservoir below. We will refer to this situation as \"stationary flow-through\".\n\nWe now apply these general ideas to the elements studied here, in stellar atmospheres of $\\log g \\sim 4$ and $\\ensuremath{T_{\\mathrm{eff}}}\\sim 10-13\\,000$\u00a0K. 
In the panels of Figure\u00a0, it appears that we can identify several different cases.\n\n## Abundant light elements: He, O\n\nThe general behaviour of these elements was already predicted by . For the very abundant light elements He and O, the radiative acceleration for $\\ensuremath{T_{\\mathrm{eff}}}\\sim 12000$\u00a0K is much too weak to support a solar abundance of these elements (cf. $g_{\\rm rad}$ calculations for $\\ensuremath{T_{\\mathrm{eff}}}= 12000$\u00a0K by ). These two elements are expected to diffuse downward into the stellar envelope below the atmosphere until the relative abundance has fallen enough that radiative acceleration can support them in the atmosphere. Consequently, He and O are expected to have abundances well below the solar values, and this is observed.\n\nThe diffusion of He in stellar atmospheres with well-mixed winds (including H) has been studied in more detail by , and ; all groups confirm that, without a mixed wind from the star, He should diffuse downward until quite low relative abundance is reached. However, we know of no published predictions of how low the He abundance should drop before an equilibrium distribution is achieved. It may well be that some degree of turbulent mixing with sub-atmospheric layers is required to keep the abundance of He as large as is observed (Figure\u00a0).\n\nThe *increase* in He abundance with stellar age that we observe is particularly unexpected. Radiative levitation is so weak (due to the shadowing of resonance lines of He\u00a0i by the very strong H continuum bound-free absorption below the Lyman limit) that it is not expected to play a significant role in the observed He abundance, and especially not in its increase with decreasing $T_{\\mathrm{eff}}$. The observed increase of He abundance towards the solar mixing ratio suggests either that there is significant and increasing mixing upward of envelope He by some process unrelated to diffusion, or that there is accretion of (He-rich) interstellar gas.\n\nhave studied the diffusion of O in stars in our mass range, using a simple approximation to estimate the radiative acceleration. Extrapolating their results, it appears that radiation may be able to support O at an abundance one to two dex below the solar abundance. This conclusion is supported by the very small $g_{\\rm rad}$ value for O at $12000$\u00a0K found by . The rather mild underabundance of O that we observe (about 0.5\u00a0dex) is probably higher than the level that would be found by an equilibrium abundance calculation. Furthermore, as the stars in our sample age, $T_{\\mathrm{eff}}$\u00a0and $\\log g$ both decrease. The change in $T_{\\mathrm{eff}}$\u00a0typically reduces $g_{\\rm R}$. However, the decrease in $\\log g$ means that a smaller value of $g_{\\rm R}$ is required to support ions of O, which in turn means that a larger abundance can be supported. It is not clear which of these two effects dominates, and in fact the observed lack of significant variation in the O abundance with age suggests that if the observed abundance is the result of radiative levitation, the two effects roughly cancel. 
We do not have any explanation at present for the slight overabundance of O observed for a single star, HD\u00a0133880.\n\nIt would clearly be of interest to have available some published results of computed equilibrium abundances of He and O in our temperature range, and especially to have such calculations follow the evolution of a star of $3.5 M_\\odot$ from $\\ensuremath{T_{\\mathrm{eff}}}= 13000$\u00a0K and $\\log g = 4.5$ to $\\ensuremath{T_{\\mathrm{eff}}}= 10000$\u00a0K and $\\log g = 3.5$, in order to determine the importance of radiative levitation of He and O in the observed abundance evolution of the stars of our study.\n\n## Light metals: Mg and Si\n\nThe Mg abundance, based generally only on the 4481\u00a0\u00c5\u00a0line of Mg\u00a0ii, appears to be about 1\u00a0dex below the solar abundance, or $\\log(n_{\\rm Mg}\/n_{\\rm H}) \\approx -5.2$. This may be compared with the predicted Mg abundance profile for a star of $\\ensuremath{T_{\\mathrm{eff}}}= 12000$\u00a0K of and of . The abundance predicted by the equilibrium calculation ($g_{\\rm R} = g$ throughout the atmosphere) is mildly non-uniform through the line-forming region, but is of about this same value. Thus it appears that Mg may be an element in which the supply of atoms from below has been large enough to allow the development of an equilibrium stratification.\n\nNo strong trend of abundance with age is observed. We have no explanation at present for the two stars (HD\u00a0133880 and HD\u00a045583) that deviate strongly from the mean behaviour, with Mg abundances about 1\u00a0dex larger than other, younger and older, stars.\n\nSince Mg may well be described in the stars of our sample by the equilibrium abundance distribution, it would be of great interest to have calculations of the equilibrium atmospheric abundance of Mg following the evolution of $T_{\\mathrm{eff}}$\u00a0and $\\log g$ for our $3.5 M_\\odot$ stars.\n\nSilicon equilibrium atmospheric abundance has been studied by a number of authors, including , , and . While detailed results differ, the overall conclusion is that Si is expected to be of order 1\u00a0dex underabundant in the atmosphere at $\\ensuremath{T_{\\mathrm{eff}}}\\sim 12000$\u00a0K. Instead, as has been found in the past, we observe an overabundance of about 1\u00a0dex at all ages. This situation has been a long-standing puzzle. Because it is not clear how to obtain an atmospheric abundance that is nearly 2\u00a0dex larger than the maximum value that can be supported by radiation pressure, various explanations have been explored, such as support by a horizontal magnetic field, non-LTE effects, etc., but none have been found to offer convincing explanations of the observed overabundance at all ages.\n\n## Iron peak elements: Ti, Cr and Fe\n\nExpected abundances for these elements based on equilibrium have recently been computed for stars in our mass range by , , and (for Fe) by . The results are most extensive for iron. The computed equilibrium abundances are only qualitatively in agreement with one another, and depend on still uncertain physics, particularly on how to treat redistribution of momentum between ionisation stages, but also on the effects of the magnetic field, and the chemical composition assumed for the computation. However, all the computations of equilibrium abundances agree that Fe in the line forming region of stars in our mass range should reach equilibrium at an abundance of around $-3$ to $-3.5$\u00a0dex relative to H. 
This is reasonably consistent with the values that we observe (Figure 2), so this may well be an element for which the equilibrium assumption of zero diffusion is approximately correct. This implies that an adequate supply of Fe ions is available from below the atmosphere to replenish atmospheric atoms lost to space during the initial period of equilibration, and to keep the abundance high enough to satisfy the equilibrium condition.\n\nIt is clear from the few values of $T_{\\mathrm{eff}}$\u00a0for which equilibrium Fe abundances have been computed that the radiative acceleration and the equilibrium abundance decrease with decreasing effective temperature. However, there is no computational information on how the equilibrium abundance varies as $\\log g$ decreases from 4.5 to 3.5, except for the qualitative result that the effect of the decrease in $\\log g$ should be to make it possible for a given radiative force to support more atoms. Since the observed evolution of Fe abundance is that the abundance decreases with time, it appears that the decrease in radiative acceleration with decreasing $T_{\\mathrm{eff}}$\u00a0may dominate. This is certainly a question that could be studied by computing an appropriate series of equilibrium abundance models following the evolution of a $3.5 M_\\odot$ star.\n\nThe available equilibrium calculations for Ti and Cr in the mass range of our observations by have been cut off in the line-forming region because of artificial limits of 1000 times the solar abundances imposed on the calculations. Thus we may suppose that without these limits, the equilibrium calculations would imply equilibrium abundances of Ti and Cr of $\\log(n_{\\rm el}\/n_{\\rm H})$ similar to those of Fe, in the range of $-3$ to $-4$. This is confirmed by the calculations of . Although we observe overabundances of these two elements of up to 2\u00a0dex when the stars are young, the Cr abundance seems to be generally less than the equilibrium value, and the Ti abundances almost certainly are below equilibrium even at young ages. These may well be elements for which the abundance would be closer to that predicted by the assumption of a stationary flow-through state, with atoms fed into the atmosphere from below and lost from the top. In this case the limiting factor determining the atmospheric abundances is the available number density of Cr or Ti brought up from the envelope by diffusion.\n\nAs for iron, it is not known whether the probable decrease in radiative acceleration with declining $T_{\\mathrm{eff}}$, or the increased abundance that can be supported as $\\log g$ decreases, is the dominant effect during main sequence evolution. It appears that the effect of decreasing radiative acceleration, which may well reduce the supply of Cr and Ti at the bottom of the atmosphere, and thus the atmospheric abundances if these elements are in a stationary flow-through state rather than in equilibrium, may dominate, as the abundances of both these elements are observed to decrease strongly with increasing stellar age. 
It would be of great interest to have computations of equilibrium abundances of Ti and Cr following the evolution of a $3.5 M_\\odot$ star for more direct comparison with our results.\n\n## Trace heavy elements: Nd and Pr\n\nThere are no calculations of the equilibrium abundances of heavy elements such as Nd and Pr near the $T_{\\mathrm{eff}}$\u00a0temperature range of interest to us here, except for the equilibrium stratification calculated for a simplified artificial low-abundance element with $\\ensuremath{T_{\\mathrm{eff}}}= 12000$\u00a0K by . This artificial element is expected to behave somewhat like Hg, and the equilibrium abundance computed for it is about $-8$\u00a0dex without a magnetic field, and possibly even smaller high in the atmosphere in the presence of a strong magnetic field. It is not clear to what extent this result is applicable to Nd or Pr. More realistic computations would clearly be of great interest.\n\nIn considerably cooler magnetic Ap stars, it is known from modelling of observed spectra that these elements are strongly stratified, with the abundance of Nd as much as 4\u00a0dex more abundant high in the atmosphere than near optical depth unity . However, we know of no similar stratification models of rare earths in Ap stars in our temperature range. The stars in our sample are hot enough that we have not been able to identify the Nd\u00a0ii lines that would make stratification analysis possible.\n\nWe thus have no real evidence as to whether Nd and Pr at the temperatures of our stellar sample have approximately equilibrium abundance distributions, or are in a stationary flow-through state in which these elements are entering the bottom of the atmosphere from below and are being lost to space from the top.\n\nLike the Fe-peak elements, these rare earths are observed to have mean abundances that clearly decrease with stellar age. This could be because the radiative acceleration upward is **probably** declining with decreasing $T_{\\mathrm{eff}}$, and thus either decreasing the equilibrium abundance in the atmosphere, or decreasing the supply at the bottom of the atmosphere if the flow-through case applies.\n\n# Summary and conclusions\n\nIn this paper, we report the discovery of the time variations of atmospheric abundances during the main sequence lifetime of a magnetic Bp star. Large overabundances are observed for Ti, Cr, and Fe near the ZAMS. As the star evolves on the main sequence, the abundances of these Fe-peak elements clearly decrease, approaching nearly solar values closer to the TAMS. The rare-earth elements Pr and Nd show drastic overabundances in young Bp stars, with values nearly 10$^{4}$ times greater than in the Sun. Near the TAMS, magnetic Bp stars remain overabundant in rare-earth elements, still exhibiting abundances of Pr and Nd that are at least a factor of 100 larger than the solar ratios. The light elements O, Mg and Si show no evidence of time variations. However, O and Mg are, in general, underabundant and Si is always overabundant compared to the Sun. Remarkably, we found that the abundance of He increases during the main sequence lifetime from about 1% to 10% of the solar He abundance. We conclude that the observed increase is either the result of significant mixing in the stellar envelope of He that is not related to diffusion, or that there is accretion of He-rich gas from the interstellar medium.\n\nAs the climate in magnetic Bp stars changes with stellar age, important and systematic changes in surface chemistry result. 
These changes are due both to the evolving current climate of the outer layers of the star and to the previous history of diffusion and competing effects. The systematic changes we have discovered present a major challenge to theoretical modelling. Efforts to reproduce theoretically the observed evolutionary changes should lead to greatly improved understanding of the physics at work in the envelopes of magnetic Bp stars. Specifically, more detailed calculations on the equilibrium abundances for elements are necessary to fully interpret the results we present. Further, efforts to follow the evolution of a 3.5\u00a0M$_{\\odot}$ star with time from a series of equilibrium abundance models will help differentiate between competing effects that influence the amount of radiative support for atoms in the stellar atmosphere. These competing effects include, for example, the decrease in gravity and the probable decrease in radiative acceleration with decreasing $T_{\\mathrm{eff}}$.\n\nFuture work will increase the number of stars in the current mass bin to enhance the current sample as well as increase the number of mass bins studied to include more massive stars (4-5\u00a0M$_{\\odot}$) and less massive stars (2-3\u00a0M$_{\\odot}$) to see if similar trends are observed.\n\n[^1]: Based in part on observations made with the European Southern Observatory (ESO) telescopes under the ESO programmes 072.D-0410(A) and 086.D-0449(A). It is also based in part on observations carried out at the Canada-France-Hawaii Telescope (CFHT), which is operated by the National Research Council of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique of France and the University of Hawaii.","meta":{"dup_signals":{"dup_doc_count":18,"dup_dump_count":12,"dup_details":{"curated_sources":1,"2022-49":1,"2021-21":1,"2019-43":1,"2019-09":1,"2018-39":2,"2018-26":2,"2018-09":2,"2017-47":2,"2017-39":2,"2017-30":2,"2023-06":1}},"filename":"out\/1312.0511_extract_Bailey-Landstreet-Bagnulo.tex.md"},"subset":"arxiv"} +{"text":"abstract: We present and discuss the design details of an extensible, modular, open source software framework called EXOSIMS, which creates end-to-end simulations of space-based exoplanet imaging missions. We motivate the development and baseline implementation of the component parts of this software with models of the WFIRST-AFTA coronagraph, and present initial results of mission simulations for various iterations of the WFIRST-AFTA coronagraph design. We present and discuss two sets of simulations: The first compares the science yield of completely different instruments in the form of early competing coronagraph designs for WFIRST-AFTA. The second set of simulations evaluates the effects of different operating assumptions, specifically the assumed post-processing capabilities and telescope vibration levels. 
We discuss how these results can guide further instrument development and the expected evolution of science yields.\nauthor: Dmitry Savransky and Daniel Garrett\ntitle: WFIRST-AFTA Coronagraph Science Yield Modeling with EXOSIMS\n\n**Address all correspondence to**: Dmitry Savransky ()\n\n```latex\n\\begin{spacing}{2} % use double spacing for rest of manuscript\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Introduction}\n\\label{sect:intro} % \\label{} allows reference to this section\nThe majority of exoplanets discovered to date have been detected indirectly, by looking for effects these planets have on their host stars. Directly imaging exoplanets will provide a great deal of additional information unobtainable by most indirect detection methods, and will yield discoveries that expand the population of known exoplanets. While direct imaging of exoplanets has been demonstrated with ground-based instruments, the planets imaged so far have all been very young, very large, and self-luminous, on long-period orbits. Imaging of smaller and more Earth-like planets will likely require space observatories such as the Wide-Field Infrared Survey Telescope-Astrophysics Focused Telescope Assets (WFIRST-AFTA). Such observatories are major undertakings requiring extensive planning and design.\n\nBuilding confidence in a mission concept's ability to achieve its science goals is always desirable. Unfortunately, accurately modeling the science yield of an exoplanet imager can be almost as complicated as designing the mission. While each component of the system is modeled in great detail as it proceeds through its design iterations, fitting these models together is very challenging. Making statements about expected science returns over the course of the whole mission requires a large number of assumptions, which often go unstated when such results are presented. This makes it challenging to compare science simulation results from different groups, and to systematically test the effects of changing just one part of the mission or instrument design.\n\nWe seek to address this problem with the introduction of a new modular, open source mission simulation tool called EXOSIMS (Exoplanet Open-Source Imaging Mission Simulator). This software is specifically designed to allow for systematic exploration of exoplanet imaging mission science yields. The software framework makes it simple to change the modeling of just one aspect of the instrument, observatory, or overall mission design. At the same time, this framework allows for rapid prototyping of completely new mission concepts by reusing pieces of previously implemented models from other mission simulations.\n\n%Planet-finding is challenging because planets range from millions to billions of times fainter than their host stars. Conventional imaging systems do not have the dynamic range to handle such large contrast values. More importantly, the finite size of telescope apertures result in diffraction patterns of star light (Airy rings) which superimpose the light from orbiting planets. Direct exoplanet imaging becomes dependent on the amount of starlight suppression an instrument can achieve. \n%Space-based instruments with the necessary dynamic range or contrast have additional restrictions on their ability to directly image exoplanets. The observatory's orbit determines when desired targets are in view. 
The telescope design determines if visible targets may be imaged at a given time when other bright objects are near the field of view such as the sun, moon, or other planets within our solar system. \n\nModeling the science yield of an exoplanet imager is primarily difficult because it is completely conditional on the true distributions of planet orbital and physical parameters, of which we so far have only partial estimates. This makes the mission model an inherently probabilistic one, which reports posterior distributions of outcomes conditioned on some selected priors. Since the introduction of observational completeness by Robert Brown\\cite{brown2005}, it is common to approach exoplanet mission modeling with Monte Carlo methods. Various groups have pursued such modeling, often focusing on specific aspects of the overall mission or observation modeling\\cite{brown2010new,Savransky2010,turnbull2012search,Stark2014}.\n\nA second challenge is correctly including all of the dynamic and stochastic aspects of such a mission. Given a spacecraft orbit, a target list, and the constraints of the imaging instrument, we can always predict when targets will be observable. Incorporating this knowledge into a simulation, however, can be challenging if a single calculated value represents the predictions, i.e., the number of planets discovered. Similarly, while it is simple to write down the probability of detecting a planet upon the first observation of a star, it is more challenging to do the same for a second observation an arbitrary amount of time later, without resorting to numerical simulation\\cite{brown2010new}. EXOSIMS deals with these challenges by explicitly simulating every aspect of the mission and producing a complete timeline of simulated observations including the specific targets observed at specific times in the mission and recording the simulated outcomes of these observations. While one such simulation does not answer the question of expected mission science yield, an ensemble of many thousands of such simulations gives the data for the posterior distributions of science yield metrics. EXOSIMS is designed to generate these ensembles and provide the tools to analyze them, while allowing the user to model any aspect of the mission as detailed as desired.\n\nIn \\S\\ref{sec:EXOSIMS} we provide an overview of the software framework and some details on its component parts. As the software is intended to be highly reconfigurable, we focus on the operational aspects of the code rather than implementation details. We use the coronagraphic instrument currently being developed for WFIRST-AFTA as a motivating example for specific implementations of the code. In \\S\\ref{sec:wfirst} we present mission simulation results for various iterations of the WFIRST-AFTA coronagraph designs using components that are being adapted to build the final implementation of EXOSIMS.\n\nEXOSIMS is currently being developed as part of a WFIRST Preparatory Science investigation, with initial implementation targeted at WFIRST-AFTA. This development includes the definition of a strict interface control, along with corresponding prototypes and class definitions for each of the modules described below. The interface control document and as-built documentation are both available for public review and comment at \\linkable{https:\/\/github.com\/dsavransky\/EXOSIMS}. Initial code release is targeted for Fall 2015, with an alpha release in February of 2016 and continued updates through 2017. 
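\n\n% Illustrative sketch of the ensemble-of-simulations approach described above; the function name, numbers, and record structure are hypothetical and are not the EXOSIMS interface.\n\\begin{verbatim}\nimport random\n\ndef simulate_mission(detection_prob=0.05, n_targets=40, seed=None):\n    # Toy stand-in for one end-to-end survey simulation: each target\n    # independently yields a detection with a fixed probability.\n    rng = random.Random(seed)\n    return sum(1 for _ in range(n_targets) if rng.random() < detection_prob)\n\n# An ensemble of independent simulated missions gives the posterior\n# distribution of a yield metric (here, the number of detections).\nensemble = [simulate_mission(seed=k) for k in range(5000)]\nmean_yield = sum(ensemble) \/ len(ensemble)\nprob_zero = ensemble.count(0) \/ len(ensemble)\nprint(mean_yield, prob_zero)\n\\end{verbatim}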
\n\nFuture development of EXOSIMS is intended to be a community-driven project, and all software related to the base module definitions and simulation execution will be made publicly available alongside the interface control documentation to allow mission planners and instrument designers to quickly write their own modules and drop them directly into the code without additional modifications made elsewhere. We fully expect that EXOSIMS will be highly useful for ensuring the achievement of the WFIRST-AFTA science goals, and will be of use to the design and planning of future exoplanet imaging missions.\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{EXOSIMS Description}\\label{sec:EXOSIMS}\nEXOSIMS builds upon previous frameworks described in Ref.~\\citenum{Savransky2010} and Ref.~\\citenum{Savransky2013}, but will be significantly more flexible than these earlier efforts, allowing for seamless integration of independent software modules, each of which performs its own well-defined tasks, into a unified mission simulation. This will allow the wider exoplanet community to quickly test the effects of changing a single set of assumptions (for example, the specific model of planet spectra, or a set of mission operating rules) on the overall science yield of the mission, by only updating one part of the simulation code rather than rewriting the entire simulation framework. \n\nThe terminology used to describe the software implementation is loosely based on the object-oriented framework upon which EXOSIMS is built. The term module can refer to either the object class prototype representing the abstracted functionality of one piece of the software, or to an implementation of this object class which inherits the attributes of the prototype, or to an instance of this object class. Thus, when we speak of input\/output definitions of modules, we are referring to the class prototype. When we discuss implemented modules, we mean the inherited class definition. Finally, when we speak of passing modules (or their outputs), we mean the instantiation of the inherited object class being used in a given simulation. Relying on strict inheritance for all implemented module classes provides an automated error and consistency-checking mechanism, as we can always compare the outputs of a given object instance to the outputs of the prototype. This means that it is trivial to pre-check whether a given module implementation will work with the larger framework, and thus allows for the flexibility and adaptability described above. \n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=\\textwidth]{figure1}\n \\caption{\\label{fig:codeflow} Flowchart of mission simulation. Each box represents a component software module which interacts with other modules as indicated by the arrows. The simulation modules (those that are not classified as input modules) pass all input modules along with their own output. Thus, the Survey Ensemble module has access to all of the input modules and all of the upstream simulation modules.} \n\\end{figure} \n\nFig.~\\ref{fig:codeflow} shows the relationships of the component software modules classified as either input modules or simulation modules. The input modules contain specific mission design parameters. The simulation modules take the information contained in the input modules and perform mission simulation tasks. Any module may perform any number or kind of calculations using any or all of the input parameters provided. 
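\n% A minimal sketch of the prototype\/implementation\/instance relationship described above, using hypothetical class and attribute names rather than the actual EXOSIMS prototypes.\n\\begin{verbatim}\nclass ModulePrototype:\n    # Hypothetical prototype: declares the outputs every implementation\n    # of this module type must provide.\n    required_outputs = ('IWA', 'OWA', 'throughput')\n\n    def outputs(self):\n        raise NotImplementedError\n\nclass MyOpticalSystem(ModulePrototype):\n    # An implemented module inherits from the prototype and fills in\n    # the declared outputs (values here are placeholders).\n    def outputs(self):\n        return {'IWA': 0.1, 'OWA': 1.0, 'throughput': 0.3}\n\n# Consistency pre-check: an instance must inherit from the prototype\n# and provide at least the outputs the prototype declares.\ninstance = MyOpticalSystem()\nassert isinstance(instance, ModulePrototype)\nassert set(ModulePrototype.required_outputs) <= set(instance.outputs())\n\\end{verbatim}\n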
They are only constrained by their input and output specification, which is designed to be as flexible as possible, while limiting unnecessary data passing to speed up execution.\n\n%%-----------------------------------------------------------\n\\subsection{Input Modules}\n\nThe specific mission design under investigation determines the functionality of each of the input modules, but the inputs and outputs of each are always the same (in terms of data type and what the variables represent). These modules encode and\/or generate all of the information necessary to perform mission simulations. Here we briefly describe the functionality and major tasks for each of the input modules.\n\n\\subsubsection{Optical System Description}\nThe Optical System Description module contains all of the necessary information to describe the effects of the telescope and starlight suppression system on the target star and planet wavefronts. This requires encoding the design of both the telescope optics and the specific starlight suppression system, whether it be an internal coronagraph or an external occulter. The encoding can be achieved by specifying Point Spread Functions (PSF) for on- and off-axis sources, along with (potentially angular separation-dependent) contrast and throughput definitions. At the opposite level of complexity, the encoded portions of this module may be a description of all of the optical elements between the telescope aperture and the imaging detector, along with a method of propagating an input wavefront to the final image plane. Intermediate implementations can include partial propagations, or collections of static PSFs representing the contributions of various system elements. The encoding of the optical train will allow for the extraction of specific bulk parameters including the instrument inner working angle (IWA), outer working angle (OWA), and mean and max contrast and throughput.\n\nIf the starlight suppression system includes active wavefront control, i.e., via one or more deformable mirrors (DM) \\cite{cahoy2014wavefront}, then this module must also encode information about the sensing and control mechanisms. Again, this can be achieved by simply encoding a static targeted DM shape, or by dynamically calculating DM settings for specific targets via simulated phase retrieval. As wavefront control residuals may be a significant source of error in the final contrast budget, it is vitally important to include the effects of this part of the optical train.\n\nThe optical system description can optionally include stochastic and systematic wavefront-error generating components. Again, there is a wide range of possible encodings and complexities. They could be Gaussian errors on the contrast curves sampled during survey simulation to add a random element to the achieved contrast on each target. Alternatively, in cases where an active wavefront control system is modeled, stochastic wavefront errors could be introduced by simulating the measurement noise on the wavefront sensor (either again as drawn from pre-determined distributions, or additively from various detector and astrophysical noise sources). Systematic errors, such as mis-calibration of deformable mirrors, closed-loop control delays, and non-common path errors, may be included to investigate their effects on contrast or optical system overhead. 
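\n% One possible encoding of a separation-dependent contrast curve with a Gaussian random component sampled during survey simulation, as discussed above; the tabulated values and the error model are hypothetical.\n\\begin{verbatim}\nimport bisect, random\n\n# Hypothetical tabulated design contrast versus angular separation (arcsec).\nseps = [0.1, 0.2, 0.4, 0.8]\ncontrasts = [1e-8, 5e-9, 3e-9, 2e-9]\n\ndef design_contrast(sep):\n    # Nearest tabulated point at or beyond sep (a crude stand-in for\n    # a proper interpolation of the contrast curve).\n    i = min(bisect.bisect_left(seps, sep), len(seps) - 1)\n    return contrasts[i]\n\ndef achieved_contrast(sep, rng, sigma_dex=0.2):\n    # Add a Gaussian random component in the log to mimic sampling an\n    # error on the contrast curve for each simulated observation.\n    return design_contrast(sep) * 10 ** rng.gauss(0.0, sigma_dex)\n\nrng = random.Random(1)\nprint(design_contrast(0.3), achieved_contrast(0.3, rng))\n\\end{verbatim}\n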
In cases where the optical system is represented by collections of static PSFs, these effects must be included in the diffractive modeling that takes place before executing the simulation. For external occulters, we draw on the large body of work on the effects of occulter shape and positioning errors on the achieved contrast, as in Ref.~\\citenum{shaklan2010error}.\n\nFinally, the optical system description must also include a description of the science instrument or instruments. The baseline instrument is assumed to be an imaging spectrometer, but pure imagers and spectrometers are also supported. Each instrument encoding must provide its spatial and wavelength coverage and sampling. Detector details such as read noise, dark current, and quantum efficiency must be provided, along with more specific quantities such as clock induced charge for electron multiplying CCDs\\cite{denvir2003electron}. Optionally, this portion of the module may include descriptions of specific readout modes, i.e., in cases where Fowler sampling\\cite{fowler1990demonstration} or other noise-reducing techniques are employed. In cases where multiple science instruments are defined, they are given enumerated indices in the specification, and the Survey Simulation module must be implemented so that a particular instrument index is used for a specific task, i.e., detection vs. characterization. \n\nThe overhead time of the optical system must also be provided and is split into two parameters. The first is an integration time multiplier for detection and characterization modes, which represents the individual number of exposures that need to be taken to cover the full field of view, full spectral band, and all polarization states in cases where the instrument splits polarizations. For detection modes, we will typically wish to cover the full field of view, while possibly only covering a small bandpass and only one polarization, whereas for characterizations, we will typically want all polarizations and spectral bands, while focusing on only one part of the field of view. The second overhead parameter gives a value for how long it will take to reach the instrument's designed contrast on a given target. This overhead is separate from the one specified in the observatory definition, which represents the observatory settling time and may be a function of orbital position, whereas the contrast floor overhead may depend on target brightness. If this value is constant, as in the case of an observing strategy where a bright target is used to generate the high contrast regions, or zero, as in the case of an occulter, then it can be folded in with the observatory overhead. \n\n\n\\subsubsection{Star Catalog}\nThe Star Catalog module includes detailed information about potential target stars drawn from general databases such as SIMBAD\\cite{wenger2000simbad}, mission catalogs such as Hipparcos\\cite{perryman1997hipparcos}, or from existing curated lists specifically designed for exoplanet imaging missions\\cite{turnbull2012search}. Information to be stored, or accessed by this module will include target positions and proper motions at the reference epoch (see \\S\\ref{sec:time}), catalog identifiers (for later cross-referencing), bolometric luminosities, stellar masses, and magnitudes in standard observing bands. 
Where direct measurements of any value are not available, values are synthesized from ancillary data and empirical relationships, such as color relationships and mass-luminosity relations\\cite{henry2004}.\n\nThis module will not provide any functionality for picking the specific targets to be observed in any one simulation, nor even for culling targets from the input lists where no observations of a planet could take place. This is done in the Target List module as it requires interactions with the Planetary Population module (to determine the population of interest), the Optical System Description module (to define the capabilities of the instrument), and Observatory Definition module (to determine if the view of the target is unobstructed).\n\n\\subsubsection{Planet Population Description}\nThe Planet Population Description module encodes the density functions of all required planetary parameters, both physical and orbital. These include semi-major axis, eccentricity, orbital orientation, and planetary radius and mass. Certain parameter models may be empirically derived\\cite{Savransky2011} while others may come from analyses\\cite{Dressing2013,Fortney2007} of observational surveys such as the Keck Planet Search\\cite{Cumming2008,Howard2010}, Kepler\\cite{Batalha2013,Fressin2013,Petigura2013}, and ground-based imaging surveys including the Gemini Planet Imager Exoplanet Survey\\cite{McBride2011,Macintosh2014}. This module also encodes the limits on all parameters to be used for sampling the distributions and determining derived cutoff values such as the maximum target distance for a given instrument's IWA.\n\nThe Planet Population Description module does not model the physics of planetary orbits or the amount of light reflected or emitted by a given planet, but rather only encodes the statistics of planetary occurrence and properties. As this encoding is based on density functions, it fully supports modeling `toy' universes where all parameters are fixed, in which case all of the distributions become delta functions. We can equally use this encoding to generate simulated universes containing only `Earth-twins' to compare with previous studies as in Ref.~\\citenum{brown2005} or Ref.~\\citenum{Stark2014}. Alternatively, the distributions can be selected to mirror, as closely as possible, the known distributions of planetary parameters. As this knowledge is limited to specific orbital or mass\/radius scales, this process invariably involves some extrapolation.\n\n\\subsubsection{Observatory Description}\nThe Observatory Definition module contains all of the information specific to the space-based observatory not included in the Optical System Description module. The module has three main tasks: orbit, duty cycle, and keepout definition, which are implemented as functions within the module. The inputs and outputs for these functions are represented schematically in Fig.~\\ref{fig:observatory}.\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=1\\textwidth]{figure2}\n\\caption{ \\label{fig:observatory} Depiction of Observatory Definition module including inputs, tasks, and outputs.} \n\\end{figure} \n\nThe observatory orbit plays a key role in determining which of the target stars may be observed for planet finding at a specific time during the mission lifetime. The Observatory Definition module's orbit function takes the current mission time as input and outputs the observatory's position vector. 
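\n% A minimal sketch of an orbit function of the kind described above, assuming a simple circular 1 AU heliocentric orbit; a real Observatory Definition implementation would encode the actual mission orbit or use an ephemeris.\n\\begin{verbatim}\nimport math\n\nAU_KM = 1.495978707e8\n\ndef observatory_position(t_days):\n    # Toy orbit: circular 1 AU heliocentric orbit in the ecliptic plane,\n    # rotated into a heliocentric equatorial frame with the J2000 obliquity.\n    theta = 2.0 * math.pi * t_days \/ 365.25\n    x = AU_KM * math.cos(theta)\n    y_ecl = AU_KM * math.sin(theta)\n    eps = math.radians(23.43929)\n    return (x, y_ecl * math.cos(eps), y_ecl * math.sin(eps))\n\nprint(observatory_position(100.0))\n\\end{verbatim}\n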
The position vector is standardized throughout the modules to be referenced to a heliocentric equatorial frame at the J2000 epoch. The observatory's position vector is used in the keepout definition task and Target List module to determine which of the stars from the Star Catalog may be targeted for observation at the current mission time.\n\nThe duty cycle determines when during the mission timeline the observatory is allowed to perform planet-finding operations. The duty cycle function takes the current mission time as input and outputs the next available time when exoplanet observations may begin or resume, along with the duration of the observational period. The outputs of this task are used in the Survey Simulation module to determine when and how long exoplanet finding and characterization observations occur. The specific implementation of the duty cycle function can have significant effects on the science yield of the mission. For example, if the observing program is pre-determined, such that exoplanet observations can only occur at specific times and last for specific durations, this significantly limits the observatory's ability to respond dynamically to simulated events, such as the discovery of an exoplanet candidate. This can potentially represent a sub-optimal utilization of mission time, as it may prove to be more efficient to immediately spectrally characterize good planetary candidates rather than attempting to re-observe them at a later epoch. It also limits the degree to which followup observations can be scheduled to match the predicted orbit of the planet. Alternatively, the duty cycle function can be implemented to give the exoplanet observations the highest priority, such that all observations can be scheduled to attempt to maximize dynamic completeness\\cite{brown2010new} or some other metric of interest. \n\nThe keepout definition determines which target stars are observable at a specific time during the mission simulation and which are unobservable due to bright objects within the field of view such as the sun, moon, and solar system planets. The keepout volume is determined by the specific design of the observatory and, in certain cases, by the starlight suppression system. For example, in the case of external occulters, the sun cannot be within the 180$^\\circ$ annulus immediately behind the telescope (with respect to the line of sight) as it would be reflected by the starshade into the telescope. The keepout definition function takes the current mission time and Star Catalog module output as inputs and outputs a list of the target stars which are observable at the current time. It constructs position vectors of the target stars and bright objects which may interfere with observations with respect to the observatory. These position vectors are used to determine if bright objects are in the field of view for each of the potential stars under exoplanet finding observation. If there are no bright objects obstructing the view of the target star, it becomes a candidate for observation in the Survey Simulation module.\n\nThe observatory definition also includes the target transition time, which encodes the amount of overhead associated with transitioning to a new target before the next observation can begin. For missions with external occulters, this time includes both the transit time between targets as well as the time required to perform the fine alignment at the end of the transit. 
For internal coronagraphs, this includes the settling time of the telescope to reach the bus stability levels required by the active wavefront control system. These may all be functions of the orbital position of the telescope, and may be implemented to take into account thermal effects when considering observatories on geocentric orbits. This overhead calculation does not include any additional time required to reach the instrument's contrast floor, which may be a function of target brightness, and is encoded separately in the Optical System Description.\n\nIn addition to these functions, the observatory definition can also encode finite resources that are used by the observatory throughout the mission. The most important of these is the fuel used for stationkeeping and repointing, especially in the case of occulters which must move significant distances between observations. We could also consider the use of other volatiles such as cryogens for cooled instruments, which tend to deplete solely as a function of mission time. This module also allows for detailed investigations of the effects of orbital design on the science yield, e.g., comparing the baseline geosynchronous 28.5$^\\circ$ inclined orbit for WFIRST-AFTA\\cite{Spergel2013} with an alternative L2 halo orbit also proposed for other exoplanet imaging mission concepts \\cite{savransky2010occulting}. \n\n\\subsubsection{Planet Physical Model}\nThe Planet Physical Model module contains models of the light emitted or reflected by planets in the wavelength bands under investigation by the current mission simulation. It uses physical quantities sampled from the distributions defined in the Planet Population, including planetary mass, radius, and albedo, along with the physical parameters of the host star stored in the Target List module, to generate synthetic spectra or band photometry, as appropriate. The planet physical model is explicitly defined separately from the population statistics to enable studies of specific planet types under varying assumptions of orbital or physical parameter distributions, i.e., evaluating the science yield related to Earth-like planets under different definitions of the habitable zone. The specific implementation of this module can vary greatly, and can be based on any of the many available planetary albedo, spectra and phase curve models\\cite{Pollack1986,Marley1999,Fortney2008,Cahoy2010,Spiegel2012,burrows1997nongray,burrows2003beyond}. \n\n\\subsubsection{Time}\\label{sec:time}\nThe Time module is responsible for keeping track of the current mission time. It encodes only the mission start time, the mission duration, and the current time within a simulation. All functions in all modules requiring knowledge of the current time call functions or access parameters implemented within the Time module. Internal encoding of time is implemented as the time from mission start (measured in days). The Time module also provides functionality for converting between this time measure and standard measures such as Julian Day Number and UTC time.\n\n\\subsubsection{Rules}\nThe Rules module contains additional constraints placed on the mission design not contained in other modules. These constraints are passed into the Survey Simulation module to control the simulation. For example, a constraint in the Rules module could include prioritization of revisits to stars with detected exoplanets for characterization when possible. 
This rule would force the Survey Simulation module to simulate observations for target stars with detected exoplanets when the Observatory Module determines those stars are observable.\n\nThe Rules module also encodes the calculation of integration time for an observation. This can be based on achieving a pre-determined signal to noise (SNR) metric (with various possible definitions), or via a probabilistic description as in Ref.~\\citenum{kasdin2006}. This requires also defining a model for the background contribution due to all astronomical sources and especially due to zodiacal and exozodiacal light\\cite{Stark2014}.\n\nThe integration time calculation can have significant effects on science yield---integrating to the same SNR on every target may represent a suboptimal use of mission time, as could integrating to achieve the minimum possible contrast on very dim targets. Changing the implementation of the Rules module allows exploration of these tradeoffs directly.\n\n\\subsubsection{Post-Processing}\nThe Post-Processing module encodes the effects of post-processing on the data gathered in a simulated observation, and the effects on the final contrast of the simulation. In the simplest implementation, the Post-Processing module does nothing and simply assumes that the attained contrast is some constant value below the instrument's designed contrast---that post-processing has the effect of uniformly removing background noise by a pre-determined factor. A more complete implementation actually models the specific effects of a selected post-processing technique such as LOCI\\cite{lafreniere2007new} or KLIP\\cite{soummer2012detection} on both the background and planet signal via either processing of simulated images consistent with an observation's parameters, or by some statistical description.\n\nThe Post-Processing module is also responsible for determining whether a planet detection has occurred for a given observation, returning one of four possible states---true positive (real detection), false positive (false alarm), true negative (no detection when no planet is present) and false negative (missed detection). These can be generated based solely on statistical modeling as in Ref.~\\citenum{kasdin2006}, or can again be generated by actually processing simulated images.\n\n%%-----------------------------------------------------------\n\\subsection{Simulation Modules}\nThe simulation modules include Target List, Simulated Universe, Survey Simulation and Survey Ensemble. These modules perform tasks which require inputs from one or more input modules as well as calling function implementations in other simulation modules.\n\n\\subsubsection{Target List}\nThe Target List module takes in information from the Optical System Description, Star Catalog, Planet Population Description, and Observatory Definition input modules and generates the input target list for the simulated survey. This list can either contain all of the targets where a planet with specified parameter ranges could be observed\\cite{savransky2008}, or can contain a list of pre-determined targets such as in the case of a mission which only seeks to observe stars where planets are known to exist from previous surveys. The final target list encodes all of the same information as is provided by the Star Catalog module.\n\n\\subsubsection{Simulated Universe}\nThe Simulated Universe module takes as input the outputs of the Target List simulation module to create a synthetic universe composed of only those systems in the target list. 
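\n% A minimal sketch of the kind of per-target planet sampling performed when building a synthetic universe; the occurrence rate and parameter distributions below are placeholders, not the mission assumptions.\n\\begin{verbatim}\nimport random\n\ndef synthetic_universe(target_names, eta=0.2, seed=0):\n    # For each target, draw whether a planet is present with occurrence\n    # rate eta, then sample its parameters from simple placeholder\n    # density functions (hypothetical values).\n    rng = random.Random(seed)\n    universe = {}\n    for name in target_names:\n        planets = []\n        if rng.random() < eta:\n            planets.append({'sma_au': 10 ** rng.uniform(-1.0, 1.0),\n                            'radius_earth': rng.uniform(0.5, 11.0)})\n        universe[name] = planets  # an empty list plays the role of a null entry\n    return universe\n\nprint(synthetic_universe(['star A', 'star B', 'star C']))\n\\end{verbatim}\n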
For each target, a planetary system is generated based on the statistics encoded in the Planet Population Description module, so that the overall planet occurrence and multiplicity rates are consistent with the provided distribution functions. Physical parameters for each planet are similarly sampled from the input density functions. This universe is encoded as a list where each entry corresponds to one element of the target list, and where the list entries are arrays of planet physical parameters. In cases of empty planetary systems, the corresponding list entry contains a null array.\n\nThe Simulated Universe module also takes as input the Planetary Physical Model module instance, so that it can return the specific spectra due to every simulated planet at an arbitrary observation time throughout the mission simulation.\n\n\n\\subsubsection{Survey Simulation}\nThe Survey Simulation module takes as input the output of the Simulated Universe simulation module and the Time, Rules, and Post-Processing input modules. This is the module that performs a specific simulation based on all of the input parameters and models. This module returns the mission timeline - an ordered list of simulated observations of various targets on the target list along with their outcomes. The output also includes an encoding of the final state of the simulated universe (so that a subsequent simulation can start from where a previous simulation left off) and the final state of the observatory definition (so that post-simulation analysis can determine the percentage of volatiles expended, and other engineering metrics).\n\n\\subsubsection{Survey Ensemble}\nThe Survey Ensemble module's only task is to run multiple simulations. While the implementation of this module is not at all dependent on a particular mission design, it can vary to take advantage of available parallel-processing resources. As the generation of a survey ensemble is an embarrassingly parallel task---every survey simulation is fully independent and can be run as a completely separate process---significant gains in execution time can be achieved with parallelization. The baseline implementation of this module contains a simple looping function that executes the desired number of simulations sequentially, as well as a locally parallelized version based on IPython Parallel\\cite{perez2007ipython}.\n\n\\section{WFIRST-AFTA Coronagraph Modeling}\\label{sec:wfirst}\nWhile the development of EXOSIMS is ongoing, we have already produced simulation results with the functionality out of which the baseline EXOSIMS implementation is being built. In this section, we present the results of some mission simulations for WFIRST-AFTA using optical models of coronagraph designs generated at JPL during the coronagraph downselect process in 2013, as well as post-downselect optical models of the Hybrid Lyot Coronagraph (HLC)\\cite{trauger2012complex} generated in 2014\\footnote{J. Krist, personal communication, 2014}. It is important to emphasize that the instrument designs and mission yields shown here are not representative of the final coronagraphic instrument or its projected performance. All of the design specifics assumed in these simulations are still evolving in response to ongoing engineering modeling of the observatory as a whole and to best meet the mission science requirements. \n\nThese simulations are instead presented in order to highlight the flexibility of the EXOSIMS approach to mission modeling, and to present two important use cases. 
In \\S\\ref{sec:predown} we present mission yield comparisons for different instrument designs while all other variables (observatory, star catalog, planet models, etc.) are kept constant. The results from these simulations are most useful for direct comparisons between different instruments and to highlight particular strengths and weaknesses in specific designs. Ideally, they can be used to guide ongoing instrument development and improve the final design science yield. In \\S\\ref{sec:hlcparams} we investigate a single coronagraph design operating under varying assumptions on observatory stability and post-processing capabilities. These simulations highlight how EXOSIMS can be used to evaluate a more mature instrument design to ensure good results under a variety of operating parameters. This section also demonstrates how to incorporate the effects of different assumptions in the pre-simulation optical system diffractive modeling.\n\nIn addition to the HLC, the first set of optical models includes models for a Shaped Pupil Coronagraph (SPC)\\cite{zimmerman2015shaped} and a Phase-Induced Amplitude Apodization Complex Mask Coronagraph (PIAA-CMC) \\cite{sidick2014simulated}. In the downselect process, the SPC and HLC were selected for further development with PIAA-CMC as backup. It should be noted that the HLC optical models in the first and second set of simulations shown here represent different iterations on the coronagraph design, and thus represent different instruments.\n\nThe Optical System Description is implemented as a static point spread function, throughput curve, and contrast curve based on the JPL optical models. Other values describing the detector, science instrument and the rest of the optical train were chosen to match Ref.~\\citenum{traub2014science} as closely as possible. The integration times in the Rules module are determined via modified equations based on Ref.~\\citenum{kasdin2006} to achieve a specified false positive and negative rate, which are encoded as constant in the post-processing module. Spectral characterization times are based on pre-selected SNR values (as in Ref.~\\citenum{brown2005}) and match the calculations in Ref.~\\citenum{traub2014science}. \n\nThe Star Catalog is based on a curated database originally developed by Margaret Turnbull\\cite{turnbull2012search}, with updates to stellar data, where available, taken from current values from the SIMBAD Astronomical Database\\cite{wenger2000simbad}. Target selection is performed with a detection integration time cutoff of 30 days and a minimum completeness cutoff of 2.75\\%\\cite{savransky2008}. Revisits are permitted at the discretion of the automated scheduler\\cite{Savransky2010}, and one full spectrum is attempted for each target (spectra are not repeated if the full band is captured on the first attempt). The total integration time allotted is one year, spaced over six years of mission time with the coronagraph getting top priority on revisit observations. \n\n\\subsection{Comparison of Pre-Downselect Coronagraph Designs}\\label{sec:predown}\n\nAs a demonstration of EXOSIMS ability to compare different instrument designs for a single mission concept, we compare mission simulation results based on optical models of the pre-downselect SPC, HLC and PIAA-CMC designs. 
As all of these represent preliminary designs that have since been significantly improved upon, and as our primary purpose here is to demonstrate the simulations' utility, we will refer to the three coronagraphs simply as C1, C2, and C3 (in no particular order). Table \\ref{tbl:corons} lists some of the parameters of the three coronagraphs including their inner and outer working angles, their minimum and mean contrasts, and maximum and mean throughputs. Each design has significantly different operating characteristics in its region of high contrast (or `dark hole'). C3 provides the best overall minimum contrast and IWA, but has a more modest mean contrast, whereas C2 has the most stable and lowest mean contrast over its entire dark hole, at the expense of a larger inner working angle. C1 has the smallest angular extent for its dark hole, but maintains reasonably high throughput throughout. C2 has a constant and very low throughput, while C3 has the highest throughput over its entire operating region. Finally, while C1 and C3 cover the full field of view with their dark holes, C2 only creates high contrast regions in 1\/3 of the field of view, and so requires three integrations to cover the full field.\n\nWe consider five specific metrics for evaluating these coronagraph designs:\n\\begin{enumerate}\n\\item Unique planet detections, defined as the total number of individual planets observed at least once.\n\\item All detections, defined as the total number of planet observations throughout the mission (including repeat observations of the same planets).\n\\item Total visits, defined as the total number of observations.\n\\item Unique targets, defined as the number of target stars observed throughout the mission.\n\\item Full spectral characterizations, defined as the total number of spectral characterizations covering the entire 400 to 800 nm band. This does not include characterizations where the inner or outer working angle prevents full coverage of the whole band. This number will always be smaller than the number of unique detections based on the mission rules used here.\n\\end{enumerate}\nWhile it is possible to use EXOSIMS results to calculate many other values, these metrics provide a very good indication of overall mission performance. As it is impossible to jointly maximize all five---in particular, getting more full spectra or additional repeat detections is a direct trade-off against finding additional new planets---these values together describe the Pareto front of the mission phase space. At the same time, these metrics serve as proxies for other quantities of interest. For example, taken together, all detections and unique detections indicate a mission's ability to confirm its own detections during the course of the primary mission, as well as the potential for orbit fitting of detected planets. The number of unique targets, compared with the input target list, determines whether a mission is operating in a `target-poor' or `execution time-poor' regime. The latter can be addressed simply by increasing the mission lifetime, whereas the former can only be changed with an instrument redesign. Finally, comparing the numbers of unique detections and full spectra indicates whether an instrument design has sufficient capabilities to fully characterize the planets that it can detect.\n\nFor each of the coronagraphs we run 5000 full mission simulations, keeping all modules except for the Optical Description and Post-Processing constant. 
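\n% An illustrative tally of the five metrics from a list of simulated observations; the record format here is hypothetical and far simpler than the actual EXOSIMS output.\n\\begin{verbatim}\n# Each record is (target, planet id or None, detected, full spectrum obtained).\nobservations = [\n    ('star A', 'p1', True, False),\n    ('star A', 'p1', True, True),\n    ('star B', None, False, False),\n]\n\nunique_detections = len({p for _, p, det, _ in observations if det and p})\nall_detections = sum(1 for _, _, det, _ in observations if det)\ntotal_visits = len(observations)\nunique_targets = len({t for t, _, _, _ in observations})\nfull_spectra = sum(1 for _, _, _, spec in observations if spec)\nprint(unique_detections, all_detections, total_visits,\n      unique_targets, full_spectra)\n\\end{verbatim}\n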
In addition to the parameters and implementations listed above, our Post-Processing module implementation assumes a static factor of either 10 or 30 in terms of contrast improvement due to post-processing. That is, results marked 10x assume that the achieved contrast on an observation is a factor of 10 below the design contrast at the equivalent angular separation. Altogether, we generated 30,000 discrete mission simulations in six ensembles. Mean values and $1\\sigma$ standard deviations for our five metrics of interest for each ensemble are tabulated in Table \\ref{tbl:res2}, with the full probability density functions (PDFs) shown in Figs.~\\ref{fig:audets} - \\ref{fig:spectra}. \n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.65\\textwidth]{figure3}\n\\caption{PDF of unique detections (number of individual planets, potentially with multiple planets about some targets, detected one or more times) for the coronagraph designs described in Table \\ref{tbl:corons} assuming either a factor of 10 or 30 in post-processing contrast gains. Of particular importance here is the probability of zero detections---all of the designs at 10x suppression, and C1 in particular, have a significant ($>5\\%$) chance of never seeing a planet. \\label{fig:audets}}\n\\end{figure}\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.65\\textwidth]{figure4}\n\\caption{PDF of all detections (including repeat detections) for instruments as in Fig.~\\ref{fig:audets}. Note that values of 15 or more typically represent a small number of easily detectable planets that are re-observed many times. Re-observations of a single target were capped at four successful detections in all simulations. \\label{fig:adets}}\n\\end{figure}\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.65\\textwidth]{figure5}\n\\caption{PDF of total number of observations (including repeat observations of some targets) for instruments as in Fig.~\\ref{fig:audets}.\\label{fig:avisits}}\n\\end{figure}\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.65\\textwidth]{figure6}\n\\caption{PDF of unique targets observed for instruments as in Fig.~\\ref{fig:audets}. While all three instruments have fairly narrow distributions of this parameter, only C2 with 10x post-processing gains is completely target limited.\\label{fig:auvisits}}\n\\end{figure}\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.65\\textwidth]{figure7}\n\\caption{PDF of number of spectra achieved over the whole band from 400 to 800 nm for instruments as in Fig.~\\ref{fig:audets}. C3 does comparatively well in this metric due to its lower IWA and high throughput. \\label{fig:spectra}}\n\\end{figure}\n\nFrom the tabulated values, we see that the three coronagraphs have fairly similar performance in terms of number of planets found and spectrally characterized. Overall, C2 is most successful at detecting planets, due primarily to the stability of its contrast over the full dark hole. However, because of the very low overall throughput, this does not translate into more spectral characterizations than the other two designs. C1 and C2 benefit more from the change from 10x to 30x contrast improvement due to post-processing than does C3, which already has the deepest overall contrast, but whose contrast varies significantly over the dark hole. The largest differences among the metrics are seen in the total number of observations. 
These illustrate the direct trade-off between acquiring spectra, which take a very long time, and doing additional integrations on other targets. In cases such as C2 with only 10x contrast improvement, the spectral characterization times are typically so long that most targets do not stay out of the observatory's keepout regions for long enough, and so the mission scheduling logic chooses to do more observations rather than wasting time on impossible spectral integrations. \n\nTurning to the figures showing the full distributions of these metrics, we see that despite having similar mean values for unique planet detections, the full distributions of detections are quite different, leading to varying probabilities of zero detections. As this represents a major mission failure mode, it is very important to track this value, as it may outweigh the benefits of a given design. C1 with only 10x contrast gain does particularly poorly in this respect, with over 15\\% of cases resulting in no planets found. However, when a 30x gain is assumed, C1 and C2 end up having the lowest zero detection probabilities. We again see that the effects of even this simple post-processing assumption are not uniform over all designs. This is due to the complicated interactions between each instrument's contrast curve and the assumed distributions of planetary parameters. In essence, if our priors were different (leading to different completeness values for our targets) then we would expect different relative gains for the same post-processing assumptions. This is always a pitfall of these simulations and must always be kept in mind when analyzing the results. It should also be noted that there have been multiple iterations of all these coronagraph designs since downselect, resulting in significantly lower probabilities of zero detections, as seen in the next section.\n\nAnother interesting feature is the very long right-hand tails of the all detections and total visits distributions. These do not actually represent outliers in terms of highly successful missions, but rather typically imply the existence of one or a small number of very easy-to-detect planets. The logic of the scheduler allows the mission to keep returning to these targets for followup observations when it has failed to detect any other planets around the other targets in its list. This situation arises when the design of the instrument and assumptions on planet distributions leave a mission target limited. The distributions of unique targets show this limitation, with very narrow density functions for the actual number of targets observed for each instrument. In particular, Fig.~\\ref{fig:auvisits} makes it clear that C2 with 10x post-processing gains runs out of available targets. In order to combat this, the scheduler code prevents revisits to a given target after four successful detections of a planet around it. Finally, turning to Fig.~\\ref{fig:spectra} we see that all three designs, regardless of post-processing assumptions, have greater than 10\\% probabilities of zero full spectral characterizations. C1 with 10x post-processing gains fares most poorly, with zero full spectra achieved in over one third of all cases.\n\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.65\\textwidth]{figure8}\n\\caption{Input and output distributions of planetary radius for instruments as in Fig.~\\ref{fig:audets}. 
The black dashed line represents the density function used in generating the planetary radii for the simulated planets in all simulations, while the other lines represent the distributions of planetary radii of the planets detected by each of the coronagraphs. The input distribution is based on the Kepler results reported in Ref.~\\citenum{fressin2013false}. \\label{fig:radiusdists}}\n\\end{figure}\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.65\\textwidth]{figure9}\n\\caption{Input and output distributions of planetary mass for instruments as in Fig.~\\ref{fig:audets}. The input mass distribution is derived from sampling the radius distribution shown in Fig.~\\ref{fig:radiusdists} and converting to mass via an assumed density function. \\label{fig:massdists}}\n\\end{figure}\n\nAnalysis of the survey ensembles also allows us to measure the biasing effects of the mission on the planet parameters of interest. As we know the input distributions of the simulation, we can think of these as priors, and of the distribution of the `observed' planets as the posteriors. Figs.~\\ref{fig:radiusdists} and \\ref{fig:massdists} show the distributions of planetary radius and mass used in the simulations, respectively, along with the output distributions from the various coronagraph designs. The output distributions are calculated by taking the results of all of the simulations in each ensemble together, as the number of planets detected in each individual simulation is too small to produce an accurate distribution. \n\nThe input mass distribution shown here is derived from the Kepler radius distribution as reported in Ref.~\\citenum{fressin2013false} and is calculated by assuming that this distribution is the same for all orbital periods and via an assumed density function\\cite{Savransky2013}. The frequency spike seen at around 20 Earth masses is due to a poor overlap in the density functions used in this part of the phase space. This results in an equivalent spike in the posterior distributions, which slightly biases the results. \n\nAll of the instruments have fairly similar selection biases, although C1 and C3, which have smaller inner working angles and higher throughputs, detect more of the lower mass\/radius planets. The effects of the instruments are readily apparent in all cases: lower radius planets, which are predicted to occur more frequently than larger radius ones, are detected at much lower rates.\n\n\n\\subsection{Comparison of HLC Parameters}\\label{sec:hlcparams}\n\nIn this section we present the results of survey ensemble analyses for a single instrument---a post-downselect HLC design---again assuming either 10x or 30x post-processing gains, and assuming either 0.4, 0.8, or 1.6 milliarcseconds of telescope jitter. The jitter of the actual observatory will be a function of the final bus design and the operation of the reaction wheels, and its precise value is not yet known, which makes it important to evaluate how different levels of jitter may affect the achieved contrast and overall science yield. 
The jitter is built directly into the optical system model encoded in the Optical System Description module (see Krist et al., this volume, for details), while the post-processing is treated as in the previous section.\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.65\\textwidth]{figure10}\n\\caption{PDF of unique planetary detections (number of individual planets, potentially with multiple planets about some targets, detected one or more times) for the post-downselect HLC design, assuming either a factor of 10 or 30 in post-processing contrast gains and telescope jitter of 0.4, 0.8 or 1.6 mas. It should be noted that the change in assumed post-processing gain has a significantly smaller effect than the increased telescope jitter. We also note that the 1.6 mas jitter cases still have a small (0.6 to 1.8\\%) probability of never seeing a planet, whereas the 0.4 mas jitter ensembles do not contain a single simulation with zero planets detected. \\label{fig:audets2}}\n\\end{figure}\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.65\\textwidth]{figure11}\n\\caption{PDF of total number of planetary detections (including repeat detections) for instruments as in Fig.~\\ref{fig:audets2}. The trend here closely follows the one observed in the results for the unique detections metric.\\label{fig:adets2}}\n\\end{figure}\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.65\\textwidth]{figure12}\n\\caption{PDF of total number of target observations (including repeat observations) for instruments as in Fig.~\\ref{fig:audets2}. Here, the post-processing improvement factor makes more of a difference than in the previous two figures, as more time must be devoted to spectral characterizations, limiting how much time is available for further observations.\\label{fig:auvisits2}}\n\\end{figure}\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.65\\textwidth]{figure13}\n\\caption{PDF of unique targets observed for instruments as in Fig.~\\ref{fig:audets2}. The trend here closely tracks the one observed in the total visits metric, and shows that this coronagraph design is not target limited in any of the studied cases.\\label{fig:avisits2}}\n\\end{figure}\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.65\\textwidth]{figure14}\n\\caption{PDF of number of spectra achieved over the whole band from 400 to 800 nm for instruments as in Fig.~\\ref{fig:audets2}. In the worst case, there is an $\\sim15\\%$ chance of not getting any spectra. Only the case of 0.4 mas jitter with 30x post-processing gain has no simulations in its ensemble with zero full spectra achieved. \\label{fig:spectra2}}\n\\end{figure}\n\nAs in the previous section, we run ensembles of 5000 simulations for each of the six cases considered, keeping all modules except for the Optical Description and Post-Processing constant. The mean and $1\\sigma$ of the five metrics of interest described in \\S\\ref{sec:predown} are tabulated in Table \\ref{tbl:res3}, and the full PDFs for all metrics are shown in Figs.~\\ref{fig:audets2} - \\ref{fig:spectra2}.\n\nOne important observation made immediately obvious by these results is the relatively large effect of increased jitter versus the gains due to post-processing. 
Tripling the assumed gain factor of post-processing on the final achieved contrast has a significantly smaller effect on the number of detections, gaining only one unique detection, on average, as compared with halving the amount of telescope jitter, which increases the number of unique detections by over 30\\%, on average. This shows us that the telescope jitter may be an effect that fundamentally cannot be corrected after the fact, and therefore needs to be tightly controlled, with well defined requirements set during mission design. Much of the current development effort for the project is focused on low-order wavefront sensing and control to mitigate these effects \\cite{poberezhskiy2014technology,shi2015low}.\n\nWe can also see significant improvements in the coronagraph design since the versions evaluated in \\S\\ref{sec:predown}, as the probability of zero planet detections is less than 2\\% in the case of the highest jitter level, and is well below 1\\% for all other cases. In fact, for both the 0.4 mas jitter ensembles, no simulations had zero detections, indicating a very low probability of complete mission failure for this coronagraph at these operating conditions.\n\nSimilar to the results of the previous section, the trend in the number of total visits does not simply follow those seen in the unique and total detection metrics, but is a function of both the number of detections and how much time is spent on spectral characterizations. We can see how the cases with the highest jitter and lowest post-processing gains are pushed towards larger numbers of observations, and unique targets, as they are able to achieve fewer full spectral characterizations, leaving them with additional mission time to search for new candidates. This is equally reflected in Fig.~\\ref{fig:spectra2} where, despite the good performance seen in Fig.~\\ref{fig:audets2}, all jitter levels have over 5\\% chance of zero full spectra at the 10x post-processing gain level, and only the 0.4 mas case at 30x gain has no instances of zero full spectra in its ensemble of results.\n\nThese metrics, taken together, clearly show that further optimization is possible via modification of mission rules, which were kept constant in all these ensembles. For example, the low numbers of spectral characterizations at higher jitter levels suggest that it may be worthwhile to attempt shallower integrations in order to be able to make more total observations and potentially find a larger number of bright planets. This would bias the final survey results towards larger planets, but would increase the probability of spectrally characterizing at least some of the planets discovered. Alternatively, this may point to the desirability of investigating whether full spectral characterizations can be achieved for a small number of targets over the course of multiple independent observations.\n\n\\section{Conclusions}\n\nWe have presented the design details of EXOSIMS---a modular, open source software framework for the simulation of exoplanet imaging missions with instrumentation on space observatories. We have also motivated the development and baseline implementation of the component parts of this software for the WFIRST-AFTA coronagraph, and presented initial results of mission simulations for various iterations of the WFIRST-AFTA coronagraph design.\n\nThese simulations allow us to compare completely different instruments in the form of early competing coronagraph designs for WFIRST-AFTA. 
The same tools also allow us to evaluate the effects of different operating assumptions, demonstrated here by comparing different assumed post-processing capabilities and telescope stability values for a single coronagraph design.\n\nAs both the tools and the coronagraph and mission design continue to mature we expect the predictions presented here to evolve as well, but certain trends have emerged that we expect to persist. We have identified the portions of design space and telescope stability ranges that lead to significant probabilities of zero detections, and we expect instrument designs and observatory specifications to move away from these. We have also identified a mean number of new planetary detections, for our particular assumed prior distributions of planetary parameters, that are consistent with the science definition team's mission goals for this instrument.\n\nAs we continue to both develop the software and to improve our specific modeling of WFIRST-AFTA we expect that these and future simulations will prove helpful in guiding the final form of the mission, and will lay the groundwork for analysis of future exoplanet imagers. \n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\acknowledgments \nThis material is based upon work supported by the National Aeronautics and Space Administration under Grant No. NNX14AD99G issued through the Goddard Space Flight Center. EXOSIMS is being developed at Cornell University with support by NASA Grant No. NNX15AJ67G. This research has made use of the SIMBAD database,\noperated at CDS, Strasbourg, France. The authors would like to thank Rhonda Morgan for many useful discussions and suggestions, as well as our reviewers Wes Traub and Laurent Pueyo, who have significantly improved this work through their comments. \n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%% References %%%%%\n\n\\begin{thebibliography}{10}\n\n\\bibitem{brown2005}\nR.~A. Brown, ``Single-visit photometric and obscurational completeness,'' {\\em\n The Astrophysical Journal} {\\bf 624}, 1010--1024 (2005).\n\n\\bibitem{brown2010new}\nR.~Brown and R.~Soummer, ``New completeness methods for estimating exoplanet\n discoveries by direct detection,'' {\\em The Astrophysical Journal} {\\bf 715},\n 122 (2010).\n\n\\bibitem{Savransky2010}\nD.~Savransky, N.~J. Kasdin, and E.~Cady, ``Analyzing the designs of\n planet-finding missions,'' {\\em Publications of the Astronomical Society of\n the Pacific} {\\bf 122}(890), 401--419 (2010).\n\n\\bibitem{turnbull2012search}\nM.~C. Turnbull, T.~Glassman, A.~Roberge, W.~Cash, C.~Noecker, A.~Lo, B.~Mason,\n P.~Oakley, and J.~Bally, ``{The Search for Habitable Worlds: 1. The Viability\n of a Starshade Mission},'' {\\em Publications of the Astronomical Society of\n the Pacific} {\\bf 124}(915), 418--447 (2012).\n\n\\bibitem{Stark2014}\nC.~C. Stark, A.~Roberge, A.~Mandell, and T.~D. Robinson, ``Maximizing the\n exoearth candidate yield from a future direct imaging mission,'' {\\em The\n Astrophysical Journal} {\\bf 795}(2), 122 (2014).\n\n\\bibitem{Savransky2013}\nD.~Savransky, ``Space mission design for exoplanet imaging,'' in {\\em SPIE\n Optical Engineering+ Applications}, 886403--886403, International Society\n for Optics and Photonics (2013).\n\n\\bibitem{cahoy2014wavefront}\nK.~L. Cahoy, A.~D. 
Marinan, B.~Novak, C.~Kerr, T.~Nguyen, M.~Webber,\n G.~Falkenburg, and A.~Barg, ``Wavefront control in space with mems deformable\n mirrors for exoplanet direct imaging,'' {\\em Journal of\n Micro\/Nanolithography, MEMS, and MOEMS} {\\bf 13}(1), 011105--011105 (2014).\n\n\\bibitem{shaklan2010error}\nS.~B. Shaklan, M.~C. Noecker, A.~S. Lo, T.~Glassman, P.~J. Dumont, E.~O.\n Jordan, N.~J. Kasdin, J.~W.~C.~Cash, E.~J. Cady, and P.~R. Lawson, ``Error\n budgeting and tolerancing of starshades for exoplanet detection,'' in {\\em\n Proceedings of SPIE}, {\\bf 7731} (2010).\n\n\\bibitem{denvir2003electron}\nD.~J. Denvir and E.~Conroy, ``{Electron-multiplying CCD: the new ICCD},'' in\n {\\em International Symposium on Optical Science and Technology}, 164--174,\n International Society for Optics and Photonics (2003).\n\n\\bibitem{fowler1990demonstration}\nA.~Fowler and I.~Gatley, ``Demonstration of an algorithm for read-noise\n reduction in infrared arrays,'' {\\em The Astrophysical Journal} {\\bf 353},\n L33 (1990).\n\n\\bibitem{wenger2000simbad}\nM.~Wenger, F.~Ochsenbein, D.~Egret, P.~Dubois, F.~Bonnarel, S.~Borde,\n F.~Genova, G.~Jasniewicz, S.~Lalo{\\\"e}, S.~Lesteven, {\\em et~al.}, ``The\n simbad astronomical database-the cds reference database for astronomical\n objects,'' {\\em Astronomy and Astrophysics Supplement Series} {\\bf 143}(1),\n 9--22 (2000).\n\n\\bibitem{perryman1997hipparcos}\nM.~A. Perryman, L.~Lindegren, J.~Kovalevsky, E.~Hoeg, U.~Bastian, P.~Bernacca,\n M.~Cr{\\'e}z{\\'e}, F.~Donati, M.~Grenon, M.~Grewing, {\\em et~al.}, ``The\n hipparcos catalogue,'' {\\em Astronomy and Astrophysics} {\\bf 323}, L49--L52\n (1997).\n\n\\bibitem{henry2004}\nT.~J. Henry, ``The mass-luminosity relation from end to end,'' in {\\em\n Spectroscopically and Spatially Resolving the Components of the Close Binary\n Stars, Proceedings of the Workshop held 20-24 October 2003 in Dubrovnik,\n Croatia}, R.~W. Hilditch, H.~Hensberge, and K.~Pavlovski, Eds., {\\bf 318},\n ASP Conference Series, San Francisco (2004).\n\n\\bibitem{Savransky2011}\nD.~Savransky, E.~Cady, and N.~J. Kasdin, ``Parameter distributions of keplerian\n orbits,'' {\\em The Astrophysical Journal} {\\bf 728}(1), 66 (2011).\n\n\\bibitem{Dressing2013}\nC.~D. Dressing and D.~Charbonneau, ``The occurrence rate of small planets\n around small stars,'' {\\em The Astrophysical Journal} {\\bf 767}(1), 95\n (2013).\n\n\\bibitem{Fortney2007}\nJ.~Fortney, M.~Marley, and J.~Barnes, ``Planetary radii across five orders of\n magnitude in mass and stellar insolation: application to transits,'' {\\em The\n Astrophysical Journal} {\\bf 659}(2), 1661 (2007).\n\n\\bibitem{Cumming2008}\nA.~Cumming, R.~P. Butler, G.~W. Marcy, S.~S. Vogt, J.~T. Wright, and D.~A.\n Fischer, ``The keck planet search: detectability and the minimum mass and\n orbital period distribution of extrasolar planets,'' {\\em Publications of the\n Astronomical Society of the Pacific} {\\bf 120}(867), 531--554 (2008).\n\n\\bibitem{Howard2010}\nA.~W. Howard, G.~W. Marcy, J.~A. Johnson, D.~A. Fischer, J.~T. Wright,\n H.~Isaacson, J.~A. Valenti, J.~Anderson, D.~N. Lin, and S.~Ida, ``The\n occurrence and mass distribution of close-in super-earths, neptunes, and\n jupiters,'' {\\em Science} {\\bf 330}(6004), 653--655 (2010).\n\n\\bibitem{Batalha2013}\nN.~M. Batalha, J.~F. Rowe, S.~T. Bryson, T.~Barclay, C.~J. Burke, D.~A.\n Caldwell, J.~L. Christiansen, F.~Mullally, S.~E. Thompson, T.~M. Brown, {\\em\n et~al.}, ``Planetary candidates observed by kepler. iii. 
analysis of the\n first 16 months of data,'' {\\em The Astrophysical Journal Supplement Series}\n {\\bf 204}(2), 24 (2013).\n\n\\bibitem{Fressin2013}\nF.~Fressin, G.~Torres, D.~Charbonneau, S.~T. Bryson, J.~Christiansen, C.~D.\n Dressing, J.~M. Jenkins, L.~M. Walkowicz, and N.~M. Batalha, ``The false\n positive rate of kepler and the occurrence of planets,'' {\\em The\n Astrophysical Journal} {\\bf 766}(2), 81 (2013).\n\n\\bibitem{Petigura2013}\nE.~A. Petigura, G.~W. Marcy, and A.~W. Howard, ``A plateau in the planet\n population below twice the size of earth,'' {\\em The Astrophysical Journal}\n {\\bf 770}(1), 69 (2013).\n\n\\bibitem{McBride2011}\nJ.~McBride, J.~R. Graham, B.~Macintosh, S.~V. Beckwith, C.~Marois, L.~A.\n Poyneer, and S.~J. Wiktorowicz, ``Experimental design for the gemini planet\n imager,'' {\\em Publications of the Astronomical Society of the Pacific} {\\bf\n 123}(904), 692--708 (2011).\n\n\\bibitem{Macintosh2014}\nB.~Macintosh, J.~R. Graham, P.~Ingraham, Q.~Konopacky, C.~Marois, M.~Perrin,\n L.~Poyneer, B.~Bauman, T.~Barman, A.~S. Burrows, {\\em et~al.}, ``First light\n of the gemini planet imager,'' {\\em Proceedings of the National Academy of\n Sciences} {\\bf 111}(35), 12661--12666 (2014).\n\n\\bibitem{Spergel2013}\nD.~Spergel, N.~Gehrels, J.~Breckinridge, M.~Donahue, A.~Dressler, B.~Gaudi,\n T.~Greene, O.~Guyon, C.~Hirata, J.~Kalirai, {\\em et~al.}, ``Wide-field\n infrared survey telescope-astrophysics focused telescope assets wfirst-afta\n final report,'' {\\em arXiv preprint arXiv:1305.5422} (2013).\n\n\\bibitem{savransky2010occulting}\nD.~Savransky, D.~N. Spergel, N.~J. Kasdin, E.~J. Cady, P.~D. Lisman, S.~H.\n Pravdo, S.~B. Shaklan, and Y.~Fujii, ``Occulting ozone observatory science\n overview,'' in {\\em Proc. SPIE}, {\\bf 7731}, 77312H (2010).\n\n\\bibitem{Pollack1986}\nJ.~B. Pollack, K.~Rages, K.~H. Baines, J.~T. Bergstralh, D.~Wenkert, and G.~E.\n Danielson, ``Estimates of the bolometric albedos and radiation balance of\n uranus and neptune,'' {\\em Icarus} {\\bf 65}(2), 442--466 (1986).\n\n\\bibitem{Marley1999}\nM.~S. Marley, C.~Gelino, D.~Stephens, J.~I. Lunine, and R.~Freedman,\n ``Reflected spectra and albedos of extrasolar giant planets. i. clear and\n cloudy atmospheres,'' {\\em The Astrophysical Journal} {\\bf 513}(2), 879\n (1999).\n\n\\bibitem{Fortney2008}\nJ.~J. Fortney, M.~S. Marley, D.~Saumon, and K.~Lodders, ``Synthetic spectra and\n colors of young giant planet atmospheres: effects of initial conditions and\n atmospheric metallicity,'' {\\em The Astrophysical Journal} {\\bf 683}(2), 1104\n (2008).\n\n\\bibitem{Cahoy2010}\nK.~L. Cahoy, M.~S. Marley, and J.~J. Fortney, ``Exoplanet albedo spectra and\n colors as a function of planet phase, separation, and metallicity,'' {\\em The\n Astrophysical Journal} {\\bf 724}(1), 189 (2010).\n\n\\bibitem{Spiegel2012}\nD.~S. Spiegel and A.~Burrows, ``Spectral and photometric diagnostics of giant\n planet formation scenarios,'' {\\em The Astrophysical Journal} {\\bf 745}(2),\n 174 (2012).\n\n\\bibitem{burrows1997nongray}\nA.~{Burrows}, M.~{Marley}, W.~B. {Hubbard}, J.~I. {Lunine}, T.~{Guillot},\n D.~{Saumon}, R.~{Freedman}, D.~{Sudarsky}, and C.~{Sharp}, ``{A Nongray\n Theory of Extrasolar Giant Planets and Brown Dwarfs},'' {\\em The\n Astrophysical Journal} {\\bf 491}, 856--+ (1997).\n\n\\bibitem{burrows2003beyond}\nA.~Burrows, D.~Sudarsky, and J.~I. 
Lunine, ``Beyond the t dwarfs: Theoretical\n spectra, colors, and detectability of the coolest brown dwarfs,'' {\\em The\n Astrophysical Journal} {\\bf 596}, 587 (2003).\n\n\\bibitem{kasdin2006}\nN.~J. Kasdin and I.~Braems, ``Linear and bayesian planet detection algorithms\n for the terrestrial planet finder,'' {\\em The Astrophysical Journal} {\\bf\n 646}, 1260--1274 (2006).\n\n\\bibitem{lafreniere2007new}\nD.~Lafreni\\`{e}re, C.~Marois, R.~Doyon, D.~Nadeau, and E.~Artigau, ``A new\n algorithm for point-spread function subtraction in high-contrast imaging: A\n demonstration with angular differential imaging,'' {\\em The Astrophysical\n journal} {\\bf 660}(1), 770--780 (2007).\n\n\\bibitem{soummer2012detection}\nR.~Soummer, L.~Pueyo, and J.~Larkin, ``Detection and characterization of\n exoplanets and disks using projections on {Karhunen-Loeve} eigenimages,''\n {\\em The Astrophysical Journal Letters} {\\bf 755}(2), L28 (2012).\n\n\\bibitem{savransky2008}\nD.~Savransky and N.~J. Kasdin, ``Design reference mission construction for\n planet finders,'' in {\\em Proc. SPIE}, {\\bf 7010} (2008).\n\n\\bibitem{perez2007ipython}\nF.~Perez and B.~E. Granger, ``Ipython: a system for interactive scientific\n computing,'' {\\em Computing in Science \\& Engineering} {\\bf 9}(3), 21--29\n (2007).\n\n\\bibitem{trauger2012complex}\nJ.~Trauger, D.~Moody, B.~Gordon, J.~Krist, and D.~Mawet, ``Complex apodization\n lyot coronagraphy for the direct imaging of exoplanet systems: design,\n fabrication, and laboratory demonstration,'' in {\\em SPIE Astronomical\n Telescopes+ Instrumentation}, 84424Q--84424Q, International Society for\n Optics and Photonics (2012).\n\n\\bibitem{zimmerman2015shaped}\nN.~Zimmerman, A.~Eldorado~Riggs, N.~J. Kasdin, A.~Carlotti, and R.~J.\n Vanderbei, ``A shaped pupil lyot coronagraph for wfirst-afta,'' in {\\em\n American Astronomical Society Meeting Abstracts}, {\\bf 225} (2015).\n\n\\bibitem{sidick2014simulated}\nE.~Sidick, B.~Kern, R.~Belikov, A.~Kuhnert, and S.~Shaklan, ``Simulated\n contrast performance of phase induced amplitude apodization (piaa)\n coronagraph testbed,'' in {\\em SPIE Astronomical Telescopes+\n Instrumentation}, 91430W--91430W, International Society for Optics and\n Photonics (2014).\n\n\\bibitem{traub2014science}\nW.~A. Traub, R.~Belikov, O.~Guyon, N.~J. Kasdin, J.~Krist, B.~Macintosh,\n B.~Mennesson, D.~Savransky, M.~Shao, E.~Serabyn, and J.~Trauger, ``Science\n yield estimation for afta coronagraphs,'' in {\\em Proc. SPIE}, {\\em SPIE\n Astronomical Telescopes+ Instrumentation}, 91430N--91430N, International\n Society for Optics and Photonics (2014).\n\n\\bibitem{fressin2013false}\nF.~Fressin, G.~Torres, D.~Charbonneau, S.~T. Bryson, J.~Christiansen, C.~D.\n Dressing, J.~M. Jenkins, L.~M. Walkowicz, and N.~M. 
Batalha, ``The false\n positive rate of {Kepler} and the occurrence of planets,'' {\\em The\n Astrophysical Journal} {\\bf 766}(2), 81 (2013).\n\n\\bibitem{poberezhskiy2014technology}\nI.~Poberezhskiy, F.~Zhao, X.~An, K.~Balasubramanian, R.~Belikov, E.~Cady,\n R.~Demers, R.~Diaz, Q.~Gong, B.~Gordon, {\\em et~al.}, ``Technology\n development towards {WFIRST-AFTA} coronagraph,'' in {\\em SPIE Astronomical\n Telescopes+ Instrumentation}, 91430P--91430P, International Society for\n Optics and Photonics (2014).\n\n\\bibitem{shi2015low}\nF.~SHI, ``Low order wavefront sensing and control for {WFIRST-AFTA}\n coronagraph,'' in {\\em American Astronomical Society Meeting Abstracts},\n {\\bf 225} (2015).\n\n\\end{thebibliography}\n \n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%% Biographies of authors %%%%%\n\n\\vspace{2ex}\\noindent{\\bf Dmitry Savransky} is an assistant professor in the Sibley School of Mechanical and Aerospace Engineering at Cornell University. He received his PhD from Princeton University in 2011 followed by a postdoctoral position at Lawrence Livermore National Laboratory where he assisted in the integration and commissioning of the Gemini Planet Imager. His research interests include optimal control of optical system, simulation of space missions, and image post-processing techniques.\n\n\\vspace{2ex}\\noindent{\\bf Daniel Garrett} is a PhD student in the Sibley School of Mechanical and Aerospace Engineering at Cornell University. His research interests include dynamics and control theory, planetary science, and space exploration.\n\n\n\n\\begin{table}[ht]\n\\caption{Parameters for coronagraphs studied in \\S\\ref{sec:predown}. \\label{tbl:corons}} \n\\begin{center}\n\\begin{tabular}{l c c c c c c c c}\n & & & \\multicolumn{2}{c}{Contrast} & \\multicolumn{2}{c}{Throughput$^b$} & & FOV\\\\\nName & IWA$^a$ & OWA$^a$ & Min & Mean & Max & Mean & Sharpness$^c$ & Portion$^d$\\\\\n\\hline\nC1 & 0.128 & 0.652 & 6.91e-09 & 1.60e-08 & 0.40 & 0.32 & 0.0142 & 1\\\\\nC2 & 0.184 & 1.064 & 4.06e-09 & 7.06e-09 & 0.22 & 0.22 & 0.0138 & 1\/3\\\\\nC3 & 0.085 & 0.624 & 2.87e-09 & 2.94e-08 & 1.00 & 0.85 & 0.0143 & 1\n\\end{tabular}\n\\end{center}\n\\raggedright\n\\footnotesize $^a$Inner and outer working angle in arcseconds at 550 nm.\\\\\n$^b$This is the throughput due to the coronagraph optics only. 
\\\\\n$^c$Sharpness is defined as $\\left(\\sum_{i} P_i^2\\right)\/\\left(\\sum_i P_i\\right)^2$ for normalized PSF $P_i$.\\\\\n$^d$The fraction of the field of view covered by the coronagraph's region of high contrast.\\\\\n\\normalsize\n\\end{table}%\n\n\n\\begin{table}[ht]\n\\caption{Mean values and standard deviations of five performance metrics calculated from ensembles of mission simulations for the instruments described in Table \\ref{tbl:corons}.\\label{tbl:res2}} \n\\begin{center}\n\\begin{tabular}{l c | c c | c c | c c | c c | c c}\n & & \\multicolumn{2}{c}{Unique} & \\multicolumn{2}{c}{All} & \\multicolumn{2}{c}{Full} & \\multicolumn{2}{c}{All} & \\multicolumn{2}{c}{Unique} \\\\\n &Contrast & \\multicolumn{2}{c}{Detections$^b$} & \\multicolumn{2}{c}{Detections$^c$} & \\multicolumn{2}{c}{Spectra$^d$} & \\multicolumn{2}{c}{Visits} & \\multicolumn{2}{c}{Targets}\\\\\nName & Factor$^a$ & $\\mu$ & $1\\sigma$ & $\\mu$ & $1\\sigma$ & $\\mu$ & $1\\sigma$ & $\\mu$ & $1\\sigma$ & $\\mu$ & $1\\sigma$\\\\ \n\\hline\n\\multirow{2}{*}{C1} & 10x & 1.8 & 1.4 & 2.6 & 4.4 & 1.1 & 1.1 & 74.9 & 28.2 & 63.5 & 3.7\\\\\n & 30x & 3.7 & 2.0 & 4.4 & 2.6 & 2.2 & 1.5 & 56.4 & 2.7 & 55.3 & 2.2\\\\\n \\hline\n\\multirow{2}{*}{C2} & 10x & 2.4 & 1.6 & 7.8 & 13.5 & 1.4 & 1.2 & 141.3 & 38.8 & 74.9 & 0.4\\\\\n& 30x & 4.2 & 2.1 & 5.0 & 2.9 & 2.2 & 1.5 & 59.5 & 1.9 & 58.1 & 1.2\\\\\n\\hline\n\\multirow{2}{*}{C3} & 10x & 2.0 & 1.5 & 2.4 & 2.0 & 1.3 & 1.2 & 54.7 & 2.0 & 54.0 & 1.5\\\\\n & 30x & 3.0 & 1.9 & 3.4 & 2.3 & 1.9 & 1.4 & 31.5 & 2.2 & 30.9 & 1.9\\\\\n\\end{tabular}\n\\end{center}\n\\raggedright\n\\footnotesize $^a$Contrast improvement factor due to post-processing.\\\\\n$^b$Number of individual planets detected one or more times. \\\\\n$^c$Total number of detections (including repeat detections of the same planets).\\\\\n$^d$Total number of planets where spectra can be obtained over the whole wavelength range (400-800 nm).\\\\\n\\normalsize\n\\end{table}%\n\n\n\\begin{table}[ht]\n\\caption{Mean values and standard deviations of five performance metrics calculated from ensembles of mission simulations for the post-downselect HLC with varying levels of assumed telescope jitter. 
Column definitions as in Table \\ref{tbl:res2}.\\label{tbl:res3}} \n\\begin{center}\n\\begin{tabular}{l c | c c | c c | c c | c c | c c}\n & & \\multicolumn{2}{c}{Unique} & \\multicolumn{2}{c}{All} & \\multicolumn{2}{c}{Full} & \\multicolumn{2}{c}{All} & \\multicolumn{2}{c}{Unique} \\\\\nJitter &Contrast & \\multicolumn{2}{c}{Detections$^b$} & \\multicolumn{2}{c}{Detections$^c$} & \\multicolumn{2}{c}{Spectra$^d$} & \\multicolumn{2}{c}{Visits} & \\multicolumn{2}{c}{Targets}\\\\\n(mas) & Factor$^a$ & $\\mu$ & $1\\sigma$ & $\\mu$ & $1\\sigma$ & $\\mu$ & $1\\sigma$ & $\\mu$ & $1\\sigma$ & $\\mu$ & $1\\sigma$\\\\ \n\\hline\n\\multirow{2}{*}{0.4} & 30x & 12.4 & 3.5 & 14.0 & 4.4 & 9.5 & 3.2 & 47.6 & 4.3 & 45.2 & 4.0\\\\\n & 10x & 11.4 & 3.5 & 12.5 & 4.2 & 6.2 & 2.6 & 31.9 & 2.4 & 30.4 & 1.9\\\\\n\\multirow{2}{*}{0.8} & 30x & 7.8 & 2.8 & 8.7 & 3.3 & 4.9 & 2.2 & 38.4 & 2.5 & 37.0 & 2.3\\\\\n & 10x & 7.2 & 2.7 & 8.0 & 3.3 & 2.8 & 1.7 & 28.1 & 2.3 & 27.0 & 2.0\\\\\n\\multirow{2}{*}{1.6} & 30x & 5.1 & 2.3 & 5.7 & 2.7 & 1.4 & 1.2 & 31.6 & 1.6 & 30.8 & 1.4\\\\\n& 10x & 4.0 & 2.0 & 4.4 & 2.4 & 1.9 & 1.4 & 44.9 & 2.2 & 44.1 & 2.2\\\\\n\\end{tabular}\n\\end{center}\n\\end{table}%\n\n\n\\listoffigures\n\n\n\\end{spacing}\n```","meta":{"dup_signals":{"dup_doc_count":16,"dup_dump_count":2,"dup_details":{"curated_sources":2,"unknown":14}},"filename":"out\/1511.02869_extract_article.tex.md"},"subset":"arxiv"} +{"text":"abstract: This document is intended to serve as a sample for submissions to the 47th IEEE\/ACM International Symposium on Computer Architecture (ISCA), May 30 \u2013 June 3, 2020 in Valencia, Spain. This document provides guidelines that authors should follow when submitting papers to the conference. This format is derived from the IEEE conference template IEEEtran.cls file with the objective of keeping the submission similar to the final version, i.e., the IEEEtran.cls template will also be used for the camera-ready version.\nbibliography: refs.bib\ntitle: Guidelines for Submission to ISCA 2020\n\n# Introduction\n\nThis document provides instructions for submitting papers to ISCA 2020. In an effort to respect the efforts of reviewers and in the interest of fairness to all prospective authors, we request that all submissions to ISCA 2020 follow the formatting and submission rules detailed below. Submissions that violate these instructions may not be reviewed, at the discretion of the program chair, in order to maintain a review process that is fair to all potential authors. This document is itself formatted using the ISCA 2020 submission format. The content of this document mirrors that of the submission instructions that appear on the conference website. All questions regarding paper formatting and submission should be directed to the program chair.\n\n## Format Highlights\n\nHere are the format highlights in a nutshell:\n\n- Paper must be submitted in printable PDF format.\n\n- Text must be in a minimum 10pt Times font, see Table\u00a0.\n\n- Papers must be at most 11 pages (not including references) in a two-column format.\n\n- No page limit for references.\n\n- Each reference must specify *all* authors, i.e., no *et al.*\n\n## Paper Evaluation Objectives\n\nThe committee will make every effort to judge each submitted paper on its own merits. There will be no target acceptance rate. 
We expect to accept a wide range of papers with appropriate expectations for evaluation \u2014 while papers that build on significant past work with strong evaluations are valuable, papers that open new areas with less rigorous evaluation are equally welcome and especially encouraged. We also acknowledge the wide range of evaluation methodologies in ISCA including modeling, simulation, prototyping, experimental implementation, real product evaluation, etc.\n\n# Paper Preparation Instructions\n\n## Paper Formatting\n\nPapers must be submitted in printable PDF format and should contain a *maximum of 11 pages* of single-spaced two-column text, **not including references**. You may include any number of pages for references, but see below for more instructions. If you are using LaTeX\u00a0 to typeset your paper, then we suggest that you use the template used to prepare this document, which you can find on the ISCA 2020 website. If you use a different software package to typeset your paper, then please adhere to the guidelines given in Table\u00a0.\n\n```latex\n\\begin{scriptsize}\n\\begin{table}[h!]\n \\centering\n \\caption{Formatting guidelines for submission.}\n \\label{table:formatting}\n \\begin{tabular}{|l|l|}\n \\hline\n \\textbf{Field} & \\textbf{Value}\\\\\n \\hline\n \\hline\n File format & PDF \\\\\n \\hline\n Page limit & 11 pages, {\\bf not including}\\\\\n & {\\bf references}\\\\\n \\hline\n Paper size & US Letter 8.5in $\\times$ 11in\\\\\n \\hline\n Top margin & 1in\\\\\n \\hline\n Bottom margin & 1in\\\\\n \\hline\n Left margin & 0.75in\\\\\n \\hline\n Right margin & 0.75in\\\\\n \\hline\n Body & 2-column, single-spaced\\\\\n \\hline\n Space between columns & 0.25in\\\\\n \\hline\n Line spacing (leading) & 11pt \\\\\n \\hline\n Body font & 10pt, Times\\\\\n \\hline\n Abstract font & 10pt, Times\\\\\n \\hline\n Section heading font & 12pt, bold\\\\\n \\hline\n Subsection heading font & 10pt, bold\\\\\n \\hline\n Caption font & 9pt (minimum), bold\\\\\n \\hline\n References & 8pt, no page limit, list \\\\\n & all authors' names\\\\\n \\hline\n \\end{tabular}\n\\end{table}\n\\end{scriptsize}\n```\n\n*Please ensure that you include page numbers with your submission*. This makes it easier for the reviewers to refer to different parts of your paper when they provide comments. Please ensure that your submission has a banner at the top of the title page, similar to this document, which contains the submission number and the notice of confidentiality. If using the template, just replace 'NaN' with your submission number.\n\n## Content\n\nReviewing will be *double blind* (no author list); therefore, please do not include any author names on any submitted documents except in the space provided on the submission form. You must also ensure that the metadata included in the PDF does not give away the authors. If you are improving upon your prior work, refer to your prior work in the third person and include a full citation for the work in the bibliography. For example, if you are building on *your own* prior work in the papers\u00a0, you would say something like: \"While the authors of\u00a0 did X, Y, and Z, this paper additionally does W, and is therefore much better.\" Do NOT omit or anonymize references for blind review. There is one exception to this for your own prior work that appeared in IEEE CAL, arXiv, workshops without archived proceedings, etc.\u00a0as discussed later in this document.\n\n**Figures and Tables:** Ensure that the figures and tables are legible. 
Please also ensure that you refer to your figures in the main text. Many reviewers print the papers in gray-scale. Therefore, if you use colors for your figures, ensure that the different colors are highly distinguishable in gray-scale.\n\n**References:** There is no length limit for references. *Each reference must explicitly list all authors of the paper. Papers not meeting this requirement will be rejected.* Since there is no length limit for the number of pages used for references, there is no need to save space here.\n\n# Paper Submission Instructions\n\n## Guidelines for Determining Authorship\n\nIEEE guidelines dictate that authorship should be based on a *substantial intellectual contribution*. It is assumed that all authors have had a significant role in the creation of an article that bears their names. In particular, the authorship credit must be reserved only for individuals who have met each of the following conditions:\n\n1. Made a significant intellectual contribution to the theoretical development, system or experimental design, prototype development, and\/or the analysis and interpretation of data associated with the work contained in the article;\n\n2. Contributed to drafting the article or reviewing and\/or revising it for intellectual content; and\n\n3. Approved the final version of the article as accepted for publication, including references.\n\nA detailed description of the IEEE authorship guidelines and responsibilities is available online.[^1] Per these guidelines, it is not acceptable to award *honorary* authorship or *gift* authorship. Please keep these guidelines in mind while determining the author list of your paper.\n\n## Declaring Authors\n\nDeclare all the authors of the paper upfront. Addition\/removal of authors once the paper is accepted will have to be approved by the program chair, since it potentially undermines the goal of eliminating conflicts for reviewer assignment.\n\n## Areas and Topics\n\nAuthors should indicate specific topics covered by the paper on the submission page. If you are unsure whether your paper falls within the scope of ISCA, please check with the program chair \u2014 ISCA is a broad, multidisciplinary conference and encourages new topics.\n\n## Declaring Conflicts of Interest\n\nAuthors must register all their conflicts on the paper submission site. Conflicts are needed to ensure appropriate assignment of reviewers. If a paper is found to have an undeclared conflict that causes a problem OR if a paper is found to declare false conflicts in order to abuse or 'game' the review system, the paper may be rejected.\n\nPlease declare a conflict of interest with the following people for any author of your paper. A conflict occurs in the following cases:\n\n1. Between advisor and advisee forever.\n\n2. Between family members forever.\n\n3. Between people who have collaborated in the last 5 years. This collaboration can consist of a joint research or development project, a joint paper, or when there is direct funding from the potential reviewer (as opposed to company funding) to an author of the paper. Co-participation in professional activities, such as tutorials or studies, is not cause for conflict. When in doubt, the author should check with the Program Chair.\n\n4. Between people from same institution or who were in the same institution in the last 5 years.\n\n5. 
Between people whose relationship prevents the reviewer from being objective in his\/her assessment.\n\n'Service' collaborations, such as co-authoring a report for a professional organization, serving on a program committee, or co-presenting tutorials, do not themselves create a conflict of interest. Co-authoring a paper that is a compendium of various projects with no true collaboration among the projects does not constitute a conflict among the authors of the different projects. On the other hand, there may be others not covered by the above with whom you believe a COI exists, for example, an ongoing collaboration which has not yet resulted in the creation of a paper or proposal. Please report such COIs; however, you may be asked to justify them. Please be reasonable. For example, you cannot declare a COI with a reviewer just because that reviewer works on topics similar to or related to those in your paper. The program chair may contact co-authors to explain a COI whose origin is unclear.\n\nMost reviews will be solicited among the members of the PC and the ERC, but other members from the community may also write reviews. Please declare all your conflicts (not just restricted to the PC and ERC) on the submission form. When in doubt, contact the program chair.\n\n## Concurrent Submissions and Workshops\n\nBy submitting a manuscript to ISCA 2020, the authors guarantee that the manuscript has not been previously published or accepted for publication in a substantially similar form in any conference, journal, or the archived proceedings of a workshop (e.g., in the ACM\/IEEE digital libraries) \u2014 see exceptions below. The authors also guarantee that no paper that contains significant overlap with the contributions of the submitted paper will be under review for any other conference or journal or an archived proceedings of a workshop during the ISCA 2020 review period. Violation of any of these conditions will lead to rejection.\n\nThe only exceptions to the above rules are for the authors' own papers in (1) workshops without archived proceedings such as in the ACM\/IEEE digital libraries (or where the authors chose not to have their paper appear in the archived proceedings), or (2) venues such as IEEE CAL or arXiv where there is an explicit policy that such publication does not preclude longer conference submissions. In all such cases, the submitted manuscript may ignore the above work to preserve author anonymity. This information must, however, be provided on the submission form \u2014 the program chair will make this information available to reviewers if it becomes necessary to ensure a fair review. 
As always, if you are in doubt, it is best to contact the program chairs.\n\nFinally, the ACM\/IEEE Plagiarism Policies[^2] cover a range of ethical issues concerning the misrepresentation of other works or one's own work.\n\n# Acknowledgements\n\nThis document is derived from previous conferences, in particular ISCA 2019 and MICRO 2019.\n\n[^1]: \n\n[^2]: \n ","meta":{"dup_signals":{"dup_doc_count":13,"dup_dump_count":13,"dup_details":{"curated_sources":1,"2023-23":1,"2023-06":1,"2022-40":1,"2022-21":1,"2021-49":1,"2021-39":1,"2021-25":1,"2021-17":1,"2021-04":1,"2020-45":1,"2020-34":1,"2023-50":1}},"filename":"out\/1905.04264_extract_main.tex.md"},"subset":"arxiv"} +{"text":"abstract: It is often presumed that life evolves relatively fast on planets with clement conditions, at least in its basic forms, and that extended periods of habitability are subsequently needed for the evolution of higher life forms. Many planets are however expected to be only transiently habitable. On a large set of otherwise suitable planets life will therefore just not have the time to develop on its own to the level of complexity that arose on earth with the cambrian explosion. The equivalent of a cambrian explosion may however have the chance to unfold on transiently habitable planets if it were possible to fast forward evolution by 3-4 billion years (with respect to terrestrial timescales). We argue here that this is indeed possible by seeding the candidate planet with the microbial lifeforms, bacteria and unicellular eukaryotes alike, that characterized earth before the cambrian explosion. An interstellar mission of this kind, denoted the 'Genesis project', could be carried out by a relatively low-cost robotic microcraft equipped with an on-board gene laboratory for the in situ synthesis of the microbes.\n We review here our current understanding of the processes determining the timescales shaping the geo-evolution of an earth-like planet, the prospect of finding Genesis candidate planets and selected issues regarding the mission layout. Discussing the ethical aspects connected with a Genesis mission, which would be expressly not for human benefit, we will also touch on the risk that a biosphere incompatibility may arise in the wake of an eventual manned exploration of a second earth.\nauthor: Claudius Gros\nbibliography: grosGenesis_astr.bib\ntitle: Developing Ecospheres on Transiently Habitable Planets: \n The Genesis Project\n\n# Introduction\n\nThree ongoing lines of research have progressed in recent years to a point which allows us to now assess the feasibility of sending out interstellar probes with the mission of bringing life to otherwise barren exoplanets.\n\nIn first place comes the insight from exoplanet search efforts that the diversity of the hitherto discovered exoplanetary systems is very high. This implies, in particular, that no two habitable planets may be alike and that there will be many planets having only limited periods of habitability. There may hence exist in our galaxy a plethora of planets where life could truly thrive, but which do not offer it enough time to fully develop on its own. The key idea of the Genesis project is to bring life to this kind of exoplanet.\n\nThe Genesis project is based furthermore on the evolving consensus that robotic interstellar missions may be realizable within a foreseeable future. A conceivable scenario would be in this context to accelerate lightweight interstellar probes with ground- or orbit-based arrays of powerful lasers . 
Decelerating could then be achieved, on arrival, using magnetic and\/or electric sails .\n\nThe progress achieved recently in creating synthesized and minimal (in terms of the genome) cells indicates furthermore that humanity will most probably acquire, already within a few decades, the capability to synthesize a vast palette of life forms from scratch. We can hence envision that a Genesis probe would be able to cultivate in situ many different types of microbes using a robotic gene laboratory.\n\nThe Genesis project consists hence of three steps.\n\n- Searching for transiently habitable planets.\n\n- Sending interstellar robotic crafts for detailed investigations.\n\n- Seeding the candidate planet with in situ synthesized lifeforms.\n\nIn this study we will review the prospects of discovering Genesis candidate planets, the time scale for evolutionary speedup one may realistically hope to achieve and the time scales the Genesis process may take to unfold. The Genesis project comes of course with serious ethical caveats regarding in particular planetary protection, which we will also discuss.\n\nThe Genesis project is in first place not for human benefit. The key idea is to initiate a self-sustained evolutionary process, which will then carry the developing biosphere of the host planet to its own future. Later-stage human interventions are not excluded, but are not necessary. The same holds for an eventual human settlement of the candidate planet. In this context we will also discuss the prospect that biosphere incompatibilities may accompany manned missions to second-earth-like exoplanets.\n\n## Genesis candidate planets\n\nWe currently do not know whether the bio-geological evolution of earth has been fast or slow in relation to evolutionary processes on other habitable planets. We know however that the timescale of major geo-evolutionary steps has been typically of the order of one or more billion years (Ga). Taking hence one Ga as a reference time we may consider a planet to be transiently habitable if the time span it remains in the habitable zone (HZ) ranges from at least a few hundred million years to about 1-2\u2006Ga.\n\nThere are various conceivable scenarios for why a planet may be habitable only for a finite, but prolonged, period.\n\n- **Shift of the habitable zone.** Main sequence stars like our sun, which was initially about 30% less luminous, become brighter with the eons passing. A planet starting out at the inner edge of the HZ may hence become eventually too hot . An initially too distant planet may conversely warm up, but possibly only after a few Ga have passed. In the latter case not enough time for the unfolding of a full-fledged evolutionary process may be left.\n\n- **Long-term orbital instabilities.** Several kinds of long-term orbital instabilities, like the Hill- and the Lagrange instability discussed in Sect.\u00a0 and , may throw a planet out of the habitable zone or, conversely, promote a planet into the HZ . Habitability may also be interrupted by later-stage orbital resonances, as the one which has possibly caused the late heavy bombardment of our home planet (compare Sect.\u00a0).\n\n- **Indigenous processes.** An initially habitable planet may also become uninhabitable all by itself. Various indigenous processes are conceivable in this context.\n\n - Plate tectonics may cease functioning after a certain time, giving way to the type of stagnant-lid magma convection presumably in place on Venus. 
Episodic overturns of the crust may then result in global resurfacing events.\n\n - Shifts in the CO$_2$ balance could lead to the complete depletion of atmospheric CO$_2$ levels. On earth this is actually happening, as discussed in Sect.\u00a0, albeit only very slowly. A continuous accumulation of CO$_2$ could, on the other hand, result in either a catastrophic runaway or in a welcome greenhouse effect.\n\nReviewing the prospects of discovering Genesis candidate planets we will rely in part on an analysis of the timeline of the geo-evolutionary history of our home planet. Of particular interest for the Genesis project are here the involved time scales and the question whether alternative evolutionary routes would have been possible.\n\n# Terrestrial timescales\n\nA central question of the present study regards the time one may expect a Genesis process will need to unfold on an exoplanet. As a backdrop to this question we start with a short review of some of the key events in the geo-biological evolution of earth, where a self-organized Genesis process is known to have occurred. Times will be given either in Giga-years ($10^9$, Ga) or Mega-years ($10^6$, Ma). Some of the key events shaping our home planet are shown on scale in Fig.\u00a0.\n\nShortly after earth was formed together with most of the solar system about 4.6\u2006Ga ago, an impact with a Mars-size object led to the creation of the moon . Our moon is exceptionally large and it is established that its size helps to stabilize the rotation axis (the obliquity) of earth . Seasonal variability (between summer and winter) would otherwise be substantially larger. The presence of the moon is however not a precondition for habitability per se.\n\n## 4.5-4.0\u2006Ga: The hadean CO$_2$ sequestration \n\nIt is not known how wet earth initially was, viz which percentage of today's water was initially present and to which extent water was brought to young earth from further-out solar objects . It is however believed that the outgassing of the volatiles from the initially hot magma led to a dense CO$_2$ atmosphere (about 100\u2006bar of CO$_2$, as for today's Venus), which, in turn, prevented earth from cooling below 500$\\degree$C after the formation of the moon. A liquid ocean of some extent would however have been present despite the elevated surface temperature, as a consequence of the likewise increased atmospheric pressure.\n\nAny planet hoping for an earth-like habitability needs to rid itself of its primordial CO$_2$ atmosphere. This was achieved on earth by the carbonation $$\\mathrm{CaSiO}_3\\, +\\, \\mathrm{CO}_2 \\ \\to\\\n\\mathrm{CaCO}_3\\, +\\, \\mathrm{SiO}_2\n\\label{eq:Urey}$$ of silicates and the subsequent subduction of carbonized rocks (the Urey weathering reaction). It is unclear how long it actually took, possibly up to the end of the Hadean (4\u2006Ga), for the subducted crust to sequester CO$_2$ to levels of only 10-100 times today's levels (about 0.0004\u2006bar).\n\nThe hadean CO$_2$ sequestration was a massive process. For a perspective we note that the present day rate of CO$_2$ sequestration by modern plate tectonics is of the order of $3.3\\times10^{18}$\u2006mol\/Ma . With 100\u2006bar CO$_2$ corresponding to $12000\\times10^{18}$\u2006mol this implies that the sequestration of the primordial CO$_2$ at present-day rates would have taken $(12000\/3.3)$\u2006Ma, viz $3.6$\u2006Ga. 
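\n\nAs a quick restatement of this back-of-the-envelope estimate, the following minimal sketch (in Python; the reservoir and rate values are the ones quoted above, while the variable names are ours) simply divides the primordial CO$_2$ inventory by the present-day sequestration rate:\n\n```python\n# Back-of-the-envelope check: time needed to sequester the primordial CO2\n# inventory at the present-day plate-tectonic sequestration rate.\nPRIMORDIAL_CO2_MOL = 12000e18  # about 100 bar of CO2, in mol (value from the text)\nSEQUESTRATION_RATE = 3.3e18    # mol per Ma, modern plate tectonics (value from the text)\n\ntime_ma = PRIMORDIAL_CO2_MOL \/ SEQUESTRATION_RATE  # duration in Ma\nprint(f'{time_ma:.0f} Ma, i.e. {time_ma \/ 1000:.1f} Ga')  # ~3636 Ma, viz ~3.6 Ga\n```\n\n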
Substantially faster subduction processes must consequently have been at work in the early Hadean .\n\n```latex\n\\begin{table*}[t]\\setlength\\arrayrulewidth{2pt} \\arrayrulecolor{darkOrange}\n\\begin{tabular}{rr|lrrrr}\nZ & & element & solar & earth & crust & body \\\\\n\\hline\n\\rowcolor{lightYellow}\n1 & H & Hydrogen & [*] & 0.67 & 2.9 & 62.5 \\\\\n\\rowcolor{lightGreen}\n2 & He & Helium & [$^\\sharp$]& & & \\\\\n\\rowcolor{lightBrown}\n6 & C & Carbon & 25.05 & 0.16 & 0.035 & 11.6 \\\\\n\\rowcolor{lightYellow}\n7 & N & Nitrogen & 7.39 & 0.005 & 0.003 & 1.2 \\\\\n\\rowcolor{lightGreen}\n8 & O & Oxygen & 54.70 & 48.3 & 59.9 & 24.1 \\\\\n\\rowcolor{lightBrown}\n12 & Mg & Magnesium & 3.59 & 16.5 & 2.0 & 0.007 \\\\\n\\rowcolor{lightYellow}\n13 & Al & Aluminum & 0.29 & 1.5 & 6.3 & \\\\\n\\rowcolor{lightGreen}\n14 & Si & Silicon & 3.48 & 15.0 & 20.9 & 0.006 \\\\\n\\rowcolor{lightBrown}\n15 & P & Phosphorus & 0.03 & 0.1 & 0.07 & 0.22 \\\\\n\\rowcolor{lightYellow}\n16 & S & Sulfur & 1.47 & 0.52 & 0.023 & 0.04 \\\\\n\\rowcolor{lightGreen}\n19 & K & Potassium & 0.01 & 0.01 & 1.1 & 0.03 \\\\\n\\rowcolor{lightBrown}\n20 & Ca & Calcium & 0.21 & 1.1 & 2.2 & 0.22 \\\\\n\\rowcolor{lightYellow}\n26 & Fe & Iron & 2.95 & 14.9 & 2.1 & 0.0007\\\\\n\\end{tabular}\n\\hspace{2ex}\n\\begin{minipage}{0.45\\textwidth}\n\\caption{\n\\label{table_elements}\nThe mol-abundances (relative number of atoms, not weight; in\n percentage) of some selected elements. For the solar system \n(disregarding Hydrogen [*] and Helium [$^\\sharp$]) \\citep{lodders20094}, the earth \n(all) \\citep{mcdonough1995composition}, \nthe crust (of the earth) \\citep{lide2008crc}, and for the human body \n\\citep{lide2008crc}. Note that the crust is weakly reducing in the \nsense that the available oxygen is nearly enough to oxidize all \nother elements via $\\mathrm{Si}+\\mathrm{O}_2\\to \\mathrm{SiO}_2$,\n$4\\mathrm{Al}+3\\mathrm{O}_2\\to 2\\mathrm{Al}_2\\mathrm{O}_3$,\netc. 92.1\\% and 7.8\\% of the overall number of atoms of the\nsolar system are Hydrogen and Helium atoms respectively.\n }\n\\end{minipage}\n\\end{table*}\n```\n\nIt is reasonable to expect CO$_2$ sequestration to be generically vigorous on potentially habitable rocky planets.\n\n- The essentially unlimited reservoir of silicates that rocky planets like the earth dispose of, compare Table\u00a0, allows basically all CO$_2$ to be sequestered from the atmosphere. The steady-state level of atmospheric CO$_2$ will then result on habitable planets from the balance between volcanic outgassing and ongoing sequestration (the inorganic carbon cycle) .\n\n- The hadean CO$_2$ sequestration occurred at a time when earth dissipated its internal heat by bottom-up mantle convection, viz when modern plate tectonics was most probably not yet at work . We will come back to this issue in Sect.\u00a0.\n\nIt is not clear whether Venus had ever been able to start the process of CO$_2$ sequestration even when starting out with earth-like conditions. It may have been that its initial magma ocean took a substantially longer time (100\u2006Ma instead of 1-4\u2006Ma, as for earth) to solidify and that Venus may have lost most of its primordial water by that time through hydrodynamic escape . The mechanism involves water rising to the stratosphere and photodissociating via $\\mathrm{H}_2\\mathrm{O}+\\mathrm{light}\\to \\mathrm{H}_2 + \\mathrm{O}$. The free hydrogen molecules then escape the pull of gravity, being too light to be retained by a rocky planet, dragging some of the oxygen with them. 
This process led on Venus to the loss of an ocean worth of water, shutting down also the carbonation of silicate rocks via the Urey weathering reaction (), which requires in turn the formation of carbonic acid $\\mathrm{CO}_2+\\mathrm{H}_2\\mathrm{O} \\to\\mathrm{H}_2\\mathrm{CO}_3$ and hence the presence of liquid water in an intermediate step. The concentration of hydrogen bearing molecules like $\\mathrm{H}_2\\mathrm{O}$ and $\\mathrm{NH}_4$ is on the other hand very low in the stratosphere of earth, at least nowadays, and so, consequently, is the loss of $\\mathrm{H}_2$ .\n\n## 3.9-3.8\u2006Ga: The late heavy bombardment \n\nAfter the formation of the moon not much happened, apart from the ongoing CO$_2$ sequestration, for about 600\u2006Ma. Then, by 3.9-3.8\u2006Ga, the late heavy bombardment (LHB) took place in the form of a cataclysmic wave of planetesimals (both asteroids and comets) battering earth together with the entire inner solar system . An event like the LHB would eradicate all higher life forms on a planet, if present, but it would not sterilize the planet altogether. The LHB may in addition have delivered a certain fraction of the water present nowadays on earth ($78\\times 10^{21}$ mol H$_2$O in the ocean alone), without otherwise affecting the habitability of the planet.\n\nThe late heavy bombardment did originate, as far as we know, from an instability of a disc of planetesimals left over from the formation of the solar system. What is interesting is that such discs must have continued to exist long after the formation of the solar system and that the instability occurred with a huge delay. A possible scenario for this to happen is illustrated in Fig.\u00a0 (the Nice model ). It assumes that a 2:1 orbital resonance (in terms of their respective orbital periods) did build up due to the migration of Jupiter and Saturn, which was in turn caused by the interaction with the then still existing outer discs of planetesimals. The orbits of both Jupiter and Saturn were strongly deformed at resonance, with the consequence that the disc of planetesimals was perturbed together with the then substantially denser disc of asteroids between Mars and Jupiter. Objects were subsequently thrown out of their original orbits and sent into the inner solar system.\n\n## 3.3-2.9\u2006Ga: The archean genetic expansion \n\nLife emerged on earth in a multi-step process which may have started quite soon after the late heavy bombardment , possibly also before, having come to a completion by around 3.5\u2006Ga . Using phylogenomic methods to analyze the evolutionary history of 3983 gene families it was found that the *de novo* creation of bacterial genes was a concentrated process, the archean genetic expansion, starting around 3.3\u2006Ga and ending by 2.9\u2006Ga with the essential completion of the bacterial molecular machinery. Gene transfer and loss have tended to dominate bacterial evolution ever since, and it is not understood why it took evolution another Ga to develop eukaryotic cells (see Sect.\u00a0).\n\n## 2.4-2.3\u2006Ga: The great oxidation event \n\nLiving organisms need an energy source to power reaction pathways utilizing $\\mathrm{CO}_2$ for the synthesis of organic molecules (carbon fixation). 
Early in the history of life this energy was provided mostly by chemotrophic reaction pathways , such as $$3\\mathrm{FeS} + 4\\mathrm{H}_2\\mathrm{S} + \\mathrm{CO}_2 \\ \\to\\ \n3\\mathrm{FeS}_2 + \\mathrm{CH}_3\\mathrm{SH} + 2\\mathrm{H}_2\\mathrm{O}~,\n\\label{eq:chemotrophy}$$ which uses in this case the energy released by the reaction of ferrous sulfide ($\\mathrm{FeS}$) with hydrogen sulfide ($\\mathrm{H}_2\\mathrm{S}$). The reaction products are here pyrite ($\\mathrm{FeS}_2$), methanethiol ($\\mathrm{CH}_3\\mathrm{SH}$) and water ($\\mathrm{H}_2\\mathrm{O}$).\n\nThe global bioproductivity of chemotrophy is limited by the rate at which volcanism replenishes the primary reagents. For present day hydrothermal activity levels this implies that about $(2-20)\\times10^{12}$ mol organic C per year could be fixed via chemical pathways . Today's biosphere produces by contrast around $8700\\times10^{12}$ mol organic C per year using the light of the sun to power the photosynthesis reaction $$6\\mathrm{CO}_2 + 6\\mathrm{H}_2\\mathrm{O} + \\mathrm{light}\n\\ \\to\\ \\mathrm{C}_6\\mathrm{H}_{12}\\mathrm{O}_6 + 6\\mathrm{O}_2~.\n\\label{eq:photosynthesis}$$ The invention of photosynthesis, occurring by the end of the archean genetic expansion around 2.8-2.5\u2006Ga , possibly also later , therefore allowed the terrestrial biosphere to expand dramatically. The balance equation () is in addition oxygenic in the sense that free $O_2$ is produced as a waste product besides glucose ($\\mathrm{C}_6\\mathrm{H}_{12}\\mathrm{O}_6$).\n\nEarth changed in many ways once life started to produce oxygen in relevant quantities .\n\n- The $\\mathrm{Fe}^{2+}$ ions present till then in the ocean were rapidly precipitated as banded iron formations . Essentially all iron-based biological pathways, like anoxygenic photosynthesis (which does not produce oxygen), came hence to an abrupt end, at least on a global scale, surviving only in suitable niches.\n\n- Non-$\\mathrm{CO}_2$ greenhouse gases like methane ($\\mathrm{CH}_4$) were equally washed out from the atmosphere. The sky became clear and earth the blue planet of today. Global temperatures dropped consequently.\n\nTaking a closer look at the elements making up our home planet, as listed in Table\u00a0, one notices that the crust is weakly reducing in the sense that the oxidation of the other elements, $$\\mathrm{Si}+\\mathrm{O}_2\\to \\mathrm{SiO}_2,\\quad\\quad\n4\\mathrm{Al}+3\\mathrm{O}_2\\to 2\\mathrm{Al}_2\\mathrm{O}_3,\n\\quad\\quad\\mathrm{etc.}~,$$ would need somewhat more than the actually available amount of oxygen (earth as a whole would be, on the other hand, strongly reducing). For oxygen to accumulate in the atmosphere two things must consequently happen.\n\n- Part of the carbon fixed via oxygenic photosynthesis must be removed from the surface via sedimentation in a process denoted the organic carbon cycle (see Fig.\u00a0).\n\n- All reducing elements (like iron) present on the surface must first be oxidized.\n\nIt is notoriously difficult to determine how, when and why oxygen did eventually accumulate in the atmosphere . It is however clear that oxygen appeared at appreciable levels (of the order of a few percent of today's levels) in the atmosphere during the great oxidation event around 2.4-2.3\u2006Ga. Oxygen levels did not, though, remain stable afterwards, possibly plunging again for a billion years or more, with modern levels of $\\mathrm{O}_2$ being reached only at the end of the Proterozoic, viz shortly before the cambrian explosion. 
This second rise of $\\mathrm{O}_2$ levels was the prerequisite for the subsequent development of animals and hence all-important from a human perspective. Its ultimate causes are however not yet understood .\n\n## 540-520\u2006Ma: The cambrian explosion \n\nThough it is difficult, it is not impossible for bacteria to develop into at least primitive multicellular organisms . All higher plants and animals are built however out of eukaryotic cells, which dispose of a much higher complexity of internal organization. Eukaryotes evolved out of prokaryotes (bacteria and archaea) in a multi-stage process ending, as determined e.g.\u00a0by a multigene molecular clock analysis , by around 1.9-1.7\u2006Ga. At that point, the last common ancestor of all eukaryotes drifted through the waters of the Proterozoic .\n\nVery little is known about when the next important step in the evolution of life, sexual reproduction, took place . It could even be the case that no additional step was needed, viz that sexual reproduction is inherent to eukaryotic life per se and that the last common eukaryotic ancestor was already sexual. The advantages of sexual reproduction for multicellular organisms are in any case undisputed.\n\nThe pace of evolution did not pick up directly with the invention of eukaryotic cells. Only in the second part of the following roughly 1.3\u2006Ga, the 'boring billion' , did animal phyla start to diverge measurably . It is unknown which of the geo-biological events accompanying the demise of this prolonged period of stasis were the actual drivers both for ending the boring billion and for initiating the following unprecedented divergence of life known as the 'cambrian explosion' .\n\nMulticellular organisms developed not in a singular event but on at least 25 distinct occasions , with the first major multicellular biota to emerge being the ediacaran fauna (590-540\u2006Ma). Of all animals crawling on the earth, however, only the Ecdysozoa (like centipedes) have ediacaran ancestors . All other animal phyla appeared in contrast during the following cambrian explosion, with the initial burst (540-520\u2006Ma) lasting only 20\u2006Ma. Soon after, land was colonized by the ancestors of our modern plants and animals (510-470\u2006Ma) .\n\n## 20\u2006Ma: C4 photosynthesis \n\nEarth started, as discussed in Sec.\u00a0, with something like 100\u2006bar of $\\mathrm{CO}_2$, which was then rapidly sequestered. Volcanic outgassing and limited sedimentation rates kept atmospheric $\\mathrm{CO}_2$ concentrations afterwards at levels which were still relatively high in comparison to today's value . By 2.2\u2006Ga atmospheric $\\mathrm{CO}_2$ was, for a reference, about $23\\times\\mathrm{PAL}$ (present day preindustrial atmospheric levels: 280\u2006ppm) . That changed however when global bioproductivity increased after life colonized land in the aftermath of the cambrian explosion , both because of the additional carbon fixation by the land plants and because the roots of the plants intensified the weathering of rocks and hence increased the amount of biologically available mineral nutrients (like phosphorus). 
The resulting decline of atmospheric $\\mathrm{CO}_2$ led eventually to such low levels of $\\mathrm{CO}_2$ (about present-day PAL) that life had to readjust.\n\nThe reason is that C3 photosynthesis, the dominant pathway for oxygenic photosynthesis in plants, becomes at first linearly less effective with declining $\\mathrm{CO}_2$ concentration , stopping in the end altogether once the $\\mathrm{CO}_2$ level falls below a certain threshold (which depends in turn on other parameters like humidity, oxygen level and the like). The recent, human-induced rise of atmospheric $\\mathrm{CO}_2$ has led conversely to an ongoing greening of earth . By 20\u2006Ma, with precursors starting around 30\u2006Ma, C4 photosynthesis was developed as a new pathway for photosynthesis by at least 66 different terrestrial plant lineages . The efficiency of C4 photosynthesis does not depend on $\\mathrm{CO}_2$ partial pressures, in contrast to C3 photosynthesis, and it is therefore believed that its evolution constitutes a response of the biosphere to a chronic shortage of atmospheric carbon dioxide .\n\nToday about 23% of the terrestrial NPP (net primary production of organic carbon) is due to C4 plants. During the last 0.4\u2006Ma, when the $\\mathrm{CO}_2$ level oscillated between 180\u2006ppm (during periods of extended glaciation) and 280\u2006ppm (during interglacials) , an additional expansion of C4 vegetation could be observed every time $\\mathrm{CO}_2$ levels dropped to 200\u2006ppm or below . One can hence regard the emergence of C4 photosynthesis as a turning point in the history of our planet, in the sense that earth's biosphere will be entrenched, from now on till the end, in a battle over the ever-declining amounts of recycled $\\mathrm{CO}_2$ pumped out by earth's progressively receding geothermal activity . The total amount of $\\mathrm{CO}_2$ remaining in the atmosphere today, $0.05\\times10^{18}$ mol at 280\u2006ppm, is actually so small that an individual $\\mathrm{CO}_2$ molecule will remain in the air for only 5-10 years . Ever-hungry plants are waiting . The residence times for $\\mathrm{O}_2$ and $\\mathrm{N}_2$ molecules are, on the other hand, 3\u2006Ma and 13\u2006Ma respectively.\n\n## Earth's lost billions\n\nEvolution is seldom dependent on singular events. This holds also for major evolutionary steps like the invention of multicellularity and C4 photosynthesis, which have been developed independently, as discussed in Sec.\u00a0 and , on at least 25 and 66 occasions respectively. Why did earth then take about one Ga to develop full-fledged bacteria, and another one for the eukaryotic cell? It could not have been for the lack of living organisms.\n\nIt has been estimated in this context that about $(9-32)\\times10^{29}$ bacteria dwell on earth nowadays, making up in turn a few percent of the overall biomass. Give or take a few orders of magnitude, we may assume that a similar number of microbes has populated earth since the inception of life, with the population density being somewhat smaller before the invention of oxygenic photosynthesis in the late Archean (compare Sect.\u00a0). $10^{29}$ organisms with a life cycle of hours to days harbor an enormous evolutionary potential, and a better understanding of the mechanisms tapping into this potential would hence greatly help to clarify our prospects of finding complex life elsewhere in the universe . 
The alternative view taken here is that habitable planets would have a much higher chance of bearing complex life if they were given a speedup of several Ga.

# Habitability that waxes and wanes

Transient habitability can be classified into four fundamental types (compare Fig. ):

- fading habitability, when initially clement conditions become progressively adverse,

- delayed habitability, if the planet becomes habitable only late in the lifetime of the host star,

- punctuated habitability, if the habitability is interrupted by relatively short periods of uninhabitable conditions, and

- intermittent habitability, whenever clement conditions stabilize only intermittently.

Punctuated habitability is in this context the type of habitability taking a prominent place in doomsday scenarios. There is however a difference between a catastrophic extinction and the temporary termination of habitability per se. For perspective we recall that the Permian mass extinction at 252 Ma, the biggest extinction event ever to occur on earth, erased only 80-96% of the marine and about 70% of the terrestrial vertebrate species. Biodiversity, together with overall trophic levels, had no problem recovering in the aftermath within 8-9 Ma.

It is actually surprisingly hard to construct a scenario in which a singular event wipes multicellular life completely off the surface of a planet. The 1-1000 sec gamma-ray bursts emitted during the collapse of a nearby massive star (possibly as frequent as once every 500 Ga in our galactic habitat) have been discussed extensively in this context, together with other conceivable high-energy astrophysical events. Such a burst would indeed sterilize half the planet, viz the exposed face, and in addition trigger a series of events which would deplete the ozone layer for a few months by an average of 40%. For comparison we note that the areas affected by the present-day Antarctic ozone hole, which itself has had depletion levels of up to 50%, have actually seen a reduction of terrestrial plant productivity of less than 6%.

## Stagnant-lid planets

Mantle and crustal reorganization processes are driven by the need to dissipate the internal heat. They often settle into steady-state convection patterns, with the two most important types being stagnant-lid and plate tectonics (compare Fig. ).

Plate tectonics ensures on earth the continuous recycling of carbon, which would otherwise be lost as a consequence of the inevitable sedimentation of biomass and of the ongoing carbonatization of the oceanic lithosphere. Carbon is released back to the atmosphere from the subducted ocean floor in part directly and in part indirectly, through further mantle convection, by arc volcanoes and by the mid-ocean ridges respectively. The oceanic lithosphere is thus fully renewed within 170 Ma, recycling on the way about $3.3\times10^{18}$ mol carbon per Ma. The sequestration of 100 bar worth of carbon dioxide through the burial of carbonated crust (compare Sect. ) was however achieved in the Hadean without the help of plate tectonics, which was then not yet operative.

Plate tectonics is the dominant but not the only type of tectonic activity present on our home planet.
Other known processes are mid-plate volcanism, such as the one causing the Hawaiian and Yellowstone hotspots, and the subcontinental overheating of magma (or the upwelling of mantle plumes), which is thought to be causal for the widespread basaltic flooding occurring in conjunction with the breakup of supercontinents.

Carbon recycling will be, in contrast, a much more dramatic process on planets with stagnant-lid tectonics, which do not possess a primary mechanism for the continuous recycling of $\mathrm{CO}_2$, but most probably only a discontinuous carbon cycle in the form of episodic basaltic overturns. The resulting fluctuations of atmospheric CO$_2$ levels will however not forestall habitability per se, at least as long as no runaway instability is induced. It has been suggested in this context that stagnant-lid planets may have a more vigorous mantle dynamics than planets with plate tectonics and that their volcanic activity may consequently abate somewhat faster. Stagnant-lid planets can therefore be expected to support clement conditions for extended but otherwise limited periods, and hence to be prime candidates for the Genesis mission.

Stagnant-lid planets may of course also be utterly uninhabitable whenever other factors do not allow it. The classical example is Venus, where the stagnant crust is punctured continuously by upwelling plumes, in part on a local and in part on a global scale. It is not known whether plate tectonics would eventually have set in had Venus not lost its ocean.

## Hill instability

A pair of planets is said to be Hill unstable when their orbits eventually cross due to their mutual gravitational interaction. The resulting massive orbital deformation (if not a direct collision) would terminate habitability for any planet located initially in a habitable zone. For an illustration of this process we consider a planetary system with two planets having masses $\mu_i$ ($i=1,2$, relative to the mass of the host star) and eccentricities $e_i$ (a measure of how elliptic an orbit is). The Hill stability line is then determined by $$\left(\mu_1+\frac{\mu_2}{\delta^2}\right)
(\mu_1\gamma_1+\mu_2\gamma_2\delta)^2 \ >\
\alpha^3 + 3^{4/3}\mu_1\mu_2\,\alpha^{5/3}~,
\label{eq:Hill_stability}$$ where $\alpha=\mu_1+\mu_2$ is the total relative mass and $\gamma_i=\sqrt{1-e_i^2}$. The distances of the two planets to the host star are taken to be unity and $1+\Delta$ respectively, with $\delta=\sqrt{1+\Delta}$.

Solving Eq. () for $e_2$, we have plotted in Fig. the stability region for an earth- and Jupiter-like planetary system with $\mu_2=300\mu_1$ and $\mu_2=1/1000$. The influence of $e_1$ is in this case so small that one can set $e_1\to0$. Overlaid are the parameters of 129 known exoplanets around G and F stars whose orbits would not cross the orbit of a putative rocky planet located at $1\,\mbox{AU}$. Note that the Hill instability line shown in Fig. has been evaluated only for an outer planet of Jupiter mass. It moves up/down for outer planets with smaller/larger masses.

Most exoplanetary systems detectable to date are dominated by super-Jupiter gas giants. It is hence not surprising that configurations allowing for Hill-stable habitable planets are rare. A certain time is however needed, the orbital lifetime, before the instability actually happens; a small numerical evaluation of the criterion () is sketched below.
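As a numerical illustration, the following Python sketch evaluates both sides of the criterion () and bisects for the critical outer eccentricity at a few separations $\Delta$. The masses correspond to the earth- and Jupiter-like case quoted above ($\mu_2=1/1000$, $\mu_2=300\mu_1$); the function names and the sample values of $\Delta$ are our own choices, and the snippet is only a sketch of the published stability analysis.

    from math import sqrt

    def hill_sides(mu1, mu2, e1, e2, Delta):
        """Left- and right-hand side of the Hill stability criterion quoted above."""
        alpha = mu1 + mu2
        delta = sqrt(1.0 + Delta)            # outer orbit at 1 + Delta, inner at 1
        g1, g2 = sqrt(1.0 - e1**2), sqrt(1.0 - e2**2)
        lhs = (mu1 + mu2 / delta**2) * (mu1 * g1 + mu2 * g2 * delta) ** 2
        rhs = alpha**3 + 3.0 ** (4.0 / 3.0) * mu1 * mu2 * alpha ** (5.0 / 3.0)
        return lhs, rhs

    def hill_stable(mu1, mu2, e1, e2, Delta):
        lhs, rhs = hill_sides(mu1, mu2, e1, e2, Delta)
        return lhs > rhs

    def critical_e2(mu1, mu2, Delta, e1=0.0):
        """Bisect for the outer eccentricity at which Hill stability is lost."""
        lo, hi = 0.0, 1.0
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if hill_stable(mu1, mu2, e1, mid, Delta) else (lo, mid)
        return 0.5 * (lo + hi)

    mu2 = 1.0 / 1000.0          # Jupiter-like outer planet
    mu1 = mu2 / 300.0           # earth-like inner planet
    for Delta in (1.0, 2.0, 4.0):
        print(f"Delta = {Delta}:  critical e2 ~ {critical_e2(mu1, mu2, Delta):.3f}")

Larger separations tolerate larger outer eccentricities before the two orbits can cross, which matches the qualitative behaviour one expects for the stability line in the figure.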
This orbital lifetime ranges from typically a few $10^4$ years, deep in the unstable region, up to a few $10^9$ years close to the Hill stability line. Fig. hence suggests that Genesis candidate planets may be found in systems with weak Hill instabilities.

The above considerations concerned exoplanets for which habitability is eventually terminated. The opposite may also happen, especially when two or more outer gas giants scatter dynamically. The resulting orbital deformations have been shown to move rocky planets closer to the host star, viz possibly from outside to inside the habitable zone (in the case that the initial distance was too large). A potentially large number of earth-size exoplanets may therefore be newcomers to their habitable zones, yet barren, and hence candidates for a Genesis mission.

## Lagrange instability

Genesis candidate planets may also be found in Hill-stable but Lagrange-unstable planetary systems. A massive orbital deformation occurs also in this case for one of the involved planets, this time however one which leads either to a collision with the central star or to an escape from the planetary system. Estimating the Lagrange lifetimes for a specific exoplanetary system is a demanding task and beyond the scope of the present study. We restrict ourselves therefore to a first assessment of whether very long Lagrange lifetimes may potentially exist.

For this purpose we consider the minimal time $T_L$ for a Lagrange instability to occur. $T_L$ has been estimated to scale roughly as $$\log_{10}\left(\frac{T_L}{T_1}\right) \ \sim\ 5.2\left(
\frac{\mu}{\mu_J}\right)^{-0.18}
\label{eq:langrangeMinimialLifetime}$$ for systems composed of two planets with an equal relative mass $\mu$ and eccentricities below $\sim0.3$. $T_1$ is here the orbital period of the inner planet and $\mu_J$ the relative mass of the solar-system Jupiter. This scaling has been derived for a configuration in which the relative orbital distance of the two giants is confined to within $[1+0.3(1+e_1)(1+e_2)]$ Hill limits. The two gas giants are hence assumed to be in a Hill-stable configuration close to the stability threshold.

In a Hill-stable three-planet system, with an inner rocky planet in the habitable zone and two outer gas giants, a Lagrange instability of the inner gas giant, occurring due to the mutual interaction between the two gas giants, may also throw the rocky planet out of its orbit and consequently out of the habitable zone. We have hence evaluated Eq. () for all the 91 exoplanets shown in Fig. having an eccentricity of less than $0.3$. That is, we have added by hand to each of these exoplanets a putative equal-mass companion located further out.

The distribution of Lagrange escape times resulting from this Gedanken-experiment is shown in Fig. . Small minimal Lagrange lifetimes clearly rule out the actual existence of a further-out (but close-by) equal-mass companion in the respective exoplanetary system. Fig. shows on the other hand that long Lagrange escape times would be present for suitable configurations of gas giants. Transient habitability resulting from long-term Lagrange or other orbital instabilities of the host exoplanetary system, like resonances (see Sect. ), may hence be common; the scaling () is evaluated for a few illustrative masses below.
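The scaling () can be evaluated directly. The short Python sketch below tabulates the minimal Lagrange lifetime $T_L$ for a few relative masses, assuming for illustration an inner orbital period of one year; the chosen masses and the Jupiter/Sun mass ratio used for $\mu_J$ are our own illustrative inputs.

    # Minimal Lagrange lifetime from the scaling relation quoted above:
    #   log10(T_L / T_1) ~ 5.2 * (mu / mu_J)**(-0.18)
    # (two equal-mass planets, eccentricities below ~0.3)

    MU_JUPITER = 1.0 / 1047.0   # approximate Jupiter/Sun mass ratio

    def minimal_lagrange_lifetime(mu, T1_years=1.0):
        """Minimal Lagrange lifetime in years for relative planet mass mu."""
        return T1_years * 10.0 ** (5.2 * (mu / MU_JUPITER) ** (-0.18))

    for label, mu in (("0.1 M_J", 0.1 * MU_JUPITER),
                      ("1.0 M_J", MU_JUPITER),
                      ("10  M_J", 10.0 * MU_JUPITER)):
        print(f"{label}:  T_L ~ {minimal_lagrange_lifetime(mu):.1e} yr")

A Jupiter-mass pair gives $T_L$ of order $10^5$ years, while a pair ten times lighter already reaches $10^7$-$10^8$ years, illustrating how strongly the minimal lifetime depends on the planetary masses.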
A detailed analysis of the orbital past and future of extrasolar systems therefore constitutes an important screening tool for prospective Genesis missions.

# The Genesis mission

A very large number of habitable planets may potentially exist in our galaxy. Current estimates range from about 0.06 habitable planets per sun-like star to 0.12-0.24 per M-dwarf. The problem is however the distance. There are only 9 extrasolar systems (containing 14 stars) within 10 light years (lyr) and about $14\times10^{3}$ stars within 100 lyr (as extrapolated from the Gliese and HIPPARCOS catalogue entries). Among these we may expect up to a few hundred potentially habitable planets and possibly up to a few dozen Genesis candidate planets.

## Genesis vs. interstellar exploration

Interstellar exploration via robotic or manned craft can be regarded as a long-term research investment aiming to increase our scientific knowledge regarding the geophysics, the habitability and the astrobiology of extraterrestrial planetary systems. An explorative mission would therefore only be realizable if the projected mission duration respected the maximal planning horizon of the funding institution, which in turn correlates with typical human life expectancies, say a hundred years. The Genesis project is however not for human benefit, and it is consequently irrelevant how long it would take for a Genesis craft to arrive at the target. A few millennia more or less would be negligible on evolutionary time scales.

- The absence of strict travel-time requirements allows in the first place for reduced cruising velocities. A Genesis probe would therefore need only comparatively modest financial and technical resources.

- The irrelevance of cruising times also allows one to consider time-consuming deceleration options, e.g. via magnetic and/or electric drag.

- Overall travelling times are however restricted on the technical side by the lifetime of the constituent components.

A Genesis probe targeting candidate planets within a radius of 100 lyr may hence take a minimum of $10^3-10^4$ years to arrive. The Voyager spacecraft, in comparison, need about $19\times10^3$ years to travel one light year. A close-up examination of the target planet, the first thing to be done, would then be followed by the decisive step, the autonomous decision whether to start the seeding sequence. The Genesis process would not be started if higher life forms were detected from orbit.

The alternative to in-situ decision making, waiting for a response back from earth, would rely on the long-term functioning of a trans-generational contract. Such long-term conditioning would however be a questionable prospect when considering humanity's history of political and social turmoil. In this light it is actually an appealing aspect of the Genesis concept that the craft can be designed as a one-shot, launch-and-forget project. The back-transmission of the in situ collected data (to anybody still listening) should notwithstanding be considered an integral part of the mission layout.

## Biosphere incompatibilities

Let us digress for a moment and ask the question: what may happen if humanity's dream of a spaceship load of human settlers setting foot on a second earth were to come true? In this case we would bring terrestrial life, microbes included, to a planet with a biosphere as rich as that of our home planet.
Both the alien biosphere and the invading fragment of the terrestrial biosphere would interpenetrate each other, and humanity would have started a non-reversible experiment whose outcome will most probably be determined by how universal the immune systems of the respective multicellular organisms are.

The reason is that all multicellular organisms, plants and animals alike, are vitally dependent on a functioning immune system for their defense against pathogenic microbes. Key to the functioning of an immune reaction is the recognition of 'non-self', which is achieved in turn by the ability of immune systems, at least on earth, to recognize certain products of microbial metabolism that are unique to microbiota. How likely is it then that 'non-self' recognition will also work for alien microbes?

Here we presume that general evolutionary principles hold, namely that biological defense mechanisms evolve only when the threat is actually present and not just a theoretical possibility. Under this assumption the outlook for two clashing complex biospheres becomes quite dire.

- In the best-case scenario the microbes of one of the biospheres will at first eat through the higher multicellular organisms of the other biosphere. Primitive multicellular organisms may however survive the onslaught through a strategy involving rapid reproduction and adaptation. The overall extinction rates, together with the respective recovery times of 1-10 Ma, could then be kept at levels comparable to those of terrestrial mass extinction events.

- In the worst-case scenario more or less all multicellular organisms of the planet targeted for human settlement would be eradicated. The host planet would then be reduced to a microbial slush in a Precambrian state, with considerably prolonged recovery times. The leftovers of the terrestrial and the indigenous biospheres may in the end coexist as 'shadow biospheres'.

It is then clear that exoplanets harboring multicellular life should be off-limits for Genesis missions. Expanding civilizations may find it equally unattractive to settle other life-bearing planets, a consideration relevant to the Fermi paradox. Biosphere incompatibilities may be generic.

## Ethics and planetary protection

Exoplanets already harboring a biosphere with multicellular lifeforms, be they primitive or advanced, would not be targeted by Genesis probes. The objective of the Genesis mission is after all to give life the chance to prosper in places where it does not yet have a foothold, not to invade and possibly destroy existing biospheres (see Sect. ). Regarding the ethics of such an endeavor one may ask whether it is legitimate to bring life to a planet which will in any case cease to be habitable in the foreseeable future. Here we take the stance that death is no less a part of the life cycle than birth, which is equivalent to saying that the value of being alive is not grounded in the avoidance of an inescapable death. We do acknowledge, however, that this is a viewpoint that will not be universally shared (somewhat related issues have been raised in the context of unlimited human life extension).

The situation becomes substantially more tricky when primordial life forms already exist on the candidate planet, in a stage either before or after the equivalent of the Archean genetic expansion (which occurred on earth by 3.3-2.9 Ga, see Sect. ).
The Genesis process could then lead to the destruction of a substantial fraction of the indigenous lifeforms and therefore to a flagrant violation of the current consensus regarding planetary protection. In contrast one may note that the microbes living on the old earth, be they bacteria or eukaryotes, have never enjoyed human protection. Ethical or other types of arguments in favor of protecting our terrestrial microbes are generally not voiced. Taking a deeper look one may argue that planetary protection draws its justification from two sources:

- The scientific benefit for humanity. Contaminating Mars or any other planet of the solar system with terrestrial microbes could ruin the possibility of studying non-terrestrial lifeforms. This argument does not apply to Genesis candidate planets, which are selected expressly for being out of range for in-depth science missions. Planetary protection will also break down, by the way, once the doors of a manned spaceship open on Mars.

- Independently evolved life constitutes a value per se. This argument is actually closely related to the core motivation of the Genesis project, namely that life as such is valuable.

It then boils down to the balancing of two options, given the prospect that the habitable lifespan of the host planet may be too short for the indigenous life to evolve complex life by itself - the very reason the planet has been chosen - whereas the further evolved Precambrian terrestrial life may be prepared to do so.

## Seeding with unicellular organisms

The simplest design for the seeding process would be from-orbit delivery via nano-sized reentry capsules, which could in turn be ejected backwards at high velocities, viz decelerated with respect to the orbital motion of the main Genesis probe, by a compact railgun. A minimal heat shield would then be enough to protect the content of the delivery capsules during the subsequent drop to the surface.

Key to the success prospects of the Genesis mission is the capability of the probe to use a databank of terrestrial genomes for the selection of the right mix of microbes to be synthesized in situ by the on-board gene laboratory. This brew of prokaryotes (bacteria) and unicellular eukaryotes will be optimized with respect to the requirements resulting from the geophysical conditions of the host planet. It could be advantageous for the Genesis probe to spread the seeding process over several centuries, albeit with slowly adapting mixtures of microbes.

The goal of the Genesis mission is to fast-forward the target planet to a Precambrian state (see Fig. ). Life would be given a head start consisting of a biosphere of unicellular organisms, from which it could further flourish and develop. Direct seeding with multicellular organisms would not be impossible per se, but it would be both substantially more complex and in part also questionable.

- A planet with substantial levels of $\mathrm{O}_2$ may be expected to develop complex life on its own. Planetary protection arguments, in conjunction with possible biosphere incompatibilities, would then dictate not bringing higher life forms to its surface (nor seeding it in the first place).

- A planet without $\mathrm{O}_2$, the most probable situation, could as a matter of principle also be seeded with multicellular life, but only with fully anaerobic animals of the type thought to dwell in earth's oxygen-free deep-sea environments (their cells are devoid of mitochondria, possessing however hydrogenosomes).
It is however questionable whether a passively dropping reentry probe could successfully deliver these sub-millimeter animals to their proper habitats.

Even though most eukaryotes are aerobic and hence dependent on free oxygen, a wide range of unicellular eukaryotes have adapted to anoxic environments. Seeding a planet devoid of free oxygen with eukaryotes will hence not pose a problem.

## Post-seeding evolution and the oxidation of the atmosphere

The mission of the Genesis project is to lay the foundations for a self-evolving biosphere. It is however clear that fine tuning will not be possible and that the primary seeding process will result at best in a highly unbalanced ecosystem of microbes. Global-scale 'ecological disasters', such as the uncontrolled blossoming of unicellular algae, are hence expected to occur initially in the post-seeding phase. The ecosystem should however self-stabilize relatively fast, say within a few thousand years. The further evolution will then depend on a flurry of parameters, like the initial concentration of atmospheric $\mathrm{CO}_2$, the average temperature, the eventual presence of continents and the overall level of hydrothermal activity.

It is presently not possible to estimate reliably how long it will take afterwards for the photosynthetically produced O$_2$ to accumulate in the atmosphere of a Genesis planet.

- Nearly all excess oxygen ever produced by earth's biosphere has been used to oxidize the crust (compare Fig. ), with less than 1/34 accumulating in the end in the atmosphere.

- Planets may have quite different crustal compositions in terms of the relative percentages of oxygen and reducing elements (see Table ).

  - Both the bare metallicity (the abundance of heavy elements) and the relative abundances of the heavy elements will be distinct for each planetary system.

  - The initial core-crust segregation will likewise depend on the conditions present at the time, like the overall mass of the planet and the amount of radioactive heating.

  - The post-segregation deposition of elements by comets and asteroids (see Sect. ) may also influence the composition of the crust.

- Non-biological processes like $$2\mathrm{FeO} + \mathrm{H}_2\mathrm{O}\ \to\ \mathrm{Fe}_2\mathrm{O}_3 + \mathrm{H}_2$$ contribute additionally to the net oxidation of the crust whenever the resulting H$_2$ molecule manages to escape to space.

About 60% of all atoms present in the crust of the earth are oxygen atoms, and one may wonder whether this percentage is already close to saturation, at least with respect to what may be achievable by geo-planetary processes. It would in any case be optimal if the oxidation of the crust through antecedent inorganic processes were already in an advanced stage. This could be expected to be the case for planets with delayed habitability and with elevated stratospheric $\mathrm{H}_2\mathrm{O}$ concentrations, allowing in turn for hydrogen to escape into space (see the analogous discussion in Sect. ).

A constant and hopefully high flux of minerals and $\mathrm{CO}_2$ is a general precondition for a Genesis planet to develop a high bioproductivity, and hence a prerequisite also for a potentially rapid rise of atmospheric oxygen levels.
We note here that, on today's earth, about $11 \times 10^{18}$ out of the $8700 \times 10^{18}$ mol of organically produced C per Ma are buried in continental sediments, from where the carbon is recycled through carbonate weathering within about 350 Ma (compare also Fig. ). With about $37 \times 10^{18}$ mol O$_2$ present in the atmosphere, this implies that an atmosphere's worth of O$_2$ is produced via organic CO$_2$ fixation every $37/11=3.4$ Ma.

The balance between biotic oxygen production and the weathering of continental carbon deposits that holds on earth will however not be established immediately on planets not yet possessing extended carbon sediments. The pace at which the atmospheric O$_2$ level rises then depends on the overall bioproductivity and on the amount of oxygen lost to the crust. An appreciable level could be achieved relatively fast, within 10-100 Ma, if comparatively small amounts of oxygen were lost, viz whenever the crust of the Genesis planet were already in an advanced state of oxidation. The initial surge of oxygen levels would be rebalanced in this scenario only once the weathering rates had caught up.

Most habitable planets will probably take of the order of a few Ga to acquire an oxygen-bearing atmosphere, if they ever do. We are however confident that substantially shorter time scales may be achievable under optimal conditions, as discussed above, and that we will hence be able to find Genesis candidate planets for which an initial seeding would initiate a geo-evolutionary process leading to the subsequent emergence of complex and multicellular life. Our best-case estimates however still exceed human planning horizons by many orders of magnitude, implying that the Genesis process is intrinsically unsuited for preparing a barren planet for an eventual human colonization.

# Conclusions

Today's scientific environment is made up of a diverse mix of emerging and mature fields, characterized respectively by swift and lackluster rates of progress. The sluggish progress of traditional space-launch technologies contrasts here, e.g., with the rapid advances in synthetic biology. Transformative concepts are hence critical for reigniting innovation in science and technology time and again. It has been proposed in this context that robotic interstellar missions of low-weight craft accelerated by beams of directed energy will become realizable, both technically and financially, in the near future. At the same time we are discovering that planetary habitability is not an all-or-nothing feature characterizing exoplanets. Our galaxy is expected in particular to teem with planets which are in part habitable, but for which the clement conditions do not last long enough for higher life forms to evolve on their own.

Reversing the argument, we have pointed out in this study that complex life may also emerge on transiently habitable exoplanets whenever the extraordinarily long time it took earth to develop eukaryotic cells can be leapfrogged. We have argued furthermore that this could be achieved by a light-weight interstellar craft using a robotic gene laboratory for seeding the target exoplanet with a brew of in situ synthesized microbes. By the end of the mission, which we call the Genesis project, a Precambrian and hopefully thriving biosphere of unicellular organisms would flourish on the candidate planet.
Complex life in the form of multicellular animals and plants will evolve autonomously at a later stage, once the photosynthetically produced oxygen has had time to accumulate in the atmosphere.

One of the key issues remaining to be settled at this stage regards the selection procedure for target planets. Remote sensing of exo-planetary biosignatures from earth is possible, albeit only to a certain degree. An even more daring task would be to actually prove that a world is uninhabited. It is hence clear that the final decision to go ahead must be taken autonomously by the on-board artificial intelligence. This may seem an imprudent strategy nowadays, but possibly not so in a few decades.

The Genesis mission is furthermore unique in the sense that the actual cruising velocity is of minor importance. It could be launched with the help of suitable beams of directed energy and decelerated on arrival by time-consuming passive means like magnetic sails. We hence believe that the Genesis project opens a new avenue for interstellar missions and for the unfolding of life in our galactic surroundings.

# AN INVITATION TO ALGORITHMIC INFORMATION THEORY[^1]

G. J. Chaitin  
IBM Research, P.O. Box 218, Yorktown Heights, NY 10598  
email@example.com  
http://www.research.ibm.com/people/c/chaitin

## Abstract

I'll outline the latest version of my limits of math course. The purpose of this course is to illustrate the proofs of the key information-theoretic incompleteness theorems of algorithmic information theory by means of algorithms written in a specially designed version of LISP. The course is now written in HTML with Java applets, and is available at http://www.research.ibm.com/people/c/chaitin/lm. The LISP now used is much friendlier than before, and because its interpreter is a Java applet it will run in the Netscape browser as you browse my limits of math Web site.

## Introduction

Hi everybody! It's a great pleasure being back in this beautiful, beautiful state. You guys were nice enough to invite me up a bunch of times in the past. And I've usually tried to explain general ideas. I thought this time I'd try to give a different kind of a talk.

I've been working for several years on a project, on a course I call "The Limits of Mathematics." And this course in the past three years has gone through a number of different versions. It has a lot of software attached to it. And I've been changing this software. I've been trying to explain this software to people as the course. In fact a few weeks from now I'll be giving this course in Rovaniemi, Finland, on the Arctic Circle, for two weeks.

The way this course is now is that it's on this Web site that I gave you guys the URL for in the abstract. It's an HTML document and it has a lot of LISP code hanging off of it, and just to make life worse this is not normal LISP. It's pretty close to pure LISP, you know, the heart of LISP that everybody learns. But I've had to add a few features to LISP.
So this stuff is all available and you're welcome to play with it. The course is there.\n\nWhat's also there by the way on my Web site are the transcripts of the previous talks I've given here at the University of New Mexico. One of them is \"The Berry Paradox,\" it's in HTML. Another one is called \"Randomness in Arithmetic and the Decline and Fall of Reductionism in Pure Mathematics.\" That's another talk that I gave here. And there's a talk that I should have given here but it didn't go well. The version I gave at the Santa Fe Institute is there. It's called \"How to Run Algorithmic Information Theory on a Computer.\" So these are three very understandable papers and I give them out when I give this course.\n\nNow there is a small question: Once a paper is published in a journal are you allowed to have it anymore on your Web site? At my lab at this moment the view is that you should erase it. Los Alamos has a wonderful physics preprint server and they don't erase anything. And all over the academic community everybody has all their key papers on their Web site even after they're published, right? So I don't know. At this moment all this stuff is on my Web site. Some of it may disappear temporarily while the matter is discussed.\n\n## Recursive function theory revisited\n\nMy work basically is doing recursive function theory at the kind of level it was done in the early days in the 1930's before it got technical. At the kind of level that G\u00f6del and Turing were doing it where the concern was what are the limits of mathematical reasoning. And basically I've added two new things. This is sort of revisited sixty years later, this stuff. I've added two new things.\n\nOne new thing I add to the stew is that an idea that was missing which is significant is the idea of program-size complexity, or how big a program is, how many bits of information there are. Not the time! In that sense it's just like the old recursive function theory and Turing machines. I don't care about time. But I do care about the size of a program in bits. Okay? So that's one new idea.\n\nThe other new idea I've added just in the past three years is that I want to really be able to run the programs on interesting examples on a computer. In other words, it's not enough to wave your hands and talk about an algorithm. I not only want a universal Turing machine that I can prove theorems with, but one that is fun to actually program and that you can run interesting examples on with a computer.\n\nJohn McCarthy invented LISP not just for use in AI \\[artificial intelligence\\], not just for use as a practical language. If you look at his 1960 paper on LISP in the *Communications of the ACM,* he said, \"This is a better universal Turing machine. Let's do recursive function theory that way!\"[^2] And the funny thing is that nobody except me has really I think taken that seriously. And the reason of course is that theoreticians didn't really much care about programming, about really playing with the computer. So the thing I've added is I think nowadays you have no excuse! If a theory has to do with the size of computer programs, you want to damn well see the programs, you want to be able to run them, it ought to be a programming language that is easy to use.\n\nSo I've done that using LISP because LISP is simple enough, LISP is in the intersection between theoretical and practical programming. Lambda calculus is even simpler and more elegant than LISP, but it's unusable. 
Pure lambda calculus with combinators $A$ and $K$, it's beautifully elegant, but you can't really run programs that way, they're too slow. So I would say that I believe that LISP is a powerful language that can actually be used on a computer to run stuff in a finite amount of time, but on the other hand it's simple enough and elegant enough that you can prove theorems about it. So that's the kind of game that I've been playing. Okay, so you'll find this on my Web site.\n\nAnd as general background then I suggest these three papers: \"The Berry Paradox,\" which was published in the first issue of *Complexity* actually. This other lecture on \"The Decline and Fall of Reductionism...\" which was published, among other places, by Oxford University Press in *Nature's Imagination,* that's the name of the volume. Oh there's also a paper called \"A New Version of Algorithmic Information Theory,\" which is more technical, which was published in the latest issue, the current issue, the fourth issue of *Complexity* magazine, the one which just came out. And the paper \"How to Run Algorithmic Information Theory on a Computer\" which kicks around these ideas is going to come out in a future issue of *Complexity.* But for now they're all on the Web site, until they get thrown out!\n\n## My new LISP & complexity measure\n\nOkay, so let me leap in and give you the general idea. What is this big new thing you add to LISP, this change you make to LISP to be able to talk about program-size complexity? Well, you could just take LISP expressions and measure how big are they in characters, right? That's not a bad complexity measure. But it's not the best complexity measure.\n\nThe best complexity measure says, you have a LISP S-expression, which is a computer program, and then you have binary data on the side:\n\n lisp binary\n S-exp data\n\nThe problem with measuring the size of a LISP program is that due to LISP syntax the LISP S-expression is redundant. So you really want to add to the LISP S-expression, which is a powerful way to express an algorithm, some raw binary data. That's like having data on a tape in addition to your program. So in the data the bits can be 0 and 1 independently, and there's some way that the program can get access to the data, and then you look at the size of the whole thing.\n\nSo basically the universal Turing machine I'm using as my yardstick to talk about program size, to use to measure program size, has programs which are big binary programs:\n\n BINARY PROGRAM\n lisp binary\n S-exp data\n\nThat sounds bad, right? We have this awful machine language in binary. You're not going to want to write in binary. Well, it's not so bad. The beginning of a program is a LISP S-expression converted to binary, 8 bits per character. And you know where the LISP S-expression ends, because by convention there is always a special ASCII character at the end.\n\n BINARY PROGRAM\n lisp binary\n S-exp data\n 8 bits\/char\n\nSo the universal Turing machine starts reading the binary tape bit by bit. It doesn't sort of gobble it all in and start running. It reads each bit of the program as it needs to\u2014that's very important. And it starts off by reading a LISP S-expression, and then every time the LISP S-expression says \"Read another bit,\" it gets it from what's left on the tape, that is, in the binary program. And if the S-expression attempts to read beyond the end of the binary data, the program is a failure and is aborted. So that's how it goes. 
And then you just add the total number of bits, which is 8 times the number of characters in the S-expression plus the number of bits in the data. That gives you the total number of bits in the program, okay?\n\nSo that's done because just using the size in characters of a LISP S-expression (or multiplying it by some factor to have it in bits) is no good. LISP syntax is pretty simple, but even so there's redundancy there. So you've got to add this extra binary data on the side.\n\n## Programming my UTM in LISP\n\nSo let me show you my universal Turing machine programmed out in my new LISP. When I came here a year ago, I had a LISP, it was pretty awful. It was a LISP where atoms and variable names were only one character long. And it was a LISP where there was no arithmetic. The way you had to do integers was as lists of 0's and 1's and you had to define, you had to program out arithmetic. I thought this was cute, this is a fun game to play once in your life, but the problem is it's an obstacle if I want to explain this to anybody else! So now I have a LISP like all other LISP's with long names for atoms, and my universal Turing machine is now programmed like this.\n\nDefine $U$ of $p$, $U$ is the function, $p$ is the program, as follows:\n\n define (U p)\n cadr try no-time-limit\n 'eval read-exp\n p\n\nNow this is it, this is my universal Turing machine that I described in words. `Try` is a primitive function in this LISP sort of like `eval` in normal LISP. `Try` has three arguments, here they are:\n\n no-time-limit\n 'eval read-exp\n p\n\nSo this, argument 2,\n\n 'eval read-exp\n\nis a LISP expression that we're going to evaluate with a time limit. In fact there's no time limit here:\n\n no-time-limit\n\nYou also have associated binary data. `Try` is the way that you give a LISP expression binary data. So this\n\n p\n\nis the binary data, it's the program for the universal Turing machine. It's a list of 0's and 1's. This is `try`'s third argument.\n\nOh by the way, in normal LISP you have to put lots of parentheses. In my LISP I leave out a lot of them because in my LISP every built-in function has a fixed number of arguments, so I don't bother to write all the parentheses.\n\nBack to the LISP code for my UTM.\n\n (read-exp)\n\nis a primitive function with no arguments. So we're trying to evaluate this\n\n ('(eval(read-exp)))\n\nnormally with a time limit, but in this case with no time limit. And `read-exp` reads a LISP expression from the beginning of the binary data `p`. And then we evaluate it, in other words, we run it. And while you're running it, if within the LISP expression that you read you ask for more binary data, you get it from what's left of the binary data.\n\nIt works! `Try` does exactly what it should, I added `try` to normal LISP just for this. And `try` is the real difference between normal LISP and my LISP. Okay, so this\n\n (define (U p)\n (cadr (try no-time-limit\n ('(eval(read-exp)))\n p)))\n\nis my universal Turing machine, which I think is not a very big, complicated program.\n\nAnd the nice thing about this is that you see lots of examples in my course. You have there a Java applet for the LISP interpreter. Which means you can write LISP and see what it does. But most of the course is \"canned\" LISP. What I do is I define some big LISP functions then I run them on examples. 
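As an aside for readers following along in another language, here is a toy sketch in Python (not in the course's LISP) of the bit-accounting and read-on-demand conventions described above: a program is an 8-bit-per-character S-expression header followed by raw binary data, read one bit at a time, with the run aborted if a read goes past the end. The class and function names are invented for this sketch.

    class BitStream:
        """Binary data read one bit at a time; reading past the end aborts the run."""
        def __init__(self, bits):
            self.bits = list(bits)
            self.pos = 0

        def read_bit(self):
            if self.pos >= len(self.bits):
                raise RuntimeError("program ran off the end of its binary data")
            bit = self.bits[self.pos]
            self.pos += 1
            return bit

    def encode_sexp(sexp):
        """8 bits per character, mirroring the convention described above."""
        return [int(b) for ch in sexp for b in format(ord(ch), "08b")]

    def program_size_in_bits(sexp, data_bits):
        """Total program size: 8 * (characters in the S-expression) + (data bits)."""
        return 8 * len(sexp) + len(data_bits)

    header = "(eval(read-exp))"          # toy header, 16 characters
    data = [1, 0, 1]                     # three bits of raw data
    print(program_size_in_bits(header, data))   # 8*16 + 3 = 131 bits
    stream = BitStream(encode_sexp(header) + data)
    print(stream.read_bit())             # the machine reads the tape bit by bit

None of this evaluates LISP, of course; it only mirrors how the size of a program is counted and how the binary data is consumed.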
And there are lots of comments saying why I'm doing it and what it proves.\n\nActually there are two versions of each of these canned LISP runs, one of them with no comments and running it just on the case that counts, and another one with all the comments in the world and with lots of test cases for all the auxiliary functions. I do this because without comments a program is incomprehensible, but when you put all the comments in it becomes incomprehensible for another reason\u2014it's so big that you can't find anything. So I present it both ways.\n\nOkay, so one of the things you can do is you can actually get a program and run it on this machine. So how do you do it? Well, you take a LISP expression, and then you want to convert it to binary, so I have a primitive function for doing that, and then you concatenate it to the binary data that it needs, and then you feed this into the function $U$ of $p$ that we defined. And in my Web site I have runs of the universal Turing machine that show how it works.\n\n## A simple program for U that proves a theorem\n\nLet me show you a particularly interesting example of this, and I'm also going to prove a theorem, a very simple theorem about my version of program-size complexity. This theorem says that the program-size complexity of a pair of objects is bounded by the sum of the individual program-size complexities of the two objects plus a constant: $$H(x,y) \\le H(x) + H(y) + c .$$\n\nWhat this means is this. $H(x)$ is the size in bits of the smallest program for the universal machine $U$ of $p$ that calculates $x$; $x$ is an S-expression. And $H(y)$ is the size of the one that calculates $y$. And putting them together and adding a fixed number of bits you get a program that calculates the pair $(x,y)$, which won't have a comma by the way, since we're doing LISP, which is without commas.\n\nSo how does this work? Well, let me show you what the program looks like. What you do is you take the minimum size program for $x$, the minimum size program for $y$, and I'll call that $x^\\ast$ and $y^\\ast$, and you concatenate in front a prefix $\\phi_c$, a magic prefix that's going to be $c$ bits long, and I'll show you this prefix. This gives you $$\\phi_c \\, x^\\ast y^\\ast .$$ Then when you give it to the universal machine $U$ of $p$ it's going to produce the pair $(x,y)$.\n\nNow what is this prefix $\\phi_c$ that you're concatenating in front of these two bit strings $x^\\ast$ and $y^\\ast$? Actually you can put any program for $x$ and any program for $y$ here in $\\phi_c \\, x^\\ast y^\\ast$. I don't know the minimum ones. You'll see that this prefix works because if you give it any program $x^\\ast$ to calculate $x$ and any program $y^\\ast$ to calculate $y$, this prefix $\\phi_c$ when you feed it to my universal machine $U$ of $p$ is going to end up being a program that calculates the pair $(x,y)$.\n\nSo what is this prefix $\\phi_c$? Well, the prefix is this. Start with\n\n (cons (eval (read-exp))\n (cons (eval (read-exp))\n nil))\n\nHere I'm putting in all the parentheses. Then convert this to a bit string $\\phi_c$. So you write this expression and use a primitive function to convert it to a bit string and then you use `append` to concatenate these lists of bits and get this long bit string: $$\\phi_c \\, x^\\ast y^\\ast .$$ And then you feed it into $U$ and it produces $(x,y)$.\n\nHow does this\n\n (cons (eval (read-exp))\n (cons (eval (read-exp))\n nil))\n\nwork? 
It says, \"Read from the binary data.\" The binary data when $\\phi_c$ is running is going to be $x^\\ast y^\\ast$, the rest of the program. So it's going to read and run the program $x^\\ast$ to calculate $x$, and that's going to give it $x$. And then it's going to read off of the rest of the binary data and run the program $y^\\ast$ to calculate $y$, and that's going to give it $y$. And then it's going to `cons` these two things $x$ and $y$ up into a pair. Okay?\n\nSo it's the number of bits in this S-expression $\\phi_c$\n\n (cons (eval (read-exp))\n (cons (eval (read-exp))\n nil))\n\nwhich is the constant $c$ in $$H(x,y) \\le H(x) + H(y) + c .$$ So count the number of characters here in $\\phi_c$, multiply by 8, and that's the constant $c$ in this theorem. Okay? So this is pretty straight-forward.\n\n## \"Computing\" the halting probability $\\Omega$\n\nNow, what do I do next in my course on my Web site? Okay, so I've defined this universal Turing machine $U$ of $p$. The next major step in my theory is to define the halting probability $\\Omega$. This is a number that a lot of people get excited about \\[laughter\\], the halting probability. Well, I'm going to write down a program to calculate the halting probability now. Let me actually write it down for you! Here it goes. Let's write this program out. So I'm going to write a LISP program as we go.\n\nOkay, so we're going to define a function that counts how many programs halt, let's call it `count-halt`, which has two arguments, one is a `prefix`, and the other is `bits-left`.\n\n define (count-halt prefix bits-left)\n\nWhat's the idea? I'm going to count how many program for my universal Turing machine halt... Oh, I should put in time... Let me put in `time`, there's another argument here:\n\n define (count-halt time prefix bits-left)\n\nSo I'm going to count how many programs halt on my universal Turing machine that have this `prefix` with `bits-left` bits added on afterwards, within this amount of time. So this `count-halt` is the auxiliary function that I use. Okay?\n\nSo the first thing to show you is how you get from this the halting probability. So before I define the auxiliary function, let me show you how to define a function `omega`, the main function, that uses this auxiliary function. It's only one main function with one auxiliary function. Let's call it `omega` of $n$:\n\n define (omega n)\n\nThis is the $n$th approximation, the $n$th lower bound on the halting probability. As $n$ goes to infinity, this will give you the halting probability in the limit from below. The problem is it converges very, very, very, very slowly! \\[Laughter\\] Noncomputably slowly! Thanks for the laughs.\n\nLet's define this function. Well, what I want to do is I want to ` cons` up a rational number. And the way I do it is, I count how many programs halt within time $n$ that have as prefix `nil`, the empty bit string, and that add $n$ bits to it.\n\n define (omega n)\n cons (count-halt n nil n)\n\nAnd I `cons` that with division.\n\n define (omega n)\n cons (count-halt n nil n)\n cons \/\n\nAnd `cons` that with, I use `^` for power, 2 raised to the power $n$, and `nil`.\n\n (define (omega n)\n (cons (count-halt n nil n)\n (cons \/\n (cons (^ 2 n)\n nil))))\n\nThis depends on the representation you pick for rational numbers; I just write out a fraction. And so `(omega n)` is a triple, I'm creating a triple. So it's (the number of programs $n$ bits in size that halt within time $n$) divided by $2^n$. Okay?\n\nSo that's the main function. 
Now let me define the auxiliary function. The auxiliary function goes like this. You start off with the empty prefix `nil` and recursively you start adding bits until you get a full $n$-bit program and then you run it for time $n$ and you see whether it halts. Now how do you program that out in LISP? Well, recursively it's very easy, you go like this.\n\nLet's first see if `bits-left` is not equal to 0. If ` bits-left` is greater than 0, then add a 0 bit to the prefix and subtract one from the number of bits left, see how many of these programs halt within time $n$, and add that to the same thing when you `append` a 1 to the prefix instead of a 0.\n\nSo we've got this:\n\n define (count-halt time prefix bits-left)\n if > bits-left 0\n + (count-halt time (append prefix '(0)) (- bits-left 1))\n (count-halt time (append prefix '(1)) (- bits-left 1))\n\nI'll put parentheses in occasionally and other times not. \\[Laughter\\] Well, actually in my LISP you don't put them in for primitive functions and the interpreter adds them, and writes it out for you just so that you can check that it's what you meant.\n\nSo this is the recursion. This is going to make `count-halt` look at all $n$-bit programs. And what happens when it finally has no bits left to add? It started off with $n$ bits to append to the prefix, and with the empty prefix. So now you've finally got an $n$-bit string, you get all possible $n$-bit strings. So then you take the $n$-bit string, and you `try` for time $n$ `eval read-exp` applied to the $n$-bit data string which is the prefix. If this ` try` is a `success` then `count-halt` is a 1 otherwise it's a 0.\n\nSo now we've got this:\n\n define (count-halt time prefix bits-left)\n if > bits-left 0\n + (count-halt time (append prefix '(0)) (- bits-left 1))\n (count-halt time (append prefix '(1)) (- bits-left 1))\n if = success car (try time\n 'eval read-exp\n prefix)\n 1\n 0\n\nThis recursion is going out and seeing whether all possible $n$-bit programs halt or not. When you've got the whole $n$-bit program, how do you see whether it halts or not? You `try` running it for the time that you were given, which is going to be $n$ for `omega` of $n$. And this part\n\n (try time\n 'eval read-exp\n prefix)\n\ndoes that. It's taking the prefix as the S-expression and the binary data of a program for my universal Turing machine. It's reading an S-expression off of the beginning of the prefix and it's running the S-expression with the rest of the prefix as binary data. Okay?\n\nI don't really have the time to explain this properly. If you were giving this to students in a class you'd probably take a whole class to explain this. And then the students would go and they would play with it and they would use it. And if you look at my Web site you'll find that I'm running `count-halt` on examples to show that it works. I also have a different definition of `omega` of $n$ which is more traditional, which builds up the list of all $n$-bit strings, and then sees which halt, and then counts how many halt. But I think that what I have here is more elegant, it's really all you need.\n\nLet me say it in words again.\n\nWhat you see here in this version of LISP is the $n$th lower bound on $\\Omega$. With increasing positive integers $n$, `omega` of $n$ gives you better and better lower bounds on the halting probability. The problem is that you never know how far out to go to get the halting probability with a given degree of accuracy. 
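For readers who want to play with the shape of these two functions without the LISP interpreter, here is a toy analogue in Python. The recursion over prefixes mirrors count-halt and omega_lower_bound mirrors (omega n); the halting test, however, is a made-up stand-in (a trivial toy rule), since the real thing needs the universal machine U described above, so the numbers it prints say nothing about the real Omega.

    from fractions import Fraction

    def toy_halts(program_bits, time_limit):
        """Stand-in for running a program on the universal machine U.

        NOT Chaitin's machine: as a made-up toy rule, a program 'halts within
        time t' here iff a 1 occurs among its first t bits.
        """
        return 1 in program_bits[:time_limit]

    def count_halt(time, prefix, bits_left, halts=toy_halts):
        """How many extensions of `prefix` by `bits_left` bits halt within `time`."""
        if bits_left > 0:
            return (count_halt(time, prefix + [0], bits_left - 1, halts) +
                    count_halt(time, prefix + [1], bits_left - 1, halts))
        return 1 if halts(prefix, time) else 0

    def omega_lower_bound(n, halts=toy_halts):
        """n-th lower bound: (n-bit programs halting within time n) / 2**n."""
        return Fraction(count_halt(n, [], n, halts), 2 ** n)

    for n in range(1, 6):
        print(n, omega_lower_bound(n))   # a monotonically increasing sequence

With the toy rule the lower bounds are simply 1/2, 3/4, 7/8, ...; with the real machine the sequence still increases, but, as explained next, it converges noncomputably slowly.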
To get $n$ bits of the halting probability right, you have to go out Busy Beaver function of $n$, that is basically the idea.\n\nYou look at all $n$-bit programs, as prefix you start off with the empty list of bits, and the bits left to add is $n$ initially, and you're going to count how many of them halt within time $n$. That's what\n\n (count-halt n nil n)\n\ndoes.\n\n## Discussion of the program for $\\Omega$\n\nLet's look at this conceptually and let's look at it pedagogically. Pedagogically, I certainly learned a lot writing this program, right? I had to invent the programming language, I had to write the interpreter! So I learned a lot from this. Is this a good way to transmit an idea to a student? I don't know! Reading somebody else's code is not great, right?\n\nYou really understand an algorithm when you've programmed it and tried it on examples and debugged it yourself. So I think that to use this in a class effectively you need exercises. You need to ask the students to do variations on what you've presented, so they make it their own. I don't do this here. This is a very concise version of the course.\n\nIf someone really wanted to use this as a course, they could start with this, which it is at least feasible that someone could understand. Before it wasn't. I could understand it. A few very committed people could sort of try to think they understood it, but I think there's more of a chance with this, it looks more like normal LISP. So that's from the pedagogic point of view.\n\nNow let's look at this philosophically. The point is this. This program is $\\Omega$'s definition. This is really it. So if this number is going to be very mind-boggling, you'd like to pin down very concretely how it's defined. And we have.\n\nYou'd like to pin down very concretely how $\\Omega$'s defined before you start being astonished by it. Here $\\Omega$ is really very concrete and you can actually run `(omega 0)`, `(omega 1)`, `(omega 2)`, ... Of course the time grows exponentially as $n$ goes up, but maybe you can get to one interesting example. You can try variations of this program to convince yourself that you understand it. And you can try debugging `count-halt` separately by running examples to show that it works. That's how to convince yourself that this works: run lots of examples. I haven't tried giving a formal proof that this program is correct. I wonder if that would be an interesting project?\n\nOkay, now I once tried explaining this to an astrophysicist who amazingly enough was briefly interested \\[laughter\\], a very bright guy. He was so bright that he could understand the previous version of the program for $\\Omega$, the one in which the names of built-in functions are a single character, and where you had to program out arithmetic, you didn't have positive integers like we do here. He took the definition of `(omega n)` and `count-halt`, printed them on a small sheet of paper, folded it up and put it in his wallet. He said that it was like a mantra, \"I have $\\Omega$ in my pocket!\" But it was $\\Omega$ for him only because he was very bright and he took the trouble to go through this course with me. I think that this new version is much easier to understand than the one that he put in his pocket.\n\n## Proof that $\\Omega$ is algorithmically irreducible\n\nOkay, so what is the next thing that one does in this course on the limits of mathematics? I've tried to throw out all the inessential stuff and just give the main ideas. 
And now we'll start to see why this number $\\Omega$ is significant.\n\nWell, the next thing I do in the course is I point out that ` (omega 0)`, `(omega 1)`, `(omega 2)`, ... is a monotone increasing sequence of rational numbers. So, this isn't constructive, but we all know that there is a real number which is the least upper bound of this sequence. So `(omega n)` defines in a nonconstructive way a real number $\\Omega$. It's almost constructive. It's pretty close to being constructive. Really constructive would be if you could calculate each digit of $\\Omega$, right? I'm getting better and better lower bounds, the only thing I don't have is I don't have a way to calculate how far out to go to get a given degree of accuracy.\n\nOkay, so this defines a real number that I call $\\Omega$. Now let's imagine this number being written in base-two binary, so there's going to be a 0, a binary point, and then there're going to be some bits, right? $$\\Omega = 0.011101\\ldots$$ And let's think of this bit string as a LISP S-expression, it's going to be a list of 0's and 1's separated by blanks. And let's define $\\Omega_N$ to be the first $N$ bits of this, the LISP S-expression for the first $N$ bits of $\\Omega$. It's a list with $N$ elements.\n\nThe theorem you want to show is that $H(\\Omega_N)$ is pretty complicated, that there's a lot of information in those first $N$ bits of $\\Omega$. In fact it's algorithmically irreducible. You cannot compress the first $N$ bits of $\\Omega$ into a program substantially less than $N$ bits in size. The exact result is that $$H(\\Omega_N) > N - 8000 .$$ This turns out to be the case.\n\nWhy is there a lot of information in the first $N$ bits of $\\Omega$? It's not difficult to see why. If you knew the first $N$ bits of $\\Omega$, it solves the halting problem. It would enable you, slowly, but it would enable you to solve the halting problem for all programs up to $N$ bits in size. So once you do that, you run all the programs up to $N$ bits in size that halt, you see what they calculate, and then you just put what they calculate together into a list. That gives you something which cannot be done by a program less than or equal to $N$ bits in size.\n\nAnd it takes 8000 bits to explain how to do this. There's a prefix $\\phi_{8000}$ which is 8000 bits long that if you put it in front of a program that calculates the first $N$ bits of $\\Omega$, this gives you a program to calculate something whose complexity is greater than $N$. This shows rather concretely that this inequality $$H(\\Omega_N) > N - 8000$$ has to follow. Okay?\n\nSo I'm giving my universal Turing machine $U$ a program which starts with an 8000-bit prefix $\\phi_{8000}$. That sounds like a lot, but in fact it's only a thousand characters of LISP. And then next to it I concatenate any program $\\Omega_N^\\ast$ for calculating the first $N$ bits of $\\Omega$, if somehow I had one. I don't know where to get one! So this is what we've got: $$U( \\phi_{8000} \\, \\Omega_N^\\ast ) .$$ And then you run this and what the machine $U$ does because of $\\phi_{8000}$ is this. First it reads in and runs $\\Omega_N^\\ast$ to calculate the first $N$ bits of $\\Omega$. Then it uses `(omega n)` to get better and better lower bounds on $\\Omega$ until $\\Omega_N$ is correct, until it gets the first $N$ bits of $\\Omega$ right. 
God, or an oracle, is giving us the program for the first $N$ bits of $\\Omega$!\n\nOnce $U$ has calculated `(omega n)` for $n$ high enough that the first $N$ bits of $\\Omega$ are correct, at that point it's seen all $N$-bit programs that are ever going to halt. In fact, if they ever halt, they halt within time $n$.\n\nAnd then $U$ sees what these programs produce, and it forms the list of the output of all programs up to $N$ bits in size that halt. And to finish up everything, $U$ outputs this list and halts. In other words, this list is the value of $$U( \\phi_{8000} \\, \\Omega_N^\\ast ) .$$ It's precisely the list of all LISP S-expressions $x$ with program-size complexity $H(x) \\le N$.\n\nThis list cannot itself have program-size complexity $\\le N$, because it can't be contained within itself. So it can't be the output of a program less than or equal to $N$ bits in size, it's got to have program-size complexity greater than $N$. Therefore this program $\\phi_{8000} \\, \\Omega_N^\\ast$ for producing the list $$U( \\phi_{8000} \\, \\Omega_N^\\ast )$$ must be greater than $N$ bits in size. In other words, (the size of any program that calculates the first $N$ bits of $\\Omega$) plus 8000 has got to be greater than $N$. So you get this inequality $$H(\\Omega_N) > N - 8000 .$$ Okay?\n\nAnd in my Web site I actually show you this program. I program out in LISP the algorithm that I just described in words. You program it out, it's not a big deal, $\\phi_{8000}$ is about one page of LISP, of my new LISP. But if you put in a lot of comments and run lots of examples then it's more than one page. Okay?\n\nMartin Gardner had an explanation of this algorithm in an article in *Scientific American* on $\\Omega$ that you'll find in one of the Martin Gardner collections. In my Web site this algorithm is actually written out in LISP and you can run it on some examples. You can certainly run the auxiliary functions and other pieces of the algorithm and convince yourself that it works. But when you put it all together, you're not usually going to be able to run it on the real data $\\Omega_N$, because the whole point of this is to show that the bits of $\\Omega$ are hard to know, they're algorithmically irreducible.\n\nBy the way, it's easy to see that $$H(\\Omega_N) > N - 8000$$ implies that $\\Omega$ is violently non-computable. Let's compare the two real numbers $\\Omega$ and $\\pi$. $\\pi$ has the property that the string of its first $N$ bits has very small program-size complexity. Given $N$, you can calculate the first $N$ bits of $\\pi$. So the first $N$ bits of $\\pi$ only have about $\\log_2 N$ bits of complexity. But the first $N$ bits of $\\Omega$ are irreducible, can be reduced at most 7999 bits, maybe down to $N-7999$, we've proved that.\n\nBy the way, program-size irreducibility implies statistical randomness. So $\\Omega$'s irreducibility implies that $\\Omega$ is a normal real number. In any base, its digits all have the same limiting relative frequency.\n\n## Proof that you can't even deduce what the bits of $\\Omega$ are\n\nOkay, so the next thing you do with this, and this is the next (and last) program in this course, is you get an incompleteness result for $\\Omega$ from this. From this inequality $$H(\\Omega_N) > N - 8000$$ which states that $\\Omega$ is algorithmically irreducible. By the way, I don't show in the course that this implies statistical randomness. That's a detour. 
I want to go straight to the incompleteness result.\n\n(But I think you can sort of wave your hands and try to convince people that irreducibility implies statistical randomness. Or you can actually go and develop algorithmic information theory in excruciating detail! But in my course my goal is to get to the major incompleteness results as quickly as possible, illustrating them with interesting LISP programs, not to develop the entire theory. Now you could also take the proof of every theorem in my Cambridge University Press book *Algorithmic Information Theory* and write down a LISP program for the algorithm in it. I once started doing that but I quickly gave it up. It was too much work, and you don't really want to see every algorithm in the theory in such detail. I'm concentrating on programming in LISP what I think are the fun algorithms in the theory, especially the ones connected with the basic incompleteness results, the ones I want to use to present what I think are the fundamental concepts in my theory.)\n\nSo the last program in this course is the one that yields an incompleteness result for $\\Omega$.\n\nLet's start with this question: How do I represent a formal axiomatic system in LISP? Well, this is the way I do it. Think of it as a black box which every now and then throws out a theorem. So it's a program that you start running, and it may or may not halt, normally it doesn't halt, and every now and then it prints out a theorem.\n\nSo why is this a formal axiomatic system? Well, the idea is that I don't care about the details of what's going on inside, what the axioms are, or what the logic used is like. What I care about is the criterion, which I think was enunciated rather clearly by Hilbert, that states that the essence of a formal axiomatic system is that there should be a proof-checking algorithm. So if that's the case, you can run through all possible proofs in size order, see which ones are correct, and print out all the theorems. Of course this is impractical. The time it takes to do it grows exponentially, and I wouldn't really want to run such a program.\n\nBut you can cheat and you can just have a LISP S-expression which every now and then outputs a theorem... It won't be the final value... Every now and then it uses a primitive function, I call it ` display`, but you could also say \"output intermediate result.\" So I think of a formal axiomatic system as a LISP S-expression that every now and then calls a particular new LISP primitive function which outputs an intermediate result. And that way it puts out a lot of LISP S-expressions, which are the theorems.\n\nAnd you can cheat and instead of putting a real formal axiomatic system in there with a proof-checking algorithm and running through all possible proofs, which is terribly slow (and never halts), you can cheat and put in a little example of something that just `display`'s a few sample theorems and then halts. That way you can run a little example and debug algorithms which work with formal axiomatic systems.\n\nHow can algorithms work with these toy formal axiomatic systems? Let's say that you have a formal axiomatic system. We've agreed that it's a LISP program that is going to put out intermediate results using a new primitive function called `display`. Actually all normal LISP's have a function for outputting intermediate results; you use it for debugging, right? But it becomes more important in this framework. 
It plays an official role in my computerized version of algorithmic information theory, because `try` captures all the intermediate results. That's very important.\n\n`Try` is a time-bounded execution\/evaluation of a LISP expression. You use `try` to run a program with a time limit, giving the program raw binary data on the side, and **capturing all the intermediate output** as well as the final value (if any). That's very important. You see, the new primitive function `try` that I added to LISP is the way that you can run a formal axiomatic system for a while and see what are the theorems that it produces. That's put in as a primitive function in this LISP. The value returned by ` try` is a 3-element list $( \\alpha \\, \\beta \\, \\gamma )$. $\\alpha$ is `success` or `failure`. $\\beta$ is the value of the expression being tried if the `try` is a success, and is ` out-of-time` or `out-of-data` if the `try` is a failure. And $\\gamma$ is the list of captured `display`'s, the list of theorems. The theorems don't get displayed, they end up in this list instead. Okay?\n\nLet me say by the way that this LISP started off as three-hundred lines of Mathematica. I invented this LISP using Mathematica as my programming tool; I wrote the LISP interpreter in Mathematica. That way I could play with my LISP and try it out as the design evolved. Mathematica is the most powerful programming language that I know. But it's slow. So the next thing I did was I rewrote the interpreter for this LISP in C. And it's a thousand lines of C, it's a hundred times faster than in Mathematica, but the program is incomprehensible of course.\n\nSo the next thing was, I rewrote the interpreter in Java, because by this time HTML, the World Wide Web, and Java had appeared! And in Java it's 750 lines of code. But I cheat, I didn't take the trouble to program out `bignum`'s, arbitrarily large integers. I did that in C, Mathematica comes with it built in. And in Java you get 18-digit decimal integers built in, so I said, \"That's enough!\" Probably somebody someday is going to add arbitrarily large integers to the Java class libraries, it's object oriented, a lot of stuff is in those libraries. I don't want to do that work.\n\nSo it's 750 lines of Java, and I think that the Java code is pretty nice. The C code is incomprehensible! Like all good C programs, it's too clever. You can deal with it while you're writing it, and immediately afterwards it's already incomprehensible, even for the programmer! The Mathematica code is much easier to understand, but you have to know Mathematica, which is a pretty big language, and you have to know the subset of it that I'm using, which may not be the subset that you like to play with normally.\n\nThe Java code I have to say I really like. It's 750 lines of Java. I think it's pretty understandable, I think it's pretty clean, and I think that if I were presenting this as a course I would go through the Java interpreter with the students. I think it's important that they should see the code. It's only 750 lines of Java, Java's a fairly reasonable language, and they could make changes to the interpreter as a way of showing that they understand it. So it's there, the Java source code is there on my Web site.\n\nOkay, so what's the next (and the last) program? The next program works with a formal axiomatic system, which is just a mechanism for producing an infinite set of theorems. 
It's given to us as a binary program for $U$, FAS$^\\ast$, which is a LISP expression plus binary data, and we're going to measure its complexity in bits the same way that we did before. I'll prove that if the formal axiomatic system has program-size complexity $N$, then it can enable you to determine, to prove what is the value of at most $N+15328$ bits of the halting probability $\\Omega$.\n\n> If a FAS has program-size complexity $N$, then it can enable you to determine at most $N+15328$ bits of $\\Omega$.[^3]\n\nSo how do I show this? Well, I have a Berry paradox kind of proof. The idea is that if you could prove a lot of bits of $\\Omega$, then $\\Omega$ wouldn't be this irreducible $$H(\\Omega_N) > N - 8000 .$$ That would give you a way to compress $\\Omega$ into the axioms of the FAS. It would give you too concise a way to calculate $\\Omega$. If you could prove what the bits of $\\Omega$ are, you'd do it systematically by searching through all possible proofs, and that would give you a way to calculate the bits of $\\Omega$. That's all we're saying, that it would contradict this $$H(\\Omega_N) > N - 8000 .$$\n\nSo deduction and computation are very close, as Turing already noticed in his famous 1936 paper \"On Computable Numbers...\" where he proves an incompleteness result using the unsolvability of the halting problem. My argument is at the same level, it's analogous.\n\nThe new business here is roughly like the paradox of the first uninteresting positive integer. You know, that's sort of an interesting number. So if you could prove that a number's uninteresting, that would be an interesting fact about it, and \"uninteresting\" is the notion of algorithmically incompressible. So it turns out that you can't prove that an $N$-bit string is algorithmically incompressible if $N$ is larger than the complexity of your axioms. You can't prove that an $N$-bit string is algorithmically irreducible if it has more bits than the axioms you're using for the proof. And similarly it turns out that with $N$ bits of axioms you get at most $N+15328$ bits of $\\Omega$.\n\nThat's the general idea. Here are the details.\n\nI write out a 7328-bit program $\\phi_{7328}$, it's about one page of LISP code. Why 7328 bits? Because you have to add that to the constant in the inequality for $H(\\Omega_N)$ to get the constant in our incompleteness result. So the difference between these two constants 8000 and 15328 is the size of the next program in this course, the last program. Divided by 8, you get the size of a LISP expression in characters.\n\nAnd what this LISP program $\\phi_{7328}$ does is this. Using ` try` it starts running the formal axiomatic system that you're assumed to be given that's $N$ bits of code. So we're looking at this: $$U( \\phi_{7328} \\, \\mbox{FAS}^\\ast \\ldots ) .$$ The prefix $\\phi_{7328}$, the program that outputs the theorems of the FAS, and some extra stuff that I'll explain later are concatenated and fed to $U$.\n\n$\\phi_{7328}$ starts running the formal axiomatic system using larger and larger time bounds, capturing the intermediate output, which are the theorems. And it looks at the theorems to see how many bits of $\\Omega$ it got. And I allow partial determinations where you get some of the bits but you leave holes with unknown bits between some of the bits that you do know.\n\nAnd what $\\phi_{7328}$ does is it looks for a small set of axioms that enable you to prove substantially more bits of $\\Omega$ than there are in those axioms\u2014at least 15329 bits more. 
And then it just fills in the holes, the missing bits, which costs just one bit per bit, one bit for each missing bit. So $\\phi_{7328}$'s final output and the value of $$U( \\phi_{7328} \\, \\mbox{FAS}^\\ast \\, \\mbox{missing bits of $\\Omega$} )$$ will be one of the $\\Omega_N$'s. For some $N$, it'll be the list of the first $N$ bits of $\\Omega$. By the way, $\\phi_{7328}$ may not need all the bits of FAS$^\\ast$, because it only keeps a finite part of the potentially infinite computation for the formal axiomatic system.\n\nSo if you could use $N$ bits of axioms to get essentially more than $N$ bits of $\\Omega$\u2014more than $N+15328$ bits, in fact\u2014then you could fill in the missing bits at a cost of one bit each, and you get into trouble with this inequality $$H(\\Omega_N) > N - 8000 .$$ That's the point.\n\nSo that's how it goes, the basic idea is straightforward. And I actually have this 7328-bit LISP program $\\phi_{7328}$ and you can run it. Well, you can certainly test the auxiliary functions. You can certainly run all the auxiliary functions on examples to convince yourself that they work. But to test the whole algorithm, its main function, you have to cheat, you don't give it a real formal axiomatic system, because if you really try to run through all possible proofs in size order it would take too long. So you cheat, you give the algorithm a LISP expression that just throws out a few test theorems that will put the algorithm $\\phi_{7328}$ that's running the formal axiomatic system through its paces.\n\nThat's how you convince yourself that this all works. And you may have to \"tweak\" things a little bit, you may have to slightly change the LISP code so that it works with your simple test cases. So you can convince yourself by running little examples that this would all work if you really gave it a genuine formal axiomatic system, say Zermelo-Fraenkel set theory.\n\nI would like to have somebody program out Zermelo-Fraenkel set theory in my version of LISP, which is pretty close to a normal LISP as far as this task is concerned, just to see how many bits of complexity mathematicians normally assume. You see, the whole effort here has been to get these constants 8000 and 15328. And if you programmed ZF, you'd get a really sharp incompleteness result. It wouldn't say that you can get at most $H(\\mbox{ZF})+15328$ bits of $\\Omega$, it would say, perhaps, at most 96000 bits! We'd have a much more definite incompleteness theorem. I hope that somebody will program ZF in LISP. I stop at this point \\[laughter\\], you know, programming fatigue! And you want to leave something for the students to do, right? So this is a little job for them. And if they're really clever maybe it'll put out the theorems in Zermelo-Fraenkel set theory reasonably quickly, not by running through all possible proofs in size order, which would take too long, but by doing some kind of tree search so that you can actually have something interesting happen in a reasonable amount of time.\n\n## Discussion\n\nOkay, so why is all this interesting? Well, some of you may know already, but let me wave my hands frantically in the last few minutes and say why.\n\nThis looks like a kind of programming madness, right? Up to now, it looks like what I've been telling you is, \"Oh how much fun it was to program something that nobody's ever programmed before!\" Or maybe just that this is neater than it was in the previous versions of my course. Yes, programming can be an obsession, it can ruin your life. 
You can sit at your terminal watching your life fall apart as you stay there hacking away until 4 in the morning! But is there any other justification for this except as a kind of drug? Well, I think so! I think this has some philosophical significance.\n\nWhat's the point about this incompleteness theorem, about this technical result?\n\n> If a FAS has program-size complexity $N$, then it can enable you to determine at most $N+15328$ bits of $\\Omega$.\n\nWell, what it's really telling you is that you might get some bits of $\\Omega$, the first ones, say. You might be able to calculate the first few bits of $\\Omega$. In fact, in a different version of this theory, with a different UTM, (this was with my one-character LISP, my one character per atom LISP), I did get the first 7 bits of $\\Omega$, and they were all 1 bits. You see, once your lower bound on $\\Omega$ is 127\/128ths, you know that these bits can't change. The first 7 bits were 1's. And now I have a much friendlier LISP, but I lost this, I can no longer determine the first 7 bits of $\\Omega$ this way.\n\nBut anyway, you might be able to get some bits of $\\Omega$ without contradicting $\\Omega$'s logical and computational irreducibility, without contradicting this incompleteness result and this inequality:\n\n- You can determine at most $H(\\mbox{FAS})+15328$ bits of $\\Omega$.\n\n- $H(\\Omega_N) > N - 8000.$\n\nBut in spite of this, the basic point is that $\\Omega$ really shows that some areas of mathematics have no structure, have no pattern at all.\n\nLet me put it this way. Normally you think that if something is true, it's true for a reason, right? In mathematics, the reason is called a proof, and the job of a mathematician is to find the reason that something is true, to find a proof. But the bits of $\\Omega$ are mathematical facts that are true for no reason, they're accidental!\n\nNow this is a very specific $\\Omega$, I've programmed it out. Imagine it being written in binary. And you ask, an individual bit, say the 33rd bit, is it a 0 or a 1? Let's say you're trying to prove which it is. And the answer is, you can't! The reason you can't is because, whether that particular bit is a 0 or a 1 is true for no reason, it's true by accident. It's so delicately balanced whether it's going to be a 0 or a 1, that we will never know!\n\nIt's like independent tosses of a fair coin. Independent tosses of a fair coin has got to come out heads or tails in each case, but there's no reason that it comes out one or the other, right? So it's exactly the same story with these mathematical facts, with the bits of $\\Omega$. There is no pattern or structure in the sequence of bits of $\\Omega$.\n\nI don't know why anybody would want to try to determine bits of $\\Omega$. Although people have played with the Busy Beaver function, which in a way is like trying to calculate the bits of $\\Omega$. I don't know why you'd try to determine the bits of $\\Omega$. But if you were to try to do this, what this incompleteness result shows you is that you're in big, big trouble! Because essentially the only way to prove what a bit of $\\Omega$ is, is to add the theorem that you want to prove as a new axiom. It's irreducible mathematical information. Now you can prove **anything** by adding it as a new axiom. The point here is that for $\\Omega$ that's essentially **the only way** to do it. No compression is possible.\n\nSo there is no structure, there is no pattern, $\\Omega$ has maximum entropy, it mirrors independent tosses of a fair coin. 
Now to a physicist what I'm saying sounds pretty reasonable, right? $\\Omega$ has maximum entropy, the bits of $\\Omega$ are completely uncorrelated. But to a mathematician this all sounds weird!\n\nMy new $\\Omega$ is a particular well-defined real number. It's a real number with a rather simple definition, I can even write the LISP program that defines $\\Omega$ on one computer screen. So you believe, thinking in Platonic terms, that each bit is either a 0 or a 1, even if I'm never going to know which. But it's black or white, right? And what I'm saying is that I think that it's really better to think that it's **grey**. It's really better to think that each bit of $\\Omega$ has probability one-half of being 0 or of being 1\u2014even though it's a particular well-determined bit, because I've written out the program that defines this number. Defines it, not by enabling you to calculate it bit by bit\u2014that would contradict $\\Omega$'s unknowability. But the program for $\\Omega$ does enable you to calculate better and better lower bounds on $\\Omega$, so $\\Omega$ is **almost** a computable real number in the same sense that $\\pi$ is. Almost, but not quite!\n\nSo the game is like this. I'm trying very hard to be constructive, as constructive as possible. Writing out programs is a sure sign that you're a constructivist, right? \\[Laughter\\] I want to settle all the programming details. But I'm trying to be as constructive as possible about non-constructivity! I want to exhibit something that escapes the power of constructivity, of mathematical reasoning, that you can't calculate, but that's just over the border between the constructible and the non-constructible. And I think that $\\Omega$ is pretty damn good, it's just on the border.\n\nNow this doesn't mean that all of mathematics falls down in a heap! But the normal notion of mathematics was that there was a small, finite set of axioms and rules of inference that we could all agree on, from which all the infinite mathematical truth would follow. This is the tradition that goes back to Euclid, to Leibniz, to Peano, to Frege, to Russell and Whitehead, to Hilbert. And G\u00f6del showed that there was a problem. And Turing showed that there was a problem, using a different method involving computers. And I think that $\\Omega$ follows in that tradition and shows that the problem is even bigger. However I don't think this means that you should stop doing mathematics! So what is the significance of $\\Omega$ and of incompleteness? Should it affect how we actually do mathematics? I'll give you my opinion.\n\nOf course the nature of mathematics has been discussed for a long time! Every generation of mathematicians has its own answer. But let me share with you my own feelings about this, my tentative conclusions.\n\nThere's a word that a philosopher coined that's very good. He says that there's an emerging new school, an emerging new *quasi-empirical* view of the foundations of mathematics. One talks about the formalist school, the logicist school, and the intuitionist school. Well, Thomas Tymoczko has a book, *New Directions in the Philosophy of Mathematics,* which has a whole bunch of articles, including two of mine, and he thinks that all these articles tend to support a new quasi-empirical view of the foundations of mathematics.\n\nNow what does quasi-empirical mean? I'll tell you what it means to me. 
Quasi-empirical means that **pure math ain't that different from physics!** The normal notion of pure math is that mathematicians have some kind of direct pipeline to God's thoughts, to absolute truth! But poor physicists! You know, they tried Newtonian mechanics, it looks good for a while, then Einstein shows it's all wrong. Then\u2014surprise!\u2014quantum mechanics shows that Einstein was wrong! And now there's superstring theory. And is it right? Is it wrong? And mathematicians laugh and say, \"Oh those poor physicists! It's such a messy subject! They always have to backpedal, they don't know what they're doing! It's all so tentative!\"\n\nWell, I think that mathematics and physics are not really that different!\u2014Physicists love it when I say this! \\[Laughter\\]\n\nLet me try explaining this another way. Euclid said that mathematics is based on self-evident truths. But my impression is that maybe axioms are not self-evident truths. I don't believe in self-evident truths. Maybe it's more like in physics. Maybe mathematics should be done more like physics, where you're willing to add new axioms because they're useful, not because they're self-evident. And then of course you have to be prepared to say \"I goofed!\"\u00a0and remove an axiom, which mathematicians don't like to have to do.\n\nThis may sound completely crazy, but in fact it's not just my opinion. G\u00f6del makes very similar remarks in Volume II of his *Collected Works,* in his essay on \"Russell's Mathematical Logic.\" This essay was originally published in the Paul Arthur Schilpp volume *The Philosophy of Bertrand Russell.*\n\nI talked about new axioms to a mathematician once, and he replied, \"Okay, I'm willing to add the Riemann hypothesis as a new axiom if you can prove to me that it doesn't follow from the usual axioms.\" Well, that's hard to do, because if the Riemann hypothesis were false, then there would be a numerical counter-example that one could easily verify that shows that it's false. So if you could show that the Riemann hypothesis is beyond the power of the usual axioms, that would imply that the Riemann hypothesis is true!\n\nSo these ideas are very controversial.\n\nBut the whole point of algorithmic information theory, the whole point of my information-theoretic approach to incompleteness, is that sometimes to get more information out of a set of axioms, you've just got to put more in. So let's put more axioms in! Physicists have always done that.\n\nI had these ideas a long time ago. I proved my first information-theoretic incompleteness theorem in 1970. Although it's only in the past two or three years that I discovered how to actually program the algorithm in my original proof and run it on examples. And I was recently surprised to discover or rediscover that there are highly relevant quotes from Einstein and from G\u00f6del. Let me throw them into this discussion, let me end with that.\n\nEinstein has a very nice remark that I angered some mathematicians with. But what do they care, he's only a physicist, right? 
\\[Laughter\\] In his essay \"Remarks on Bertrand Russell's Theory of Knowledge\" in the Paul Arthur Schilpp volume *The Philosophy of Bertrand Russell,* Einstein says that \"the series of integers is obviously an invention of the human mind, a self-created tool which simplifies the ordering of certain sensory experiences.\" So you can see that Einstein's attitude is very empirical.\n\nI think that Einstein's position is that the positive integers are not *a priori,* they're not God given, we invent them like we invent all of physics. But the positive integers look more *a priori* than other concepts because they've been around longer, we invented them a long time ago. After all, when an idea has been around for a few thousand years, it's not surprising that people think that it's obvious, that it's just common sense, that it's \"a necessary tool of thought.\" The other extreme is the latest field theory in physics. That looks a lot more tentative. It hasn't been here long, and it'll probably be shot down next week, right? And there are probably thirteen different versions! But in Einstein's view there is no fundamental difference. The positive integers have been around for a long time, but they're still just an invention.\n\nSo I like this quote from Einstein, but it doesn't convince mathematicians.\n\nThen there are some very interesting remarks made by G\u00f6del. You can find them in G\u00f6del's *Collected Works.* G\u00f6del's philosophical position is the exact opposite of Einstein's. G\u00f6del believed in the Platonic universe of mathematical ideas, he believed that the positive integers are just as real as tables and chairs. There are an infinity of positive integers, and they're out there somewhere. They're in the Platonic universe of mathematical ideas, that's where mathematical objects are!\n\nI don't know! I used to laugh at all of this when I was a kid. But think about it seriously. You're young, you're trying to learn mathematics, you're doing elementary number theory, and you don't really start to worry about the fact that elementary number theory presupposes arbitrarily large positive integers, and how do they fit in the universe? Imagine a positive integer that is $$10^{10^{10^{10}}}$$ digits long. Does it exist? In what sense does it exist? You don't care\u2014right?\u2014you prove theorems about it, you know that it would be commutative, right? $a+b$ is going to be equal to $b+a$ even if neither number fits in the universe! \\[Laughter\\] But then later on in life you start to worry about this! \\[Laughter\\]\n\nSo I'm not sure if the positive integers exist anymore. But G\u00f6del thinks that they do, and that philosophical position is associated with the name of Plato. But it's really just the classical mathematical position that the positive integers really exist, that an infinity of them are really out there. And starting from that, G\u00f6del comes to a very surprising conclusion. Since the positive integers are just as real as tables and chairs, you can do experiments with them by doing calculations, and if you see a pattern, you can just go ahead like a scientist would in dealing with electrons. 
Since integers are just as real as electrons, why can't we use the same kinds of methods that scientists use?\n\nHere are G\u00f6del's exact words, taken from a previously unpublished manuscript *1951* that is in Volume III of his *Collected Works:*\n\n> \"If mathematics describes an objective world just like physics, there is no reason why inductive methods should not be applied in mathematics just the same as in physics.\"\n\nAnd in his essay \"What is Cantor's Continuum Problem?\"\u00a0in Volume II of his *Collected Works,* G\u00f6del says that maybe we'll come up with new versions of set theory, maybe we'll come up with new ideas about sets the same way that physicists come up with new ideas about physical objects. The justification for these new principles would be that they're useful, that they help us to organize our mathematical experience, just as the ultimate justification for physical principles is that they help us to organize our physical experience.\n\nI think it's very funny! Here's Einstein, who's a diehard empiricist, and here's G\u00f6del, who's a diehard Platonist, and they sort of come to the same conclusion! But they were buddies at the Institute for Advanced Study, so it's not surprising that they influenced each other. And the information-theoretic viewpoint that I've explained to you today leads me in the very same direction.\n\nAnother funny thing is that I don't believe that all this work on the foundations of mathematics has changed at all the way mathematicians actually work. G\u00f6del's incompleteness theorem was initially very shocking. But then mathematicians noticed that the kind of assertion that G\u00f6del constructed that's true but unprovable is not the kind of assertion that you deal with in your everyday work as a mathematician. And I think it's fair to say that about $\\Omega$ too. The longer I live with $\\Omega$, the more natural it looks to me. But a skeptic would say, \"I don't care about the bits of $\\Omega$, so what if there's trouble there!\"\n\nBut I do think that something else is making a difference in how mathematicians carry on their everyday work. It's the fact that the computer has so vastly expanded mathematical experience, that we just have to deal with it somehow. A lot of physicists do experimental work on the computer now. There's even a journal called *Experimental Mathematics.*\n\nThe computer has so vastly increased mathematical experience, that how do we keep up with all of it? Well, the answer is that sometimes you see that something seems to be the case, and it'd be nice if you could prove it, but for the moment you can't, so you sort of conjecture it. And if you're doing mathematical physics, then you're on the borderline between math and physics, and it's certainly okay to behave in this way. But if you're a mathematician, if you're just over the border, then it's not so clear anymore.\n\nWell, I once had a conversation like this with a mathematician, and the Riemann hypothesis came up, and he said, \"It works fine the way we do things now. You have a paper, and you say the paper is 'modulo the Riemann hypothesis.' So why do you need to call it a new axiom?\" And he has a point. But I do think that if a principle is very helpful for a long time, and nobody shoots it down, why not call it a new axiom? But one shouldn't do this too quickly. One has to be careful, the way physicists are, although I sometimes wonder how careful physicists really are! \\[Laughter\\]\n\nOkay, that's the idea. 
And if you go to my Web site you'll find there in HTML all my less technical papers. And you've also got this course there, and the Java source code and byte code for my LISP interpreter. So you're welcome to go to my Web site and play with it, and you're welcome to send me e-mail about it, and I'd be pleased as Punch if one of you tried giving my course on the limits of mathematics to real students in a normal university setting.\n\nAnyway, you guys have been nice enough to invite me for many years to give talks, and it's been very stimulating to talk with you, and for the moment I think I've exhausted this train of thought. Of course, I've been thinking that since I was fifteen years old, but fortunately every now and then I get a new idea! \\[Laughter\\] But for now, that's all I have to say, and maybe someday I'll have more to say, and I'll be lucky enough to come out here again and have a chance to kick it around with you! Thank you very much! \\[Applause\\]\n\n[^1]: *Lecture given Wednesday 24 April 1996 at a Computer Science Colloquium at the University of New Mexico. The lecture was videotaped; this is an edited transcript.*\n\n[^2]: Of course these aren't McCarthy's words, it's my paraphrase.\n\n[^3]: Perhaps one should refer to this as the *logical (or deductive) irreducibility* of $\\Omega$, to distinguish it from the *computational (or algorithmic) irreducibility* of $\\Omega$, namely the fact that $H(\\Omega_N) > N - 8000$.","meta":{"dup_signals":{"dup_doc_count":12,"dup_dump_count":7,"dup_details":{"curated_sources":2,"2015-11":1,"2014-10":1,"2013-48":2,"2013-20":2,"2015-18":1,"unknown":3}},"filename":"out\/chao-dyn9609008.tex.md"},"subset":"arxiv"} +{"text":"author: Naoto Nishizuka, Applied Electromagnetic Research Institute, National Institute of Information and Communications Technology, Tokyo 184-0015, Japan, firstname.lastname@example.com \nY\u00fbki Kubo, Applied Electromagnetic Research Institute, National Institute of Information and Communications Technology, Japan, firstname.lastname@example.com \nKomei Sugiura, Department of Information and Computer Science, Keio University, Japan, firstname.lastname@example.com \nMitsue Den, Applied Electromagnetic Research Institute, National Institue of Information and Communications Technology, Japan, firstname.lastname@example.com \nMamoru Ishii, Applied Electromagnetic Research Institute, National Institute of Information and Communications Technology, Japan, email@example.com\ntitle: Operational Solar Flare Prediction Model Using Deep Flare Net\n\n# 1. Introduction\n\nThe mechanism of solar flares is a long-standing puzzle. Solar flares emit X-rays, highly energetic particles, and coronal mass ejections (CMEs) into the interplanetary space in the heliosphere, whereby these flares become one of the origins of space weather phenomena . The prediction of flares is essential for reducing the damage to technological infrastructures on Earth. Solar flares are triggered by a newly emerging magnetic flux or magnetohydrodynamic instability to release excess magnetic energy stored in the solar atmosphere . Such phenomena are monitored by the Solar Dynamic Observatory and the Geostationary Orbital Environment Satellite (GOES), and observation data are used for the prediction of flares. \nCurrently, flare prediction is tackled by the following four approaches: (i) empirical human forecasting , (ii) statistical prediction methods , (iii) machine learning methods , and (iv) numerical simulations based on physics equations . 
Some of the models have been made available for community use at the Community Coordinated Modeling Center (CCMC) of NASA. It is useful to demonstrate the robustness of each model's performance, and in benchmark workshops, prediction models were evaluated and compared, including methods that use machine learning algorithms as part of their system. \nRecently, the application of supervised machine learning methods, especially deep neural networks (DNNs), to solar flare prediction has been a hot topic, and their successful application in research has been reported. However, there is insufficient discussion on how to make these methods available for real-time operations in space weather forecasting offices, including methods for the validation and verification of the models. Currently, new physical and geometrical (topological) features are applied to flare prediction using machine learning, and it has been noted that training sets may be sensitive to which period in the solar cycle they are drawn from. \nIt has been one year since we started operating our flare prediction model using DNNs, which we named Deep Flare Net (DeFN). Here, we evaluate the prediction results during real-time operations at the NICT space weather forecasting office in Tokyo, Japan. In this paper, we introduce the operational version of DeFN in sections 2 and 3, and we show the prediction results in section 4, where we also propose the use of time-series cross-validation (CV) to evaluate operational models. We summarize our results and discuss the selection of a suitable evaluation method for models used in operational settings in section 5.\n\n# 2. Flare Forecasting Tool in Real-Time Operation\n\n## 2.1. Procedures of Operational DeFN\n\nDeFN is designed to predict solar flares occurring in the following 24 h after observing magnetogram images; the flares are categorized into two categories: ($\\geq$M-class and $<$M-class) or ($\\geq$C-class and $<$C-class). In the operational system of DeFN forecasts, observation images are automatically downloaded, active regions (ARs) are detected, and 79 physics-based features are extracted for each region. Each feature is standardized by the average value and standard deviation and is input into the DNN model, DeFN. The output is the flare occurrence probabilities for the two categories. Finally, the maximum class of flares occurring in the following 24 h is forecast by taking the maximum probability of the forecasts. \nOperational DeFN was redesigned for automated real-time forecasting with operational redundancy. All the programs, written in IDL and Python, are driven by cron scripts at the prescribed forecast issuance times. There are a few differences from the original DeFN used for research, as explained in the next subsection. A generalized flow chart of operational DeFN is shown in Figure 1.\n\n## 2.2. NRT Observation Data\n\nThe first difference between development DeFN and operational DeFN is the use of near real-time (NRT) observation data. We use the observation data of line-of-sight magnetograms and vector magnetograms taken by the Helioseismic and Magnetic Imager (HMI) on board SDO, ultraviolet (UV) and extreme ultraviolet (EUV) images obtained from the Atmospheric Imaging Assembly (AIA) through 1600 \u2006\u00c5\u00a0and 131 \u2006\u00c5\u00a0filters; and the full-disk integrated X-ray emission over the range of 1\u20138 \u2006\u00c5\u00a0observed by GOES. 
For visualization, we also use white light images taken from HMI and EUV images obtained using AIA through 304 \u2006\u00c5\u00a0and 193 \u2006\u00c5\u00a0filters. The time cadence of the vector magnetograms is 12 min, that of the line-of-sight magnetograms is 45 s, those of the 1600 \u2006\u00c5\u00a0and 131 \u2006\u00c5\u00a0filters are both 12 s, and that of GOES is less than 1 min. \nThe data product of SDO is provided by the Joint Science Operations Center (JSOC) of Stanford University. The HMI NRT data are generally processed and available for transfer within 90 min after observations . This is why DeFN was designed to download the observation dataset 1 h earlier. If the observation data are unavailable because of processing or transfer delays, the target of downloading is moved back in time to 1 to 5 h earlier in the operational DeFN system. When no data can be found beyond 5 h earlier, it is considered that the data are missing. Here, the time of 5 h was determined by trial and error. Forecasting takes 20\u201340 min for each prediction; thus, it is reasonable to set the forecasting time to as often as once per hour. The 1 h cadence is comparable to that of the time evolution of the magnetic field configuration in active regions due to flux emergence or changes before and after a flare. However, DeFN started operating in the minimum phase of solar activity, so we started forecasting with a 6 h cadence instead of a 1 h cadence. \nThe NRT vector magnetograms taken by HMI\/SDO are used for operational forecasts, whereas the calibrated HMI 'definitive' series of vector magnetograms are used for scientific research. The NRT vector magnetograms are accessed from the data series 'hmi.bharp_720s_nrt' with segmentations of 'field', 'inclination', 'azimuth', and 'ambig'. These segmentations indicate the components of field strength, inclination angle, azimuth angle, and the disambiguation of magnetic field in the photosphere, respectively. Additionally, the NRT line-of-sight magnetograms are downloaded from the data series 'hmi.M_720s_nrt', and the NRT white light images are from the 'hmi.Ic_noLimbDark_720s_nrt' (jsoc2) series. The NRT data of AIA 131 \u2006\u00c5\u2006 193 \u2006\u00c5\u2006 304 \u2006\u00c5\u2006 and 1600 \u2006\u00c5\u00a0filters are retrieved from the 'aia.lev1_nrt2' (jsoc2) series. \nNote that the HMI NRT vector magnetogram is not for the full disk, in contrast to the HMI definitive series data. HMI Active Region Patches (HARP) are automatically detected in the pipeline of HMI data processing , and the HMI NRT vector magnetogram is limited to the HARP areas plus a buffer, on which we overlaid our active region frames detected by DeFN and extracted 79 physics-based features . Furthermore, the correlation between the HMI NRT data and the definitive data has not been fully statistically revealed. A future task is to reveal how the difference between the HMI NRT and definitive series data affects the forecasting results. The same comments can be made for the AIA NRT and definitive series data. \n\n## 2.3. Implementation of Operational DeFN\n\nOperational DeFN runs autonomously every 6 h by default, forecasting at 03:00, 09:00, 15:00, and 21:00 UT. The forecasting time of 03:00 UT was set to be before the daily forecasting meeting of NICT at 14:30 JST. The weights of multi-layer perceptrons of DeFN were trained with the 2010-2014 observation datasets, and we selected representative hyperparameters by the observation datasets in 2015. 
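As a minimal illustration of the data handling described above (79 standardized features per detected region and a chronological 2010\u20132014 training \/ 2015 validation split), here is a hedged pandas sketch. The table, its column names, and the choice to compute the standardization statistics on the training period only are assumptions made for illustration; they are not taken from the operational code.

    import numpy as np
    import pandas as pd

    # Dummy stand-in for the real feature table (the operational table has 79
    # features per detected active region at a 1 h cadence; two made-up
    # features suffice to show the procedure).
    rng = np.random.default_rng(0)
    times = pd.date_range("2010-06-01", "2015-12-31", freq="D")
    df = pd.DataFrame({"time": times,
                       "feat_1": rng.normal(size=len(times)),
                       "feat_2": rng.normal(size=len(times))})

    train = df[df["time"] < "2015-01-01"]    # 2010-2014: training
    valid = df[df["time"] >= "2015-01-01"]   # 2015: hyperparameter selection

    cols = ["feat_1", "feat_2"]
    mu, sigma = train[cols].mean(), train[cols].std()
    train_std = (train[cols] - mu) / sigma   # standardized features
    valid_std = (valid[cols] - mu) / sigma   # same transform applied to 2015

Computing the mean and standard deviation on the training period alone avoids leaking information from the validation year; whether operational DeFN does exactly this is not specified in the text.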
\nFor the classification problem, parameters are optimized to minimize the cross entropy loss function. However, since the flare occurrence ratio is imbalanced, we adopted a loss function with normalizations of prior distributions. It is the sum of the weighted cross entropy: $$E(w) = - \\sum_{n} \\sum_{k} w_k \\, p(y_{nk}^*) \\log p(y_{nk}).$$\n\nHere, $p(y_{nk}^*)$ is the initial probability of correct labels $y_{nk}^*$, i.e., 1 or 0, whereas $p(y_{nk})$ is the estimated probability. The components of $y_{nk}^*$ are 1 or 0; thus, $p(y_{nk}^*)$=$y_{nk}^*$. $w_k$ is the weight of each class and is the inverse of the class occurrence ratio, i.e., \\[1, 50\\] for $\\geq$M-class flares and \\[1, 4\\] for $\\geq$C-class flares. Parameters are stochastically optimized by adaptive moment estimation with learning rate = 0.001, $\\beta_1$ = 0.9, and $\\beta_2$ = 0.999. The batch size was set to 150. \nTheoretically, the positive and negative events, i.e., whether $\\geq$M-class flares occur or not, are predicted in the following manner. The following equation is commonly used in machine learning: $$\\hat{y} = \\mathop{\\rm argmax}\\limits_{k} p(y_k).$$ Here, $\\hat{y}$ is the prediction result, and the threshold is usually fixed. For example, in the case of two-class classifications, the events with a probability greater than 50 % are output. When we use the model as a probabilistic prediction model, we also tried smaller threshold values for safety in operations, although these have no obvious theoretical meaning. \nNote that the loss function weights cannot be selected arbitrarily. The positive to negative event ratios of $\\geq$M-class and $<$M-class or $\\geq$C-class and $<$C-class flares, which are called the occurrence frequency or the climatological base rate, are 1:50 and 1:4, respectively, during 2010-2015. Only when the cross entropy is weighted by the inverse ratio of positive to negative events does it become theoretically valid to output the prediction by equation (2). Therefore, we used the base rate as the weight of the cross entropy. \nThe DNN model of the operational DeFN was developed as in Nishizuka et al. (2018). Because the full HMI and AIA datasets obtained from 2010 to 2015 were too large to save and analyze, the cadence was reduced to 1 h, although in general a larger amount of data is useful for better predictions. We divided the feature dataset into two for training and validation with a chronological split: the dataset obtained from 2010 to 2014 for training and the 2015 dataset for validation. One point of this paper is to contrast how well the DeFN model can predict solar flares in real-time operations and in research settings using time-series CV methods (a simple shuffle-and-divide CV is insufficient). Then, we will show that the gap between the prediction accuracies in operations and in research using a time-series CV is small (see section 4.3). \nThe time-series CV is stricter than a K-fold CV on data split by active region. It might be true that a K-fold CV on data split by active region can also prevent data from a single active region from being used in both training and testing. However, a K-fold CV on data split by active region allows the training set to contain future samples from different active regions. This may affect the prediction results when there is a long-term variation of solar activity. In addition, the number of active regions that produced X-class and M-class flares is not large, so a K-fold CV on data split by active region may be biased and unbalanced. 
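For concreteness, the weighted cross-entropy loss and the prediction rule of equation (2) described above can be written out in a few lines of NumPy. The sketch below is illustrative only: the array values are made up, and only the class weights \\[1, 50\\] for the $\\geq$M-class model are taken from the text.

    import numpy as np

    def weighted_cross_entropy(y_true, p_pred, w):
        # Sum over samples n and classes k of -w_k * y*_{nk} * log p(y_{nk}),
        # where y_true is one-hot, p_pred holds predicted probabilities and
        # w holds the per-class weights.
        eps = 1e-12  # guard against log(0)
        return -np.sum(w * y_true * np.log(p_pred + eps))

    def predict(p_pred):
        # Equation (2): pick the class with the largest probability, which in
        # the two-class case amounts to a fixed 50 % threshold.
        return np.argmax(p_pred, axis=1)

    # Two toy samples; class 0 = <M-class, class 1 = >=M-class.
    y_true = np.array([[1.0, 0.0], [0.0, 1.0]])
    p_pred = np.array([[0.9, 0.1], [0.4, 0.6]])
    w = np.array([1.0, 50.0])   # inverse class occurrence ratio from the text
    print(weighted_cross_entropy(y_true, p_pred, w))
    print(predict(p_pred))      # -> [0 1]

In an actual training run, this loss would be minimized with adaptive moment estimation using the hyperparameters quoted above.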
\nIndeed, solar flare prediction in operation is done under a very strict condition, where no future data are available. Our focus is not to deny a K-fold CV on data split by active region. Instead, our focus is to discuss more appropriate CVs in an operational setting. \nThe model was evaluated with a skill score, the true skill statistic (TSS), which is a metric of the discrimination performance. Then, the model succeeded in predicting flares with TSS = 0.80 for $\\geq$M-class and TSS = 0.63 for $\\geq$C-class (Table 1). Note that the data for 2016\u20132018 were not used, because there were fewer flares in this period than in the period between 2010 and 2015. \nFlare labels were attached to the 2010\u20132015 feature database for supervised learning. From the flare event list, we collected all the flare samples that occurred on the disk. We visually checked the locations of the flares, compared them with NOAA numbers, and found the corresponding active regions in our database when there were two or more active regions. Then we attached flare labels to predict the maximum class of flares occurring in the following 24 h. If $\\geq$M-class flares are observed within 24 h after observations, the data are attached with the label (0, 1)$_{\\rm M}$; otherwise, they are attached with the label (1, 0)$_{\\rm M}$. When two M-class flares occur in 24 h, the period with the label (0, 1)$_{\\rm M}$ is extended. Similarly, the labels (0, 1)$_{\\rm C}$ and (1, 0)$_{\\rm C}$ are separately attached for the prediction model of C-class flares. The training was executed using these labels. On the other hand, in real-time operation, we do not know the true labels of flares, so we attached the NRT feature database with dummy labels (1, 0), which are not used in the predictions. It is possible to update the model by retraining it using the latest datasets if the prediction accuracy decreases. However, the pretrained operational model is currently fixed and has not been changed. \n\n# 3. Operation Forecasts Using DeFN\n\n## 3.1. Graphical Output\n\nThe graphical output is automatically generated and shown on a website (Figure 3). The website was designed to be easy to understand for professional space weather forecasters, who are often not scientists. Prediction results for both the full-disk and region-by-region images are shown on the website, and the risk level is indicated by a mark: \"Danger flares\", \"Warning\", or \"Quiet\". Images are updated every 6 h as new data are downloaded. Details of the DeFN website are described below: \n\n- **Solar full-disk images and detected ARs:** \n    Images obtained by multiwavelength observations, such as magnetograms and white light, 131 \u2006\u00c5, 193 \u2006\u00c5, 304 \u2006\u00c5, and 1600 \u2006\u00c5\u00a0images taken by SDO, are shown along with ARs detected by DeFN, where the threshold is set to 140 G in the line-of-sight magnetograms taken by HMI\/SDO. \n\n- **Probabilistic forecasts at each AR:** \n    Probabilistic forecasts of flare occurrence at each AR are shown for $\\geq$M-class and $<$M-class flares or $\\geq$C-class and $<$C-class flares by bar graphs, in analogy with the probabilistic forecasts of precipitation. Note that this forecasted probability does not indicate the real observation frequency, because the prior distributions are normalized to peak at 50 % by the weighted cross entropy, where the loss function weights are the inverse of the flare occurrence ratio. 
Thus, operational DeFN is optimized for forecasting with the default probability threshold of 50 %. That is, operational DeFN forecasts flares if the real occurrence probability, which is forecast by the non-weighted cross entropy loss function, is greater than the climatological event rate, and it does not forecast flares if the real occurrence probability is less than the climatological event rate. Therefore, the normalized forecasted probability of $\\geq$M-class flares sometimes becomes larger than that of $\\geq$C-class flares. \n\n- **Full-disk probability forecasts and alert marks:** \n    The full-disk flare occurrence probability of $\\geq$M-class flares, $P_{FD}$, is calculated using $$P_{FD} = 1 - \\prod_{AR_i \\in S} \\left( 1 - P_i \\right),$$ where $S$ is the set of ARs on the disk, $i$ indexes the ARs on the disk, element $AR_i$ is a member of set $S$, and $P_i$ is the probabilistic forecast at each $AR_i$. The risk level is indicated by a mark based on $P_{FD}$ and is divided into three categories: \"Danger flares\" ($P_{FD}$ $\\geq$ 80 %), \"Warning\" ($P_{FD}$ $\\geq$ 50 %), and \"Quiet\" ($P_{FD}$ $<$ 50 %). This is analogous to weather forecasting, e.g., sunny, cloudy, and rainy. \n\n- **List of comments and remarks:** \n    Forecasted probabilities (percentages), comments, and remarks are summarized in a list. \n\n## 3.2. Details of Operations and Redundancy\n\nOperational DeFN has provided stable forecasts since January 2019. In this subsection, we explain the redundancy and operational details of operational DeFN.\n\n- **Forecast outages:** A forecast is the category with the maximum probability of a flare in each of the categories in the following 24 h after the forecast issue time. A forecast is normally issued every 6 h. If problems occur when downloading or processing data, the forecast is skipped and handled as a forecast failure.\n\n- **Data outages of SDO\/HMI, AIA:** There are delays in the HMI\/SDO data processing when no applicable NRT data are available for a forecast. In this case, the NRT data to download are moved back in time to 1 to 5 h earlier. In such a case, the forecasting target will change from 24 h to 25-29 h, though the operational DeFN is not retrained. If no data can be found beyond 5 h earlier, the \"no data\" value is assigned and the forecast is skipped.\n\n- **No sunspots or ARs with strong magnetic field on disk:** If there are no sunspots or ARs detected with the threshold of 140 G on the disk image of the line-of-sight magnetogram, feature extraction is skipped, a forecast of \"no flare\" with a probability forecast of 1 % is issued, and the \"no sunspot\" value is assigned.\n\n- **Forecasts at ARs near\/across limb:** DeFN is currently not applicable to limb events. If an AR is detected across a limb, it is excluded from the forecast targets.\n\n- **Flares not assigned to an active region:** The active regions detected by operational DeFN are not completely the same as the active regions registered by NOAA. There are cases where flares occur in decaying or emerging active regions which are not detected by DeFN with the threshold of 140 G. This occurs most often for C-class and lower intensity flares, for example, the C2.0 flare in NOAA 12741 on 2019 May 15. Such a flare is missed in real-time forecasts but included in evaluations.\n\n- **Retraining:** DeFN can be retrained on demand, and a newly trained model can be used for forecasting. 
Currently, the pretrained model is fixed and has not been changed so far.\n\n- **Alternative data input after SDO era (option):** Since DeFN is designed to detect ARs and extract features by itself, it can be revised and retrained to include other space- and ground-based observation data, even when SDO data are no longer available.\n\n# 4. Forecast Results and Evaluation\n\n## 4.1. Operational Benchmark\n\nThe purpose of machine-learning techniques is to maximize the performance for unseen data. This is called generalization performance. Because it is hard to measure generalization performance, it is usually approximated by test-set performance, where there is no overlap between the training and test sets. \nOn the other hand, as the community continues to use a fixed test set, on the surface the performance of newly proposed models will seem to improve year by year. In reality, generalization performance is not constantly improving, but there will be more models that are effective only for the test set. This is partly because models with lower performance than the state of the art are not reported. In other words, there are more and more models that are not always valid for unseen datasets. It is essentially impossible to distinguish whether an improvement is due to better generalization performance or to a method that is effective only for the test set. \nThe above facts are well-known in the machine learning community, and the evaluation conditions are mainly divided into two types: basic and strict. Under the strict evaluation condition, only an independent evaluation body evaluates each model, using the test set only once. The test set is not disclosed to the model builders. Solar flare prediction is inherently a prediction of future solar activity using present observation data, and the data available to researchers are only the past data. This fact is consistent with the strict evaluation condition in the machine-learning community. \nIn this section, we evaluate our operational forecasting results. We call this the \"operational benchmark\" in this paper. In the machine learning community, a benchmark using a fixed test set is used only for basic benchmark tests. The basic approach is simple but is known to be insufficient. This is because no one can guarantee that the test set is used only once. In strict machine learning benchmarks, evaluation with a completely unseen test set is required. Only the organizer can see this \"completely unseen test set\"; individual researchers cannot. This is because, if researchers use the test set many times, they implicitly tend to select models effective only for the fixed test set. \nWe think that the evaluation methods of operational solar flare prediction models are not limited to evaluations using a fixed test set. However, this paper does not deny the performance evaluation using a fixed test set. The purpose of this paper is to show that the operational evaluation is important. From a fairness perspective, the strict benchmarking approach takes precedence over the basic approach. Our operational evaluation is based on the strict benchmarking approach. We did not retrain our model after the deployment of our system. \n\n## 4.2. Forecast Results and Evaluation\n\nWe evaluated the forecast results from January 2019 to June 2020, when we operated operational DeFN in real time. During this period, 24 C-class flares and one M-class flare were observed. 
The M-class flare was observed on 6 May 2019 as M1.0, which was originally reported as C9.9 and corrected to M1.0 later. The forecast results are shown in Table 2. Each contingency table shows the prediction results for $\\geq$M-class and $\\geq$C-class flares. operational DeFN was originally trained with the probability threshold of 50 % to decide the classification, but in operations, users can change it according to their purposes. In Table 2, we show three cases for $\\geq$M-class and $\\geq$C-class predictions using different probability thresholds, such as 50 %, 45 %, and 40 % for reference. \nEach skill score can be computed from the items shown in contingency tables, and not vice versa. This is a well-known fact. No matter how many skill scores you show, you will not have more information than one contingency table. The relative operating characteristic (ROC) curve and the reliability diagram, which are shown in Leka et al. (2019), can also be reproduced from the contingency table if it is related to the deterministic forecast (forecast of this paper). The ROC curve is a curve or straight line made by plots on a probability of false detection (POFD) - probability of detection (POD) plane. The ROC curve for a deterministic forecast is made by connecting three points (0,0), (POFD, POD) for a deterministic forecast, and (1,1) . For reference, we introduce skill scores used in Leka et al. (2019), such as the accuracy, Hanssen & Kuiper skill score\/Pierce skill score\/True skill statistic (TSS\/PSS), Appleman skill score (ApSS), equitable threat score (ETS), Brier skill score, mean-square-error skill score (MSESS), Gini coefficient, and frequency bias (FB). \nAccording to Table 2, the flare occurrence was very rare and imbalanced in the solar minimum phase. Most of the forecasts are true negative. When we decrease the probability threshold, the number of forecast events increases. We evaluated our results with the four verification metrics in Table 3: accuracy, TSS, false alarm ratio (FAR), and Heidke skill score (HSS) . They show that operational DeFN optimized for $\\geq$C-class flare prediction achieved accuracy of 0.99 and TSS of 0.70 with the probability threshold of 50 %, whereas they were 0.98 and 0.83 with the probability threshold of 40 %. DeFN optimized for $\\geq$M-class flare prediction achieved accuracy of 0.99 but TSS was only 0.24 because only a single M1.0 flare occurred. Operational DeFN did not predict this flare because it was at the boundary of the two categories of $\\geq$M-class and $<$M-class flares. This happens a lot in real operations, and this is a weakness of binary classification systems used in operational settings. \nThe trends of the contingency tables are similar to those evaluated in the model development phase. (Table 2). However, there are two differences. First, the data used were the NRT data, whereas the definitive series was used for development. However, in this case, there was negligible difference between them. Second, the evaluation methods are different. The operational DeFN was evaluated on the actual data from 2019 to 2020, whereas the development model was validated with the 2010-2014 dataset and tested with the 2015 dataset. It appears that the chronological split provides more suitable evaluation results for operations than the common methods, namely, shuffle and split CV and K-fold CV. \n\n## 4.3. Time-series CV\n\nHere we propose the use of time-series CV for evaluations of operational forecasting models. 
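\nAs an illustration of the splitting scheme described in the following paragraphs and summarized in Table 4, the minimal C++ sketch below enumerates rolling-origin (training, validation, test) splits over yearly datasets and averages a skill score over the test sets. It is a schematic example added for clarity only; the year labels and the `evaluate` callback are placeholders and are not part of the operational DeFN code.\n\n```cpp\n#include <functional>\n#include <iostream>\n#include <vector>\n\n\/\/ Rolling-origin (time-series) cross-validation over yearly datasets:\n\/\/ train on all years before the validation year, validate on one year,\n\/\/ test on the following year, and average the skill over the test sets.\ndouble timeSeriesCV(const std::vector<int> &years,\n                    const std::function<double(const std::vector<int>&, int, int)> &evaluate) {\n    double sum = 0.0;\n    int nSplits = 0;\n    for (size_t v = 2; v + 1 < years.size(); ++v) {     \/\/ require at least two training years\n        std::vector<int> train(years.begin(), years.begin() + v);\n        sum += evaluate(train, years[v], years[v + 1]); \/\/ placeholder: train, tune, and score\n        ++nSplits;\n    }\n    return sum \/ nSplits;\n}\n\nint main() {\n    const std::vector<int> years = {2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017};\n    \/\/ Dummy evaluator; a real one would train the model and return, e.g., the TSS.\n    auto dummy = [](const std::vector<int>&, int valYear, int testYear) {\n        std::cout << valYear << \" (validation), \" << testYear << \" (test)\" << std::endl;\n        return 0.0;\n    };\n    timeSeriesCV(years, dummy);\n    return 0;\n}\n```\n\nWith the 2010-2017 datasets, this enumeration produces the five splits listed in Table 4.\n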
In previous papers on flare predictions, we used hold-out CV, where a subset of the data split chronologically was reserved for validation and testing, rather than the na\u00efve K-fold CV. This is because it is necessary to be careful when splitting the time-series data to prevent data leakage . To accurately evaluate prediction models in an operational setting, we must not use, for training, any data about events that occur chronologically after the events used for testing. \nThe time-series CV is illustrated in Figure 4. In this procedure, there are a series of testing datasets, each consisting of a set of observations and used to estimate the prediction error. The corresponding training dataset consists of observations that occurred prior to the observations that formed the testing dataset and is used for parameter tuning. Thus, model testing is not done on data that may have pre-dated the training set. Furthermore, the training dataset is divided into training and validation datasets. The model prediction accuracy is calculated by averaging over the testing datasets. This procedure is called rolling forecasting origin-based CV . In this paper, we call it time-series CV, and it provides an almost unbiased estimate of the true error . \nNote that the time-series CV has the following advantages: (i) The time-series CV is the standard validation scheme in time-series prediction. (ii) A single chronological split does not always provide a reliable estimate of the generalization error . In other words, the trained model is not guaranteed to work for an unseen test set. To avoid this, the time-series CV applies multiple chronological splits. The ability to correctly predict new examples that differ from those used for training is known as generalization performance . Therefore, the time-series CV is more generic and appropriate. \nThe evaluation results obtained by time-series CV using the 2010\u20132017 datasets are summarized in Table 4. The datasets were chronologically split to form the training, validation, and testing datasets. TSS is largest with the 2010\u20132014 datasets for training, the 2015 datasets for validation, and the 2016 datasets for testing. This is probably because it is not possible to obtain a reliable forecast based on a small training dataset obtained from 2010 to 2012. By averaging over the five testing datasets, we found that TSS is 0.70 for $\\geq$M-class flares and 0.59 for $\\geq$C-class flares. This procedure will be more suitable for an observation dataset with a longer time period. \n\n# 5. Summary and Discussion\n\nWe developed an operational flare prediction model using DNNs, based on a research version of the DeFN model. It can provide probabilistic forecasts of flares in two categories occurring in the next 24 h from observations: $\\geq$M-class and $<$M-class flares or $\\geq$C-class and $<$C-class flares. DeFN has been continuously used for operational forecasting since January 2019, and we evaluated its performance using the forecast and actual flare occurrences between January 2019 and June 2020. We found that operational DeFN achieved an accuracy of 0.99 and TSS of 0.70 for $\\geq$C-class flare predictions, whereas the accuracy was 0.99 but TSS was only 0.24 for $\\geq$M-class flare prediction using a probability threshold of 50 %. Using a probability threshold of 40 %, the accuracy was 0.98 and TSS was 0.83 for $\\geq$C-class flares, whereas they were 0.98 and 0.48 for $\\geq$M-class flares. 
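\nThe skill scores quoted above can be computed directly from the four counts of a 2$\\times$2 contingency table. The following minimal C++ sketch (added for illustration; it is not part of the DeFN code) uses the standard definitions of accuracy, TSS, FAR, and HSS; with the operational $\\geq$M-class counts at the 50 % threshold (1 hit, 28 false alarms, 3 misses, and 2201 correct rejections, see the Tables section), it returns approximately 0.99, 0.24, 0.97, and 0.06, in agreement with Table 3.\n\n```cpp\n#include <cstdio>\n\n\/\/ Deterministic-forecast skill scores from a 2x2 contingency table.\nstruct Scores { double accuracy, tss, far, hss; };\n\nScores skillScores(double tp, double fp, double fn, double tn) {\n    Scores s;\n    const double n = tp + fp + fn + tn;\n    s.accuracy = (tp + tn) \/ n;\n    s.tss      = tp \/ (tp + fn) - fp \/ (fp + tn);  \/\/ Hanssen & Kuiper \/ Pierce skill score\n    s.far      = fp \/ (tp + fp);                   \/\/ false alarm ratio\n    s.hss      = 2.0 * (tp * tn - fp * fn)\n                 \/ ((tp + fn) * (fn + tn) + (tp + fp) * (fp + tn));\n    return s;\n}\n\nint main() {\n    \/\/ Operational >=M-class forecasts, probability threshold 50 % (January 2019 to June 2020).\n    const Scores s = skillScores(1.0, 28.0, 3.0, 2201.0);\n    std::printf(\"accuracy=%.2f TSS=%.2f FAR=%.2f HSS=%.2f\", s.accuracy, s.tss, s.far, s.hss);\n    return 0;\n}\n```\n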
\nOperational DeFN has the advantages of a large TSS, good discrimination performance, and the low probability of missed detection of observed flares. This is why it is useful for operations that require that no flares are missed, such as human activities in space and critical operations of satellites. On the other hand, it tends to over-forecast and the false alarm ratio (FAR) increases. Because the number of true negatives is very large in an imbalanced problem such as solar flare prediction, TSS is less sensitive to false positives than to false negatives. Currently, the prior distributions of $\\geq$M-class and $<$M-class flares are renormalized to increase TSS at threshold probability of 50 %, but this results in an increase in FAR. \nWhen we compared the evaluation results, we observed no significant difference between the pretrained and operational results. This means that, at least during January 2019 \u2013 June 2020, the difference between NRT and definitive series science data did not greatly affect the forecasts. We found a TSS of 0.63 for the $\\geq$C-class model evaluated using the pretrained model was maintained and even increased to 0.70 (0.83) for operational forecasts with the probability threshold of 50 (40) %. This suggests that the chronological split is more suitable for the training and validation of the operational model than shuffle and split CV. \nHere, we discuss how to train and evaluate machine learning models for operational forecasting. For an exact comparison, it is desirable to use the same datasets among participants. If this is not possible, there are three points that require attention. \n\n1. Observation Database: The ratio of positive to negative events should not be artificially changed, and datasets should not be selected artificially. Data should be the climatological event rate and kept natural. This is because some metrics are affected by controlling the positive to negative event ratio of datasets, especially HSS, which will result in a difference from the operational evaluations. For operational evaluations, it is also desirable to include ARs near the limb, although they are excluded in most papers because the values of magnetograms are unreliable owing to the projection effect. Currently, in machine learning models, limb flares are not considered, but they also need to be considered in the near future, using GOES X-ray statistics as in human forecasting or magnetograms reproduced by STEREO EUV images . \n\n2. Datasets for Training and Testing: We recommend that a chronological split or time-series CV is used for training and evaluation of operational models. Although K-fold CV using random shuffling is common in solar flare predictions, it has a problem for a time-series dataset divided into two for training and testing when the time variation is very small, e.g., the time evolution of magnetic field. If the two neighboring datasets, which are very similar, are divided into both training and testing sets, the model becomes biased to overpredict flares. It might be true that a K-fold CV on data split by active region can also prevent data from a single active region being used in training and testing. However, a K-fold CV on data split by active region allows the training set to contain future samples from different active regions. Therefore, in the point of view of generalization performance, a time-series CV is stricter and more suitable for operational evaluation. \n\n3. 
Selection of Metrics: The ranking of models is easily affected by the selection of the metric. Depending on the purpose, users should select their preferred model by looking at the contingency tables and skill scores of each model. With the understanding that each skill score evaluates only one aspect of performance, verification methods should be discussed in the space weather community . \n\nIn this paper, we showed contingency tables of our prediction results. No matter how many skill scores you show, you will not have more information than one contingency table. We evaluated our prediction results as a deterministic forecasting model. The ROC curve and the reliability diagram, which are shown in Barnes et al. (2016) and Leka et al. (2019), can also be reproduced from the contingency table if it is related to the deterministic forecast. \nWe demonstrated the performance of a machine learning model in an operational flare forecasting scenario. The same methods and discussion of prediction using machine learning algorithms can be applied to other forecasting models of space weather in the magnetosphere and ionosphere. Our future aim is to extend our model to predicting CMEs and social impacts on Earth by extending our database to include geoeffective phenomena and technological infrastructures. \n\n# Declarations\n\n# Availability of data and materials\n\nThe code is available at https:\/\/github.com\/komeisugiura\/defn18. In the README file, we explain the architecture and selected hyperparameters. The feature database of DeFN is available at the world data center of NICT (http:\/\/wdc.nict.go.jp\/IONO\/wdc\/). The SDO data are available from the SDO data center (https:\/\/sdo.gsfc.nasa.gov\/data\/) and JSOC (https:\/\/jsoc.stanford.edu\/). The GOES data are available at https:\/\/services.swpc.noaa.gov\/json\/goes\/.\n\n# Competing interests\n\nThe authors declare that they have no competing interests.\n\n# Funding\n\nThis work was partially supported by JSPS KAKENHI Grant Number JP18H04451 and NEDO. A part of these research results was obtained within \"Promotion of observation and analysis of radio wave propagation\", commissioned research of the Ministry of Internal Affairs and Communications, Japan.\n\n# Authors' contributions\n\nN.N., Y.K. and K.S. developed the model. N.N. analyzed the data and wrote the manuscript. M.D. and M.I. participated in discussing the results.\n\n# Tables\n\n| $\\geq$M-class Flares | | Observed Events | |\n|:--------------------:|:---:|:----------------|:------|\n| | | Yes | No |\n| Forecast Events | Yes | 963 | 4382 |\n| | No | 54 | 25937 |\n\nContingency tables of DeFN using 2010-2015 datasets.\n\n| $\\geq$C-class Flares | | Observed Events | |\n|:--------------------:|:---:|:----------------|:------|\n| | | Yes | No |\n| Forecast Events | Yes | 4967 | 4420 |\n| | No | 1171 | 20778 |\n\nContingency tables of DeFN using 2010-2015 datasets.\n\n| $\\geq$M-class Flares | | Observed Events | |\n|:--------------------:|:---:|:----------------|:-----|\n| | | Yes | No |\n| Forecast Events | Yes | 1 | 28 |\n| | No | 3 | 2201 |\n\nContingency tables of DeFN forecasts in operation from January 2019 to June 2020. 
They show the forecast results for $\\geq$M-class flares and for $\\geq$C-class flares, with three different probability thresholds such as 50 %, 45 %, and 40 %.\n\n| $\\geq$C-class Flares | | Observed Events | |\n|:--------------------:|:---:|:----------------|:-----|\n| | | Yes | No |\n| Forecast Events | Yes | 27 | 18 |\n| | No | 11 | 2177 |\n\nContingency tables of DeFN forecasts in operation from January 2019 to June 2020. They show the forecast results for $\\geq$M-class flares and for $\\geq$C-class flares, with three different probability thresholds such as 50 %, 45 %, and 40 %.\n\n| $\\geq$M-class Flares | | Observed Events | |\n|:--------------------:|:---:|:----------------|:-----|\n| | | Yes | No |\n| Forecast Events | Yes | 1 | 31 |\n| | No | 3 | 2198 |\n\nContingency tables of DeFN forecasts in operation from January 2019 to June 2020. They show the forecast results for $\\geq$M-class flares and for $\\geq$C-class flares, with three different probability thresholds such as 50 %, 45 %, and 40 %.\n\n| $\\geq$C-class Flares | | Observed Events | |\n|:--------------------:|:---:|:----------------|:-----|\n| | | Yes | No |\n| Forecast Events | Yes | 30 | 27 |\n| | No | 8 | 2168 |\n\nContingency tables of DeFN forecasts in operation from January 2019 to June 2020. They show the forecast results for $\\geq$M-class flares and for $\\geq$C-class flares, with three different probability thresholds such as 50 %, 45 %, and 40 %.\n\n| $\\geq$M-class Flares | | Observed Events | |\n|:--------------------:|:---:|:----------------|:-----|\n| | | Yes | No |\n| Forecast Events | Yes | 2 | 34 |\n| | No | 2 | 2195 |\n\nContingency tables of DeFN forecasts in operation from January 2019 to June 2020. They show the forecast results for $\\geq$M-class flares and for $\\geq$C-class flares, with three different probability thresholds such as 50 %, 45 %, and 40 %.\n\n| $\\geq$C-class Flares | | Observed Events | |\n|:--------------------:|:---:|:----------------|:-----|\n| | | Yes | No |\n| Forecast Events | Yes | 32 | 34 |\n| | No | 6 | 2161 |\n\nContingency tables of DeFN forecasts in operation from January 2019 to June 2020. 
They show the forecast results for $\\geq$M-class flares and for $\\geq$C-class flares, with three different probability thresholds such as 50 %, 45 %, and 40 %.\n\n| Probability threshold | Accuracy | TSS | FAR | HSS |\n|:----------------------|:---------|:-----|:-----|:-----|\n| 50 % | 0.99 | 0.24 | 0.97 | 0.06 |\n| 45 % | 0.98 | 0.24 | 0.97 | 0.05 |\n| 40 % | 0.98 | 0.48 | 0.94 | 0.10 |\n\nEvaluations of operational forecast results by DeFN from January 2019 to June 2020 with three verification metrics.\n\n| Probability threshold | Accuracy | TSS | FAR | HSS |\n|:----------------------|:---------|:-----|:-----|:-----|\n| 50 % | 0.99 | 0.70 | 0.40 | 0.64 |\n| 45 % | 0.98 | 0.78 | 0.47 | 0.62 |\n| 40 % | 0.98 | 0.83 | 0.52 | 0.61 |\n\nEvaluations of operational forecast results by DeFN from January 2019 to June 2020 with three verification metrics.\n\n| Datasets | TSS ($\\geq$M-class flares) | TSS ($\\geq$C-class flares) |\n|:--:|:--:|:--:|\n| Training (2010-2011), Validation (2012), Test (2013) | 0.49 | 0.53 |\n| Training (2010-2012), Validation (2013), Test (2014) | 0.66 | 0.60 |\n| Training (2010-2013), Validation (2014), Test (2015) | 0.77 | 0.66 |\n| Training (2010-2014), Validation (2015), Test (2016) | 0.87 | 0.56 |\n| Training (2010-2015), Validation (2016), Test (2017) | 0.72 | 0.61 |\n| Average | 0.70 | 0.59 |\n\nEvaluation of DeFN forecasts using the time-series CV.","meta":{"dup_signals":{"dup_doc_count":11,"dup_dump_count":3,"dup_details":{"curated_sources":1,"2024-22":1,"unknown":9}},"filename":"out\/2112.00977_extract_main.tex.md"},"subset":"arxiv"} +{"text":"abstract: Explosive growth in the amount of genomic data is matched by increasing power of consumer-grade computers. Even applications that require powerful servers can be quickly tested on desktop or laptop machines if we can generate representative samples from large data sets. I describe a fast and memory-efficient implementation of an on-line sampling method developed for tape drives 30 years ago. Focusing on genotype files, I test the performance of this technique on modern solid-state and spinning hard drives, and show that it performs well compared to a simple sampling scheme. I illustrate its utility by developing a method to quickly estimate genome-wide patterns of linkage disequilibrium (LD) decay with distance. I provide open-source software that samples loci from several variant format files, a separate program that performs LD decay estimates, and a C++ library that lets developers incorporate these methods into their own projects.\nauthor: Anthony J. Greenberg [^1]\nbibliography: tony.bib\ntitle: Fast ordered sampling of DNA sequence variants\n\n# Introduction\n\nGrowth in the amount of genomic data available is matched by increasing power and storage space of consumer-grade computers. Using such low-cost systems to perform genomis analyses can speed development cycles and empower users operating under economic constraints. The range of such analyses can be extended with light-weight software tools that carefully manage system resources.\n\nCollections of single-nucleotide polymorphisms (SNPs) and copy number variants (CNVs) genotyped in groups of individuals are fundamental to numerous applications. These data sets are stored in a variety of formats and often contain millions of variants genotyped in thousands of individuals. It is often desirable to create random subsets of such large polymorphism tables. For example, relatively small samples can be used to quickly test software pipelines. 
In addition, when using genome-wide SNPs to predict phenotypes of individuals using genome selection methods, it is often important to learn the minimal marker set that achieves good accuracy . Finally, repeated creation of data subsets is a variant of the jack-knife procedure and can be used to construct empirical distributions of genome-wide statistics.\n\nTo be useful for a wide range of applications, any sampling scheme must meet several criteria. The subset generated must be in the same order as in the original data set. Variants must be sampled without replacement, each locus has to be picked with the same probability, and the size of the resulting data set must always reflect the value required by the user. The time required to sample variants must grow at most linearly with the number of polymorphisms in the sub-sample. Furthermore, because the original data set may be very large, the time required to pick loci must be as insensitive as possible to the overall size of the data set. In addition, since my aim is to empower researchers with limited access to powerful hardware, the implementation should minimize the use of system resources, particularly avoiding reading large files into memory. Surprisingly, after an extensive search I was unable to find existing software that performs according to these criteria. However, the general problem of ordered on-line sampling of records from files was solved 30 years ago . Unfortunately, this work is relatively unknown with limited application in computer system management and sampling of business data streams. No genetics papers appear to reference Vitter's articles.\n\nI implemented a version of Vitter's algorithm that samples loci from a variety of variant file formats while minimizing system resource use. I examine the method's performance compared to a simple sampling scheme and provide an example application to estimate genome-wide patterns of linkage disequilibrium. I also provide a library that allows developers to incorporate these methods into their software. All source code, data, and analysis methods are openly available.\n\n# Methods\n\n## Sampling scheme\n\nVitter's main insight was to derive a scheme that samples the number of records to skip given how many remain to be picked and the number left in the file . There are additional speed-ups available if one is willing to store the index values in an array, but I opted to save memory instead and save the sampled records to the output file right away. Preliminary tests suggested that file I\/O time dominates random number generation even on a machine with a solid state drive (SSD, results not shown), so the increase in sampling speed would not be noticeable.\n\nOther than the above deviation, I implemented Vitter's method D as described in the Appendix A, algorithm A2 in . The implementation uses a hardware random number generator (RNG) (). If not supported, the software substitutes the 64-bit Mersenne Twister seeded with the processor's time stamp counter. The decision is made automatically at run time and does not involve user input.\n\n## Included software\n\nThis report describes three pieces of software: a C++ library `libsampFiles` and two stand-alone programs: `sampleSNPs` produces ordered samples from variant files and `sampleLD` uses locus samples to calculate distributions of linkage disequilibrium statistics. All software is released under the BSD three-part license. 
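\nTo make the skip-based idea from the previous subsection concrete, the following self-contained C++11 sketch draws an ordered sample without replacement by generating skip lengths. It implements the simple sequential-search variant (Vitter's Method A) rather than the faster Method D used inside `libsampFiles`, and it is an illustration only, not code taken from the package.\n\n```cpp\n#include <cstdint>\n#include <random>\n#include <vector>\n\n\/\/ Ordered sampling without replacement via skip counts (Vitter's Method A).\n\/\/ Returns the 0-based indices of n records drawn uniformly from N records,\n\/\/ in increasing order. Assumes 1 <= n <= N.\nstd::vector<uint64_t> sampleOrdered(uint64_t n, uint64_t N, std::mt19937_64 &rng) {\n    std::uniform_real_distribution<double> unif(0.0, 1.0);\n    std::vector<uint64_t> picked;\n    picked.reserve(n);\n    uint64_t current = 0;                 \/\/ index of the next unread record\n    double top = static_cast<double>(N - n);\n    double Nd  = static_cast<double>(N);\n    while (n >= 2) {\n        const double V = unif(rng);\n        uint64_t S = 0;\n        double quot = top \/ Nd;\n        while (quot > V) {                \/\/ sequential search for the skip length S\n            ++S;\n            top -= 1.0;\n            Nd  -= 1.0;\n            quot *= top \/ Nd;\n        }\n        current += S;                     \/\/ skip S records ...\n        picked.push_back(current++);      \/\/ ... and select the next one\n        Nd -= 1.0;\n        --n;\n    }\n    \/\/ Last record: skip a uniform number of the remaining Nd records.\n    picked.push_back(current + static_cast<uint64_t>(Nd * unif(rng)));\n    return picked;\n}\n\nint main() {\n    std::mt19937_64 rng(42);\n    const std::vector<uint64_t> sample = sampleOrdered(5, 100, rng);\n    return sample.size() == 5 ? 0 : 1;    \/\/ indices come out sorted, ready to use as record offsets\n}\n```\n\nMethod D draws each skip from the same distribution but avoids the inner sequential search, which is why it scales better when the sample is much smaller than the file.\n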
The whole set of programs can be trivially compiled using the included Makefile, with no external dependencies required. Compilation was tested on Mac OS with llvm\/clang and on RedHat Linux using the GNU compiler collection.\n\n#### C++ class library.\n\nThe `libsampFiles` library allows users to easily include in their own software support for sampling loci from most commonly used file formats (.tped and .bed from `plink` , VCF , and HapMap ), as well as a generic text and binary file. Reading and writing in these formats is supported, as well as limited manipulation (see the reference manual for details). Format conversion is not supported at present. Random number generators and population indexing facilities are also available. The library is constructed using hierarchical classes and is built with extensibility in mind. File manipulations are implemented to reduce random-access memory (RAM) use, without unduly reducing execution speed. The trade-offs were tested on a laptop with a solid state drive (SSD) and 16 gigabytes of RAM. Performance may differ on other system types.\n\nIn addition to the software, a directory with example SNP files is provided in the distribution for testing purposes. The project GitHub page () provides a mechanism for users to report problems. Detailed library interface documentation is available at .\n\n#### Sampling variants.\n\nI used the `libsampFiles` library to write a stand-alone program, `sampleSNPs`, that subsamples variant files. All formats mentioned above are supported. The program runs via command line using standard Unix-style flags to pass execution parameters. The README file included with the project and available on the GitHub documentation page has detailed instructions. Sampled SNPs are saved into a file in the same format as the original. Auxiliary files, if present (e.g., .fam and .bim for .bed), are modified or copied as appropriate.\n\n#### Linkage disequilibrium among sampled loci.\n\nAs an example of an application of locus sampling, I implemented a stand-alone program that estimates genome-wide LD decay with between-locus distance. A full accounting of this relationship would require the calculation of linkage disequilibrium statistics for all $N_p = n(n-1)\/2$ pairs of loci, where $n$ is the number of genotyped variants. This task quickly becomes unmanageable as the number of genotypes in the data set grows. One solution, implemented in `plink` , is to calculate LD only among loci falling within a neighborhood window on a chromosome. A complementary approach: implemented here, is to sample $2N_s$ ($N_s$ is the desired number of sampled pairs) loci using Vitter's method and calculate LD between consecutive pairs. Justification for this approach is provided in Appendix A. Once a pair of loci is picked, `sampleLD` calculates two linkage disequilibrium statistics: $r^2$ and $D^{\\prime}$ . Missing data are removed (only individuals successfully genotyped at both loci are considered). If there are not enough genotypes to produce a meaningful result, \"-9\" is reported. If a file with a population index is provided, the program will calculate LD statistics within each population and report them separately.\n\nUnlike `sampleSNPs`, `sampleLD` currently only supports `plink` .bed files as input. The auxiliary .bim and .fam files are also required. 
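\nFor reference, the two statistics reported by `sampleLD` have their usual textbook definitions. The short C++ sketch below computes $r^2$ and $D^{\\prime}$ from phased two-locus haplotype counts; it illustrates the formulas only (the program itself works from unphased `plink` .bed genotypes and handles missing data as described above), and it assumes both loci are polymorphic.\n\n```cpp\n#include <algorithm>\n#include <cmath>\n#include <cstdio>\n\n\/\/ Two-locus linkage disequilibrium from phased haplotype counts nAB, nAb, naB, nab.\n\/\/ Assumes both loci are polymorphic (non-zero allele frequencies at both loci).\nvoid ldStats(double nAB, double nAb, double naB, double nab, double &r2, double &dPrime) {\n    const double n  = nAB + nAb + naB + nab;\n    const double pA = (nAB + nAb) \/ n;      \/\/ allele frequency at locus 1\n    const double pB = (nAB + naB) \/ n;      \/\/ allele frequency at locus 2\n    const double D  = nAB \/ n - pA * pB;    \/\/ coefficient of disequilibrium\n    const double dMax = (D >= 0.0) ? std::min(pA * (1.0 - pB), (1.0 - pA) * pB)\n                                   : std::min(pA * pB, (1.0 - pA) * (1.0 - pB));\n    dPrime = (dMax > 0.0) ? std::fabs(D) \/ dMax : 0.0;\n    r2     = D * D \/ (pA * (1.0 - pA) * pB * (1.0 - pB));\n}\n\nint main() {\n    double r2 = 0.0, dPrime = 0.0;\n    ldStats(40.0, 10.0, 10.0, 40.0, r2, dPrime);  \/\/ toy counts: r2 = 0.36, D-prime = 0.6\n    std::printf(\"r2=%.2f D-prime=%.2f\", r2, dPrime);\n    return 0;\n}\n```\n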
A detailed description of input file requirements, command line flags, and output format are in the README file included with the project and on the documentation page.\n\n## Test data\n\nExecution timing was performed with SNP files extracted from the *Drosophila* genome nexus . I used the Zambia and France populations from that data set. LD measurements were performed on cultivated rice (*Oryza sativa*) genotypes . I extracted a random sample of 100 *indica* (IND) and 100 tropical *japonica* accessions, and filtered out loci with minor allele counts less than two. I estimated the smoothed relationships between LD and distance, with their confidence intervals, using the `ggplot2` R package .\n\n# Results and Discussion\n\nAfter an extensive search I was unable to find existing software that performs uniform sampling of loci from files without replacement while preserving their order. The widely-used command-line tool `plink` does have a function (accessible via the `--thin` flag) that extracts a random sample of SNPs while preserving order. However, the program simply examines each locus and includes it with the specified probability. Thus, the resulting sample varies in size (Supplemental Fig. S).\n\n## Ordered sampling of loci\n\nGiven that no other software appears to be available, I set out to implement a light-weight solution that quickly generates ordered samples without replacement even from very large data sets. The simplest idea is to examine every variant record in turn and decide, given the current number of loci picked, the number remaining in the input file, and the total to be sampled, whether to pick the current one . The selected records are read into memory and saved to the output file. While this solution is obvious and easy to implement, it requires an examination and a (pseudo)random number sampling step for each line in the input file.\n\nAn alternative approach has been proposed by Vitter . The idea is to decide how many loci to skip, given the current number already picked and remaining to be examined. demostrated that this approach (Vitter's Method D) is faster than the simple line-wise decision-making outlined above (referred to as Method S). However, the tests were performed 30 years ago. The files were stored on tape, and random number generation was computationally expensive. Therefore, I implemented both Method S and Method D in C++, using comparable language facilities (see Methods and Supplemental files for details), and tested them in a number of scenarios to determine which scheme is preferable on modern-day computers.\n\nSeveral variables can influence algorithm execution speed. Random (or pseudorandom) number generation is used extensively to generate samples from required distributions. However, code profiling (not shown) revealed that at least the hardware RNG I chose for this implementation (see Methods for details) is never the rate-limiting step, even when files are stored on a fast solid state drive. Rather, it is file input\/output that takes most time during a given execution cycle. There are two parameters to consider when we investigate file read\/write timing. One, storage can be either on a solid-state (SSD) or a spinning drive (HDD). The former is generally faster and allows for random file access. Second, the files can be either in a binary format (I use the popular and highly-compressed `plink` `.bed`), or in plain text. The important difference is that lines in files with text records are in general variable in length. 
Thus, if we want to skip several records we have no choice but to read each row in turn and discard unwanted loci. In contrast, binary formats use fixed-size fields, leading to uniform row sizes. It is then trivial to compute the number of bytes to skip without reading before arriving at the desired record.\n\nGiven that I am interested in creating a tool that can be used on personal workstations and laptops, and since solid state drives have become the standard choice, I focus on execution timing on a laptop (mid-2015 15-inch MacBook Pro) with an SSD. However, I also replicated the results on a 2014 Mac Mini with an HDD with essentially the same results (see Supplemental Figures and ). I first held the input file size constant and varied the number of loci sampled. As shown in Fig.\u00a0, time taken by both Method D and Method S grows approximately linearly with the number of loci sampled. This is the case for both binary and text files. Method D (Vitter's skip-over algorithm) outperforms the simpler Method S several-fold when the number of records is much smaller than the total in a binary file. This is not surprising, given that in this case Method S examines and discards many more loci for each one picked. As expected, the difference largely disappears when we sample from a text file. This is because both methods have to at least read and discard file lines one at a time. Interestingly, I obtained similar results on an HDD (Supplementary Fig.\u00a0 and ), even though a spinning drive should not allow the same level of random access as an SSD. It is notable that in every case working with a binary file is about an order of magnitude faster, even though I am sampling more loci from a bigger file (500,000 loci in the binary *vs* 100,000 in the text file). Finally, although the performance benefit of Method D is not always dramatic, it never underperforms Method S. The relatively small amount of extra time taken by Method S likely reflects the additional operations that are necessary to decide whether to include a given record.\n\nGiven that Vitter's method decides ahead of time how many loci to skip before sampling, I would expect that it should be relatively insensitive to the total size of the input data set. Indeed, this is the case for binary file sampling (Fig.\u00a0, panels A and B). Increasing input file size 2.5-fold results in no measurable rise in execution time. Method S execution time, and that of both methods on a text file (Fig.\u00a0, C and D), grows approximately linearly with input size. Again, Method D is always at least as fast as Method S.\n\nGiven that Method S never consistently outperforms Vitter's Method D, I included only the latter in my implementation. While I do include the facility to read various text variant file formats, it is clear that using the `.bed` binary files, ideally on an SSD, results in optimal performance.\n\n## Linkage disequilibrium distributions\n\nEstimating rates of LD decay with distance on a chromosome are necessary, for example, in genome-wide association studies where such rates determine peak resolution. Because calculating LD between all pairs of loci is infeasible and unnecessary, a typical approach is to estimate linkage statistics in sliding windows. This technique is employed in `plink`. I implemented an alternative approach, picking loci according to Vitter's algorithm (see Methods for details) and then calculating LD statistics between consecutive variants in the sample. 
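\nTo make the pairing step concrete, the following C++ fragment sketches how an ordered sample of $2N_s$ locus indices (drawn, for instance, with a routine like the `sampleOrdered` sketch shown earlier) can be turned into same-chromosome pairs for LD calculation. The function name and data layout are illustrative and not part of `libsampFiles`; the actual program also tops up the sample when cross-chromosome pairs are discarded, as described in Appendix A.\n\n```cpp\n#include <cstdint>\n#include <string>\n#include <utility>\n#include <vector>\n\n\/\/ Form consecutive pairs from an ordered sample of locus indices and keep only\n\/\/ pairs whose two members lie on the same chromosome; LD statistics would then\n\/\/ be computed for each retained pair.\nstd::vector<std::pair<uint64_t, uint64_t>> pairConsecutive(const std::vector<uint64_t> &idx,\n                                                           const std::vector<std::string> &chrOf) {\n    std::vector<std::pair<uint64_t, uint64_t>> pairs;\n    for (size_t i = 0; i + 1 < idx.size(); i += 2) {\n        const uint64_t a = idx[i];\n        const uint64_t b = idx[i + 1];\n        if (chrOf[a] == chrOf[b]) {       \/\/ discard cross-chromosome pairs\n            pairs.emplace_back(a, b);\n        }\n    }\n    return pairs;\n}\n\nint main() {\n    const std::vector<uint64_t> idx = {3, 10, 42, 57};    \/\/ ordered sample of 2*Ns indices\n    const std::vector<std::string> chrOf(100, \"chr1\");    \/\/ toy map: locus index -> chromosome\n    return pairConsecutive(idx, chrOf).size() == 2 ? 0 : 1;\n}\n```\n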
To test my implementation, I used a data set of 638,699 SNPs from the rice high-density array . I first ran `plink` to calculate $r^2$ and $D^{\\prime}$ between loci no more than 500 kb or 20 SNPs apart. This yields more than 12 million locus pairs, stored in a 1.2 gigabyte file. The relationships between linkage disequilibrium and distance are depicted in Fig.\u00a0(A, B). As expected, precision of LD estimates between distant loci diminishes due to undersampling. I then analyzed the same data set using my approach, sampling 30,000 SNP pairs (the resulting file occupies a mere 1.4 megabyte). While the confidence intervals from these estimates are wider (Fig.\u00a0A, B), the pattern of LD decay is the same as that captured by the considerably larger sample set produced by `plink`. Thus, my light-weight approach may be the best option when great precision is not required and computational resources are limited.\n\nAn extra feature of my `sampleLD` program, unavailable in `plink`, is the ability to make separate LD estimates for each population present in a set of individuals. I illustrate this possibility by estimating linkage disequilibrium in *indica* and tropical *japonica* rice varietal groups (Fig.\u00a0C, D). It is well established that LD levels are lower in *indica*. My analyses recapitulate this pattern.\n\nThe software described in this report enables users to quickly generate subsets of large SNP or CNV data sets, opening the door to numerous applications that constrain resources available for genetic data manipulation. This work exemplifies the kinds of approaches needed to speed discovery cycles and empower researchers lacking access to expensive hardware.\n\n# Web resources\n\nProject name:\n\n: sampleSNPs\n\nProject homepage:\n\n: \n\nProject documentation:\n\n: \n\nOperating systems:\n\n: Unix-like (Linux, BSD, Mac OS)\n\nProgramming language:\n\n: C++\n\nOther requirements:\n\n: No dependencies other than the C++11 standard library\n\nLicense:\n\n: BSD three-part\n\n# Acknowledgements\n\nThe idea for this project was spurred by Kevin Lawler's blog post on Vitter's work (). The data for implementation testing were obtained from the *Drosophila* Genome Nexus () and the Rice Diversity website ().\n\n# Supplemental Data\n\n#### Supplemental file 1 \u2013 Timing of sampling schemes\n\nThis is an archive of the directory that contains the R and C++ code, as well as data files, necessary to reproduce the algorithm timing analyses presented in this manuscript. Compilation and running instructions are included.\n\n#### Supplemental file 2 \u2013 LD analyses\n\nThis is an archive of the directory that contains the R code and data files necessary to reproduce the linkage disequilibrium results presented in this report.\n\n#### Supplemental file 3 \u2013 Software source code\n\nThis is an archive of the directory that contains the source code of software described in this paper. Compilation and testing instructions are included. An up-to-date version can be found on GitHub ().\n\n# Appendix A: LD sample scheme derivation\n\nFor the sample to accurately reflect whole-genome values, the probability of picking each pair must be equal. Furthermore, pairs must be sampled without replacement. To derive such a scheme, I order the list of all possible locus pairs as they would appear in a pair-wise relationship matrix. Since LD measures are symmetric, we need only concern ourselves with the upper (or, equivalently, lower) triangle of this matrix. 
The first row of the upper triangle lists the pairings of the first locus on a chromosome with all $n - 1$ subsequent loci other than itself. Next row lists $n - 2$ relationships between the second variant and the rest, excluding itself and the first locus. The process continues until we reach the last locus, which has no additional pairs. Sampling from this list will yield the desired uniform representation of locus pairs. Each variant on the list of pairs is represented $n - i$ times in a row, where $i = 1, \\dots, n-1$ is the index of the locus position. Thus, instead of going through the pairs list (which contains $N_p = n(n - 1)\/2$ elements) we can use a two-step scheme. We start by picking the first locus in a pair by sampling variants with weights reflecting the length of their run in the pairs list. We would then randomly pick another variant from the remaining loci. Finally, we go back to the first SNP or CNV in the pair and use it to sample the jump length to the next locus according to a weighted algorithm and repeat the process until we have the desired number of pairs. The initial sampling weight for locus $i$ under this scheme is\n\n$$w_i = \\dfrac{p_i}{\\sum_i p_i},$$ where $p_i$ is the probability of sampling locus $i$. Since, as mentioned above, each variant is represented $n - i$ times on the list,\n\n$$p_i = \\dfrac{n - i}{N_p} = \\dfrac{n - i}{n(n - 1)\/2}$$ Since $p_i$ are probabilities, $\\sum_i p_i = 1$. This leads to the expression for $w_i$: $$\\begin{aligned}\nw_i &=\\quad \\dfrac{p_i}{\\sum_i p_i}\\\\\n &=\\quad p_i\\\\\n &=\\quad \\dfrac{2(n - i)}{n(n - 1)}\\\\\n &=\\quad \\dfrac{2}{n-1} - \\dfrac{2i}{n(n-1)}\\\\\n &\\approx\\quad \\dfrac{2}{n} - \\dfrac{2i}{n^2} \\quad\\text{when $n$ is large}\n\\end{aligned}$$ Thus, the deviation from an equal-weight random sampling (with all $w_i = \\tfrac{1}{n}$) depends solely on the value approximately $\\tfrac{2i}{n^2}$, which is tiny for large data sets ($n \\ge 100,000$) we typically encounter.\n\nAccording to the scheme presented above, once we have the first locus in a pair, we would then sample randomly from the loci further down on the chromosome to obtain the second variant for LD calculations. The next round would then require us to go back in the file to the first locus in the pair and continue with our scheme. This step is potentially computationally expensive, especially for text files with variable-width lines. To eliminate this complication, I further simplify the algorithm by instead using Vitter's method to sample $2N_s$ ($N_s$ is the desired number of sampled pairs) loci. I then calculate LD between consecutive pairs of variants. A slight correction is needed only when more than one chromosome is present in the data set. In such cases, locus pairs that are located on different chromosomes are discarded and the additional pairs are sampled to restore the total to the required value. 
The resulting scheme approximates true uniform sampling very well when data sets are large and sample sizes are relatively small (preliminary tests suggested that sample sizes as large as 1\/3 the total number of SNPs still yield reasonable results).\n\n# Figures\n\n[^1]: Bayesic Research, Ithaca, NY, USA; email@example.com","meta":{"dup_signals":{"dup_doc_count":19,"dup_dump_count":15,"dup_details":{"curated_sources":4,"2021-25":1,"2021-17":1,"2021-10":1,"2020-34":1,"2020-24":1,"2019-47":1,"2019-43":1,"2019-22":1,"2019-09":1,"2018-51":2,"2018-43":1,"2018-34":1,"2018-22":1,"2021-39":1}},"filename":"out\/1711.06325_extract_sampleSNPs.tex.md"},"subset":"arxiv"} +{"text":"abstract: Each year, crowd disasters happen in different areas of the world. How and why do such disasters happen? Are the fatalities caused by relentless behavior of people or a psychological state of panic that makes the crowd 'go mad'? Or are they a tragic consequence of a breakdown of coordination? These and other questions are addressed, based on a qualitative analysis of publicly available videos and materials, which document the planning and organization of the Love Parade in Duisburg, Germany, and the crowd disaster on July 24, 2010. Our analysis reveals a number of misunderstandings that have widely spread. We also provide a new perspective on concepts such as 'intentional pushing', 'mass panic', 'stampede', and 'crowd crushs'. The focus of our analysis is on the contributing causal factors and their mutual interdependencies, not on legal issues or the judgment of personal or institutional responsibilities. Video recordings show that people stumbled and piled up due to a 'domino effect', resulting from a phenomenon called 'crowd turbulence' or 'crowd quake'. This was the consequence of amplifying feedback and cascading effects, which are typical for systemic instabilities. Hence, things can go terribly wrong in spite of no bad intentions from anyone. Comparing the incident in Duisburg with others, we give recommendations to help prevent future crowd disasters. In particular, we introduce a new scale to assess the criticality of conditions in the crowd. This may allow preventative measures to be taken earlier on. Furthermore, we discuss the merits and limitations of citizen science for public investigation, considering that today, almost every event is recorded and reflected in the World Wide Web.\nauthor: Dirk Helbing$^{1,2,3}$; Pratik Mukerji$^1$\nbibliography: loveparade.bib\ndate: Received: date \/ Accepted: date\ntitle: Crowd Disasters as Systemic Failures: Analysis of the Love Parade Disaster\n\n# Introduction\n\nCrowd disasters are known since at least the Roman Empire. As a consequence, building codes for stadia were developed. The Coliseum in Rome, Italy, which is considered to be one of the seven world wonders, is probably the best known example of Roman building experience. While it could take up between 50,000 and 73,000 visitors, it had 76 numbered entrances, and visitors exited through the same gate through which they had entered. In fact, exits were located side by side, around the entire circumference of the Coliseum. As a consequence, the Coliseum could be evacuated within just 5 minutes, an efficiency that is not even reached by modern stadia due to their smaller number of exits.\n\nBuilding codes and regulations for mass events have also been written and updated after recent crowd disasters, such as the ones in Bradford (1985) or Hillsborough, Sheffield (1989) . 
Today's knowledge about the dynamics of crowds is considerable and summarized in Refs. . Furthermore, a lot of experience in organizing safer mass events has recently been gained from the organization of religious pilgrimage . In recent years, there is also a quickly growing body of literature on evacuation experiments and pedestrian simulations , and various related commercial software products are now available. Thus, how was it possible that 21 people died and more than 500 were injured during the Love Parade on July 24, 2010?\n\nA crucial point for the safety of mass events is that they are (or at least should be) organized in a way that is robust against many kinds of disturbances (such as weather conditions, human errors, etc.). This is why the organization of a mass event includes the elaboration of contingency plans. Why then can crowd disasters still happen?\n\nThis paper will reveal that the Love Parade disaster was not the result of a single mistake. We will rather show that the Love Parade disaster resulted from the interaction of *several* contributing factors. It is probably the first time that a detailed analysis can be performed with publicly available documents: not just investigation reports by public authorities and the media, but also maps from Google Earth and 360 degree photographs , videos accessible through YouTube , documents released by Wikipedia and Wikileaks , and other sources. In some sense, this opens up a new age of public investigation. However, to avoid misunderstandings, we would like to underline that our analysis focuses on the course of events and causal interdependencies among them, while they do not draw any conclusions regarding legal issues or personal or institutional responsibilities, which must be judged by other experts (see, for example, Ref. ).\n\nThe remainder of this paper is structured as follows: Section 2 provides an overview of the situation before and during the Love Parade disaster. This includes a historical background, a description of the festival area (including in- and outflows), and a timeline reconstructed from many video recordings. Section 3 will analyze various factors contributing to the disaster, while Section 4 will focus on causal interdependencies and interaction effects. Section 5 discusses our findings and Section 6 concludes with lessons learned for the organization of future mass events. The novelty of this paper is four-fold: it concerns (1) the structured analysis of large amounts of publicly available video recordings of a disaster, (2) the interpretation of the disaster as a systemic failure (where the interaction of various factors created a systemic instability, causing an overall loss of control), (3) a revision of common views about crowd disasters, and (4) the introduction of a scale reflecting the criticality of crowd conditions (and proposed counter-measures).\n\n# Overview of the Situation\n\nThe following section will try to give a short overview of the situation during the Love Parade in Duisburg and the planning beforehand. A large number of documents are now publicly available (see Ref. for a collection of links). This includes the planning documents , the event log of the regulatory authority of the city of Duisburg , and the evacuation analysis . Publicly accessible materials and eye witness reports now amount to several hundred pages and more than 500 video recordings . This useful collection of materials is the result of the efforts of many volunteers. 
It is certainly not possible (but also not the purpose) of this article to give a complete representation of materials. We will rather focus on the most relevant details in order to avoid a distraction of the reader from the main factors that have contributed to the disaster.\n\nThe interested reader is invited to gain a more complete picture himself or herself, based on the media reports provided in Refs. and documentaries of several TV channels . The view of the organizer is presented in Ref. . Further video documentations are available from private persons . An interpretation of the events, overlayed to a satellite picture, can be found in Ref. .\n\nIn order to make an independent assessment possible, our own analysis will largely refer to authentic materials that are publicly accessible. Videos of a subset of surveillance cameras are available until 16:40 . Timelines can be found in Refs. . Complementary to this article, we provide a time-ordered and geo-located collection of videos from visitors of the Love Parade . A YouTube channel with videos of the Love Parade exists as well . The collection contains further videos. Many of these videos have been synchronized , and some of them have been cut together in the form of multi-view videos documenting the course of events . A set of highly relevant private videos around the time of the disaster can be found in Refs. .\n\nNote that, when referring to secondary sources (such as public media reports), we will sometimes use wordings such as \"apparently\" or \"seems to\", in order to indicate that access to primary sources would be desirable for an in-depth analysis.\n\n## History of the Love Parade\n\nThe Love Parade is a popular electronic dance music festival in Germany that was first organized in Berlin in 1989, and annually repeated in the same city until 2003. The events in 2004 and 2005 had to be cancelled because of funding problems and a coordinated opposition of political parties (e.g. related to the waste resulting from the event) . In 2006, the parade made a comeback with the support of a fitness studio. The Love Parade in summer 2007 was again planned for Berlin, but the event was cancelled, since the Senate of Berlin did not issue the necessary permits on time. After negotiations with several German cities, it was then decided to move the Love Parade to the Ruhr Area, an agglomerate of major German cities, in the next years. The first of these events took place in Essen on August 25, 2007, with 1.2 million visitors. In July 2008, it was organized in Dortmund. The 2009 event, planned for Bochum, was cancelled due to security concerns, particularly as a critical situation had apparently occurred the year before . The last Love Parade took place on July 24, 2010, in Duisburg, where 21 people died and more than 500 were injured in a crowd disaster. The chain of events underlying this disaster will be analyzed in the following sections.\n\n## Description of the Festival Area\n\nThe festival area of the Love Parade in 2010 was approximately 100,000 square meters large and located in the area of a previous freight station of the city of Duisburg. For a 360 degree view of the festival area and its surroundings see Ref. . In contrast to the open area concept of the Love Parade in Berlin (see the picture in Ref. 
), the annual Carnival in Cologne, and the 20th World Youth Day gathering with the Catholic Pope in 2005 in Cologne-Marienfeld, Germany , the festival area was constrained by railway tracks on the East and by a freeway on the West. In response to concerns from the regulatory authority that the area would be too small for the expected number of up to 1.4 million expected visitors , the city of Duisburg combined its late approval of the event with the condition to restrict the number of concurrent visitors to 250,000.\n\nTo overcome security issues seen by the regulatory authority (there was some discussion to cancel the event overall), the organizer of the Love Parade decided to fence the whole festival area. This moved the responsibility to the building regulatory agency and required the event to satisfy the \"Versammlungsst\u00e4ttenverordnung\" , which is the German safety regulation for the organization of mass events. However, there were still concerns that the standard safety requirements would not be met. It is conceivable that these concerns were not fully considered due to a desire to approve the event , particularly as Duisburg was nominated as Germany's 'cultural capital' of the year, and the opinion prevailed that the Love Parade would make the cultural program and the city more attractive . To overcome the concerns, an expert opinion was requested from a prominent crowd researcher. The report argued that the festival area could be sufficiently well evacuated in an emergency situation . However, the study did not analyze normal entry and exit conditions in detail.\n\nFigure gives an overview of the festival area. It shows that the festival area could be entered only via a tunnel, \"Karl-Lehr-Stra\u00dfe\", which also served as the only exit from the area. In the middle of that tunnel, there is the main ramp that leads to the festival area. The tunnel and the ramp together determine an inverse T-shaped geometry of in- and outflows. A side ramp in the West (\"Am G\u00fcterbahnhof\") was assigned as an additional exit ramp , but basically not used. The smallest overall diameter of the tunnels in the East and in the West was about 20 meters . The ramp itself was 26 meters wide and 130 meters long . Based on the maximum flow value of 1.225 persons per meter per second , this would imply a hypothetical maximum flow of 114,660 persons per hour and a density of 1.75 persons per square meter, if the entire ramp width was usable. However, the actual capacity was significantly lower than this due to the following factors (see also Sec. ):\n\n1. The maximum possible flow is inconvenient and potentially unsafe, and therefore not suited as a basis for planning .\n\n2. Counterflows are expected to reduce the capacity by $6-14\\%$ , resulting in a maximum hypothetical flow of 98,608 persons per hour.\n\n3. The 90 degree turn to and from the tunnels is expected to reduce the capacity as well.\n\n4. Walking in groups reduces the capacity further .\n\n5. Alcohol and drugs are expected to have a negative impact on capacity as well.\n\n6. 
A considerable amount of capacity must have been lost due to fences , a food stand , and vehicles on the side of the ramp .\n\nThe flow model of the organizer assumed the following numbers :\n\n| Time | Expected inflow\/h | Expected outflow\/h |\n|:------------|:-----------------:|:------------------:|\n| 14:00-15:00 | 55,000 | 10,000 |\n| 15:00-16:00 | 55,000 | 50,000 |\n| 16:00-17:00 | 55,000 | 45,000 |\n| 17:00-18:00 | 90,000 | 55,000 |\n\nExpected inflows and outflows estimated by the organizers (see Ref. for more details). Based on these values, the maximum number of visitors on the festival area was expected to be 235,000 (while a capacity of 250,000 was approved and more than 1 million visitors were expected, according to announcements before and during the event ). Estimates based on surveillance videos of camera 13 suggest that the actual flows were considerably below the values in the above table. According to Ref. , the inflow in the time period between 14:00 and 15:00 varied between 280 and 600 persons per minute and the outflow between 6 and 80 persons per minute. Between 15:00 and 15:40, it varied between 450 and 750 persons per minute and the outflow between 40 and 250 persons per minute. This is 30-50% below expectations of the organizer of the Love Parade and implies a maximum number of visitors on the festival area of about 175,000.\n\nAccording to Table , between 17:00 and 18:00 the organizers expected an inflow of 90,000 and an outflow of 55,000 people, which could not have been handled by the wide ramp without the use of suitable crowd control. Problems had to be expected already for much smaller flow rates, as there were vehicles and a food stand as well as fences on the ramp, which must have reduced its capacity considerably. This risk factor certainly had to be carefully considered by the crowd management concept. In fact, the side ramp (see Fig. ) was attributed as an additional exit ramp, and the organizational concept foresaw the possibility to reduce the visitor flows through 'isolating devices' (access control points), which were located in front of the tunnel entrances . Despite this, access control was given up intermittently because of the large pressure from incoming visitors (see Ref. and Tables to ). The festival area itself was apparently not overcrowded (see caption of Table and aerial photographs ). So, why and how did the crowd disaster happen in the inverse T-section formed by the tunnel and the ramp, even though the visitor flows were apparently smaller than expected (see Table ) and a more than 3,000 people strong police force was on duty? To address this question, we will first present an expert opinion on the crowd disaster. Then, we will summarize the course of events, and analyze the contributing factors in more detail.\n\n## Expert Report by Prof. Dr. G. Keith Still\n\nAn expert report dated December 9, 2011, which became public in February 2012 , analyzes the implications of the flow model presented in Table . In the following, we summarize the essence of this report in our own words:\n\n1. Safe crowd conditions can be usually assumed for densities up to 2-3 persons per meter and minute and a maximum acceptable flow of 82 persons per meter and minute (which is considerably below the maximum possible flow) .\n\n2. All areas, in which higher crowd densities may occur or where many people may accumulate, must be analyzed for risks.\n\n3. The safety concept must list those risks and also, who is responsible to handle them. 
The organizational structure (in particular, who takes what kinds of decisions) must be fixed before the event. Particular attention must be paid to crowd management and communication (loud speakers, signs, maps and other plans).\n\n4. All authorities involved in the organization of the event are responsible for the safety of the crowd. The division of responsibility should be regulated in the concept of the event. A mass event should not be approved, if it does not satisfy the applicable safety regulations.\n\n5. The reason for most crowd disasters in the past was a failure to regulate the flow of people in high throughput areas.\n\n6. The organization of an event needs plans for normal operation, but also contingency plans for all kinds of incidents.\n\n7. There are basically three ways of influencing the safety of crowds: design, information, and crowd management.\n\n8. At the Love Parade in Duisburg, the capacity of the main ramp to and from the festival area was given by the minimum usable width of the ramp. Due to two triangular fence structures , which were apparently not shown in the maps, the effective width of the ramp was only 10.59 meters. According to the expert report, this implies a maximum safe flow of 10.59 meters $\\times$ 82 persons per meter and minute $\\times$ 60 minutes = 52,103 persons per hour. However, the maximum expected flow between 17:00 and 18:00 was 145,000 persons per hour, which would require a width of 29.5 meters. Therefore, at the Love Parade in Duisburg, problems with the in- and outflows and a critical accumulation of people had to be expected.\n\n9. Once the crowd density exceeds between 4 or 5 persons per square meter, congestion can build up quickly, which implies high risks for people to stumble or fall (particularly if the ground is uneven). Therefore, injuries can easily happen.\n\n10. People in a dense crowd cannot see what happens a few meters away from them, and they are not aware of the pressure in front.\n\n11. The density, noise, and chaos in a dense crowd cause a natural desire to leave the crowd. Due to a lack of suitable crowd control and guidance, visitors of the Love Parade in Duisburg could only see a narrow staircase as a possible emergency exit (see Fig. ). When trying to get there, the pressure towards the staircase increased and eventually triggered the crowd disaster.\n\nThe analysis of the effective capacity of the main ramp suggests that problems on the ramp were foreseeable, and the question arises, why the obstacles were placed there. However, a complete assessment should also consider the existence of the side ramp (see Fig. ). Moreover, due to the applied access control, the flows on the main ramp did not reach the expected flows by far. This can be directly concluded from the fact that there was never any significant congestion between the two triangular obstacles defining the narrowest part of the ramp, before the flow was controlled in this area from 16:02 on; this is clearly visible in the surveillance videos . An active bottleneck, in contrast, would be characterized by the formation of a queue .\n\nQueues of people did not form in the middle of the ramp, but rather at the upper end, where visitors were trying to enter the festival area. This, however, is not the location where the crowd disaster happened. Therefore, while one *had* to expect problems in the middle of the ramp where the triangular obstacles were located, the crowd disaster was actually *not* caused by those obstacles. 
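\nAs a quick consistency check, the capacity figures quoted in this and the preceding subsection follow from elementary arithmetic. The minimal C++ snippet below merely restates the numbers already given above; it is added for illustration and is not part of the cited expert report.\n\n```cpp\n#include <cstdio>\n\n\/\/ Reproduces the ramp flow-capacity figures quoted in the text.\nint main() {\n    const double fullWidth      = 26.0;     \/\/ ramp width in meters\n    const double effectiveWidth = 10.59;    \/\/ usable width between the fence structures, in meters\n    const double maxFlow        = 1.225;    \/\/ maximum flow, persons per meter per second\n    const double safeFlow       = 82.0;     \/\/ maximum safe flow, persons per meter per minute\n    const double expectedFlow   = 145000.0; \/\/ expected flow between 17:00 and 18:00, persons per hour\n\n    std::printf(\"hypothetical maximum: %.0f persons per hour\", fullWidth * maxFlow * 3600.0);\n    std::printf(\"; maximum safe flow: %.0f persons per hour\", effectiveWidth * safeFlow * 60.0);\n    std::printf(\"; required width for the expected flow: %.1f m\", expectedFlow \/ (safeFlow * 60.0));\n    return 0;                               \/\/ prints 114660, 52103, and 29.5, as quoted in the text\n}\n```\n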
\n\nThe analysis of the effective capacity of the main ramp suggests that problems on the ramp were foreseeable, and the question arises why the obstacles were placed there. However, a complete assessment should also consider the existence of the side ramp (see Fig. ). Moreover, due to the applied access control, the flows on the main ramp fell far short of the expected values. This can be concluded directly from the fact that there was never any significant congestion between the two triangular obstacles defining the narrowest part of the ramp before the flow was controlled in this area from 16:02 on; this is clearly visible in the surveillance videos. An active bottleneck, in contrast, would be characterized by the formation of a queue.\n\nQueues of people did not form in the middle of the ramp, but rather at the upper end, where visitors were trying to enter the festival area. This, however, is not the location where the crowd disaster happened. Therefore, while one *had* to expect problems in the middle of the ramp where the triangular obstacles were located, the crowd disaster was actually *not* caused by those obstacles. The course of events that resulted in the crowd disaster involved many contributing factors, as we will show in the following. This conclusion of our study is in line with a quote referring to the Hillsborough disaster of 1989, which apparently goes back to the Archbishop of York and can be found in Keith Still's PhD thesis: \"Events of the magnitude of Hillsborough don't usually happen just for one single reason, nor is it usually possible to pin the blame on one single scapegoat... Disasters happen because a whole series of mistakes, misjudgements and mischance happen to come together in a deadly combination.\" This should be kept in mind when Keith Still's expert report on the crowd disaster in Duisburg points out that it is merely based on the evidence presented to him and that it answers only the questions posed to him.\n\n## Timeline\n\nThe chronology presented in Table is an abbreviated version of the timeline that was originally provided by the organizers of the Love Parade together with their documentary movie. It is largely supported by the surveillance videos and other public sources. Additional points will be discussed afterwards.\n\n| | |\n|:---|:---|\n| 12:02 | The festival area is opened. Visitors can enter the area via the access control points from East and West via the tunnel. |\n| 13:00 | The inflow is reduced by closing 10 of 16 isolating devices, both at the East and the West entrance towards the tunnel. |\n| 13:45-14:15 | No important disturbances or queues of visitor flows occur in the entry area. |\n| Around 14:00 | Official start of the Love Parade. |\n| 14:15-14:30 | The concentration of visitors increases at the end of the entrance ramp towards the festival area (due to obstructions by 'floats', i.e. moving music trucks). |\n| 14:30-15:15 | The crowd manager tries to request support from the police. The organizer states that the person responsible for connecting to the police (the 'liaison officer') did not have a working walkie-talkie or mobile phone. |\n| 14:30-15:06 | The visitor flow on the ramp and from the West increases. |\n| Around 15:00 | Reduction of the visitor flow by closing as many isolating devices as possible. |\n| 15:12-15:34 | Change of police shifts. Five police cars drive into the ramp area. |\n| 15:31 | Visitors ignore the fence on the side of the main ramp, following police forces, who have temporarily opened it. Shortly afterwards, visitors also overcome fences on the other side of the ramp, which were meant to prevent them from climbing the steep slope up to the festival area. |\n| 15:50 | A first chain of police forces (police cordon) is formed in front of the side ramp, blocking in- and outflows in the West (see cordon 1a in Fig. ). |\n| 15:50-15:57 | A second police cordon closes the tunnel to the East (see cordon 2 in Fig. ). |\n| Around 16:02 | There is a sudden strong visitor flow towards the festival area from the West. The first police cordon is moved behind the side ramp (see cordon 1b in Fig. ). |\n| From 16:02 | Police forces start to control the flows to (and from) the festival area in the middle of the ramp (where the ramp is narrowest due to some fences). Queues start to form on both sides of the resulting bottleneck. |\n| Around 16:06 | There are just a few visitors between the three police cordons. |\n| Around 16:07 | A jam of visitors forms in the West part of the tunnel. |\n| Around 16:09 | A jam of visitors trying to exit the festival area forms above the chain of police forces on the ramp.
|\n| 16:12-16:28 | The third police cordon is completed (see cordon 3 in Fig. ). It stops the in- and outflows completely, where the fences narrow down the ramp. |\n| Around 16:13 | The small ramp is opened as entrance to the festival area. Visitors climb over fences. |\n| Around 16:14 | The second police cordon in the East opens up, and visitors enter the area of the big ramp from below . |\n| Around 16:17 | First visitors try to enter the festival area via a narrow staircase connecting the lower part of the ramp with the festival area on top . Afterwards, the staircase is blocked by two security people . |\n| Around 16:21 | The first police cordon in the West dissolves . The previously waiting visitors move towards the ramp and encounter there the dense flow of visitors coming from the East. |\n| 16:22 | First people climb the pole . |\n| 16:22-16:24 | The third police cordon still keeps the ramp closed, while the pressure increases from both sides (i.e. inflow and outflow). |\n| 16:24-16:28 | The third police cordon is dissolved . |\n| Around 16:27 | The narrow staircase is used by people to get up to the festival area . Someone climbs on top of a traffic sign . |\n| 16:31-16:37 | A fourth police cordon is formed in the upper area of the ramp . At the same time, the density in the lower area of the ramp increases steadily. |\n| After 16:40 | The situation gets out of control. More and more visitors try to get up to the festival area via the small staircase, the pole and a container (used by the crowd management, located at the lower end of the ramp in the South). |\n\nTimeline according to the organizers of the Love Parade.\n\nThe video recordings of the surveillance cameras and the related chronology, which were publicly provided by the organizer, end at 16:40 (\"in respect of the victims\"). Tables and present additional information that is relevant for a reconstruction of the causes of the crowd disaster. A time-ordered, geo-coded video link collection supplementing this paper allows the readers to gain an independent impression .\n\n| | |\n|:---|:---|\n| 8:03 | The police receive an e-mail informing them about the official approval of the Love Parade . |\n| Until 12:00 | The construction work (leveling work) of a bulldozer on the festival ground takes longer than planned and delays the opening of the festival for approximately one hour . |\n| 13:33 | 20,000 techno fans are waiting in the West and are creating a lot of pressure to get in . |\n| 13:44 | The police are worried that the access point may be overrun . |\n| Around 14:00 | A police officer asks the crowd management to make a loudspeaker announcement, but this cannot be done, because there is no working loudspeaker equipment despite requirements to have one . |\n| After 14:03 | Visitors are obstructed by floats (music trucks), while trying to enter the festival area from the ramp . |\n| 14:42 | The obstruction by the floats on the festival area causes a jam of arriving visitors on the ramp almost up to the tunnel . |\n| 14:52 | For some time, it is not possible to enter the festival area from the ramp . |\n| 15:06 | The minister of interior visits the crisis management team . |\n| 15:30-18:00 | Mobile phones do not work due to an overload of the mobile phone networks . |\n| From 15:31 | Visitors start to climb the slope in the West of the main ramp and one minute later in the East to get to the festival area . |\n| Around 16:00 | Turmoil and critical crowd conditions occur in front of the access points. 
A policeman instructs the crowd management to open the access point in the West. The access point in the East is intermittently opened to reduce the pressure in the crowd. |\n| 16:31 | A fence at the West side of the tunnel is opened to allow an emergency vehicle to enter. Hundreds of visitors make use of the resulting gap to enter the tunnel. |\n| Around 16:30 | Visitors overcome fences in the tunnel. |\n| 16:35-16:43 | People scream for help and shout at others to hurry up; some seem to panic, but others try to calm them down; the situation changes quickly: people alternate between screaming and laughing; some people manage to climb the staircase, but there is still no continuous flow of people on the staircase. People scream that they are about to die. The traffic sign is already bent. People shout from above that those on the narrow staircase should move on. |\n| Around 16:36 | Crowd turbulence and a critical situation around the pole. |\n| Starting 16:38 | Police are limiting the number of people on the staircase (usually 2 or 3 at a time), but make sure that people do not stop on the staircase. |\n| Around 16:40 | An unconscious woman is passed on to the narrow staircase and lifted up. A sparse, slowly moving crowd in the tunnel moves towards the festival area. |\n| Starting approx. 16:40 | Police cars in the city make loudspeaker announcements that the festival area is completely full and will not be accessible to further visitors for the rest of the day. |\n| Around 16:44 | Some people climb a pole and the narrow staircase next to the ramp (see Fig. ). Several people try to lift themselves out of the crowd by climbing a billboard. Many seem to be in trouble between the staircase and the tunnel. |\n| 16:47 | Interview with the Love Parade organizer, who does not seem to be aware of how critical the situation is. |\n| Around 16:48 | A command is given to stop inflows to the tunnel and the ramp area completely. It is executed within minutes. Sound of police sirens; some people have fallen to the ground and raise their hands into the air for help. |\n| Around 16:50 | An emergency vehicle enters the ramp area through the tunnel and opens its sliding door. An interaction between the crowd and people in the emergency vehicle takes place. The trouble between the staircase and tunnel is becoming more and more serious. A video from the West looking down on the crowd shows shock waves in the crowd. Police forces are having a hard time holding back a fence at the container, which is used by the crowd management. |\n\nFurther relevant events.\n\n| | |\n|:---|:---|\n| Starting 16:53 | The emergency vehicle stops in the middle of the crowd. Strong shock waves occur all over the crowd and push people to the ground between the tunnel and the staircase. Arms are lifted up and people are screaming. A group of people is aggressively pushing their way towards the tunnel (see Ref. between minutes 1:28 and 1:35). Some people are crawling on top of others to get towards the staircase. A helicopter flies overhead. Someone fixes a rope above the tunnel to allow people to climb up. |\n| 16:54-17:03 | Some people get pulled up to the narrow staircase. A ladder is lowered down to the container at the South end of the ramp, and a woman, who seems to be hurt, is lying on the container. |\n| Starting 16:57 | People are pulled up one by one via the container. People in the crowd are being pushed around. A few people climb onto other people, trying to get out of the crowd.
A woman is screaming loudly. |\n| Starting 16:58 | The situation is extremely crowded. Some people scramble up the narrow staircase. Many people yell for help. |\n| Starting 16:59 | More people are pulled up from the crowded container to the festival area above. Security guards and police walk along the East side. A police officer is filming. An ambulance is approaching on the freeway in the West. |\n| Starting 17:01 | View of emergency forces near the staircase area. |\n| Starting 17:02 | People scramble up the stairs. Many people are yelling for help. The situation is extremely crowded. Police attempt to control the crowd. |\n| 17:02 | The first victims are reported on the ramp. |\n| Starting 17:03 | The stairs are clearing slightly, and some people are able to get up. |\n| Starting 17:03 | A man is trying to grab people and pull them up on the South over the container. Police hold the fence back. An orange ladder is used to get people out via the container. |\n| Around 17:04 | Seven policemen are talking to a few people. Two are helping someone on the ground. |\n| Starting 17:05 | A view from the tunnel shows some people climbing up over the container, also with the help of ropes. It seems that people in the tunnel behind are still reasonably fine. Some of them appear to be dancing. |\n| Starting 17:05 | More people are able to get up via the staircase. The density in the ramp area is reduced, and the police are turning around some people at the back of the crowd who are still trying to get to the stairs. |\n| Starting 17:05 | A crowd of people has fallen in front of the stairs, raising their arms. Some rescue workers and festival attendees are pulling people out. One policeman tries to hold back the crowd. An emergency vehicle is guided to the ramp area by the police, coming from the East tunnel. |\n| Starting 17:07 | The stairs are still crowded. Someone is shouting to the police for help. Some policemen on the stairs help people up. |\n| Starting 17:08 | Someone is yelling at the police. People are pulled out of the fallen crowd, and some receive first aid. The crowd below the staircase seems \"cleared\" by the end of the video, and there is a considerable number of police and rescue forces. |\n| Starting 17:08 | People can be seen lying on top of each other. The situation is still crowded, but the density eventually reduces. |\n| Starting 17:09 | The situation continues to be crowded, but people are starting to move more smoothly up the stairs. The area around the fallen people empties. |\n| 17:15 | The operation room of the city of Duisburg does not seem to be aware of the critical situation. It still calls the Love Parade a big success. |\n| Starting 17:16 | The situation on the ramp has cleared up, but the group of fallen people still seems to be without professional help. A rescue crew appears in the South-West corner. A person is lying unconscious on the ground. Many people try to resuscitate others. Fallen visitors are pulled out of the pile of people. |\n| Around 17:20 | The crowd has mostly dissolved. Fire and ambulance cars are parked in the South of the ramp. A woman tries to provide first aid to a man in the South-West corner. At least two other people provide first aid to people on the ground. |\n| Around 18:00 | It is decided not to terminate the Love Parade, to avoid further critical situations (by evacuating the festival area too quickly).
|\n\nFurther relevant events (continued).\n\nAn overview of the videos (as well as the locations and times when they were taken) is provided on a supplementary webpage. However, we would like to point out that the times provided on the videos or in the respective video portal may not always be exact. A synchronized video collection is now also available.\n\n# Contributing Factors\n\nAfter the occurrence of a disaster, it is natural to ask who is responsible. In many cases, people try to find one person or organization (the 'scapegoat') to blame. In fact, after the Love Parade disaster, it seems that everybody was blaming everybody else: the visitors, the organizers, the police, the city of Duisburg. What makes things difficult is that nobody is totally right and nobody is totally wrong: in the following, we will argue that it is the interaction of many contributing factors that caused the crowd disaster. Before we discuss the interaction of these factors, however, let us shed more light on some of them in isolation. While doing so, we will address a number of hypotheses regarding the cause of the crowd disaster that were formulated after the event. Given the many victims and pictures reminiscent of a war zone, some people first thought that a terrorist attack with explosives had happened. Others claimed that the fatalities resulted from people falling on top of others after unsuccessfully trying to climb the stairs from the side or the billboard (see Fig. ). Still others blamed the crowd for the outbreak of a 'mass panic' (stampede), or at least some people for improper behavior. The first hypothesis was obviously not true. But what about the others?\n\n## Did the crowd panic?\n\nWhen talking about crowd disasters, public media often use the term 'mass panic', which suggests the occurrence of a stampede as the reason for the disaster (see Ref. and also the name of the link in Ref. ). This suggests that crowd disasters happen because the crowd 'goes mad'. There certainly exist some instances of this kind (such as the stampede in Baghdad on August 25, 2005, due to spreading rumors of an imminent suicide bombing in the crowd, or the stampede in a Chicago nightclub triggered by rumors of a poisonous gas attack). However, the hypothesis of a \"psychological state of panic\" as the reason for crowd disasters has been questioned many times.\n\nWhat evidence do we have for the Love Parade disaster in Duisburg? Has the crowd 'gone mad' because of the influence of alcohol and drugs, or out of impatience to get onto the festival area? At first sight, one may think so, given that a number of visitors climbed over fences, up the pole, and onto the container to reach the festival area. However, as we will see, these activities started at a time when people on the ramp were already exposed to crowded conditions.\n\nLet us discuss this in more detail. The first problems with visitors overcoming fences were reported around 15:31. However, there were reports as early as 13:40 (see Table ), which show that people waiting for access had difficulty breathing and asked for the emergency exits to be opened (which did not happen). These problems demonstrate that the access capacity was far below demand.\n\nProblems related to queues of people are aggravated when queues are long and broad, so that little or no progress is visible. In such situations, people will eventually reduce their distance to others subconsciously.
Although the reduction of distance might be negligible, the so-called 'queuing effect' will create the impression of progress. However, it will also cause a compression of the crowd . When the distance is small, there will be inadvertent body contacts, which can add up and cause unintentional pushing. Note that the transition from an acceptable situation with rare body contacts to a stressful situation with frequent contacts can happen quite abruptly . People may interpret this as intentional pushing, which may trigger stress and aggression. At a certain density, it may also be required to push others away in order to be able to breathe .\n\nIf people have to wait long and are not informed about the reasons for this, they will become impatient and may eventually start to push *intentionally* (because they assume that progress can be accelerated). While most impatient pushing happens in the middle of the queue, the situation usually becomes most critical at the front of the queue (but the people who push cannot see this, and they experience much less crowded conditions).\n\nThe situation is particularly bad behind bottlenecks. These can create 'traps' without any possibility to escape. Such situations must generally be avoided. This also means that flow control is not a solution for every problem. It requires suitable designs and an adaptive operation.\n\nAccording to our assessment, it had to be expected that the access points would have to be opened and fences would eventually be overcome, given that the festival area and the inflow capacity were small (in particular as the access was delayed by leveling works). Waiting times often amounted to several hours, and access to entertainment, food, water, and toilets must have been quite limited outside the festival area.\n\nNevertheless, the problems on the ramp were even more serious than at the access points. They were related to the low inflow to the festival area (see Table ). An analysis of surveillance videos suggests that the floats (i.e. the moving music trucks) 'pulled' visitors along with them, as expected by the planners, but this was not apparently effective enough. After the crowd disaster, it was sometimes claimed that the floats even obstructed the inflow of arriving visitors. While the inflow never stopped completely before the cordons were established , the queue forming at the top of the ramp varied considerably over time . The inflow was particularly low, when a float was slowed down or stopped around 15:31 in the neighborhood of the ramp .\n\nWhile the organizers considered the possibility of inflow problems , they assumed that these could be handled by 'pushers'[^1] at the upper end of the ramp and that the floats could be used as well to *reduce* them (by attracting the crowd onto the festival area and moving it along with them) . However, there was apparently a lack of a sufficient number of pushers , and the floats did not manage to overcome the inflow problem. It looks like the floats were slowed down by the dense crowd, which in turn obstructed the inflow of visitors, thereby creating an unfavorable feedback loop. The situation was particularly tense from 14:27 to 15:05 and from 15:55 to 17:00; as a consequence, the crowd manager asked for support by the police at 15:16 (or before) . The responsible officer arrived around 15:30, when a jam had formed on the upper part of the ramp . About 10 minutes later, a joint strategy was found. However, already at 15:31 (i.e. 
at the time when one of the floats slowed down in front of the ramp), the situation had deteriorated so much that a large number of visitors decided to overcome fences along the ramp to reach the festival area via the grassy slopes on both sides (see Fig. ). This mitigated the bottleneck situation at the end of the ramp, which could have caused serious problems at a much earlier time. In fact, it seems that the dangerous phenomenon of crowd turbulence (see Sec. ) first occurred in the upper part of the ramp.\n\nAccording to Table , the first visitors used the narrow staircase at 16:17, and around 16:22 the first people climbed the pole on the East side of the lower ramp area to get up to the festival area. The first people climbed the container of the crowd management on the South of the ramp at 16:24. This was the time when the third police cordon was given up. While the initial flow on the staircase was stopped by police, people used the staircase again around 16:27. At about the same time, a person climbed a traffic sign on the ramp (see Fig. ). All of this might have been interpreted by the security staff as signs of an excited crowd that did not behave properly, but the temporal coincidence of these events clearly shows that people were trying to escape from the crowd in any possible way, because they felt they were in danger. In fact, behavior of the crowd that might have been perceived as 'improper' occurred mainly after the first two cordons had to be given up (around 16:14 and 16:21), while the third one was still closed, which caused an increasingly crowded situation on the ramp.\n\nIn videos recorded at the Love Parade, the phenomenon of crowd turbulence starts to appear between 16:34 and 16:36. Around the same time one can hear painful shouts, and some people scream for their lives and for help (see Table ). In this situation, at least some people must have experienced a psychological state of panic. Nevertheless, there were no signs of sudden systematic movements of the crowd in a certain direction, which would indicate a stampede, and no people had 'crawled' on top of others yet. Around 16:40, the forces in the crowd were so critical that a traffic sign was bent, and an unconscious woman was passed on to the narrow staircase. Around 16:45, several people tried to lift themselves out of the crowd by climbing a billboard next to the traffic sign. Approximately at the same time, many people between the billboard and the staircase raised their arms into the air (the movie should be watched in full-screen mode to see this well). This is usually a sign that they have fallen to the ground and are seeking help from others to get back on their feet. We believe that this was the first sign that people were dying or likely to die. At 16:51, an emergency vehicle entered the ramp, but it was taking care of other problems. Still, there were no sudden movements in one direction visible in the crowd that would suggest a stampede. Rather, people next to those screaming for their lives were trying to calm them down by saying \"you will make it,\" and offering them water. Around 16:55, a group of people was pushing their way through the crowd towards the tunnel in the West (in Ref. this can be seen between 1:28 and 1:35 in full-screen mode; the same shows up in Ref. ). Around the same time, some people were trying to 'crawl' over others, hoping to escape the situation.
While this was clearly relentless and potentially harmful behavior, it is not obvious that it killed others, and it occurred under circumstances that were absolutely life-threatening (which should not be misunderstood as a justification of such relentless behavior). The first deaths were reported at 17:02.\n\n## Were people killed by others falling on them from above?\n\nAs most people died between the staircase and the billboard, the public media initially assumed that they were victims of others who had fallen down after unsuccessfully trying to climb the staircase from the side or to climb the billboard. There was even a statement that the staircase should have been \"blasted away\" before the event. However, the videos viewed by us do not provide convincing evidence that falling people were the cause of the disaster. It is also not plausible that a few people falling from the staircase could account for 21 fatalities and more than 500 injured people. Moreover, the fall height was not large, and most victims were not lying at the side of the staircase, but rather between the staircase and the entrance of the tunnel (see \"accident area\" in Fig. ).\n\nNevertheless, the analysis of the video materials and photographs reveals at least three incidents of falling people. According to Ref. , the first one happened around 16:57 at the billboard, the second shortly afterwards at the same place. The third incident happened at the same location at 17:03. Furthermore, one person failed to climb the staircase from the side; around 16:40 this person fell back to the ground from a low height. Apparently, the fall heights were relatively small, and the falling people also did not trigger a stampede of the crowd. Therefore, according to our judgment, it is unlikely that people died as a direct consequence of others falling down from the staircase or billboard.\n\n## Did the Staircase Cause a Crowd Crush?\n\nNevertheless, it is a valid question whether it was a mistake to let people use the staircase. It is likely that people were turning towards the staircase, hoping that it would provide a chance to escape, and that even a minor movement could seriously increase the local pressure in the crowd, given the high density that had already built up on the ramp. In fact, the situation in the crowd was highly problematic not only next to the staircase, but also next to the pole, and it was apparently the use of the pole that inspired the use of the staircase. Nevertheless, the movement of the crowd towards these improvised 'emergency exits' was not large. The videos we have seen do not show the sudden start waves that are typical when a waiting crowd (or jammed traffic) starts moving. Therefore, we doubt that the fatalities were caused just by a relentlessly forward-pushing crowd, which crushed people. Crushing due to extreme densities rather happens when a large crowd moves too quickly towards a narrowing. In Duisburg, however, the crowd disaster happened in a crowd that barely moved forward. Even though the situation on the ramp was critical for the crowd from 16:35 on, it seems that most people had a chance to breathe (at least intermittently) and to recover between stressful periods.
In fact, the recordings change many times between screams of panic and more positive noises.\n\nWe do not question that the density in the crowd became so high at some locations that it could seriously harm health and lives, but it is puzzling that most victims were not found at the side of the staircase, or next to the pole(s) and the container, where they had to be expected in the case of a crowd crush. We also do not deny that the staircase was an attraction point, but we doubt that it can be seen as the immediate cause of the disaster. It may even have played a significant role in the evacuation of the overcrowded ramp, since it served as an emergency exit. However, this emergency exit was used too late and not very efficiently. A continuous flow of people on the staircase was established only around 16:40. Before that, the flow stopped or was blocked many times. The same happened during the most critical period, when many people tried to climb the staircase from the side, which considerably obstructed the flow on it.\n\n## Occurrence of Crowd Turbulence\n\nSo far, the cause of the crowd disaster in Duisburg has still not been revealed. If the crowd did not panic, and people did not die from others falling on them, and a rush towards the narrow staircase did not cause the crowd disaster, what then was the reason for it? The answer lies in the dynamics of the crowd, which unintentionally emerged when the density became too high. John Fruin describes the situation as follows: \"At occupancies of about 7 persons per square meter the crowd becomes almost a fluid mass. Shock waves can be propagated through the mass, sufficient to ... propel them distances of 3 meters or more... . People may be literally lifted out of their shoes, and have clothing torn off. Intense crowd pressures, exacerbated by anxiety, make it difficult to breathe, which may finally cause compressive asphyxia. The heat and the thermal insulation of surrounding bodies cause some to be weakened and faint. Access to those who fall is impossible. Removal of those in distress can only be accomplished by lifting them up and passing them overhead to the exterior of the crowd.\"\n\nIn fact, suffocation was diagnosed as the reason for the death of people during the Love Parade disaster. In simple words, this means that the lungs of the victims were compressed so much that they were unable to breathe enough to get the required amount of oxygen to survive. Compressive asphyxia was also identified as the cause of death in many other crowd disasters.\n\nAccording to recent studies, it is often not the density alone that kills ('crushes') people, but the particular kind of dynamics that occurs when the density is so high that physical interactions between people inadvertently transfer forces from one body to another. Under such conditions, forces in the crowd can add up. Force chains may form, such that the directions and strengths of the forces acting on the body of an individual in the crowd vary strongly and are hard to predict. As a consequence, an uncontrollable kind of collective dynamics occurs in the crowd, which is called 'crowd turbulence' or 'crowd quake'. The forces in this dynamical state of the crowd can cause various injuries (in particular of the chest, as in crowd crushes). They are so high that they cannot even be controlled by large numbers of police forces. Individuals can handle the situation even less. They are exposed to a large risk of losing balance and stumbling.
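\n\nThe notion of crowd turbulence can also be made quantitative: in the research literature on crowd disasters, a 'crowd pressure', defined as the local density multiplied by the local variance of the velocities, has been used as an indicator of turbulent and therefore dangerous crowd motion. The following minimal sketch illustrates one way such an indicator could be estimated from tracked pedestrian positions; the data layout, the grid resolution, and the synthetic input are our own assumptions and are not taken from the Love Parade footage.\n\n```python\nimport numpy as np\n\ndef crowd_pressure(positions, velocities, cell=1.0):\n    # Local density (persons per square meter) and 'crowd pressure' (density\n    # times velocity variance) per grid cell, for one snapshot of tracked people.\n    #   positions  : (N, 2) array of x and y coordinates in meters\n    #   velocities : (N, 2) array of velocities in meters per second\n    #   cell       : grid resolution in meters (an assumption of this sketch)\n    cells = np.floor(positions / cell).astype(int)\n    result = {}\n    for key in {tuple(c) for c in cells}:\n        mask = np.all(cells == np.array(key), axis=1)\n        density = mask.sum() / cell**2                 # persons per square meter\n        variance = velocities[mask].var(axis=0).sum()  # Var(v_x) + Var(v_y)\n        result[key] = (density, density * variance)\n    return result\n\n# Toy usage with synthetic data; real input would come from video tracking.\nrng = np.random.default_rng(0)\npos = rng.uniform(0, 10, size=(200, 2))   # 200 people on a 10 m x 10 m patch\nvel = rng.normal(0, 0.3, size=(200, 2))   # strongly fluctuating velocities\nworst = max(crowd_pressure(pos, vel).items(), key=lambda kv: kv[1][1])\nprint('highest-pressure cell:', worst)\n```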
\n\nOnce people have fallen, they constitute obstacles to others and are endangered by others falling on top of them, since those falling can no longer control their steps as they wish either. Hence, the surrounding people are likely to stumble as well, which creates a 'domino effect'. The resulting number of falling people may be large. This creates a heap of people, in which nobody can easily get back on their feet again. Those at the bottom have serious difficulty breathing, and they are likely to suffocate if this state lasts too long, given the weight of the others on top of them.\n\nDirectly after the Love Parade disaster, when the situation was far from clear, one of the authors conjectured that 'crowd turbulence' was the likely cause of the fatalities. Eyewitness reports and the analysis of video recordings confirm this hypothesis. Crowd turbulence can be observed in the crowd at least from about 16:34 on around the pole and from 16:39 on in the lower part of the ramp. Before 16:48, a considerable number of people fell to the ground between the tunnel and the staircase, approximately at locations where computer simulations predict the largest crowd pressures (see Fig. ).[^2] The situation deteriorated further around 16:53, when crowd turbulence affected almost the entire width of the ramp, i.e. hundreds or even thousands of people were irregularly moved around by the pressure in the crowd; many of them stumbled and fell on top of each other. The troubled area agrees with the one where most victims were found. Under the weight of others lying on them, they must have eventually suffocated, since there were not enough emergency forces to help them back on their feet in time.\n\nPublic blogs have speculated about the reasons for the layered crowd of fallen people:\n\n1. Did the emergency vehicle driving on the densely crowded ramp trigger the falling?\n\n2. Was there a fence lying on the ramp that was supposed to cover a broken manhole cover?\n\nWhile cars moving through a dense crowd can indeed trigger critical conditions, it seems that people had already fallen to the ground (around 16:48) before the emergency vehicle arrived on the ramp (around 16:50).[^3]\n\nA broken manhole cover or any kind of obstacle lying on the ground would certainly have made it difficult for people to keep their balance and stay on their feet when pushed around by turbulent waves. Such obstacles are dangerous and should certainly not have been located in the bottleneck area (the ramp). While, even without obstacles, it is likely that crowd turbulence would have caused people to fall sooner or later, obstacles can act as 'nucleation points' and thereby possibly trigger an earlier falling of people, which may reduce their chances of survival.\n\n# Causal Interdependencies\n\nWe must now discuss how the conditions that caused the deadly crowd turbulence came about.\n\n## Failure of Flow Control\n\nWhen viewing the area of the Love Parade in Duisburg (see Fig. ), the choice of location appears surprising, since the festival area was relatively small and furthermore constrained by railway tracks on one side (in the East) and by a freeway on the other side (in the West). This becomes particularly clear when comparing the area with the one used during the Love Parades in Berlin (see Ref. ). As this circumstance implied a risk and the bottleneck at the ramp during peak hours was foreseeable (see Sec.
), flow control was crucial for the safety of the Love Parade. However, there was a whole avalanche of problems that accumulated and, thereby, caused the crowd disaster.\n\nThe first problem on the day of the Love Parade occurred when the opening of the festival had to be delayed by approximately one hour due to a delay in the completion of the leveling work (see Table ). Therefore, many visitors must have been queued up already at the time when the festival area was opened. It seems that the organization of the mass event could never make up for this delay.\n\nThe overall inflow capacity was apparently further reduced through obstructions by the floats, which had probably not been anticipated to that extent (see Sec. ). As a consequence of this, access control was necessary already at 13:00 (see Table ), much before the expected peak hours. This further increased the queues and the waiting times. The following quote witnesses the problems : \"We parked the car about 3 kilometers away from the freight station (next to the festival area), and it took us almost 5 hours (!) to get to the Love Parade (festival area). On the way, we were facing blocked roads time and again, fences were carried over us, emergency forces could not get through, people collapsed, ...\"\n\nClearly, visitors of the event must have become impatient, particularly because there was probably a lack of food, drinks and toilets outside of the festival area (since such long waiting times were not anticipated). One could, therefore, expect that it would be difficult to control the inflow. In this connection, it is also worth noting that there was not much entertainment outside the festival area to shorten the psychological waiting time and to relieve stress and impatience. Apparently, there was a stage outside the festival area, which was supposed to absorb some of the visitors, who could not get to the festival area, but for some obscure reason, it was moved to another area, where it attracted only a smaller number of people .\n\nBetween 14:30 and 15:10, the organizers found it difficult to control the inflow with the isolating devices (see Table ). This was probably not just a result of the excessive waiting times, which caused impatience, but possibly also because some of their security people were needed elsewhere (e.g. to improve the outflow from the ramp or to guide VIPs) . As a consequence, the organizers tried to get support by the police .\n\nFor a number of reasons, it seems to have taken a considerable amount of time to get the requested police support. Communication by walkie talkies and mobile phones did not work reliably . There were also no functioning loud speakers at the ramp, as there should have been . Moreover, there was a change of police shifts between 15:12 and 15:34, when the situation started to deteriorate . Various reports suggest that police and organizers were not well coordinated, probably due to the afore-mentioned communication problems. It is also likely that the following emergency operations had not been exercised before. As a consequence, the police may have tried to solve the problem with concepts they were familiar with. They formed several police cordons for flow control. This tactic is often applied to get control of violent crowds. However, it failed during the Love Parade, and we will now analyze why.\n\n## A Lack of Overview of Everybody\n\nAs was pointed out in Sec. 
, when the crowd was trapped in a situation of extreme density, it did not have a chance to get an overview of the situation and possible ways to improve it, in particular to get out of the area. Signs and loudspeaker announcements were not available. The only possible emergency exits they could recognize were the narrow staircase, the pole(s), and the container of the crowd management. They were used accordingly, which was quite reasonable in the more and more dangerous situation that the crowd found itself in.\n\nAt this time, all the hope to get control of the situation rested on the police. The police may have been surprised by the sudden need to take control, which was requested by the crowd manager when difficulties to access the festival area occurred at the upper end of the ramp. The police tried to solve the problem by establishing cordons, but it was soon noticed that police cordons 1a, 2, and 3 (see Fig. ) blocked not only the inflow, but the outflow as well. This is also the reason why cordon 1a was moved behind the side ramp (see cordon 1b in Fig. ), and why police cordon 4 was formed at the upper end of the ramp (after dissolving cordon 3). This would have allowed to re-direct the outflow via the side ramp. However, before these operations could be completed, cordons 1b and 2 had to be given up because of the increasing pressure in the waiting crowd, while cordon 3 was still there .\n\nIt is known that dense counter-flows are unstable and may give rise to mutual blockages, which can cause crowd disasters . For such reasons, it is recommended to separate the flow directions at mass events. Yet, it was not the instability of dense counter-flows which caused the incident in Duisburg. The lack of directional flow separation, however, did not allow one to clear the ramp, after it became crowded by the dissolution of two of the cordons. When cordons 1b and 2 had to be given up, the police suddenly found itself in a situation, where in- and outflows blocked each other, and it was basically impossible to evacuate the ramp in conventional ways, when people quickly accumulated on both sides of cordon 3. A trap without exits or emergency exits resulted, from which people could not get out, and the situation kept getting worse .\n\nFor people in the crowd, it was impossible to gain a sufficient overview of the situation and to find a solution. Police had helicopter surveillance and was filming the ramp from the top. However, it took some time until the criticality of the situation was noticed and evacuation measures were taken. When the evacuation finally became effective, the ramp cleared quickly . But prompt action was delayed by communication problems. It seems that the first loudspeaker announcement could only be made around 17:30, after a loudspeaker vehicle had entered the ramp .\n\n| | |\n|:---|:---|\n| 14:27-15:05, 15:55-17:00 | Queues of arriving visitors form at the upper end of the main ramp, which leads to the festival area. For this case it was planned (1) to use 'pushers' in order to make the people move forward, (2) to close the access points in the East and West in front of the tunnels, (3) to make loudspeaker announcements \\[pp. 20+13\\]. |\n| 15:16 | The crowd manager asks for police support via the liaison officer \\[p. 31\\]. |\n| Around 15:30 | The relevant police officer arrives at the container of the crowd manager \\[p. 31\\]. 
|\n| 15:30-15:40 | Crowd manager and this police officer jointly decide (1) to ask crowd management\/security staff to work as 'pushers' in order to ensure a better inflow into the festival area from the upper end of the ramp, (2) to close the access points for approximately 10 minutes, (3) to form a cordon in the middle of the ramp in order to shield visitors trying to enter the festival area from behind. \\[pp. 20+31\\] |\n| 15:45 | In the discussion with other police officers, this plan is modified towards forming 2 police cordons in the tunnels to the West and to the East \\[p. 22\\]. |\n| 15:50-16:20 | Police cordon 1 is formed in the tunnel in the West (first before the side ramp and then after it from 16:02 on in order to allow people to use the side ramp) \\[p. 21\\]. |\n| 15:57-16:16 | Police cordon 2 is formed in the tunnel in the East \\[p. 21\\]. |\n| 16:01-16:24 | A third police cordon is formed in the middle of the ramp in order to avoid that visitor flows returning from the Love Parade would undermine police cordons 1 and 2 from behind \\[p. 21+22\\]. |\n| Around 16:10 | When arriving at the relevant area of the ramp, the responsible officer discovers that (1) many people are trying to leave the festival area and (2) the expected dissolution of the jam at the upper end of the ramp did not happen within the 10 minute time period foreseen for this. Therefore, the blockage of the inflows by cordons 1 and 2 must be maintained longer than planned. Due to this delay and since the access points must be intermittently opened, the pressure on police cordons 1 and 2 becomes so high that they must be given up \\[p. 23\\]. |\n| 16:24 | Visitors are jammed up on both sides of police cordon 3. The situation becomes extremely crowded \\[p. 24\\]. Therefore, police cordon 3 is dissolved, also because it is \"ineffective\" between two oppositely directed flows \\[pp. 24+34\\]. |\n| 16:31 | A new (transparent) police cordon is formed at the upper end of the ramp from 16:31 on \\[pp. 21+24\\]. It serves to stop the outflow of leaving visitors via the main ramp and to encourage arriving visitors to use the slopes to enter the festival area (see Fig. ). |\n| 16:39 | The fire brigade reports 'panic-like' movements on the ramp with some over-run people \\[p. 25\\]. |\n| 16:40-16:55 | The festival area is closed for newly arriving visitors (by moving vehicles in front of the access points) \\[pp. 25+35\\]. |\n| After evacuation of ramp area | Some densely crowded spots remain around the container, two poles and the narrow staircase. It is not possible to redirect them by words or gestures \\[pp. 34+35\\] |\n\nCourse of events as presented in the police report . The numbers in square brackets correspond to the page numbers of the report.\n\nWhy did the evacuation start so late? The analysis of the police is presented in Table . It seems that first attempts to direct the crowd towards the upper end of the ramp started around 16:40 , but were not very effective . It is true that evacuation attempts take some time, but there was also a lack of efficient means of communication (such as loudspeakers or megaphones). Moreover, we would like to point out the following: In crisis situations, decision-makers are often overwhelmed by the pace of events , mainly for two reasons: First, it takes time to collect information locally, and bring it to the attention of the chief police officer, who then takes a decision and gives commands. 
These are then transmitted down to the local police forces through the command chain. Second, critical situations are often characterized by incomplete, contradictory, and ambiguous information, which makes it difficult to assess the situation correctly and come to the right conclusions.\n\nWhen the situation on the ramp became unbearable and life-threatening, people started to escape via the pole, the container and the staircase next to the ramp. This could have been misinterpreted as aggressive attempts by impatient visitors to storm the festival area, but in reality, it was a sign of an emergency. However, due to the noise level, screams for help were hard to make out. Visitors looking down on the ramp from above (on the East side) around 16:30 also did not sense an emergency. This makes it understandable why pressure relief operations were not yet effective when the crowd disaster was about to start.\n\nOnce the evacuation process on the ramp started, the area emptied quickly. The narrow staircase might also have played an important role as an emergency exit at this time. Others managed to leave the ramp towards the festival area, following the emergency vehicle. However, people close to the staircase were still focused on it. This might have been a result of the 'tunnel vision' that develops when people are stressed. Even when the surrounding crowd had dissolved, it took a long time until those who had fallen to the ground between the tunnel in the West and the staircase got back on their feet, if they managed this at all. In fact, many of them were injured or died.\n\nA lack of overview is typical of crisis situations. During the Love Parade disaster in Duisburg this is, for example, reflected by the fact that, around 15:06, the minister of the interior visited the Love Parade (see Table ), but despite first signs of overcrowding, he left the festival area before the incident. At 16:47, the organizer gave an interview in which he still called the event a success, and as late as 17:15, the city's situation room made a similar statement. Emergency forces also responded late. As a consequence, a triage procedure had to be applied. (This procedure is typical for war zones, major disasters, and terrorist attacks.) Therefore, many people in a critical condition did not receive first aid.\n\n# Discussion\n\nIn the following, we try to gain an integrative view of the causal factors of the crowd disaster, which strictly needs to be distinguished from a legal analysis or a determination of responsibilities. We also want to stress that the main purpose of our analysis is to learn for the future, i.e. to identify factors that require more attention.\n\n## Resilience, Systemic Instabilities, and Cascading Effects\n\nNote that, generally, a good organizational concept should be resilient ('forgiving'), i.e. it should be robust to mistakes and complications. Therefore, many disasters do not have a single causal factor. They are a result of interaction effects. This also applies to the Love Parade disaster which, as we will argue below, can be understood as the result of a systemic instability.[^4] The term 'systemic instability' is used here for situations where small perturbations can trigger a series of events through mutual amplification effects in a way that things eventually get out of control, even if everyone makes their best effort. At the Love Parade, people were dying although nobody wanted this and everyone was trying to prevent deaths.
Other examples of systemic instabilities are:\n\n- spontaneous breakdowns of traffic flows above a certain critical density (even when everyone is driving in a circle and trying hard to maintain a finite speed),\n\n- breakdowns of cooperation in social dilemma situations, which give rise to 'tragedies of the commons',\n\n- political revolutions, and\n\n- financial breakdowns.\n\nMany systemic instabilities come along with cascading effects, which tend to create extreme events: the overload of one component of the system challenges other components, thereby causing problems to propagate through the system. Usually, cascading effects do not occur during normal operation, but are triggered by (random) perturbations or the coincidence of several complicating factors. They tend to occur when the interdependencies in the system exceed a critical strength. For example, cascading effects are observed in traffic jam formation (when the density is too high), in blackouts of power grids, in many kinds of disasters, in the current financial crisis, and in the Arab Spring revolutions.\n\n## What Caused the Crowd Disaster: Causal Interdependencies of Contributing Factors\n\nThe following analysis discusses cascading effects that have (most likely) contributed to the Love Parade disaster in Duisburg (see Fig. for an illustration).\n\n- Berlin declines to host the Love Parade (LP), and other cities take over. The Love Parade moves from city to city, which creates new organizational challenges each time (in more difficult locations than in Berlin with its wide roads and expansion areas). The change of organizational teams makes it difficult to accumulate crowd management experience over many events.\n\n- Bochum has to cancel its Love Parade because it cannot manage the security challenges.\n\n- Duisburg\/Essen is selected as cultural metropole 2010. It is under pressure to come up with an attractive cultural program. This seems to have created a desire to approve the Love Parade.\n\n- The festival area does not provide capacity reserves and implies a number of organizational difficulties. In the tunnel and on the ramp, in- and outflows are not separated, and there is no separate route for emergency vehicles (i.e. they have to use the tunnel as well).\n\n- To overcome security concerns, an evacuation study is commissioned. It mainly focuses on evacuation scenarios, assuming the maximum concurrent number of visitors required by the security concept of the city.[^5]\n\n- Due to the late approval of the event (see Table ), the security concept may have been finished 'last minute' (and vice versa). The likely consequence is that contingency plans may have been insufficient and could not be rehearsed sufficiently. There was probably also not enough time to ensure good coordination between organizers and police forces.\n\n- Due to delays in finishing the leveling work (see Table ), the festival area of the Love Parade is opened later than expected. This implies an early overload of the access points and causes an impatient crowd (particularly as facilities, supplies and entertainment were probably scarce outside the festival area).\n\n- People enter the Love Parade area later and return earlier than expected.\n\n- The interaction of the floats with the crowd does not enable a sufficient inflow to the festival area.
This apparently requires crowd management forces to be moved away from the isolating devices to the end of the ramp in order to improve the inflow; requested VIP support seems to absorb some manpower as well.\n\n- The crowd management faces problems controlling the isolating devices and tries to organize police support.\n\n- There are difficulties in the communication and coordination between organizers and police. Suitable communication means are missing, not used, or not working reliably. Therefore, the feedback between the situation, the crowd management, and the crowd is insufficient.\n\n- Due to communication problems and a change in police shifts, police support may have been delayed. Moreover, it must have been difficult for the new shift to get an overview of the situation.\n\n- Perhaps due to the urgency of the situation, it is decided to form two police cordons in the tunnels leading to the ramp. A third police cordon is established in the middle of the ramp, where fences narrow down the width of the ramp. It is meant to prevent leaving visitors from undermining the police cordons in the tunnel from behind.\n\n- The police cordons in the tunnel are given up, probably because of the high pressure of the arriving crowd. This replaces the problem at the upper end of the ramp with an even bigger problem in the middle of it: A lot of visitors are moving into the lower ramp area through the tunnel, while many others are waiting at the upper end to leave the event. As the third cordon blocks in- and outflows, jams of arriving and leaving visitors quickly grow on both sides of cordon 3. The cordon is dissolved because it is ineffective, and a new police cordon is formed at the upper end of the ramp.\n\n- At this time, the situation in the crowd is already critical. The lack of separation of opposite flow directions makes it difficult to let people out without letting people in. Therefore, it is impossible to evacuate the ramp efficiently.\n\n- People on the ramp try to escape the life-threatening situation via the staircase, the pole(s), and the container (see Sec. ). This may have been misinterpreted as a 'mob' trying to force its way into the festival area, which needs to be controlled. Pressure relief efforts become effective only very late.\n\n- In the absence of separate emergency routes, fences and cordons must be opened to allow an emergency vehicle to pass (see Table ). This creates openings for a further inflow of people.\n\n- The overcrowded situation causes dangerous 'crowd turbulence' (see Sec. ). Many people fall and pile up on top of each other. Emergency forces cannot reach people quickly enough. Twenty-one of them die of suffocation, and more than 500 are injured.\n\n- As an unexpectedly large number of people need help, there are not enough emergency forces at the location of the accident. Therefore, a triage procedure is applied in the tunnel. As a consequence, many people in a critical condition do not receive first aid.\n\n## What Might Have Stopped the Feedback and Cascading Effects\n\nOverall, one gets the impression that problems occurred on all sides (but we admit that it is easier to identify them afterwards than at the time when decisions must be taken on the basis of often limited and imperfect information). The above analysis shows that things went wrong from the very beginning, and that the situation increasingly got out of control over time.
However, we believe that there were also many possibilities to mitigate or overcome problems that contributed to the disaster. Therefore, we will now discuss, how the deadly cascading effect described in the previous subsection might have been stopped or how its size and impact could have been reduced:\n\n- One might have been able to find a better suited area for the organization of the event.\n\n- One could have required higher organizational standards (such as a separation of flow directions).\n\n- The decision to hold the event or not could have been taken earlier. This would have facilitated a better preparation and a better coordination. It would also have reduced the commercial and public pressure in case of deciding against the event.\n\n- Safety and security concerns could have been taken more seriously. The fact that the responsible police officer quit his job could have been seen as advance warning sign.\n\n- Superior contingency plans could have been elaborated, in order to be better prepared for the occurrence of various problems. This applies particularly to the handling of the main bottlenecks of the system: the ramp and the access points.\n\n- If the evacuation study had raised serious concerns, this might have been able to stop the approval of the event.\n\n- The various stakeholders could have foreseen larger safety margins and more reserves (also in terms of staff).\n\n- It might have been possible to work out a different flow concept, which separates in- and outflows. A circulatory flow organization (where people would come in via the tunnels and both ramps, but leave over the closed-down freeway) would have been interesting to consider.\n\n- Obstacles on the ramp (such as the food stand, fences in the way, and police cars) could have been avoided.\n\n- Efforts could have been made to ensure better communication between the different stakeholders (by reserving \\[more\\] priority lines in the mobile phone network) and better communication between the organizers and the crowd (by installing loudspeakers at the ramp and elsewhere). A loudspeaker vehicle could have been moved to the ramp, when it was noticed that no loudspeaker equipment was available on the ramp (around 14:00, see Table ), or megaphones could have been used to communicate with the crowd.\n\n- When it became clear that people had difficulties to enter the festival area and jams formed on the ramp, one might have been able to move the floats further away from the ramp. Moreover, the side ramp could have been used to avoid the jam on the main ramp.\n\n- The use of more 'pushers' might have been able to increase the outflow from the ramp to the festival area (but it is not clear how effective this measure would have been, given that the entrance area to the festival ground was quite packed).\n\n- More emergency forces (rescue units) could have been positioned on the ramp and next to it.\n\n- When it was recognized that the crowd management and control did not work as expected, the first police shift might have been extended.\n\n- When the situation became crowded, cordons could have been established at the isolation devices and at the end of the main ramp. The outflow of people could have been redirected (either via the side ramp or via the emergency exits).\n\n- With loudspeakers or megaphones, people on the overcrowded ramp could have been evacuated earlier and in a more effective way, e.g. 
by organizing an outflow from the ramp to the festival area behind the chain of police cars that were standing on the ramp (see Fig. ). Additionally, a continuous evacuation via the staircase could have been established from 16:15 on (or even from 15:31 on, when people needed to use the slopes to get on the festival area) . Furthermore, the tunnels could have been used to evacuate the ramp, if the flow directions would have been separated.\n\nGiven the above alternatives, the crowd disaster might have been avoided in many ways. Already around 13:00 there were first signs that the crowd management concept would not work as planned (see Table . Between 14:30 and 15:15 it was noticed that the ramp constituted a bottleneck that could get out of control. Around 16:25, people climbing the pole, staircase and container were serious warning signs of a critical situation (see Sec. ). At this time, it would probably have been possible still to evacuate the ramp, if suitable communication tools had been used. However, the ramp emptied only after 17:00.\n\n# Lessons to be Learned and Recommendations\n\n## Summary\n\nOne of the noteworthy points of the Love Parade disaster is that most evidence is available online, which allows many scientists and also the broader public to form an opinion. This dramatically changes the situation compared to many previous disasters, where a lot of evidence is of confidential nature, accessible only to a small number of experts. We believe that the new openness of data can have many beneficial effects on society. This study, for example, hopes to make a contribution to a better understanding of crowd disasters and their avoidance in the future. The accessibility of the materials can also serve organizers of mass events, the police and emergency forces to prepare themselves better.\n\nThrough the analysis of publicly available materials and videos, we could identify many factors that have contributed to the Love Parade disaster. Our judgement is that the capacity of the area of the mass event already implied various problems, which the organizational concept wanted to overcome by crowd control. However, the delayed start of the event and the unexpected obstruction of the inflow to the festival area from the ramp (i.e. two factors which were probably not anticipated) caused queues that were difficult (or impossible) to manage. Already in the organizational phase, but also in the attempt to manage the flows, many problems came together, and the mutual interaction of these problems made the situation worse. In particular, the cordons that were intended to dissolve the jam at the entrance to the festival area did not yield the expected relief. While they might have worked in case of unidirectional flows, the situation became worse due to the fact that a flow of returning visitors encountered an inflow of arriving people without a separation of the flow directions. From the very beginning, the interaction of many factors resulted in cascading effects, which eventually created a situation that got totally out of control (see Fig. ).\n\nOrganizational concepts for mass events are supposed to be robust to the occurrence of single perturbations ('single points of failure'). This in itself, however, does not exclude the possibility that the coincidence or interaction of problems can cause a systemic failure. When certain factors have amplifying effects on other factors (or there are even feedback loops), this can create systemic instabilities. 
We learn from this that, in order to reach a resilient organization of mass events (and actually any complex system), it is not sufficient to ensure the robustness of each contributing factor. One must also study their *interaction* effects, to guarantee that the overall organization is resilient to the coincidence of unfavorable factors as much as possible.\n\nOur study also sheds new light on issues that have been controversially discussed. Immediately after the Love Parade disaster, the behavior of the crowd and the staircase were blamed for the fatalities. However, our analysis yields a different interpretation: the Love Parade incident shows the typical features of crowd disasters, such as the existence of bottlenecks (and therefore the accumulation of large numbers of people), organizational problems, communication failures, problematic decisions, coordination problems, and the occurrence of crowd turbulence as a result of high crowd densities.\n\nIt is likely that the staircase encouraged a movement of the crowd towards it, when people were trying to escape from the life-threatening density in the ramp area, but the collective movement seems to have been small (it is not clearly visible in the video recordings). In any case, effective measures (such as an evacuation of the crowd) should have been taken long before critical conditions developed. Given the high density in the ramp area, the occurrence of crowd turbulence or 'crowd quakes' was unavoidable. In this dynamical state of the crowd, the lives of people are in serious danger, as people will fall sooner or later. The triggering of this deadly dynamics does not require a particular reason.\n\nFurthermore, note that the pushing in the crowd at high densities is not necessarily a result of violent behavior, but of the fact that physical forces are transmitted via the bodies of others and adding up. Under such conditions, it is very difficult to keep control over the motion of one's own body, since one is literally moved around by the crowd. The situation in the crowd is difficult also, because no one has an overview of the scene, and the noise level (as well as the overload of the mobile phone network) make communication largely impossible. While the conditions in the crowd were likely to cause a high level of stress, this was a reasonable response to the life-threatening situation. However, a mass panic was most likely *not* the cause of the Love Parade disaster. The video recordings from the Love Parade do not provide evidence for a stampede of people, while the dangerous phenomenon of crowd turbulence is clearly visible.\n\nNote that crowd disasters during religious pilgrimage in the past recently led to important insights and also to significant improvements of crowd management and control . Many of the lessons learned can also be transferred to other mass events in order to improve their safety. The authors propose to consider the following points (besides the official regulations, of course):\n\n- Large mass events should preferably take place in locations where experience with the management of large crowds already exists for a long time. It should at least involve some experts who have participated in the organization of previous mass events and know how to handle critical situations. 
Local organizing teams should be supported by experienced national or supranational professionals.\n\n- The security concept should be finished, distributed, discussed, and exercised at a pre-specified date well in advance of the event.\n\n- The event must be planned on the basis of the number of expected people, not on the basis of capacity.\n\n- An organizational concept that requires keeping many people out or delays them for hours should be avoided.\n\n- Facilities (e.g. toilets), supply (particularly food and water), as well as entertainment should be ensured also for people on the way to the festival area and for those waiting to enter.\n\n- One should implement ways preventing pressure on decisions that may have impact on the safety and security of people. It should not be possible to ignore qualified minority opinions. Contradictory voices should be documented and seriously addressed.\n\n- Consultants should be encouraged to comment on any critical issues (even beyond the scope of the commissioned analysis).\n\n- An analysis of the expected inflows and outflows (and, hence, number of participants) needs to be performed, considering the possibility of large flow variations. A bottleneck analysis is crucial. It must also take into account moving bottlenecks such as floats, but also the operation of police or emergency vehicles. Confluence, turning and intersection points should be determined. In this context, computer simulations with state-of-the art pedestrian software can be useful, but model parameters must be carefully chosen. Note that computer simulations can often help to identify crowded areas, but they are not sufficient to reveal all kinds of organizational challenges.\n\n- Critical points should be removed, and it must be checked, whether the remaining problems can be safely handled by crowd management and control measures also under adverse conditions. Safety margins (such as capacity reserves) should be foreseen , and detailed contingency plans should be worked out for likely and unlikely events, and exercised. (Contingency plans serve to reduce the need of improvisation and to ensure a quick and effective response to any occurring problems.) Interaction, cascading and side effects of complicating factors should be analyzed as well. Remaining areas and factors of concern must be continuously monitored (e.g. by video surveillance and special software for real-time analytics ). Sufficient security and emergency forces should always be present to remove or at least mitigate problems early on. Delays in response must be avoided, as they tend to reinforce problems, i.e. quick action is often key to effective counter-measures . To stop possible interaction and cascading effects, suitable decoupling strategies should be implemented.\n\n- Pressure relief and evacuation strategies must be prepared for any potentially critical areas. Evacuation measures must be started before an area becomes over-crowded.\n\n- Intersecting flows should be avoided and different flow directions should be separated (as dense counter-flows are unstable and dangerous ). A 'circular' flow organization, preferably with alternative routes, should be considered . Moreover, space for emergency vehicles and operations should be reserved.\n\n- Fences are not good everywhere. They may turn into obstacles and create dangerous situations. Therefore, the use of fences (or cordons) to stop large numbers of people needs to be carefully considered, as they may be ineffective or deteriorate the situation. 
In many cases, it is safer to keep people moving (e.g. by re-routing people) rather than stopping them.\n\n- Situational awareness and well-functioning communication are crucial. Quick information feedback about the situation in any relevant place and about any relevant factor must be ensured. It is important to have an efficient information flow between the different people and institutions involved (organizers, police, emergency forces, crowd, ...).\n\n- In case of problems, the corresponding contingency plan should be applied, and the situation should be continuously (re-)assessed to check the plausibility of the situational analysis, considering possible alternatives.\n\n- It should be considered to give police and emergency forces more autonomous (local) decision-making power and responsibility, particularly when communication is interrupted or quick action is needed.\n\n- Communication must work (both from a technical and an organizational perspective). It is key to detecting, avoiding, and responding to critical situations. Communication is also crucial for the capacity to reduce undesirable interaction effects and to stop dangerous cascading effects.\n\n- Finally, a safety culture must be actively promoted, reminding everyone that problems can always happen. The motto should be: \"Don't take it easy. Always expect the unexpected!\" Preparations for all sorts of surprising situations (including a sudden change of the weather) should be made as much as possible.\n\n## Some Common Misconceptions\n\nAs discussed before, our study questions a number of common views about crowd disasters. This concerns the following points:\n\n1. The word 'pushing' suggests that people would relentlessly push forward towards their goal, not caring at all about others.\n\n2. The concept of 'mass panic' sees a stampede as the origin of the crowd disaster, resulting from a contagious mass psychological effect. It also assumes that the crowd behaves unreasonably.\n\n3. The term 'crushing' suggests that the cause of the crowd disaster is an uncontrolled pushing of a crowd towards a bottleneck, which creates densities so high that the bodies of people are crushed.\n\n4. The word 'trampling' suggests that people walk carelessly over others.\n\nSuch views tend to blame the crowd for the disaster rather than drawing suitable consequences regarding the organization of mass events, the crowd management, and communication. Therefore, recurring disasters may be a consequence of misconceptions about them. In contrast to the above interpretations, our analysis of the crowd disaster in Duisburg suggests the following:\n\n1. *It is the 'queuing effect' which causes a denser and denser queue of people over time, and a lot of pushing in the crowd happens unintentionally.* This is because physical forces start to add up when the density becomes so high that people start to have body contact. Aggravating factors that may lead to intentional pushing are (1) long waiting times without food, water, facilities, and entertainment, (2) the absence of understandable, communicated reasons for the delays, and (3) threatening high-density conditions.\n\n2. *The main danger is the laws of physics, not psychology. People do not normally die because they panic\u2014they panic when their life is in danger.* We do not deny that people get impatient after long waiting times and that some of them also disrespect rules in order to get towards their goal (in particular if these rules do not appear justified to them). 
However, even under extremely critical conditions, people helped each other and behaved quite rationally. They overcame barriers, used slopes, staircases, poles and the container mainly, when this was necessary to evacuate themselves and reduce the density in the crowd. What might have appeared as an unreasonable crowd forcing its way into the festival area may be better interpreted as a crowd trying to find a way out of the dangerous trap it was in. However, despite a rather rational behavior altogether, some individuals suffered from 'tunnel vision', which is a phenomenon that can occur under conditions of stress. This becomes evident from the fact that those standing around the poles, staircase and container, hoping to get out, were not considering alternative emergency routes anymore, even when prompted to them by the police .\n\n3. *One must distinguish between a 'crush' and a 'crowd quake', and between (active) trampling and being trampled.* In a classical crush, people are moving towards a physical bottleneck and die in front of its narrowest point. In a 'crowd quake', there is typically no systematic flow directions, but people are pushed around by fluctuating forces in the crowd. In Duisburg, people's lives were endangered not by a stampede that crushed other people, but by high crowd pressures (defined as density times variability of body movements ). An extreme and fluctuating pressure builds up, when the densities become so high that they cause contact forces between bodies to add up. This ultimately implies the onset of 'crowd turbulence'. Under such conditions, the sizes and directions of forces acting on the bodies of visitors move them around in an uncontrolled way, and people have difficulties keeping their balance; when people stumble and fall, this can be the nucleus of a crowd disaster (see next point).\n\n4. *When trying to avoid the deadly 'domino effect', people may be forced to step on others* . In Duisburg, only a few people were relentlessly 'crawling' or walking over the heads or shoulders of others. This happened around 16:55, when the ultimate inferno of the crowd disaster happened and it was likely that (some) people had already died. Note, however, that many people probably stepped on others who were lying on the ground. Why did they do such a thing? In a dense and shaky crowd, fallen people have difficulties to get up on their feet again. This may cause a 'hole' in the crowd, so that the surrounding people are not anymore counter-balanced: they are pushed from behind, but not anymore from the front. As a consequence, the surrounding people may fall one after another like dominos, causing a pile of people . If they cannot get back on their feet quickly, they are likely to pass out or suffocate, since they cannot breathe anymore under the weight of others piling up on top of them. Therefore, to avoid falling when pushed around by the crowd, people might be forced to step on others. However, under these conditions, they are rather 'walked' than 'walking'. That is, while the passive verb \"being trampled\" is correct, the active form \"trampling\" is misleading.\n\n## Conclusion and \"Natural Laws\" of Crowd Behavior\n\nIt is obvious that situations such as the ones described above must be absolutely avoided. This requires the choice of a suitable location and an adequate preparation of the mass event, an appropriate organization and crowd management, and a quick response to early warning signs, for which information and communication play a key role. 
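For reference, the notion of 'crowd pressure' invoked in the discussion above can be written compactly. The following is a hedged sketch based only on the verbal definition given in the text (density times the variability of body movements); the notation is ours, not the authors':

$$P(\vec{r},t) = \rho(\vec{r},t)\,\mathrm{Var}\big[\vec{v}(\vec{r},t)\big],$$

where $\rho(\vec{r},t)$ is the local crowd density and $\mathrm{Var}[\vec{v}(\vec{r},t)]$ is the local variance of the velocities of body movements. As described above, crowd turbulence sets in when this quantity grows large, even in the absence of any systematic flow direction.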
It is also important to understand that crowd behavior follows certain \"laws of nature\", which result from physical, physiological, psychological and social needs of humans such as sufficient space, food, water, and air, toilet facilities, feeling of safety, perceived progress towards the goal, information, communication, entertainment, etc. An insufficient consideration of such factors can promote disasters, particularly if shortcomings accumulate.\n\n## Advance Warning Signs of Crowd Disasters\n\nTo improve the situational awareness of crowd managers, police and emergency forces, Table lists a number of successive warning signs of increasingly critical crowd conditions.\n\n| | **Observation** | **Assessment** | **Required Action** |\n|:---|:---|:---|:---|\n| **0** | Densities are below 2-3 persons per square meter. | Normal operation at low risk. | Regularly verify normal operation, watch out for perturbations. Make sure that the flow does not exceed the safe value of 82 persons per minute and meter. |\n| **1** | People accumulate. Certain areas become progressively more crowded. | People slow down due to a bottleneck or stop for some reason. | Limit inflows to ensure that the expected extent of accumulation will not be exceeded. Gather information and determine the reasons for the accumulation. Prepare possible counter-measures. Move enough security to the respective area. Inform the responsible police and emergency units. |\n| **2** | Jams of people are forming and growing. | Insufficient outflows may cause serious problems over time (such as high densities), particularly in constraint spaces. | Communicate with the crowd. Promptly take appropriate flow reduction measures such as re-directing people. (Keep in mind that stopping people causes a growing pressure in the crowd and impatience.) Move police and emergency units towards the crowded area(s) in case help will be needed. |\n| **3** | Stop-and-go waves occur (this happens only in dense *moving* crowds). People are pushed. | The continuous flow has broken down. The outflow capacity is considerably reduced. The situation may escalate quickly. | Take suitable counter-measures. Pressure relief strategies (such as opening emergency routes and re-routing inflows) should be taken and people informed about them. Before, any obstacles (such as fences) in the way must be removed. A sufficient number of emergency units and police must be in the critical area and ready take over control in interaction with the crowd management. |\n| **4** | People cannot move freely and are squeezed between others. People are pushed around. | A critical density has built up in the crowd. Injuries can easily happen. | Police should take over control in close consultation with the crowd management. Appropriate contingency plans must be applied. Evacuation is strongly advised. Communication with the crowd is crucial. Emergency forces must be in the most crowded areas, in order to provide first aid whenever needed. |\n| **5** | People disrespect fences or try to get out of the area. | The situation is critical and likely to get out of control. | Communicate with the crowd and evacuate it. Provide help and first aid. Inform hospitals and additional emergency units about the possibility that the situation may get out of control. |\n| **6** | Crowd turbulence occurs. People scream or shout for help. | Injuries and fatalities are likely. A crowd disaster can happen any time. | Calm down the crowd and guide it. Continue to evacuate people. 
Watch out for the areas with the highest densities and largest crowd movements, to ensure support and first aid. Additional emergency vehicles must be called to ensure sufficient manpower, and hospitals must be informed about likely (and potentially many) injuries. |\n| **7** | People are falling to the ground. People raise arms into the air. | People are in big trouble. Many injuries are to be expected. A crowd disaster is (most likely) happening. | Immediate help and first aid are needed, probably for many people. Hospitals must be prepared to shift from routine to large-scale emergency operation. |\n| **8** | People crawl over others. | A crowd disaster has probably happened. | Apply rules for a state of serious emergency. |\n\nThis table is intended to help assess the level of criticality of the situation in the crowd and take proactive measures to avoid or at least mitigate crowd disasters. Note that at each of these levels, one must already take first preparations for the next one or two (as the situation may change quickly) and communicate the possible scenarios and their implications to all relevant stakeholders. The goal is to de-escalate the situation and get back to lower levels of criticality.\n\n## Emerging Relevance of Citizen Science and Further Conclusions\n\nIn the subsection above, we have presented science-based suggestions for the avoidance of crowd disasters and an organizational response to critical situations. Deriving these conclusions largely profited from the huge amounts of materials that volunteers have provided, collected, synchronized, and ordered (according to time, locations, content, etc.). This is, where 'citizen science' can play an important role. The documentation we have seen from volunteers appears to be more transparent and complete than the information provided by public institutions, and it is better accessible than news from many public and private media (where we often faced the issue that materials could not be retrieved anymore, at least not under the original links).\n\nAlso, scientific institutions would not have had enough resources to do all the documentation work that was performed by these volunteers. However, the collected materials are so voluminous that one can hardly see the wood for the trees. Therefore, citizen science can largely benefit from an interaction with academic experts. Specialized knowledge is needed to distinguish more relevant from less relevant factors, to interpret empirical evidence, and distinguish more likely from less likely explanations. Besides providing this knowledge, our work also highlights the general and systemic nature of crowd disasters, and it reveals the instabilities (amplification effects) and cascading effects leading to them.\n\nThe systemic nature of many crowd disasters makes their legal handling very difficult, since it is hard to determine the fraction of responsibility that different people and institutions had. However, without a proper response to such systemic failures, people are losing their trust in public institutions, and this undermines their legitimacy .\n\nCrowd disasters are not the only systemic risk, resulting from interactions and institutional settings that are not suitably designed. The financial crisis is another example , for which nobody seems to be willing to take responsibility. This is mainly, because the individual contributions to it cannot be well quantified. 
Also human history is full of examples of humanitarian disasters, which happened because nobody felt sufficiently responsible for them. The authors are convinced that the division of responsibility itself is the problem, and that this calls for political and regulatory attention. Scientists could perhaps make a major contribution to the cultural heritage of humanity, if they managed to find new ways to address this fundamental problem .\n\n# Acknowledgments\n\nWe would like to thank everyone who has publicly provided materials documenting the Love Parade, and those who have been carefully synchronizing, ordering, analyzing, and describing these materials in hundreds of hours of work. This work has ultimately contributed to the creation of a public good, namely better public safety at future mass events, as it allows many people to learn from mistakes made in the past.\n\n[^1]: 'Pushers' are people, who are supposed to put pressure on visitors to keep moving, in this case to ensure an efficient entering into the festival area in order to avoid an obstruction of other visitors trying to get in.\n\n[^2]: Note that the fallen people in the video recording are in the shadow. Therefore, one must use full screen mode to notice them, and one needs to watch out for arms raised in the air, seeking for help.\n\n[^3]: Moreover, video recordings of the situation around the emergency vehicle do not show clear evidence of turbulent motion in its immediate neighborhood .\n\n[^4]: or even several interrelated systemic instabilities (since the phenomenon of 'crowd turbulence' itself can be seen as outcome of an instability of visitor flows)\n\n[^5]: Some tolerable risks associated with the normal entering and leaving of the area are mentioned, but have not been investigated in detail by computer simulations.","meta":{"dup_signals":{"dup_doc_count":35,"dup_dump_count":29,"dup_details":{"curated_sources":2,"2023-40":2,"2023-14":1,"2022-49":1,"2022-40":2,"2022-21":1,"2021-49":2,"2021-43":1,"2021-39":1,"2021-31":1,"2021-25":1,"2021-21":1,"2021-17":1,"2021-10":1,"2021-04":1,"2020-45":1,"2020-40":1,"2020-34":2,"2020-24":1,"2020-16":1,"2020-10":1,"2020-05":1,"2019-51":1,"2019-47":1,"2019-43":2,"2019-39":1,"2019-35":1,"2024-18":1,"2024-30":1}},"filename":"out\/1206.5856_extract_loveparade_epj4finalprint2.tex.md"},"subset":"arxiv"} +{"text":"abstract: Refereeing is a crucial component of publishing astronomical research, but few professional astronomers receive formal training on how to effectively referee a manuscript. In this article, we lay out considerations and best practices for referees. This document is intended as a tool for early career researchers to develop a fair, effective, and efficient approach to refereeing. \nauthor: Michelle Ntampaka; Ana Bonaca; Sownak Bose; Daniel J.\u00a0Eisenstein; Boryana Hadzhiyska; Charlotte Mason; Daisuke Nagai; Joshua S. Speagle ( \n \n)\nbibliography: references.bib\ntitle: A Referee Primer for Early Career Astronomers\n\n# Introduction\n\nReferees are responsible for assessing the quality and novelty of research and to provide feedback to the editor on whether the result is appropriate for the journal. They are also responsible to provide feedback on the framing and presentation of the results. An effective referee will respond with a fair, kind, and actionable report to the authors and to the journal editor.\n\nBecause they provide support and feedback for assessing new scientific results, referees are a vital part of scientific publication. 
And though this is an important part of our scientific careers, few astronomers receive formal training in the process.\n\nIn this document, we discuss the process of refereeing, including: understanding ethical conflicts, framing the referee report, avoiding common referee pitfalls, navigating rejections, and framing the final referee report. See , and references therein, for additional information and complementary perspectives on refereeing best practices.\n\nThis reference is intended primarily for referees-in-training as they learn how to referee journal articles. It may also be useful to mentors as a tool for coaching their early-career mentees through the process of becoming an effective referee.\n\n# Understanding Ethical Conflicts\n\nBefore agreeing to referee a manuscript, consider ethical conflicts that might prevent a referee from being a fair referee or that might damage your professional reputation and relationships.\n\n1. Am I a close collaborator of any of the authors? Engaging in an anonymous, critical conversation with close collaborators may damage these professional relationships.\n\n2. Do I have a poor relationship with any of the authors? Even if you feel that you can give an unbiased assessment of their work, remember that there are other valid referees who would not be spending a large amount of emotional energy to fairly assess the work.\n\n3. Is it \"too close to comfort\"? If the research is so close to your own work in progress that you could not give honest feedback or suggestions for improvement, you should discuss this with the editor before proceeding.\n\n4. Do I have the expertise to properly assess this research? This can be particularly difficult in the case of work that draws from more than one discipline, and any concerns about interdisciplinary work should be directed to the editor before you agree to review the manuscript.\n\nIf you suspect that you cannot be a fair, objective, and fully invested referee for any reason, you should write a short note to the editor to discuss your concerns; the editor can help you to evaluate this.\n\n# Questions for Consideration\n\nExpectations and norms vary from journal to journal , and when possible, you should consult with the editorial staff about what qualities should be considered in your review. Assessment questions we have found useful to focus on are:\n\n1. Is the research sufficiently innovative? Does it bring something new to the field?\n\n2. Do the authors put the research in sufficient context? Is their literature review sufficient? Have they missed any citations?\n\n3. Are the data and methods clearly described? Is there enough information to recreate the authors' analysis?\n\n4. Have the authors provided sufficient information to make their results reproducible, including access to data and software? The referee should assess whether the authors have an adequate plan for sharing data and software with the community and should share this assessment with the editor.\n\n5. Are the methods used appropriately?\n\n6. Are there passages that are unclear or ambiguous?\n\n7. Have the authors provided sufficient evidence to support their claims? Have the authors made unsubstantiated claims? Have they overstated their results?\n\n8. Have the authors appropriately discussed caveats and limitations? 
Have they provided clear explanations for any results that seem too good to be true?\n\nWhile it is not necessary to answer each of these questions explicitly, these are just some of the assessments that will frame your report.\n\n# Avoiding Common Pitfalls\n\nAs a referee, you should not make unkind criticisms about topics outside of the research and manuscript you are assessing. Do not make statements about the authors or their qualifications.\n\nYou should not serve as a copy editor. If significant language issues make it difficult to assess the scientific content of the manuscript, pause your review. Recommending \"proofreading by a native speaker\" is inappropriate. Instead, you should immediately contact the editor, tell them your concerns, and let them reach out to the authors. If the main body of the manuscript is sufficiently understandable but there are typos or ambiguous passages, commenting on these is fine.\n\nAs a referee, you should not expand the scope of the manuscript or push the manuscript to deviate significantly from the authors' intentions. You are not the authors' academic advisor and it is inappropriate to change the direction or significantly expand the scope of the manuscript. There are two exceptions to this: 1. if the original scope is insufficient for publication, you can and should explain this and consider recommending ways to extend the work, and 2. If you see a golden opportunity for the authors to improve their work, you might offer this suggestion but be clear that this is not a requirement for publication.\n\nYou should not write meandering prose regarding your opinions on the topic. The authors will need to respond to each item in your report, and this report is most useful when it is focused on clear, actionable items.\n\n# Navigating Rejections\n\nIt can be more difficult to recommend rejection than to recommend revision and resubmission. In the case of rejections, you will need to write a clear and kind note to the authors explaining the shortfalls of the manuscript, including strong evidence supporting your claim that it is not appropriate for the journal. This does not need to be exhaustive, but you do need to be specific about some critical flaws. It can be difficult to write this in a way that is both kind and constructive, but this should be your goal. Feedback that is not constructive can be included in the confidential response to the editor.\n\n# Framing the Final Report\n\nBegin your review by explicitly describing the strengths, aims, and results of the manuscript. This sets a good tone for the constructive criticism that follows and it communicates to the authors that you understand their research goals. By including a summary of the manuscript and listing its strengths (rather than focusing only on the manuscript's shortcomings), you communicate fairness. Using \"the manuscript\" instead of \"the author\" or \"you\" in your feedback is one way to keep your feedback neutral.\n\nOrganize your feedback. The following template is commonly used:\n\n1. State the aims and key results of manuscript.\n\n2. Summarize the strengths of the manuscript.\n\n3. Summarize your constructive feedback or assessment of the manuscript.\n\n4. State your recommendation for publication.\n\n5. List the major weaknesses (e.g., methodological issues or overstated results).\n\n6. List the minor weaknesses (e.g., missing references or figure formatting issues).\n\nClearly articulate your feedback. 
Refer to sections and line numbers where applicable, and provide sufficient publication information (or arxiv or ADS links) for the authors to quickly find references.\n\n# Final Comments\n\nBefore you submit your report, you should edit the first draft of your report for tone. This is particularly important because emails often come across as more harsh than they are intended. Aim for polite discussion with constructive criticism and clear, actionable items. Your report should be courteous without compromising honesty.\n\nReferees provide an important service to our scientific community: evaluating new research and providing feedback to ensure that the research is clearly presented and appropriately framed. As a referee, your goal should be to provide the type of report that you would want to receive. Strive to write a report that is a fair, kind, and actionable assessment of the research.\n\nWe thank the anonymous member of the AAS Publications Editorial Staff for reviewing this article and for providing valuable feedback. We also thank N\u00e9stor , Susan Mullally, and Laura Watkins for providing thoughtful feedback on this document.","meta":{"dup_signals":{"dup_doc_count":12,"dup_dump_count":5,"dup_details":{"curated_sources":1,"2024-26":1,"2024-10":1,"2024-30":1,"unknown":8}},"filename":"out\/2205.14270_extract_main.tex.md"},"subset":"arxiv"} +{"text":"abstract: I examine the topic of training scientific generalists. To focus the discussion, I propose the creation of a new graduate program, analogous in structure to existing MD\/PhD programs, aimed at training a critical mass of scientific researchers with substantial intellectual breadth. In addition to completing the normal requirements for a PhD, students would undergo an intense, several year training period designed to expose them to the core vocabulary of multiple subjects at the graduate level. After providing some historical and philosophical context for this proposal, I outline how such a program could be implemented with little institutional overhead by existing research universities. Finally, I discuss alternative possibilities for training generalists by taking advantage of contemporary developments in online learning and open science.\nauthor: Gopal P. Sarma<\/span> \n*firstname.lastname@example.com*\ntitle: **Should we train scientific generalists?**\n\nIn the age of highly specialized science, the generalist is a long forgotten job description. We have come to assume that the role played by those intellectual titans of earlier eras, such as Da Vinci, Aristotle, or Gauss, to name just a few, is an impossibility given the massive explosion of scientific knowledge of recent decades and centuries. \nThere is a factual reality to this sentiment that is uncontroversial. Certainly, as a percentage of existing knowledge, one could not conceivably attain the breadth of understanding that one might have in previous centuries. However, it does seem worth considering if a more modest goal could be achieved which would serve an important stabilizing role for modern science and engineering. That goal would be to train a critical mass of scientific generalists, researchers, who in addition to the specialized training of an ordinary graduate program, would also have broad exposure to multiple subjects at the graduate level. \nWhile the need for specialization might have been something of an inevitability, it is also worth considering that there may be negative ramifications to this kind of stratification of knowledge. 
With so much to know, how can we be confident that we are allocating our intellectual capital efficiently? How can we be confident in our collective understanding of global trends in science? \nThere is no doubt that in the coming years, data analytics of the scientific corpus will play a significant role in contributing to the creation of precisely such a global view of the scientific enterprise. The digitization of journals, the availability of open API's for accessing scientific meta-data, and the integration of reference management with social networking are all poised to transform our understanding of the scientific process at a high-level. However, it seems naive to imagine that data mining techniques alone will allow us to conceive of and test the most important hypotheses about the global structure and dynamics of science without some amount of guiding intuition. To complement and maximally take advantage of the availability of massive data sets about science, as well as the computational tools to analyze those data sets, we need a critical mass of scientific generalists whose training has been designed to encourage hypothesis generation about the scientific process itself. \nFurthermore, another major trend in contemporary science is the move towards ambitious scientific agendas of substantially larger scope and project size . Whereas the pioneering theories of earlier eras were often crafted by solitary thinkers working in isolation, today's breakthroughs frequently come about from large international collaborations involving hundreds or thousands of people and research budgets in the billion dollar range. In this context, the question of how to ideally train an individual scientist might be re-conceptualized as the question of how to train a scientific team member. Scientific generalists could be pivotal members of such large collaborations and play critical organizational and leadership roles. \nThere are certainly scientific generalists today, although they are perhaps not thought of in this way. I would broadly (and informally) categorize them into thee types:\n\n- **The organic academic generalist** \n This is someone who has led a traditional academic career on the tenure track, and whose research has naturally led to developing significant breadth in multiple topics. Certainly many fields have researchers in this category. \n\n- **The academic-industrial wanderer** \n This is someone who has left academia, or possibly had extended post-doctoral or research scientist appointments in subjects different from their PhD, and ultimately came back to academia, or led significant efforts at major industrial research laboratories. For example, the growth of computational biology and theoretical neuroscience has been driven by many theoretical physicists who have gone on to do post-doctoral training in the biological sciences, or for example, physicists from the world of quantitative finance, who have returned to academia armed with a new set of skills quite different from their PhD training. \n\n- **The autodidact** \n The widespread availability of advanced scientific materials via the Internet has resulted in an organic trend towards the creation of generalists simply by lowering the barrier to accessing knowledge from a wide variety of fields, scientific or otherwise. 
Certainly, there are many brilliant scientists in industry and elsewhere who do not have PhD's and it is not uncommon these days to encounter truly first class thinkers on a variety of topics who are largely self-taught.\n\nThe question that motivates this essay is the following: should there be another category of generalist who has been trained from outset to play a different role in the modern scientific enterprise than researchers who set out to be specialists? \nAs a means to stimulate discussion, but as an idea unto its own as well, I propose the following: the creation of a new graduate program, roughly analogous in structure to an MD\/PhD, where in addition to the normal research requirements for completing a PhD, students complete 5 or more qualifying examinations in subjects of their choosing. For adequately prepared students, I believe that after completion of the requirements for their home department in their first or second year, students would be able pass 4 additional qualifying examinations over the course of 2-3 years, after which they would resume their PhD research and complete their degree.[^1] \nThe choice of the qualifying examination as the focal point for this program is that it encapsulates the basic vocabulary of a field, the core knowledge required to conduct in depth research. The aim of this program is emphatically *not* to train researchers who have in depth, specialized knowledge of 5 different subjects\u2013 that would be an unreasonable, if not outright impossible goal. Rather, the aim is to train students who understand the culture, the basic tools, and broad perspectives of multiple subjects, so that they can contribute to strengthening the very foundations of the scientific establishment. \nCertainly, universities who undertake the process of creating such a program might choose to begin with a fewer number of qualifying examinations. I chose this number because it would allow for individual students to engage with multiple, quite different subjects over the course of their graduate education, and because 6-8 month blocks per subject would create a program roughly on par with the length of an MD\/PhD. Part of the value in creating such a program would be the message and the vision it would send to younger students who are aspiring to life-long careers as scientists. Just as undergraduates who aspire to careers as physician-scientists must adequately prepare themselves with appropriate exposure to both research and clinical work, aspiring scientific generalists would need to prepare themselves with advanced coursework of sufficient breadth to tackle the challenging initial years of this graduate program. \nFor an ambitious program such as this one to maximally benefit both the student and the scientific establishment at large, there would need to be a strong culture to support those students who choose to undergo such a rigorous and extended training. In particular, in order for the knowledge gained by these students to develop into something much more rich and robust than a massive list of facts and problem solving techniques from 5 different subjects, they would need to be part of a mentoring program in which the process of learning each of these different subjects was accompanied by historical and philosophical discussion. During each qualifying examination block, students would ideally also attend regular seminars in the department, and perhaps nominally be affiliated with a research group and attend group meetings. 
There would need to be a culture among the students and faculty mentors that supported reflection about problem solving strategies, about the structural differences between the vocabulary and subject matter across different fields. Ultimately, these observations and insights, whether in raw or more developed form would need to be communicated more broadly. \nOne possibility might be to accompany the qualifying examination process with a historical essay exploring some topic of interest to the student in consultation with a faculty mentor. For instance, a student whose PhD was in theoretical condensed matter physics and who passed examinations in physics, mathematics, chemistry, biology, and computer science might write an in depth essay on the emergence of quantitative methods in the study of natural selection. A mathematics PhD student specializing in stochastic analysis and who passed qualifying examinations in mathematics, physics, statistics, computer science, and economics, might write about the contributions that mathematical finance pioneer Fisher Black made to macroeconomics. \nWhile this program may seem daunting, I would like to emphasize that individuals who pursue MD\/PhD degrees and ultimately become board certified in a medical specialty need to pass a similar array of hurdles\u2013 in addition to PhD requirements for their research training they have to pass multiple level of board examinations to become licensed physicians. \nIt is also important to keep in mind that the training program described here is a graduate level training program, and consequently, should be thought of as being the first step in a career-long trajectory. A person who completed this program is no more a mature scientific generalist than a person who completes an ordinary PhD program is a mature specialist. In order for the subsequent phases of growth and development to take place, there would need to be a supporting infrastructure overseeing the post-doctoral period of the students' training. Furthermore, it is certainly possible, and would be expected even, that a subset of students who successfully complete this training would simply choose to pursue tenure track jobs in their area of specialty. Again, the MD\/PhD is something of a guide\u2013 certainly many dual-degree graduates become purely clinical practitioners or pure researchers and do not actively build careers bridging the two. Students who pursue the more traditional routes will not be at a disadvantage and one would hope that the unique and rigorous educational experience they went through would inform the remainder of their scientific careers both as researchers and as teachers. But for those that wish to mature into the novel role of scientific generalists that I am proposing, there would need to be special post-doctoral programs providing generous several year funding that would give them the freedom to develop their vision. For the initial batch of students, there would inevitably be some amount of trial and error while both students and faculty developed an understanding of the strengths and weaknesses of the program. \nWhile one can only speculate about the contributions graduates of this program would ultimately come to make, I close by suggesting a few possibilities. We might imagine tenured professorships for generalists who have smaller research groups than they would otherwise have, but who are active members of several different groups led by other faculty. 
In addition to playing a critical organizational role, these faculty members would bring their considerable technical expertise and scientific breadth to each group in the capacity of something along the lines of a scientific consultant. \nVenture capital might be another place where scientific generalists could have a significant impact, playing the role of bridge builders between academia and industry, and perhaps actively managing their own portfolios and overseeing scientific startup incubators. \nOne of the most important roles generalists could play would be to aid in the development of younger scientific institutions, particularly in the developing world. The specific aim of this program is to train scientists who have significant exposure to the cultural elements of advanced science in multiple disciplines, whose training allowed them to be both scientists as well as participatory anthropologists of the scientific process. For both younger universities in the developed world, as well as new institutions in the developing world, scientific generalists could be critical leaders and agenda setters, and perhaps, will be in a position to identify important research trajectories, or important cultural elements for executing those trajectories, that existing institutions have overlooked. \nIt would be incomplete and short-sighted to discuss novel educational initiatives without considering important contemporary developments in online education and open science. Furthermore, given that another major contemporary theme in graduate education is the over production of PhD's relative to the availability of faculty positions, it would be understandable if a lengthy and extremely demanding variant of the PhD program is difficult to mobilize. One possibility to balance these different factors would be to create an open system for crediting a student for having passed a qualifying examination. Just as universities (and private companies) now offer certificates for coursework completed in a non-degree granting context, an open certification for anyone who is able to pass a qualifying examination would be a valuable credential that an individual could earn to demonstrate competency at the beginning graduate level. For this certification to be available across all disciplines would be one step towards many different forms of educational innovation in the research world, including the training of scientific generalists. \nBefore closing, let me re-examine the choice of the qualifying examination as the focal point for this particular proposal and consider alternatives. Although the qualifying examination is an important rite of passage in graduate education, many will correctly point out that it is hardly something that contributes to depth of research maturity. This is certainly a valid point, and in response, I would argue that the purpose of the program is not to train individuals who have achieved the maturity of the best specialists in multiple subjects, but rather individuals who can appreciate and communicate the knowledge of specialists, and who therefore would make strong collaborators, bridge builders, program managers, and journal editors etc. The purpose of organizing a program for training generalists around the qualifying examination is in a large part analogous to why we have such examinations in the first place- they do contribute to some amount of intellectual and technical maturity and are an important experience to have early in one's education. 
Furthermore, it is a simply stated idea that would require little institutional overhead, and would circumvent the inevitably controversial process of otherwise designing a curriculum.[^2] \nIt is not difficult to imagine alternatives, however. One possibility would be a several year post-doctoral program where fellows rotated through several different laboratories and research groups in succession. Or, in the spirit of the newly emerging trend of \"hacker schools\" and data-science boot camps, we could imagine creating analogously structured mini-courses designed by experts in the field targeted at advanced graduates whose training was in another field entirely. Indeed, many academic research areas have highly topic specific summer schools and winter schools and one could imagine a several year post-doctoral program built around a handful of different sessions spread across multiple subjects. Perhaps there should be a component both at the beginning of the PhD, like the hybrid qualifying examination system I outline above, as well as a post-doctoral component. Ultimately, it is difficult to imagine that some trial and error would not be required in the design of such a program. In addition, one thing is certain- successfully executing a program like this would require an organization to support students' growth for many years, and given the fundamentally experimental nature of such an effort, several years longer than we are accustomed to supporting graduate students. Perhaps then, the ideal path forward would be to set into motion multiple efforts aimed at the common goal of training scientific generalists, so that over time, we can learn from our successes and mistakes. To do so, of course, would require long-term institutional efforts to scientifically investigate the efficacy and impact of different training programs.\n\n## Acknowledgements\n\nI would like to thank Aaswath Raman, Doug Bemis, Venkatesh Narayanamurti, and Rob Spekkens for insightful discussions.\n\n[^1]: Over 60 years ago, in the essay \"The Education of a Scientific Generalist,\" Hendrik Bode, Frederick Mosteller, John Tukey, and Charles Winsor argued for a program of similar breadth, but at the undergraduate level . One wonders what their reaction would be to the current proposal given the enormous growth of science in the intervening decades.\n\n[^2]: It is worth mentioning that even within the same subject, there are many different types of qualifying examinations. In the context of this essay, perhaps Caltech's Computation and Neural Systems program (CNS) provides a possible template. The model they employ is to give students a list of 100 questions that they use as preparatory material in the year leading up to an oral qualifying examination with 5 faculty members . In a sense, the program is aimed at training \"generalists\" within the computational and neurobiological sciences. It seems natural to ask if this model could be extended to incorporate other subjects as well. That is, what if a list of 500 questions were to be assembled spanning multiple subjects and a set of oral qualifying examinations were conducted by faculty spanning a number of different departments? 
Or a few thousand questions from which a student selected some subset to prepare?","meta":{"dup_signals":{"dup_doc_count":11,"dup_dump_count":2,"dup_details":{"curated_sources":2,"unknown":9}},"filename":"out\/1410.4422.tex.md"},"subset":"arxiv"} +{"text":"abstract: The ABSTRACT is to be in fully-justified italicized text, at the top of the left-hand column, below the author and affiliation information. Use the word \"Abstract\" as the title, in 12-point Times, boldface type, centered relative to the column, initially capitalized. The abstract is to be in 10-point, single-spaced type. The abstract may be up to 3 inches (7.62 cm) long. Leave two blank lines after the Abstract, then begin the main text.\nauthor: Paolo Ienne \nSwiss Federal Institute of Technology \nMicrocomputing Laboratory \nIN-F Ecublens, 1015 Lausanne, Switzerland \nemail@example.com \n; Second Author \nInstitution2 \nFirst line of institution2 address \nSecond line of institution2 address \nemail@example.com \nbibliography: ieee.bib\ntitle: LaTeX\u00a0Author Guidelines for $8.5 \\times 11$-Inch Proceedings Manuscripts\n\n# Introduction\n\nPlease follow the steps outlined below when submitting your manuscript to the IEEE Computer Society Press. Note there have been some changes to the measurements from previous instructions.\n\n# Instructions\n\nPlease read the following carefully.\n\n## Language\n\nAll manuscripts must be in English.\n\n## Printing your paper\n\nPrint your properly formatted text on high-quality, $8.5 \\times 11$-inch white printer paper. A4 paper is also acceptable, but please leave the extra 0.5 inch (1.27 cm) at the BOTTOM of the page.\n\n## Margins and page numbering\n\nAll printed material, including text, illustrations, and charts, must be kept within a print area 6-7\/8 inches (17.5 cm) wide by 8-7\/8 inches (22.54 cm) high. Do not write or print anything outside the print area. Number your pages lightly, in pencil, on the upper right-hand corners of the BACKS of the pages (for example, 1\/10, 2\/10, or 1 of 10, 2 of 10, and so forth). Please do not write on the fronts of the pages, nor on the lower halves of the backs of the pages.\n\n## Formatting your paper\n\nAll text must be in a two-column format. The total allowable width of the text area is 6-7\/8 inches (17.5 cm) wide by 8-7\/8 inches (22.54 cm) high. Columns are to be 3-1\/4 inches (8.25 cm) wide, with a 5\/16 inch (0.8 cm) space between them. The main title (on the first page) should begin 1.0 inch (2.54 cm) from the top edge of the page. The second and following pages should begin 1.0 inch (2.54 cm) from the top edge. On all pages, the bottom margin should be 1-1\/8 inches (2.86 cm) from the bottom edge of the page for $8.5 \\times 11$-inch paper; for A4 paper, approximately 1-5\/8 inches (4.13 cm) from the bottom edge of the page.\n\n## Type-style and fonts\n\nWherever Times is specified, Times Roman may also be used. If neither is available on your word processor, please use the font closest in appearance to Times that you have access to.\n\nMAIN TITLE. Center the title 1-3\/8 inches (3.49 cm) from the top edge of the first page. The title should be in Times 14-point, boldface type. Capitalize the first letter of nouns, pronouns, verbs, adjectives, and adverbs; do not capitalize articles, coordinate conjunctions, or prepositions (unless the title begins with such a word). Leave two blank lines after the title.\n\nAUTHOR NAME(s) and AFFILIATION(s) are to be centered beneath the title and printed in Times 12-point, non-boldface type. 
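As a rough illustration, a title and author block of this kind could be produced with standard LaTeX commands. This is a minimal sketch only: the proceedings style file shipped with the author kit defines the exact fonts and spacing, its name below is a placeholder, and the author entries are dummy values.

```latex
\documentclass[10pt,twocolumn]{article}
% The proceedings style file from the author kit would be loaded here;
% its exact name depends on the kit, so this line is only a placeholder.
%\usepackage{proceedings-style}

\title{LaTeX Author Guidelines for $8.5 \times 11$-Inch Proceedings Manuscripts}

\author{First Author\\
Institution1\\
First line of institution1 address\\
Second line of institution1 address\\
email@example.com}

\begin{document}
\maketitle
% abstract and main text follow here
\end{document}
```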
This information is to be followed by two blank lines.\n\nThe ABSTRACT and MAIN TEXT are to be in a two-column format.\n\nMAIN TEXT. Type main text in 10-point Times, single-spaced. Do NOT use double-spacing. All paragraphs should be indented 1 pica (approx. 1\/6 inch or 0.422 cm). Make sure your text is fully justified\u2014that is, flush left and flush right. Please do not place any additional blank lines between paragraphs. Figure and table captions should be 10-point Helvetica boldface type as in\n\nLong captions should be set as in\n\nCallouts should be 9-point Helvetica, non-boldface type. Initially capitalize only the first word of section titles and first-, second-, and third-order headings.\n\nFIRST-ORDER HEADINGS. (For example, **1. Introduction**) should be Times 12-point boldface, initially capitalized, flush left, with one blank line before, and one blank line after.\n\nSECOND-ORDER HEADINGS. (For example, **1.1. Database elements**) should be Times 11-point boldface, initially capitalized, flush left, with one blank line before, and one after. If you require a third-order heading (we discourage it), use 10-point Times, boldface, initially capitalized, flush left, preceded by one blank line, followed by a period and your text on the same line.\n\n## Footnotes\n\nPlease use footnotes sparingly [^1] and place them at the bottom of the column on the page on which they are referenced. Use Times 8-point type, single-spaced.\n\n## References\n\nList and number all bibliographical references in 9-point Times, single-spaced, at the end of your paper. When referenced in the text, enclose the citation number in square brackets, for example\u00a0. Where appropriate, include the name(s) of editors of referenced books.\n\n## Illustrations, graphs, and photographs\n\nAll graphics should be centered. Your artwork must be in place in the article (preferably printed as part of the text rather than pasted up). If you are using photographs and are able to have halftones made at a print shop, use a 100- or 110-line screen. If you must use plain photos, they must be pasted onto your manuscript. Use rubber cement to affix the images in place. Black and white, clear, glossy-finish photos are preferable to color. Supply the best quality photographs and illustrations possible. Penciled lines and very fine lines do not reproduce well. Remember, the quality of the book cannot be better than the originals provided. Do NOT use tape on your pages!\n\n## Color\n\nThe use of color on interior pages (that is, pages other than the cover) is prohibitively expensive. We publish interior pages in color only when it is specifically requested and budgeted for by the conference organizers. DO NOT SUBMIT COLOR IMAGES IN YOUR PAPERS UNLESS SPECIFICALLY INSTRUCTED TO DO SO.\n\n## Symbols\n\nIf your word processor or typewriter cannot produce Greek letters, mathematical symbols, or other graphical elements, please use pressure-sensitive (self-adhesive) rub-on symbols or letters (available in most stationery stores, art stores, or graphics shops).\n\n## Copyright forms\n\nYou must include your signed IEEE copyright release form when you submit your finished paper. We MUST have this form before your paper can be published in the proceedings.\n\n## Conclusions\n\nPlease direct any questions to the production editor in charge of these proceedings at the IEEE Computer Society Press: Phone (714) 821-8380, or Fax (714) 761-1784.\n\n[^1]: Or, better still, try to avoid footnotes altogether. 
To help your readers, avoid using footnotes altogether and include necessary peripheral observations in the text (within parentheses, if you prefer, as in this sentence).","meta":{"dup_signals":{"dup_doc_count":11,"dup_dump_count":4,"dup_details":{"curated_sources":2,"2013-20":1,"2014-10":1,"unknown":7}},"filename":"out\/0805.1854_extract_ieee.tex.md"},"subset":"arxiv"} +{"text":"author: D\u00e1niel Kondor; M\u00e1rton P\u00f3sfai; Istv\u00e1n Csabai; G\u00e1bor Vattay\ndate: 2024-10-03\ntitle: Do the rich get richer? An empirical analysis of the Bitcoin transaction network\n\n# Abstract\n\nThe possibility to analyze everyday monetary transactions is limited by the scarcity of available data, as this kind of information is usually considered highly sensitive. Present econophysics models are usually employed on presumed random networks of interacting agents, and only some macroscopic properties (e.g.\u00a0the resulting wealth distribution) are compared to real-world data. In this paper, we analyze Bitcoin, which is a novel digital currency system, where the complete list of transactions is publicly available. Using this dataset, we reconstruct the network of transactions and extract the time and amount of each payment. We analyze the structure of the transaction network by measuring network characteristics over time, such as the degree distribution, degree correlations and clustering. We find that linear preferential attachment drives the growth of the network. We also study the dynamics taking place on the transaction network, i.e. the flow of money. We measure temporal patterns and the wealth accumulation. Investigating the microscopic statistics of money movement, we find that sublinear preferential attachment governs the evolution of the wealth distribution. We report a scaling law between the degree and wealth associated to individual nodes.\n\n# Introduction\n\nIn the past two decades, network science has successfully contributed to many diverse scientific fields. Indeed, many complex systems can be represented as networks, ranging from biochemical systems, through the Internet and the World Wide Web, to various social systems\u00a0. Economics also made use of the concepts of network science, gaining additional insight to the more traditional approach\u00a0. Although a large volume of financial data is available for research, information about the everyday transactions of individuals is usually considered sensitive and is kept private. In this paper, we analyze Bitcoin, a novel currency system, where the complete list of transactions is accessible. We believe that this is the first opportunity to investigate the movement of money in such detail.\n\nBitcoin is a decentralized digital cash system, there is no single overseeing authority\u00a0. The system operates as an online peer-to-peer network, anyone can join by installing a client application and connecting it to the network. The unit of the currency is one bitcoin (abbreviated as BTC), and the smallest transferable amount is $10^{-8} \\, \\textrm{BTC}$. Instead of having a bank account maintained by a central authority, each user has a Bitcoin address, that consists of a pair of public and private keys. Existing bitcoins are associated to the public key of their owner, and outgoing payments have to be signed by the owner using his private key. To maintain privacy, a single user may use multiple addresses. Each participating node stores the complete list of previous transactions. 
Every new payment is announced on the network, and the payment is validated by checking consistency with the entire transaction history. To avoid fraud, it is necessary that the participants agree on a single valid transaction history. This process is designed to be computationally difficult, so an attacker can only hijack the system if he possesses the majority of the computational power of participating parties. Therefore the system is more secure if more resources are devoted to the validation process. To provide incentive, new bitcoins are created periodically and distributed among the nodes participating in these computations. Another way to obtain bitcoins is to purchase them from someone who already has bitcoins using traditional currency; the price of bitcoins is completely determined by the market.\n\nThe Bitcoin system was proposed in 2008 by Satoshi Nakamoto, and the system went online in January 2009\u00a0. For over a year, it was only used by a few enthusiasts, and bitcoins did not have any real-world value. A trading website called MtGox was started in 2010, making the exchange of bitcoins and conventional money significantly easier. More people and services joined the system, resulting a steadily growing exchange rate. Starting from 2011, appearances in the mainstream media drew wider public attention, which led to skyrocketing prices accompanied by large fluctuations\u00a0(see Fig.\u00a0). Since the inception of Bitcoin over 17 million transactions took place, and currently the market value of all bitcoins in circulation exceeds 1 billion dollars. See the Methods section for more details of the system and the data used in our analysis.\n\nWe download the complete list of transactions, and reconstruct the transaction network: each node represents a Bitcoin address, and we draw a directed link between two nodes if there was at least one transaction between the corresponding addresses. In addition to the topology, we also obtain the time and amount of every payment. Therefore, we are able to analyze both the evolution of the network and the dynamical process taking place on it, i.e. the flow and accumulation of bitcoins. To characterize the underlying network, we investigate the evolution of basic network characteristics over time, such as the degree distribution, degree correlations and clustering. Concerning the dynamics, we measure the wealth statistics and the temporal patterns of transactions. To explain the observed degree and wealth distribution, we measure the microscopic growth statistics of the system. We provide evidence that preferential attachment is an important factor shaping these distributions. Preferential attachment is often referred to as the \"rich get richer\" scheme, meaning that hubs grow faster than low-degree nodes. In the case of Bitcoin, this is more than an analogy: we find that the wealth of already rich nodes increases faster than the wealth of nodes with low balance; furthermore, we find positive correlation between the wealth and the degree of a node.\n\n# Results\n\n## Evolution of the transaction network\n\nBitcoin is an evolving network: new nodes are added by creating new Bitcoin addresses, and links are created if there is a transaction between two previously unconnected addresses. The number of nodes steadily grows over time with some fluctuations; especially noticeable is the large peak which coincides with the first boom in the exchange rate in 2011 (Fig.\u00a0)\u00a0. 
After five years Bitcoin now has $N=13,086,528$ nodes and $L=44,032,115$ links. To study the evolution of the network we measure the change of network characteristics in function of time. We identify two distinct phases of growth: (i) The *initial phase* lasted until the fall of 2010, in this period the system had low activity and was mostly used as an experiment. The network measures are characterized by large fluctuations. (ii) After the initial phase the Bitcoin started to function as a real currency, bitcoins gained real value. The network measures converged to their typical value by mid-2011 and they did not change significantly afterwards. We call this period the *trading phase*.\n\nWe first measure the degree distribution of the network. We find that both the in- and the outdegree distributions are highly heterogeneous, and they can be modeled with power-laws\u00a0. Figures\u00a0 and\u00a0 show the distribution of indegrees and outdegrees at different points of time during the evolution of the Bitcoin network. In the initial phase the number of nodes is low, and thus fitting the data is prone to large error. In the trading phase, the exponents of the distributions do not change significantly, and they are approximated by power-laws $p_\\text{in}(k_\\text{in}) \\sim k_\\text{in}^{-2.18}$ and $p_\\text{out}(k_\\text{out}) \\sim k_\\text{out}^{-2.06}$.\n\nTo further characterize the evolution of the degree distributions we calculate the corresponding Gini coefficients in function of time (Fig.\u00a0). The Gini coefficient is mainly used in economics to characterize the inequality present in the distribution of wealth, but it can be used to measure the heterogeneity of any empirical distribution. In general, the Gini coefficient is defined as $$G = \\frac{2 \\sum_{i=1}^n i x_i}{n \\sum_{i=1}^n x_i} -\\frac{n+1}{n}$$ where $\\{x_i\\}$ is a sample of size $n$, and $x_i$ are monotonically ordered, i.e.\u00a0$x_i \\leq x_{i+1}$. $G=0$ indicates perfect equality, i.e.\u00a0every node has the same wealth; and $G=1$ corresponds to complete inequality, i.e.\u00a0the complete wealth in the system is owned by a single individual. For example, in the case of pure power-law distribution with $\\alpha \\geq 2$ exponent, the Gini coefficient is $G = 1 \/ (2 \\alpha - 3)$\u00a0. This shows the fact that smaller $\\alpha$ exponents yield more heterogeneous wealth distributions.\n\nIn the Bitcoin network we find that in the initial phase the Gini coefficient of the indegree distribution is close to 1 and for the outdegree distribution it is much lower. We speculate that in this phase a few users collected bitcoins, and without the possibility to trade, they stored them on a single address. In the second phase the coefficients quickly converge to $G^\\text{in}\\approx 0.629$ and $G^\\text{out}\\approx 0.521$, indicating that normal trade is characterized by both highly heterogeneous in- and outdegree distributions.\n\nTo characterize the degree correlations we measure the Pearson correlation coefficient of the out- and indegrees of connected node pairs: $$r = \\frac{\\sum_{e} ( j^{\\textrm{out}}_e-\\overline{j^{\\textrm{out}}} )\n ( k^{\\textrm{in}}_e-\\overline{k^{\\textrm{in}}} )}\n { L \\sigma_\\text{out}\\sigma_\\text{in} }.$$ Here $j^{\\textrm{out}}_i$ is the outdegree of the node at the *beginning* of link $e$, and $k^{\\textrm{in}}_i$ is the indegree of the node at the *end* of link $e$. 
The summation $\\sum_{e }\\cdot$ runs over all links, $\\overline{k^{\\textrm{in}}}= \\sum_{e} k^{\\textrm{in}}_e \/ L$ and $\\sigma_\\text{in}^2 = \\sum_{e} ( k^{\\textrm{in}}_e-\\overline{k^{\\textrm{in}}} )^2 \/ L$. We calculate $\\sigma_\\text{out}$ and $\\overline{j^{\\textrm{out}}}$ similarly.\n\nWe find that the correlation coefficient is negative, except for only a brief period in the initial phase. After mid-2010, the degree correlation coefficient stays between $-0.01$ and $-0.05$, reaching a value of $r\\approx-0.014$ by 2013, suggesting that the network is disassortative (Fig.\u00a0). However, small values of $r$ are hard to interpret: it was shown that for large purely scale-free networks $r$ vanishes as the network size increases\u00a0. Therefore we compute the average nearest neighbor degree function $k^\\text{in}_\\text{nn}(k^\\text{out})$ for the final network; $k^\\text{in}_\\text{nn}(k^\\text{out})$ measures the average indegree of the neighbors of nodes with outdegree $k^\\text{out}$. We find clear disassortative behavior (Fig.\u00a0).\n\nWe also measure the average clustering coefficient $$C = \\frac1N \\sum_v \\frac{\\Delta_v}{d_v (d_v-1)\/2},$$ which measures the density of triangles in the network. Here the sum $\\sum_v\\cdot$ runs over all nodes, and $\\Delta_v$ is the number of triangles containing node $v$. To calculate $\\Delta_v$ we ignored the directionality of the links; $d_v$ is the degree of node $v$ in the undirected network.\n\nIn the initial phase $C$ is high, fluctuating around $0.15$ (see Fig.\u00a0), possibly a result of transactions taking place between addresses belonging to a few enthusiasts trying out the Bitcoin system by moving money between their own addresses. In the trading phase, the clustering coefficient reaches a stationary value around $C\\approx 0.05$, which is still higher than the clustering coefficient for random networks with the same degree sequence ($C_\\text{rand} \\approx 0.0037(9)$).\n\nTo explain the observed broad degree distribution, we turn to the microscopic statistics of link formation. Most real complex networks exhibit distributions that can be approximated by power-laws. Preferential attachment was introduced as a possible mechanism to explain the prevalence of this property\u00a0. Indeed, direct measurements confirmed that preferential attachment governs the evolution of many real systems, e.g. scientific citation networks\u00a0, collaboration networks\u00a0, social networks\u00a0 or language use\u00a0. In its original form, preferential attachment describes the process when the probability of forming a new link is proportional to the degree of the target node\u00a0. In the past decade, several generalizations and modifications of the original model were proposed, aiming to reproduce further structural characteristics of real systems\u00a0. Here, we investigate the nonlinear preferential attachment model\u00a0, where the probability that a new link connects to node $v$ is $$\\label{eq:nonlinkernel}\n \\pi(k_v) = \\frac{k_v^\\alpha}{\\sum_w k_w^\\alpha},$$ where $k_v$ is the indegree of node $v$, and $\\alpha > 0$. The probability that the new link connects to *any* node with degree $k$ is $\\Pi(k)\\sim n_k (t) \\pi(k)$, where $n_k (t)$ is the number of nodes with $k$ degree at the time of the link formation. We cannot test directly our assumption, because $\\Pi(k)$ changes over time. 
To proceed we transform $\\Pi(k)$ to a uniform distribution by calculating the rank function $R(k,t)$ for each new link given $\\pi(k)$ and $n_k(t)$: $$R(k,t) = \\frac{ \\sum_{j=0}^k n_{j} (t) j^{\\alpha} }{ \\sum_{j=0}^{k_{\\mathrm{max}}} n_{j} (t) j^{\\alpha} } = %\n \\frac{ \\sum_{k_v < k} k_v^{\\alpha} }{ \\sum_v k_v^{\\alpha} } \\textrm{.}\n \\label{erank}$$\n\nIf Eq.\u00a0 holds, $R(k,t)$ is uniformly distributed in the interval $[0,1]$, independently of $t$. Therefore, if we plot the cumulative distribution function, we get a straight line for the correct exponent $\\alpha$. To determine the best exponent, we compare the empirical distribution of the $R$ values to the uniform distribution for different exponents by computing the Kolmogorov-Smirnov distance between the two distributions.\n\nEvaluating our method for the indegree distribution of the Bitcoin network, we find good correspondence between the empirical data and the presumed conditional probability function; the exponent giving the best fit is $\\alpha \\approx 1$ (Fig.\u00a0). This shows that the overall growth statistics agree well with the preferential attachment process. Of course, preferential attachment itself cannot explain the disassortative degree correlations and the high clustering observed in the network. We argue that preferential attachment is a key factor shaping the degree distribution; however, a more detailed investigation of the growth process is necessary to explain the higher-order correlations.\n\n## Dynamics of transactions\n\nIn this section, we analyze the detailed dynamics of money flow on the transaction network. The increasing availability of digital traces of human behavior has revealed that various human activities, e.g. mobility patterns, phone calls or email communication, are often characterized by heterogeneity\u00a0. Here we show that the handling of money is not an exception: we find heterogeneity in both the balance distribution and the temporal patterns. We also investigate the microscopic statistics of transactions.\n\nThe state of node $v$ at time $t$ is given by the balance of the corresponding address $b_v(t)$, i.e. the number of bitcoins associated with node $v$. The transactions are directly available, and we can infer the balance of each node based on the transaction list. Note that the overall quantity of bitcoins increases over time: Bitcoin rewards users devoting computational power to sustain the system.\n\nWe first investigate the temporal patterns of the system by measuring the distribution of inactivity times $T$. The inactivity time is defined as the time elapsed between two consecutive outgoing transactions from a node. We find a broad distribution that can be approximated by the power-law $P(T) \\sim 1 \/ T$ (Fig.\u00a0), in agreement with the behavior widely observed in various complex systems\u00a0.\n\nIt is well known that the wealth distribution of society is heterogeneous; the often cited (and quantitatively not precise) 80-20 rule of Pareto states that the top 20% of the population controls 80% of the total wealth. In line with this, we find that the wealth distribution in the Bitcoin system is also highly heterogeneous. The proper Pareto-like statement for the Bitcoin system would be that 6.28% of the addresses possess 93.72% of the total wealth. We measure the distribution of balances at different points of time, and we find a stable distribution. 
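For illustration, both the Gini coefficient defined above and this top-share statement can be computed directly from a snapshot of per-address balances. The following is a minimal sketch only; the `balances` array is a stand-in filled with synthetic Pareto-distributed values, not the reconstructed Bitcoin balances.

```python
import numpy as np

def gini(x):
    """Gini coefficient: G = 2*sum_i(i*x_i) / (n*sum_i(x_i)) - (n+1)/n, x_i sorted ascending."""
    x = np.sort(np.asarray(x, dtype=float))      # x_i monotonically ordered
    n = x.size
    ranks = np.arange(1, n + 1)                  # i = 1..n
    return 2.0 * np.sum(ranks * x) / (n * np.sum(x)) - (n + 1.0) / n

def top_share(x, fraction):
    """Fraction of the total wealth held by the richest `fraction` of addresses."""
    x = np.sort(np.asarray(x, dtype=float))[::-1]
    k = max(1, int(round(fraction * x.size)))
    return x[:k].sum() / x.sum()

# Synthetic placeholder for the per-address balances at a given time
balances = np.random.pareto(1.5, size=100_000)
print(gini(balances), top_share(balances, 0.0628))
```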
The tail of the wealth distribution is generally modeled with a power-law\u00a0; following this practice, we find a power-law tail $\\sim x^{-1.984}$ for balances $\\gtrsim 50 \\textrm{BTC}$ (see Fig.\u00a0). However, visual inspection of the fit is not convincing: the scaling regime spans only the last few orders of magnitude, and fails to reproduce the majority of the distribution. Instead we find that the overall behavior is much better approximated by the stretched exponential distribution $P(b) \\sim b^{-\\gamma} e^{-(a b)^{1-\\gamma}}$, where $\\gamma = 0.873$ and $a = 8014 \\, \\textrm{BTC}^{-1}$.\n\nTo further investigate the evolution of the wealth distribution we measure the Gini coefficient over time. We find that the distribution is characterized by high values throughout the whole lifetime of the network, reaching a stationary value around $G\\approx 0.985$ in the trading phase (see Fig.\u00a0).\n\nTo understand the origin of this heterogeneity, we turn to the microscopic statistics of acquiring bitcoins. Similarly to the case of the degree distributions, the observed heterogeneous wealth distributions are often explained by preferential attachment. Moreover, preferential attachment was proposed significantly earlier in the context of wealth distributions than complex networks\u00a0. In economics, preferential attachment is traditionally called the Matthew effect or the \"rich get richer\" phenomenon\u00a0. It states that the growth of the wealth of each individual is proportional to the wealth of that individual. In line with this principle, several statistical models were proposed to account for the heterogeneous wealth distribution\u00a0.\n\nTo find evidence supporting this hypothesis, we first investigate the change of balances in fixed time windows. We calculate the difference between the balance of each address at the end and at the start of each month. We plot the differences as a function of the starting balances (Fig.\u00a0). When the balance increases, we observe a positive correlation: the average growth increases as a function of the starting balance, and it is approximated by the power-law $\\sim b^{0.857}$. This indicates that the \"rich get richer\" phenomenon is indeed present in the system. For decreasing balances, we find that a significant number of addresses lose all their wealth in the time frame of one month. This phenomenon is specific to Bitcoin: due to the privacy concerns of users, it is generally considered a good practice to move unspent bitcoins to a new address when carrying out a transaction\u00a0.\n\nTo better quantify the preferential attachment, we carry out an analysis similar to that of the previous section. However, there is a technical difference: in the case of the evolution of the transaction network, for each event the degree of a node increases by exactly one. In the case of the wealth distribution there is no such constraint. To overcome this difficulty we consider the increment of a node's balance by one unit as an event, e.g. if after a transaction $b_v$ increased by $\\Delta b_v$, we consider it as $\\Delta b_v$ separate and simultaneous events. We only consider events when the balance associated with an address increases, i.e. the address receives a payment. We now calculate the rank function $R(b,t)$ defined in Eq.\u00a0, and plot the cumulative distribution function of the $R$ values observed throughout the whole time evolution of the Bitcoin network (Fig.\u00a0). 
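A minimal sketch of this rank-function test is given below. It assumes the transaction history has already been reduced to a time-ordered list `events` of `(target, increment)` pairs (the node whose degree or balance grows, and by how much); this data structure, the toy event list and the scan over candidate exponents are illustrative only, not the exact pipeline used in the paper.

```python
import numpy as np
from scipy import stats

def rank_values(events, n_nodes, alpha):
    """For each event, compute R = sum_{v: x_v < x_target} x_v**alpha / sum_v x_v**alpha
    evaluated *before* the event is applied, then update the state."""
    x = np.zeros(n_nodes)            # current degree (or balance) of every node
    ranks = []
    for target, increment in events:
        w = x ** alpha               # attachment kernel x**alpha (note 0**alpha = 0)
        total = w.sum()
        if total > 0:
            ranks.append(w[x < x[target]].sum() / total)
        x[target] += increment       # increments > 1 are treated as a single event here for brevity
    return np.array(ranks)

def ks_distance(ranks):
    """Kolmogorov-Smirnov distance between the empirical R distribution and U(0,1)."""
    return stats.kstest(ranks, "uniform").statistic

# Toy event stream: (target_node, increment) pairs in chronological order
events = [(0, 1), (1, 1), (0, 1), (2, 1), (0, 1)]
best_alpha = min(np.arange(0.5, 1.55, 0.05),
                 key=lambda a: ks_distance(rank_values(events, 3, a)))
```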
Visual inspection shows that no single exponent provides a satisfying result, meaning that $\\pi(b_v)$ cannot be modeled by a simple power-law relationship like in Eq.\u00a0. However, we do find that the \"average\" behavior is best approximated by exponents around $\\alpha \\approx 0.8$, suggesting that $\\pi(b_v)$ is a sublinear function. In the context of network evolution, previous theoretical work found that sublinear preferential attachment leads to a stationary stretched exponential distribution\u00a0, in line with our observations.\n\nWe have investigated the evolution of both the transaction network and the wealth distribution separately. However, it is clear that the two processes are not independent. To study the connection between the two, we measure the correlation between the indegree and balance associated to the individual nodes. We plot the average balance of addresses as a function of their degrees on Fig.\u00a0. For degrees in the range of $1$\u2013$3000$ (over $99.99\\%$ of all nodes with nonzero balance), the average balance is a monotonously increasing function of the degree, and it is approximated by the power-law $b \\sim k_\\text{in}^{0.617}$, indicating that the accumulated wealth and the number of distinct transaction partners an individual has are inherently related. Similar scaling was reported by Tseng et al., who conducted an online experiment where volunteers traded on a virtual market\u00a0.\n\n# Methods\n\n## The Bitcoin network\n\nBitcoin is based on a peer-to-peer network of users connected through the Internet, where each node stores the list of previous transactions and validates new transactions based on a proof-of-work system. Users announce new transactions on this network, these transactions are formed into *blocks* at an approximately constant rate of one block per 10 minutes; blocks contain a varying number of transactions. These blocks form the block-chain, where each block references the previous block. Changing a previous transaction (e.g. double spending) would require the recomputation of all blocks since then, which becomes practically infeasible after a few blocks. To send or receive bitcoins, each user needs at least one address, which is a pair of private and public keys. The public key can be used for receiving bitcoins (users can send money to each other referencing the recipient's public key), while sending bitcoins is achieved by signing the transaction with the private key. Each transaction consists of one or more *inputs* and *outputs*. In Fig.\u00a0 we show a schematic view of a typical Bitcoin transaction. Readers interested in the technical details of the system can consult the original paper by Satoshi Nakamoto\u00a0 or the various resources available on the Internet\u00a0.\n\nAn important aspect of Bitcoin is how new bitcoins are created, and how new users can acquire bitcoins. New bitcoins are generated when a new block is formed as a reward to the users participating in block generation. The generation of a valid new block involves solving a reverse hash problem, whose difficulty can be set in a wide range. Participating in block generation is referred to as *mining* bitcoins. The nodes in the network regulate the block generation process by adjusting the difficulty to match the processing power currently available. 
As interest in the Bitcoin system grew, the effort required to generate new blocks, and thus receive the newly available bitcoins, has increased over 10 million fold; most miners today use specialized hardware, requiring significant investments. Consequently, an average Bitcoin user typically acquires bitcoins by either buying them at an exchange site or receiving them as compensation for goods or services.\n\nDue to the nature of the system, the record of all previous transactions since the system's beginning is publicly available to anyone participating in the Bitcoin network. From these records, one can recover the sending and receiving addresses, the sum involved and the approximate time of the transaction. Such detailed information is rarely available in financial systems, making the Bitcoin network a valuable source of empirical data involving monetary transactions. Of course, there are shortcomings: only the addresses involved in the transactions are revealed, not the users themselves. While providing complete anonymity is not among the stated goals of the Bitcoin project\u00a0, identifying addresses belonging to the same user can be difficult\u00a0, especially on a large scale. Each user can have an unlimited number of Bitcoin addresses, which appear as separate nodes in the transaction records. When constructing the network of users, these addresses would need to be joined into a single entity.\n\nAnother issue arises not only for Bitcoin, but for most online social datasets: it is hard to determine which observed phenomena are specific to the system, and which results are general. We do not know to what extent the group of people using the system can be considered a representative sample of society. In the case of Bitcoin, for example, due to the perceived anonymity of the system, it is widely used for commerce in illegal items and substances\u00a0; these types of transactions are probably overrepresented among Bitcoin transactions. Ultimately, the validity of our results will be tested if data becomes available from other sources, and comparison becomes possible.\n\n## Data\n\nWe installed the open-source `bitcoind` client and downloaded the blockchain from the peer-to-peer network on May 7th, 2013. We modified the client to extract the list of all transactions in a human-readable format. We downloaded more precise timestamps of transactions from the `blockchain.info` website's archive. The data and the source code of the modified client program are available at the project's website\u00a0 or through the Casjobs web database interface\u00a0.\n\nThe data includes 235,000 blocks, which contain a total of 17,354,797 transactions. This dataset includes 13,086,528 addresses (i.e. addresses appearing in at least one transaction); of these, 1,616,317 addresses were active in the last month. The Bitcoin network itself does not store balances associated with addresses; these can be calculated from the sum of received and sent bitcoins for each address; preventing overspending is done by requiring that the input of a transaction corresponds to the output of a previous transaction. Using this method, we found that approximately one million addresses had a nonzero balance at the time of our analysis.\n\n# Discussion\n\nWe have performed a detailed analysis of Bitcoin, a novel digital currency system. 
A key difference from traditional currencies handled by banks is the open nature of Bitcoin: each transaction is publicly announced, providing an unprecedented opportunity to study the monetary transactions of individuals. We have downloaded and compiled the complete list of transactions, and we have extracted the time and amount of each payment. We have studied the structure and evolution of the transaction network, and we have investigated the dynamics taking place on the network, i.e. the flow of bitcoins.\n\nMeasuring basic network characteristics as a function of time, we have identified two distinct phases in the lifetime of the system: (i) When the system was new, no businesses accepted bitcoins as a form of payment; therefore, Bitcoin was more of an experiment than a real currency. This initial phase is characterized by large fluctuations in network characteristics, a heterogeneous indegree distribution and a homogeneous outdegree distribution. (ii) Later, Bitcoin received wider public attention, the increasing number of users attracted services, and the system started to function as a real currency. This trading phase is characterized by stable network measures, disassortative degree correlations and power-law in- and outdegree distributions. We have measured the microscopic link formation statistics, finding that linear preferential attachment drives the growth of the network.\n\nTo study the accumulation of bitcoins we have measured the wealth distribution at different points in time. We have found that this distribution is highly heterogeneous throughout the lifetime of the system, and it converges to a stable stretched exponential distribution in the trading phase. We have found that sublinear preferential attachment drives the accumulation of wealth. Investigating the correlation between the wealth distribution and network topology, we have identified a scaling relation between the degree and wealth associated with individual nodes, implying that the ability to attract new connections and the ability to gain wealth are fundamentally related.\n\nWe believe that the data presented in this paper has great potential to be used for evaluating and refining econophysics models, as not only the bulk properties, but also the microscopic statistics can be readily tested. To this end, we make all the data used in this paper available online to the scientific community in easily accessible formats\u00a0.\n\n# Acknowledgments\n\nThe authors thank Andr\u00e1s Bodor and Philipp H\u00f6vel for many useful discussions and suggestions. This work has been supported by the European Union under grant agreement No. FP7-ICT-255987-FOC-II Project. The authors thank the partial support of the European Union and the European Social Fund through project FuturICT.hu (grant no.: TAMOP-4.2.2.C-11\/1\/KONV-2012-0013), the OTKA 7779 and the NAP 2005\/KCKHA005 grants. The EITKIC_12-1-2012-0001 project was partially supported by the Hungarian Government, managed by the National Development Agency, and financed by the Research and Technology Innovation Fund and the MAKOG Foundation.","meta":{"dup_signals":{"dup_doc_count":18,"dup_dump_count":16,"dup_details":{"curated_sources":2,"2023-23":2,"2021-49":1,"2021-21":1,"2020-10":1,"2019-43":1,"2019-39":1,"2019-09":1,"2018-47":1,"2018-22":1,"2018-09":1,"2017-43":1,"2017-39":1,"2016-07":1,"2024-10":1,"2024-18":1}},"filename":"out\/1308.3892_extract_bitcoin_arxiv.tex.md"},"subset":"arxiv"} +{"text":"author: B.-O. Demory$^{1}$, M. Gillon$^{2}$, D. Deming$^3$, D. Valencia$^1$, S. Seager$^1$, B. 
Benneke$^1$, C. Lovis$^4$, P. Cubillos$^5$, J. Harrington$^5$, K. B. Stevenson$^5$, M. Mayor$^4$, F. Pepe$^4$, D. Queloz$^4$, D. S\u00e9gransan$^4$, S. Udry$^4$\ndate: Received 3 May 2011 \/ Accepted 31 July 2011\ntitle: Detection of a transit of the super-Earth 55\u2006Cnc\u2006e with warm\u00a0*Spitzer*[^1]\n\n# Introduction\n\nRadial velocity (RV), microlensing and transit surveys have revealed the existence in our Galaxy of a large population of planets with a mass of a few to $\\sim$``{=html}20 Earth masses (Lovis et al. 2009; Sumi et al. 2010; Borucki et al. 2011). Based on their mass (or minimum mass for RV planets), these planets are loosely classified as \"super-Earths\" ($M_p \\le10\\: M_\\oplus$) and \"Neptunes\" ($M_p > 10\\: M_\\oplus$). This classification is based on the theoretical limit for gravitational capture of H\/He, $\\sim$``{=html}10 $M_\\oplus$ (e.g., Rafikov 2006), and thus implicitly assumes that Neptunes are predominantly ice giants with a significant H\/He envelope, and that most super-Earths are massive terrestrial planets. Still, the diversity of this planetary population is probably much larger than sketched by this simple division, as we can expect from the stochastic nature of planetary formation.\n\nThe first transit of one of these low-mass planets, GJ\u2006436\u2006b, was detected in 2007 (Gillon et al. 2007). Thanks to its transiting nature, the actual mass ($M_p= 23.2 \\pm 0.8\\: M_\\oplus$) and radius ($R_p=4.22 \\pm 0.10\\: R_\\oplus$) of GJ\u2006436\u2006b could be accurately determined (Torres, 2007), indicating for this \"hot Neptune\" a mass, radius and density indeed very similar to the ice giant planets Uranus and Neptune. More recently, several other transiting low-mass planets were detected. While many more planet candidates detected by the $Kepler$ mission are waiting for confirmation (Borucki et al. 2011), the first confirmed low-mass transiting planets already show a large diversity. Some of these planets, like HAT-P-11\u2006b (Bakos et al. 2010) and Kepler-4\u2006b (Borucki et al. 2010b), are similar to Neptune and GJ\u2006436\u2006b. Kepler-11\u2006c (Lissauer et al. 2011) seems to be a smaller version of Neptune, while HAT-P-26\u2006b (Hartman et al. 2010) has a much lower density (0.4 $\\pm$ 0.10 g\u2006cm$^{-3}$ $vs$ 1.64 g\u2006cm$^{-3}$ for Neptune) that is consistent with a significantly larger H\/He fraction. The super-Earths CoRoT-7\u2006b (L\u00e9ger et al. 2009, Hatzes et al. 2010) and Kepler-10\u2006b (Batalha et al. 2011) are probably massive rocky planets formed in the inner part of their protoplanetary disks. The super-Earth GJ\u20061214\u2006b (Charbonneau et al. 2009) is still mysterious in nature. Its large radius ($R_p = 2.44 \\pm 0.21\\: R_{\\oplus}$, Carter et al. 2011) suggests a significant gaseous envelope that could originate from the outgassing of the rocky\/icy surface material of a terrestrial planet or that could be of primordial origin, making it a kind of \"mini-Neptune\" (Rogers & Seager 2010). Recent transit transmission spectrophotometric measurements for GJ\u20061214\u2006b seem to rule out a cloud-free atmosphere composed primarily of hydrogen (Bean et al. 2010, D\u00e9sert et al. 2011), but more atmospheric measurements are needed to determine the exact nature of its envelope. 
The case of GJ\u20061214\u2006b shows nicely that understanding the true nature of a low-mass exoplanet could require not only precise measurements of its mass and radius, but also a study of its atmospheric properties.\n\nAmong all the low-mass transiting exoplanets detected so far, only GJ\u20061214\u2006b and GJ\u2006436\u2006b, and to a lesser extent HAT-P-11\u2006b and HAT-P-26\u2006b, orbit around stars small enough and bright enough in the infrared to make possible a thorough atmospheric characterization with existing or future facilities like JWST (e.g., Shabram et al. 2011). Improving our understanding of the low-mass planet population orbiting around solar-type stars requires that such planets are detected in transit in front of much nearer\/brighter host stars than the targets of surveys like CoRoT (Barge et al. 2008) or *Kepler* (Borucki et al. 2010a). This is the main goal of two ambitious space mission projects in development: PLATO (Catala et al. 2010) and TESS (Ricker et al. 2010). Still, another and more straightforward possibility exists. Doppler surveys target bright nearby stars, and they have now detected enough nearby low-mass planets to make it highly probable that a few of them transit their parent stars. This motivated us to search with *Spitzer* for transits of low-mass Doppler planets having the highest transit probability. In a previous paper (Gillon et al. 2010, hereafter G10), we described the reasons that led us to conclude that *Spitzer* and its Infra-Red Array Camera (IRAC, Fazio et al. 2004) were the best instrumental choice for this transit search, and presented the results of our *Spitzer* cycle 5 program targeting HD\u200640307\u2006b (Mayor et al. 2009). The rest of our program consists of a cycle 6 DDT program (ID 60027) of 100 hours that targeted ten other low-mass planets. *Spitzer*'s cryogen was depleted at the end of cycle 5, and these observations were thus carried out in non-cryogenic mode (\"warm *Spitzer*\").\n\nThe recent announcement of the detection of transits of 55 Cnc e by the *MOST* satellite (Winn et al. 2011) motivated the publication of this paper. Our initial analysis of the warm *Spitzer* data taken last January concluded that a transit was detected, but also that several sources of instrumental effects needed to be fully characterized before securing the detection. We only recently obtained a satisfactory instrumental model for warm *Spitzer* photometry, through a global analysis of calibration data and of all the observations of our cycle 6 program (Gillon et al., in prep.). Once applied to our 55 Cnc data, this instrumental model leads not only to the firm detection of the transit of 55\u2006Cnc\u2006e, but also to a precise determination of its transit parameters.\n\nSection 2 presents our derivation of the transit ephemeris from the published RVs. In Section\u00a03, we present our data and their analysis, which reveals the transiting nature of the planet. We discuss our transit detection and its implications in Section\u00a04.\n\n# Transit ephemeris estimation\n\nWe performed a global analysis of all the available RVs for 55\u2006Cnc to estimate the most reliable transit ephemeris for 55\u2006Cnc\u2006e. This analysis was done with the adaptive Markov Chain Monte-Carlo (MCMC) algorithm described in G10. We assumed Keplerian orbits for the five planets orbiting 55\u2006Cnc, after having checked that planet-planet interactions had a negligible influence on our solutions, using for this purpose the *Systemic Console* software package (Meschiari et al. 2009). 
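For reference, the Keplerian RV contribution of a single planet, of the kind summed over the five planets in this analysis, can be evaluated as in the sketch below. This is a schematic implementation only; the parameter values are placeholders of roughly the right order of magnitude, not the adopted orbital solution.

```python
import numpy as np

def kepler_E(M, e, tol=1e-10):
    """Solve Kepler's equation E - e*sin(E) = M by Newton iteration."""
    E = M.copy()
    for _ in range(50):
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E -= dE
        if np.max(np.abs(dE)) < tol:
            break
    return E

def keplerian_rv(t, P, K, e, omega, T_peri, gamma=0.0):
    """Stellar radial velocity induced by one planet on a Keplerian orbit."""
    M = np.mod(2.0 * np.pi * (t - T_peri) / P, 2.0 * np.pi)     # mean anomaly
    E = kepler_E(M, e)
    nu = 2.0 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                          np.sqrt(1 - e) * np.cos(E / 2))        # true anomaly
    return gamma + K * (np.cos(nu + omega) + e * np.cos(omega))

# Placeholder values of the order reported for 55 Cnc e (see Table 1)
t = np.linspace(0.0, 2.0, 200)
rv = keplerian_rv(t, P=0.7365437, K=5.93, e=0.06, omega=np.deg2rad(202), T_peri=0.0)
```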
Our analysis was based on the orbital solution recently presented for 55\u2006Cnc\u2006e by Dawson & Fabrycky (2010). As shown by these authors, the orbital period value initially reported for this planet, 2.8 days (McArthur et al. 2004; Fischer et al. 2008), was an alias of the true period, 0.74 day. We verified this result by making two independent MCMC analyses of the RVs, one assuming $P\\sim 0.74$ day and the other assuming $P\\sim 2.8$ days.\n\nUsing the Bayesian Information Criterion (BIC; e.g. Carlin & Louis 2008) to estimate the marginal likelihood of both models, and assuming that these models have the same prior probability, we obtained an odds ratio of $\\sim10^{16}$ in favor of the P = 0.74 day model, indicating a decisive strength of evidence for this model (Jeffreys 1961). The best-fitting model obtained from this analysis was used to estimate the jitter noise in the RV datasets. 6.0 for Lick, 4.3 for Keck, 5.5 for HET and 15 for ELODIE were added in quadrature to the published error bars to derive the uncertainties on the physical parameters of 55\u2006Cnc\u2006e presented in Table\u00a01.\n\nIn addition to some basic parameters for the host star, the origin of the RVs used as input data, and a description of our warm *Spitzer* observations, Table\u00a01 provides the most relevant results of our MCMC analysis for 55\u2006Cnc\u2006e. The large transit probability, $\\sim$``{=html}29%, and the very well constrained transit ephemeris (1$\\sigma$ error $<$ 1 hour in 2011) of this super-Earth ($M_p = 7.8 \\pm 0.6\\: M_\\oplus$) made it an extremely interesting target for our transit search program.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Table 1. Basic data for the star 55\u2006Cnc, relevant results of our MCMC analysis of the RVs, and description of the data (RVs + warm *Spitzer* observations) used in this work. $^{1}$Van Leeuwen (2007). $^{2}$Turon et al. (1993). $^{3}$Skrutskie et al. (2006). $^{4}$Fischer et al. (2008). $^{5}$Valenti & Fischer (2005). $^{6}$von Braun et al. (2011). $^{7}$McArthur et al. (2004). $^{8}$Naef et al. (2004). $^{a}$Assuming $M_p \sin i = M_p$. The minimum and maximum values correspond, respectively, to a pure iron and a pure hydrogen planet (Seager et al. 2007). $^{b}$Assuming a null albedo and a heat distribution factor $f' = 1/4$ (Seager 2010). $^{c}$AOR = Astronomical Observation Request = *Spitzer* observing sequence. $^{d}$BCD = Basic Calibrated Data = block of 64 subarray exposures.

| Parameter | Value |
|---|---|
| **Star** | 55\u2006Cnc |
| Distance $d$ [parsec] | $12.34 \pm 0.12$ $^{1}$ |
| $V$ magnitude | $5.96 \pm 0.01$ $^{2}$ |
| $K$ magnitude | $4.02 \pm 0.03$ $^{3}$ |
| Spectral type | K0V - G8V $^{4}$ |
| Effective temperature [K] | $5234 \pm 30$ $^{5}$ |
| Surface gravity $\log g$ [cgs] | $4.45 \pm 0.08$ $^{5}$ |
| Metallicity Fe/H [dex] | $+0.31 \pm 0.04$ $^{5}$ |
| Mass $M_\ast$ [$M_\odot$] | $0.905 \pm 0.015$ $^{6}$ |
| Radius $R_\ast$ [$R_\odot$] | $0.943 \pm 0.010$ $^{6}$ |
| **RV data** | 250 Lick $^{4}$, 70 Keck $^{4}$, 119 HET $^{7}$, 48 ELODIE $^{8}$ |
| **Planet (MCMC results)** | 55\u2006Cnc\u2006e |
| Minimal mass $M_p \sin i$ [$M_\oplus$] | $7.80 \pm 0.56$ |
| Expected radius $R_p$ [$R_\oplus$] $^{a}$ | 1.3 - 5.7 |
| Expected area ratio $(R_p/R_\ast)^2$ [ppm] | 150 - 3000 |
| Equilibrium temperature $T_{eq}$ [K] $^{b}$ | $1958 \pm 15$ |
| $T_{transit} - 2450000$ [HJD] | $5568.011 \pm 0.025$ |
| $T_{occultation} - 2450000$ [HJD] | $5568.368 \pm 0.030$ |
| Orbital period $P$ [d] | $0.7365437 \pm 0.0000052$ |
| Central transit duration $W_{b=0}$ [min] | $98 \pm 2$ |
| RV semi-amplitude $K$ [m s$^{-1}$] | $5.93 \pm 0.42$ |
| Semi-major axis $a$ [AU] | $0.01544 \pm 0.00009$ |
| Eccentricity $e$ | $0.061_{-0.043}^{+0.065}$ |
| Argument of periastron $\omega$ [deg] | $202_{-70}^{+88}$ |
| *Prior* transit probability [%] | $28.9 \pm 1.5$ |
| *Prior* occultation probability [%] | $29.3 \pm 1.8$ |
| **Warm *Spitzer* data** | |
| Channel [$\mu$m] | 4.5 |
| AOR $^{c}$ | 39524608 |
| Exposure time [s] | 0.01 |
| $N_{BCD}$ $^{d}$ | 5240 |
| Duration [hr] | 5 |
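As an aside, the quality of this ephemeris is easy to verify by propagating the transit epoch and period of Table 1 forward in time with the two uncertainties added in quadrature; a minimal sketch (values copied from Table 1):

```python
import numpy as np

# Transit ephemeris of 55 Cnc e from Table 1 (HJD - 2450000 and days)
T0, sig_T0 = 5568.011, 0.025
P,  sig_P  = 0.7365437, 0.0000052

def predicted_transit(n):
    """Predicted time of the n-th transit after T0 and its 1-sigma uncertainty (days)."""
    t = T0 + n * P
    sigma = np.sqrt(sig_T0**2 + (n * sig_P)**2)   # epoch and period errors added in quadrature
    return t, sigma

# Roughly one year (~496 orbits) after the reference epoch
t, sigma = predicted_transit(496)
print(t, sigma * 24.0, "hours")   # about 0.6 h, i.e. still below one hour
```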
# Data analysis\n\n## Data description\n\n55\u2006Cnc was observed by *Spitzer* on 6 January 2011 from 9h41 to 14h39 UT. The data consist of 5240 sets of 64 individual subarray images obtained by the IRAC detector at 4.5 $\\mu$m with an integration time of 0.01s, and calibrated by the *Spitzer* pipeline version S18.18.0. They are available on the *Spitzer* Heritage Archive database[^2] in the form of 5240 Basic Calibrated Data (BCD) files. We first converted fluxes from the *Spitzer* units of specific intensity (MJy\/sr) to photon counts, then performed aperture photometry on each subarray image with the `IRAF\/DAOPHOT`[^3] software (Stetson, 1987). We tested different aperture radii and background annuli, the best result being obtained with an aperture radius of 3 pixels and a background annulus extending from 11 to 15.5 pixels from the PSF center. The center of the PSF was measured by fitting a Gaussian profile to each image. We discarded the first ten minutes of data to allow the detector to stabilize. The $x$-$y$ distribution of the measurements was then examined, and we discarded the few measurements whose position differed strongly from the bulk of the data. For each block of 64 subarray images, we then discarded measurements with discrepant values of flux, background, and $x$ and $y$ positions using median clipping (5$\\sigma$ for the flux and 10$\\sigma$ for the other parameters), and the resulting values were averaged, the photometric error being taken as the error on the average flux measurement. At this stage, a 50$\\sigma$ sliding median clipping was used on the resulting light curve to discard totally discrepant fluxes.\n\nFigure 1 shows the resulting raw light curve, and the time-series for the background and the $x$ and $y$ positions. As can be seen in Fig.\u00a01 and Fig.\u00a02, the measured background showed an unusual evolution during the run. It remained stable for $\\sim$3.5 hrs, then it increased abruptly by a few %, and finally its scatter increased strongly. Such behavior is most probably of instrumental origin. We included this instrumental effect in our data modeling (see below).\n\n## Modeling the systematics\n\nThe IRAC 3.6 and 4.5 $\\mu$m detectors are composed of InSb arrays that show a strong intrapixel quantum efficiency (QE) variability, the QE being maximal in the middle of the pixel and decreasing towards the edges. The full-width at half maximum (FWHM) of the point-spread function (PSF) is $\\sim$1.7 pixels. This undersampling of the PSF combined with the QE intrapixel variability leads to a strong dependence of the measured stellar flux on the exact location of the PSF center in a pixel. As *Spitzer*'s pointing wobbles with an amplitude of $\\sim$0.1 pixel and a period of $\\sim$1h, this leads to a severe systematic effect in the photometric time-series acquired at 3.6 and 4.5 $\\mu$m, known as the \"pixel-phase\" effect. This effect was already present in the cryogenic part of the *Spitzer* mission and is very well documented (e.g., Knutson et al. 2008 and references therein). It is the main limit to the photometric precision of warm *Spitzer* (e.g., Ballard et al. 2010). From a comprehensive analysis, the *Spitzer* engineering team recently identified the cause of the *Spitzer* pointing wobble as the thermal cycling of a heater used to keep a battery within its nominal temperature range[^4]. 
After extensive testing and review, it was decided to reduce by a factor of two the thermal amplitude of the cycling while increasing its frequency, to make it differ more from the typical frequency of planetary transits and occultations. Our data were obtained after this heater change. The correlation between the measured fluxes and the stellar image position is clearly noticeable in the raw light curve, the resulting periodic pattern having a typical period of $\\sim$ 35 min, corresponding to a cycle of the heater after the engineering change. We modeled this \"pixel-phase\" effect with the following 2$^{nd}$-order $x$ and $y$ position polynomial: $$\\begin{aligned}\nA(dx,dy) & = & a_1+a_2dx +a_3dx^2+a_4dy+a_5dy^2 \\nonumber\\\\\n & & +a_6dxdy \\textrm{,}\n\\end{aligned}$$ where $dx$ and $dy$ are the distances of the PSF center from the center of the pixel. This model for the \"pixel-phase\" effect is quite classical in the exoplanet literature (e.g., Knutson et al. 2008, D\u00e9sert et al. 2009). Correcting the light curve with the best-fit \"pixel-phase\" model led to the light curve visible in Fig.\u00a03. It shows a drop of brightness with an amplitude compatible with a transit of 55\u2006Cnc\u2006e. It also shows some other low-amplitude flux modulations that are caused by other warm *Spitzer* systematic effects (see below).\n\nOne could argue that the transit-like pattern could be caused by the imperfect correction of the \"pixel-phase\" effect by the function shown in Eq.\u00a01. This is very unlikely, as the duration of the transit-like structure does not correspond to that of the wobbles of the stellar position on the chip. To firmly discard this possibility, we tried 3$^{rd}$ and 4$^{th}$-order versions of Eq.\u00a01 that led to very similar light curves. We also corrected the \"pixel-phase\" effect by a totally different method that relies only on the data themselves and not on any numerical function. We divided the pixel area sampled by the PSF center into $33\\times33$ small boxes. If at least 5 subarray measurements fell into a given box, and if these measurements sampled at least 0.14 days (70% of the duration of the run), the corresponding measurements were divided by their mean value. If these two conditions were not met for a given box, its measurements were discarded. The reduction procedure was then identical to the one described above. The light curve obtained after this correction by an \"intrapixel flatfield\" was very similar (pattern, scatter) to the one visible in Fig.\u00a03. To assess the dependency of the observed transit-like structure on the details of the reduction procedure, several independent reductions of the data were performed by four of us (M. G., B.-O. D., D. D., P. C.), all using different reduction and detrending procedures. We also tested performing photometry on the 5240 images resulting from the averaging of the 64 subarray images of each BCD file, using a median filter to reject outlying pixels. Finally, we inspected the light curves obtained without background subtraction. In all cases, the obtained light curves were very similar to the one shown in Fig.\u00a03, confirming the independence of the obtained photometry from the details of the reduction procedure.\n\nAt this stage, we performed a thorough MCMC analysis of our photometry to deduce the transit detection significance, using as input data the raw light curve obtained with an aperture of 3 pixels. Our model assumed a mass of $0.905 \\pm 0.015 M_\\odot$ for 55\u2006Cnc (von Braun et al. 
2011), and a circular orbit with $P=0.7365437$ days for 55\u2006Cnc\u2006e (Sect.\u00a02). We used the model of Mandel & Agol (2002) for the transit, in addition to the following model for the photometric variations of instrumental and stellar origin: $$\\begin{aligned}\nA(dx,dy,dt) & = & a_1+a_2dx +a_3dx^2+a_4dy+a_5dy^2 \\nonumber\\\\\n & & + a_6dxdy + a_7dt \\nonumber\\\\\n & & + a_8\\sin \\bigg( \\frac{dt - a_{9}}{a_{10}} \\bigg) \\nonumber\\\\\n & & + a_{11} \\log{dt} + a_{12} \\log{dt}^2 \\textrm{,}\n\\end{aligned}$$ where $dt$ is the time elapsed since 2455568.05 BJD, i.e. the time at which the background increases sharply (Fig.\u00a01 & 2). The $a_{11}$ and $a_{12}$ terms were only applied for $dt > 0$. The six first terms of this equation correspond to the \"pixel-phase\" model (Eq.\u00a01). The purpose of the linear term in $dt$ is to model a possible smooth variation of the stellar brightness. The other terms result from our extensive analysis of our entire set of warm *Spitzer* data (Gillon et al., in prep.) and of available calibration data that lead us to conclude to a low-amplitude periodic variability of the effective gain of the detector, its typical period lying between 30 and 60 minutes and its amplitude being in average of a few dozens of ppm. Considering the challenging photometric precision required by our program, it is very important to take it into account, justifying the sine term in Eq.\u00a02. We are currently working with the *Spitzer* engineering team to find the origin of this periodic variation. We also notice that a \"background explosion\" such as the one affecting the last part of our data is correlated to a sharp increase of the effective gain of the detector that is very well modeled by the last two terms of Eq.\u00a02. The MCMC uses the whole dataset to simultaneously fit for the transit model and the baseline function presented in Eq.\u00a02.\n\n## MCMC analysis and model comparison\n\nThe following parameters were jump parameters[^5] in our analysis: the planet\/star area ratio $(R_p \/R_s )^2$, the transit width (from first to last contact) $W$, the impact parameter $b = a \\cos{i}\/R_\\ast$, and the time of minimum light $T_0$. We assumed a uniform prior distribution for these jump parameters, but we imposed a Gaussian prior for the stellar radius $R_\\ast$ based on $R_\\ast = 0.943 \\pm 0.010 R_\\odot$ (Table 1). We assumed a quadratic limb-darkening law with coefficients $u_1=0.0706$ and $u_2=0.1471$. These values were drawn from the theoretical tables of Claret & Bloemen (2011) for the IRAC 4.5 $\\mu$m bandpass and for = 5250 K, =4.5 and \\[Fe\/H\\]=+0.3. The 12 coefficients of the baseline models (Eq.\u00a02) were determined by least-square minimization at each steps of the Markov chains (see G10 and references therein for details). The correlated noise present in the LC was taken into account as described by G10, i.e., a scaling factor $\\beta_r$ was determined from the standard deviation of the binned and unbinned residuals of a preliminary MCMC analysis, and it was applied to the error bars. Several binning intervals ranging from 10 to 90 minutes were tried in preliminary short Markov Chains, and the maximal value for $\\beta_r$, 1.35, was used in our analysis.\n\nWe performed two new MCMC analyses, one with a transit of 55\u2006Cnc\u2006e, and one without. Figure 4 shows the resulting best-fit transit model and its residuals. The odds ratio (Eq.\u00a02 alone) $vs$ (Eq.\u00a02 + transit) is $\\sim 10^8$ in favor of the transit model. 
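Since the marginal likelihoods here are estimated from the BIC (as in Sect. 2), such an odds ratio maps onto a BIC difference through the usual approximation BF $\approx e^{-\Delta \mathrm{BIC}/2}$ under equal prior model probabilities. The sketch below is illustrative only; as a cross-check it roughly reproduces the Bayes factors later quoted in Table 3 from their BIC values.

```python
import math

def bayes_factor_from_bic(bic_model, bic_reference):
    """Approximate Bayes factor of `model` over `reference`:
    BF ~ exp(-(BIC_model - BIC_reference)/2), assuming equal prior model probabilities."""
    return math.exp(-(bic_model - bic_reference) / 2.0)

# BIC values for the baseline models listed in Table 3
# (Bayes factors there are quoted relative to the 'p' baseline)
bic = {"p": 804, "p+t": 758, "p+s+t": 758, "p+j+t": 746, "p+s+j+t": 732}
for name, value in bic.items():
    print(name, f"{bayes_factor_from_bic(value, bic['p']):.1e}")

# An odds ratio of ~1e8 corresponds to a BIC difference of about 2*ln(1e8) ~ 37
print(2.0 * math.log(1e8))
```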
The transit of 55\u2006Cnc\u2006e is thus firmly detected. The period of the sinusoid ($a_{10}$) derived from the MCMC is 51 minutes, well decoupled from the transit duration (96 minutes) and significantly longer than the pixel-phase timescale (35 minutes). Its amplitude is 115$\\pm$27 ppm. We show in Fig.\u00a05 the different contributions of the spatially- and time-dependent terms of Eq.\u00a02. This shows that the time-dependent terms are well decoupled from the transit pattern. Table\u00a02 presents the resulting transit and physical parameters and 1$\\sigma$ error limits derived for 55\u2006Cnc\u2006e.\n
```latex
\begin{table}[h]\begin{center}
\begin{tabular}{lc}
\hline \noalign {\smallskip}
$(R_p/R_\ast)^2$ [ppm] & $410 \pm 63$ \\ \noalign {\smallskip}
$b = a\cos{i}/R_\ast$ [$R_\ast$] & $0.16_{-0.10}^{+0.13}$ \\ \noalign {\smallskip}
Transit width $W$ [d] & $0.0665_{-0.0019}^{+0.0011}$ \\ \noalign {\smallskip}
$T_0 - 2450000$ [BJD] & $5568.0265_{-0.0010}^{+0.0015}$ \\ \noalign {\smallskip}
$R_p/R_\ast$ & $0.0202_{-0.0016}^{+0.0015}$ \\ \noalign {\smallskip}
$a/R_\ast$ & $3.517_{-0.040}^{+0.041}$ \\ \noalign {\smallskip}
Inclination $i$ [deg] & $87.3_{-2.1}^{+1.7}$ \\ \noalign {\smallskip}
Radius $R_p$ [$R_\oplus$] & $2.08_{-0.17}^{+0.16}$ \\ \noalign {\smallskip}
Mass $M_p$ [$M_\oplus$] & $7.81_{-0.53}^{+0.58}$ \\ \noalign {\smallskip}
Mean density $\rho_p$ [g cm$^{-3}$] & $4.78_{-1.20}^{+1.31}$ \\ \noalign {\smallskip}
\hline \noalign {\smallskip}
\end{tabular}
\caption{Median and 1$\sigma$ limits of the posterior distributions derived for 55 Cnc e from our MCMC analysis of our warm \emph{Spitzer} photometry. The mass and mean density are derived from the parameters in Table~1.}
\end{center}
\end{table}
```
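As a quick consistency check on the values in this table, the derived radius and mean density can be recovered from the transit depth, the adopted stellar radius ($R_\ast = 0.943\,R_\odot$), and the planetary mass. The short sketch below performs this arithmetic; the solar and terrestrial constants are standard reference values assumed for the illustration, not quantities taken from the paper.

```python
import math

# Inputs quoted in the table and text
R_star_rsun = 0.943          # stellar radius [R_sun]
depth_ppm = 410.0            # (Rp/R*)^2 from the MCMC [ppm]
M_p_earth = 7.81             # planetary mass [M_earth]

# Standard reference constants (assumed here, not from the paper)
R_SUN = 6.957e8              # m
R_EARTH = 6.371e6            # m
M_EARTH = 5.972e24           # kg

ratio = math.sqrt(depth_ppm * 1e-6)            # Rp/R*       -> ~0.0202
R_p_m = ratio * R_star_rsun * R_SUN            # planetary radius [m]
R_p_earth = R_p_m / R_EARTH                    #             -> ~2.08 R_earth

volume = 4.0 / 3.0 * math.pi * R_p_m**3        # [m^3]
rho = M_p_earth * M_EARTH / volume * 1e-3      # [g cm^-3]   -> ~4.8
print(f"Rp/R* = {ratio:.4f}, Rp = {R_p_earth:.2f} R_earth, rho = {rho:.2f} g/cm^3")
```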
We also conducted a residual permutation bootstrap analysis, known as the prayer bead method (Gillon et al. 2006), to obtain an additional estimate of the residual correlated noise. We used for this purpose the light curve corrected for the systematic effects described in Eq.\u00a02. The resulting parameters are in excellent agreement with the ones derived from the MCMC analysis (Table 2), while their error bars are significantly smaller. This result indicates that the error budget is dominated by the uncertainties on the parameters of the complex baseline model, and not by the residual correlated noise.\n\nTo test the robustness of our transit detection and of the resulting transit parameters, we performed $\\sim$ 10 additional MCMC analyses as described above, each of them assuming a different set of the time-dependent terms presented in Eq.\u00a02. We used a light curve binned per 30 s for this comparison, to speed up the analysis. Table\u00a03 presents the baseline model, derived depth, BIC (Bayesian Information Criterion) and Bayes factor obtained for 4 of those MCMC analyses.\n\n```latex\n\\begin{table*}[h]\\begin{center}\n\\begin{tabular}{lccccc}\n\\hline \\noalign {\\smallskip}\n & $p$ & $p + t$ & $p + s + t$ & $p + j + t$ & $p + s + j + t$ \\\\ \\noalign {\\smallskip}\n\\hline \\noalign {\\smallskip}\n$(R_p\/R_\\ast)^2 $ [ppm] & $590 \\pm 72$ & $665 \\pm 70$ & $683 \\pm 87$ & $428 \\pm 62$ & $410 \\pm 63$ \\\\ \\noalign {\\smallskip} \nBIC & 804 & 758 & 758 & 746 & 732 \\\\ \\noalign {\\smallskip} \nBayes factor & & $9.8\\times10^9$ & $9.8\\times10^{9}$ & $3.9\\times10^{12}$ & $4.3\\times10^{15}$ \\\\ \\noalign {\\smallskip} \n\\hline \\noalign {\\smallskip}\n\\end{tabular}\n\\caption{Transit depth, Bayesian Information Criterion (BIC) and Bayes factor from the MCMC obtained for 5 different model baselines. Model terms are described as follows: $p$ is the pixel-phase correction ($a_2$, $a_3$, $a_4$, $a_5$ and $a_6$ in Eq.~2), $t$ is the time-dependent linear trend ($a_7$), $s$ is the sinusoidal term ($a_8$, $a_9$ and $a_{10}$) and $j$ is the jump model ($a_{11}$ and $a_{12}$). The Bayes factor given in the table is relative to the $p$ model. Our adopted model described in Eq.~2 is the rightmost one.}\n\\end{center}\n\\end{table*}\n```\n\nWhile none of these models proved to be better than our nominal model for representing our warm *Spitzer* data (Bayes factor between $10^3$ and $10^{15}$), each of these models led to a decisive detection of the transit of 55\u2006Cnc\u2006e (Bayes factor between $10^{10}$ and $10^{50}$). For all these alternative models, the deduced values of the transit parameters agreed well with the ones deduced in our nominal analysis, except for the transit depth when the jump is not included. Nevertheless, our Bayesian model comparison makes these alternative models $>4\\times10^5$ times less probable than our nominal model. Table 3 also illustrates how the jump and the sinusoidal variation terms improve the baseline model. We thus conclude not only that the transit of 55\u2006Cnc\u2006e is firmly detected, but also that the deduced results shown in Table\u00a02 are robust.\n\n# Discussion and conclusions\n\n## Photometric precision of warm *Spitzer*\n\nBecause we have to take into account three different instrumental effects in addition to a possible smooth variation of the stellar flux, the complexity of our photometric baseline model is large (12 free parameters, Eq.\u00a02). 
This illustrates well the challenge of ultra-precise time-series IR photometry, especially with a detector that is no longer actively cooled. By modeling this baseline in addition to the transit in our MCMC analysis, we naturally take into account its uncertainties and their impact on the deduced transit parameters. Despite the complexity of the baseline model, we reach a very good precision on these transit parameters. This is due not only to the extensive characterization of the warm *Spitzer* instrumental effects performed by the exoplanet community and the *Spitzer* Science Center, but also to the extremely high cadence made possible by the IRAC subarray mode. Indeed, we have here more than 5,000 photometric measurements to constrain thoroughly the 12+4 parameters of our global model. We show here that warm *Spitzer* can not only detect an eclipse of a few hundred ppm (Ballard et al. 2010), it can also measure its depth with a precision of $\\sim60$ ppm, leading to the conclusion that this space telescope still has an important role to play in the detection and characterization of transiting planets.\n\n## Planetary radius\n\nOur MCMC results (see Table\u00a02) yield a planetary radius of $2.08_{-0.17}^{+0.16}\\: R_{\\oplus}$ as measured in the IRAC 4.5 $\\mu$m channel. The error bars are determined from the posterior distribution function produced by the MCMC and include the error on the stellar radius. The current planetary radius uncertainty is dominated by the error on the transit depth.\n\nThe radius of the star itself is now extremely well constrained, thanks to recent interferometric observations of 55 Cancri performed by von Braun et al. (2011) using the CHARA array. The resulting updated stellar radius value (Table\u00a01) now yields a negligible contribution from the stellar radius to the planetary size uncertainty.\n\nIn the preprint version of this paper, we reported a planetary radius 30% larger than the one initially obtained by Winn et al. (2011) in the visible with MOST. After we submitted our paper, a new version of the Winn et al. (2011) analysis was made available that yields good agreement with our results (at the 1$\\sigma$ level).\n\n## Composition of 55\u2006Cnc\u2006e\n\nWe used the internal structure model described in Valencia et al. (2010), suitable for rocky and gaseous planets. We considered four different rocky compositions that span the possible range in radius. The upper bound for the radius is set by the lightest rocky composition, which is one where there is no iron. A planet with a radius larger than this upper limit necessarily has volatiles. The lower bound for the radius is set by a pure iron composition. Both extreme compositions are unlikely to exist given that 1) iron, magnesium and silicate have similar condensation temperatures, with the latter two making up most of the mantle of the Earth (i.e. Mg$_{0.9}$Fe$_{0.1}$O + SiO$_{2}$), and 2) a pure iron composition is unlikely even with maximal collisional stripping (Marcus et al. 2010). The other two rocky compositions are an Earth-like one (33% iron core, 67% silicate mantle with an iron fraction of 0.1 by mol) and a \"super-Mercury\" (63% iron core, 37% silicate mantle with no iron). 
We also consider volatile compositions in which we added different amounts of H$_{2}$O or hydrogen and helium (H-He) at an equilibrium temperature of $\\sim$2000 K above an Earth-like nucleus.\n\nThe data obtained in this study for the radius ($2.08^{+0.16}_{-0.17}\\: R_\\oplus$) and mass ($7.81_{-0.53}^{+0.58}\\: M_\\oplus$) place the composition of 55\u2006Cnc\u2006e at the threshold line between planets that necessarily require volatiles (above the \"no-iron\" line) and the ones that may be rocky (below the \"no-iron\" line). However, most of the combinations of mass and radius lie above the upper limit for a rocky planet, requiring that 55\u2006Cnc\u2006e have volatiles in its composition. We find that an envelope of a few parts in 10,000 of H-He, or of order $\\sim$10% water, above an Earth-like core can fit the data well. In Fig.\u00a06 we show the mass-radius relationships for the different compositions considered and the different known transiting super-Earths. Based on the same arguments proposed by Valencia et al. (2010) for CoRoT-7\u2006b, the timescale for evaporation of an H-He envelope would be too short ($\\sim$ a few million years) for it to be considered a plausible composition, whereas the timescale for water evaporation is of the same order of magnitude as the age of the system ($\\sim$ a few billion years). Thus, according to the *Spitzer* data analysed in this study, we favor a composition of an envelope of supercritical water above a solid, perhaps Earth-like, nucleus. The exact amount of volatiles will depend on the composition of the solid nucleus, with a reasonable estimate around $\\sim$15%. However, a pure rocky composition cannot be ruled out, in which case the planet would be depleted in iron with respect to Earth.\n\nSimilarly, the data for 55\u2006Cnc\u2006e reported by Winn et al. (2011) also lie at the threshold between these two types of planets, albeit with a denser composition.\n\nFigure 7 shows the density as a function of mass for several transiting super-Earths. While CoRoT-7\u2006b and Kepler-10\u2006b have practically the same composition, 55\u2006Cnc\u2006e, with its similar effective temperature and mass, has a much lighter composition. It lies between the high-density \"super-Mercuries\" and the volatile-rich planets Kepler-11\u2006b and GJ\u20061214\u2006b. Within a small range of masses, 4-9 $M_\\oplus$, the known transiting super-Earths span a relatively large variety of compositions.\n\n[^1]: The photometric time series used in this work are available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (18.104.22.168) or via http:\/\/cdsweb.u-strasbg.fr\/cgi-bin\/qcat?J\/A+A\/\n\n[^2]: http:\/\/sha.ipac.caltech.edu\/applications\/Spitzer\/SHA\n\n[^3]: IRAF is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.\n\n[^4]: http:\/\/ssc.spitzer.caltech.edu\/warmmission\/news\/21oct2010memo.pdf\n\n[^5]: Jump parameters are the parameters that are randomly perturbed at each step of the MCMC.","meta":{"dup_signals":{"dup_doc_count":14},"filename":"out\/1105.0415_extract_17178.tex.md"},"subset":"arxiv"} +{"text":"author: L.\u00a0D.\u00a0Faddeev\ntitle: Modern Mathematical Physics: \n what it should be\n\nWhen somebody asks me what I do in science, I call myself a specialist in mathematical physics. 
As I have been there for more than 40 years, I have some definite interpretation of this combination of words: \"mathematical physics.\" Cynics or purists can insist that this is neither mathematics nor physics, adding comments with a different degree of malice. Naturally, this calls for an answer, and in this short essay I want to explain briefly my understanding of the subject. It can be considered as my contribution to the discussion about the origin and role of mathematical physics and thus to be relevant for this volume.\n\nThe matter is complicated by the fact that the term \"mathematical physics\" (often abbreviated by MP in what follows) is used in different senses and can have rather different content. This content changes with time, place and person.\n\nI did not study properly the history of science; however, it is my impression that, in the beginning of the twentieth century, the term MP was practically equivalent to the concept of theoretical physics. Not only Henri Poincar\u00e9, but also Albert Einstein, were called mathematical physicists. Newly established theoretical chairs were called chairs of mathematical physics. It follows from the documents in the archives of the Nobel Committee that MP had a right to appear both in the nominations and discussion of the candidates for the Nobel Prize in physics . Roughly speaking, the concept of MP covered theoretical papers where mathematical formulae were used.\n\nHowever, during an unprecedented bloom of theoretical physics in the 20s and 30s, an essential separation of the terms \"theoretical\" and \"mathematical\" occurred. For many people, MP was reduced to the important but auxiliary course \"Methods of Mathematical Physics\" including a set of useful mathematical tools. The monograph of P.\u00a0Morse and H.\u00a0Feshbach is a classical example of such a course, addressed to a wide circle of physicists and engineers.\n\nOn the other hand, MP in the mathematical interpretation appeared as a theory of partial differential equations and variational calculus. The monographs of R.\u00a0Courant and D.\u00a0Hilbert and S.\u00a0Sobolev are outstanding illustrations of this development. The theorems of existence and uniqueness based on the variational principles, a priori estimates, and imbedding theorems for functional spaces comprise the main content of this direction. As a student of O.\u00a0Ladyzhenskaya, I was immersed in this subject since the 3rd year of my undergraduate studies at the Physics Department of Leningrad University. My fellow student N.\u00a0Uraltseva now holds the chair of MP exactly in this sense.\n\nMP in this context has as its source mainly geometry and such parts of classical mechanics as hydrodynamics and elasticity theory. Since the 60s a new impetus to MP in this sense was supplied by Quantum Theory. Here the main apparatus is functional analysis, including the spectral theory of operators in Hilbert space, the mathematical theory of scattering and the theory of Lie groups and their representations. The main subject is the Schr\u00f6dinger operator. Though the methods and concrete content of this part of MP are essentially different from those of its classical counterpart, the methodological attitude is the same. One sees the quest for the rigorous mathematical theorems about results which are understood by physicists in their own way.\n\nI was born as a scientist exactly in this environment. 
I graduated from the unique chair of Mathematical Physics, established by V.I.\u00a0Smirnov at the Physics Department of Leningrad University already in the 30s. In his venture V.I.\u00a0Smirnov got support from V.\u00a0Fock, the world famous theoretical physicist with very wide mathematical interests. Originally this chair played the auxiliary role of being responsible for the mathematical courses for physics students. However, in 1955 it got permission to supervise its own diploma projects, and I belonged to the very first group of students using this opportunity. As I already mentioned, O.A.\u00a0Ladyzhenskaya was our main professor. Although her own interests were mostly in nonlinear PDE and hydrodynamics, she decided to direct me to quantum theory. During the last two years of my undergraduate studies I was to read the monograph of K.O.\u00a0Friedrichs, \"Mathematical Aspects of Quantum Field Theory,\" and present it to our group of 5 students and our professor in a special seminar. At the same time my student friends from the chair of Theoretical Physics were absorbed in reading the first monograph on Quantum Electrodynamics by A.\u00a0Ahieser and V.\u00a0Berestevsky. The difference in attitudes and language was striking, and I was to become accustomed to both.\n\nAfter my graduation O.A.\u00a0Ladyzhenskaya remained my tutor but she left me free to choose research topics and literature to read. I read both mathematical papers (i.e.\u00a0on direct and inverse scattering problems by I.M.\u00a0Gelfand and B.M.\u00a0Levitan, V.A.\u00a0Marchenko, M.G.\u00a0Krein, A.Ya.\u00a0Povzner) and \"Physical Review\" (i.e.\u00a0on formal scattering theory by M.\u00a0Gell-Mann, M.\u00a0Goldberger, J.\u00a0Schwinger and H.\u00a0Ekstein) as well. Papers by I.\u00a0Segal, L.\u00a0Van-Hove and R.\u00a0Haag added to my first impressions of Quantum Field Theory taken from K.\u00a0Friedrichs. In the process of this self-education my own understanding of the nature and goals of MP gradually deviated from the prevailing views of the members of the V.\u00a0Smirnov chair. I decided that it was more challenging to do something not already known to my colleagues from theoretical physics than to supply theorems of substantiality. My first works, on the inverse scattering problem (especially for the many-dimensional Schr\u00f6dinger operator) and on the three-body scattering problem, confirm that I really tried to follow this line of thought.\n\nThis attitude became even firmer when I began to work on Quantum Field Theory in the middle of the 60s. As a result, my understanding of the goal of MP was drastically modified. I consider as the main goal of MP the use of mathematical intuition for the derivation of really new results in fundamental physics. In this sense, MP and Theoretical Physics are competitors. Their goals in unraveling the laws of the structure of matter coincide. However, the methods and even the estimates of the importance of the results of work may differ quite significantly.\n\nHere it is time to say in what sense I use the term \"fundamental physics.\" The adjective \"fundamental\" has many possible interpretations when applied to the classification of science. In a wider sense it is used to characterize the research directed to unraveling new properties of physical systems. 
In the narrow sense it is kept only for the search for the basic laws that govern and explain these properties.\n\nThus, all chemical properties can be derived from the Schr\u00f6dinger equation for a system of electrons and nuclei. Alternatively, we can say that the fundamental laws of chemistry in a narrow sense are already known. This, of course, does not deprive chemistry of the right to be called a fundamental science in a wide sense.\n\nThe same can be said about classical mechanics and the quantum physics of condensed matter. Whereas the largest part of physical research lies now in the latter, it is clear that all its successes including the theory of superconductivity and superfluidity, Bose-Einstein condensation and quantum Hall effect have a fundamental explanation in the nonrelativistic quantum theory of many body systems.\n\nAn unfinished physical fundamental problem in a narrow sense is physics of elementary particles. This puts this part of physics into a special position. And it is here where modern MP has the most probable chances for a breakthrough.\n\nIndeed, until recent time, all physics developed along the traditional circle: experiment \u2014 theoretical interpretation \u2014 new experiment. So the theory traditionally followed the experiment. This imposes a severe censorship on the theoretical work. Any idea, bright as it is, which is not supplied by the experimental knowledge at the time when it appeared is to be considered wrong and as such must be abandoned. Characteristically the role of censors might be played by theoreticians themselves and the great L.\u00a0Landau and W.\u00a0Pauli were, as far as I can judge, the most severe ones. And, of course, they had very good reason.\n\nOn the other hand, the development of mathematics, which is also to a great extent influenced by applications, has nevertheless its internal logic. Ideas are judged not by their relevance but more by esthetic criteria. The totalitarianism of theoretical physics gives way to a kind of democracy in mathematics and its inherent intuition. And exactly this freedom could be found useful for particle physics. This part of physics traditionally is based on the progress of accelerator techniques. The very high cost and restricted possibilities of the latter soon will become an uncircumventable obstacle to further development. And it is here that mathematical intuition could give an adequate alternative. This was already stressed by famous theoreticians with mathematical inclinations. Indeed, let me cite a paper by P.\u00a0Dirac from the early 30s:\n\n> The steady progress of physics requires for its theoretical formulation a mathematics that gets continually more advanced. This is only natural and to be expected. What, however, was not expected by the scientific workers of the last century was the particular form that the line of advancement of the mathematics would take, namely, it was expected that the mathematics would get more complicated, but would rest on a permanent basis of axioms and definitions, while actually the modern physical developments have required a mathematics that continually shifts its foundations and gets more abstract. Non-euclidean geometry and non-commutative algebra, which were at one time considered to be purely fictions of the mind and pastimes for logical thinkers, have now been found to be very necessary for the description of general facts of the physical world. 
It seems likely that this process of increasing abstraction will continue in the future and that advance in physics is to be associated with a continual modification and generalization of the axioms at the base of mathematics rather than with logical development of any one mathematical scheme on a fixed foundation.\n>\n> There are at present fundamental problems in theoretical physics awaiting solution, *e.g.*, the relativistic formulation of quantum mechanics and the nature of atomic nuclei (to be followed by more difficult ones such as the problem of life), the solution of which problems will presumably require a more drastic revision of our fundamental concepts than any that have gone before. Quite likely these changes will be so great that it will be beyond the power of human intelligence to get the necessary new ideas by direct attempts to formulate the experimental data in mathematical terms. The theoretical worker in the future will therefore have to proceed in a more indirect way. The most powerful method of advance that can be suggested at present is to employ all the resources of pure mathematics in attempts to perfect and generalise the mathematical formalism that forms the existing basis of theoretical physics, and *after* each success in this direction, to try to interpret the new mathematical features in terms of physical entities.\n\nSimilar views were expressed by C.N.\u00a0Yang. I did not find a compact citation, but the whole spirit of his commentaries to his own collection of papers shows this attitude. He also used to tell me this in private discussions.\n\nI believe that the dramatic history of establishing gauge fields as a basic tool in the description of interactions in Quantum Field Theory gives a good illustration of the influence of mathematical intuition on the development of fundamental physics. Gauge fields, or Yang\u2013Mills fields, were introduced to the wide audience of physicists in 1954 in a short paper by C.N.\u00a0Yang and R.\u00a0Mills , dedicated to the generalization of the electromagnetic fields and the corresponding principle of gauge invariance. The geometric sense of this principle for the electromagnetic field was made clear as early as the late 20s due to the papers of V.\u00a0Fock and H.\u00a0Weyl . They underlined the analogy between the gauge (or gradient, in the terminology of V.\u00a0Fock) invariance of electrodynamics and the equivalence principle of the Einstein theory of gravitation. The gauge group in electrodynamics is commutative and corresponds to the multiplication of the complex field (or wave function) of the electrically charged particle by a phase factor depending on the space\u2013time coordinates. Einstein's theory of gravity provides an example of a much more sophisticated gauge group, namely the group of general coordinate transformations. Both H.\u00a0Weyl and V.\u00a0Fock were to use the language of the moving frame with spin connection, associated with local Lorentz rotations. Thus the Lorentz group became the first nonabelian gauge group, and one can see in essentially all the corresponding formulas the characteristic features of nonabelian gauge fields. However, in contradistinction to the electromagnetic field, the spin connection enters the description of the space-time and not the internal space of electric charge.\n\nIn the middle of the 30s, after the discovery of the isotopic spin in nuclear physics, and the formulation of Yukawa's idea of the intermediate boson, O.\u00a0Klein tried to geometrise these objects. 
His proposal was based on his 5-dimensional picture. Proton and neutron (as well as electron and neutrino; there was no clear distinction between strong and weak interactions) were put together in an isovector, and the electromagnetic field and the charged vector meson comprised a $2 \\times 2$ matrix. However, the noncommutative $SU(2)$ gauge group was not mentioned.\n\nKlein's proposal was not received favorably, and N.\u00a0Bohr did not recommend that he publish a paper. So the idea remained only in the form of a contribution to the proceedings of the Warsaw Conference \"New Theories in Physics\" .\n\nThe noncommutative group, acting in the internal space of charges, appeared for the first time in the paper of C.N.\u00a0Yang and R.\u00a0Mills in 1954. It is no wonder that Yang received a cool reaction when he presented his work at Princeton in 1954. The dramatic account of this event can be found in his commentaries . Pauli was in the audience and immediately raised the question about mass. Indeed, gauge invariance forbids the introduction of a mass for the charged vector fields, and masslessness leads to a long-range interaction, which contradicts experiment. The only known massless particles (and accompanying long-range interactions) are the photon and the graviton. It is evident from Yang's text that Pauli was well acquainted with the differential geometry of nonabelian vector fields but his own censorship did not allow him to speak about them. As we know now, the boldness of Yang and his esthetic feeling were finally vindicated. And it can be rightly said that C.N.\u00a0Yang proceeded according to mathematical intuition.\n\nIn 1954 the paper of Yang and Mills did not move to the forefront of high energy theoretical physics. However, the idea of the charged space with a noncommutative symmetry group acquired more and more popularity due to the increasing number of elementary particles and the search for a universal scheme of their classification. And at that time the decisive role in the promotion of the Yang\u2013Mills fields was also played by mathematical intuition.\n\nAt the beginning of the 60s, R.\u00a0Feynman worked on the extension of his own scheme of quantization of the electromagnetic field to the gravitation theory of Einstein. A purely technical difficulty \u2014 the abundance of the tensor indices \u2014 made his work rather slow. Following the advice of M.\u00a0Gell-Mann, he exercised first on the simpler case of the Yang\u2013Mills fields. To his surprise, he found that a naive generalization of his diagrammatic rules designed for electrodynamics did not work for the Yang-Mills field. The unitarity of the $S$-matrix was broken. Feynman restored the unitarity in one loop by reconstructing the full scattering amplitude from its imaginary part and found that the result can be interpreted as a subtraction of the contribution of some fictitious particle. However, his technique became quite cumbersome beyond one loop. His approach was gradually developed by B.\u00a0De-Witt . It must be stressed that the physical senselessness of the Yang\u2013Mills field did not preclude Feynman from using it for a mathematical construction.\n\nThe work of Feynman became one of the starting points for my work in Quantum Field Theory, which I began in the middle of the 60s together with Victor Popov. Another equally important starting point was the mathematical monograph by A.\u00a0Lichnerowitz , dedicated to the theory of connections in vector bundles. 
From Lichnerowitz's book it followed clearly that the Yang\u2013Mills field has a definite geometric interpretation: it defines a connection in a vector bundle, the base being the space-time and the fiber the linear space of the representation of the compact group of charges. Thus, the Yang\u2013Mills field finds its natural place among the fields of geometrical origin, between the electromagnetic field (which is a particular example of it, for a one-dimensional charge) and Einstein's gravitation field, which deals with the tangent bundle of the Riemannian space-time manifold.\n\nIt became clear to me that such a possibility cannot be missed and, notwithstanding the unsolved problem of zero mass, one must actively tackle the problem of the correct quantization of the Yang\u2013Mills field.\n\nThe geometric origin of the Yang\u2013Mills field gave a natural way to resolve the difficulties with the diagrammatic rules. The formulation of the quantum theory in terms of Feynman's functional integral happened to be most appropriate from the technical point of view. Indeed, to take into account the gauge equivalence principle one has to integrate over the classes of gauge equivalent fields rather than over every individual configuration. As soon as this idea is understood, the technical realization is rather straightforward. As a result, V.\u00a0Popov and I came out at the end of 1966 with a set of rules valid for all orders of perturbation theory. The fictitious particles appeared as auxiliary variables giving the integral representation for the nontrivial determinant entering the measure over the set of gauge orbits.\n\nThe correct diagrammatic rules for the quantization of the Yang-Mills field, obtained by V.\u00a0Popov and me in 1966\u20131967 , did not immediately attract the attention of physicists. Moreover, the time when our work was done was not favorable for it. Quantum Field Theory was virtually forbidden, especially in the Soviet Union, due to the influence of Landau. \"The Hamiltonian is dead\" \u2014 this phrase from his paper , dedicated to the anniversary of W.\u00a0Pauli \u2014 shows the extreme of Landau's attitude. The reason was quite solid: it was based not on experiment but on the investigation of the effects of renormalization, which led Landau and his coworkers to believe that the renormalized physical coupling constant is inevitably zero for all possible local interactions. So there was no way for Victor Popov and me to publish an extended article in a major Soviet journal. We opted for a short communication in \"Physics Letters\" and were happy to be able to publish the full version in the preprint series of the newly opened Kiev Institute of Theoretical Physics. This preprint was finally translated into English by B.\u00a0Lee as a Fermilab preprint in 1972, and from the preface to the translation it follows that it was known in the West already in 1968.\n\nA decisive role in the successful promotion of our diagrammatic rules into physics was played by the works of G.\u00a0't\u00a0Hooft , dedicated to the Yang\u2013Mills field interacting with the Higgs field (and which ultimately led to a Nobel Prize for him in 1999), and by the discovery of dimensional transmutation (the term of S.\u00a0Coleman ). The problem of mass was solved in the first case via spontaneous symmetry breaking. The second development was based on asymptotic freedom. There exists a vast literature dedicated to the history of this dramatic development. 
I refer to the recent papers of G.\u00a0't\u00a0Hooft and D.\u00a0Gross , where the participants in this story share their impressions of this progress. As a result, the Standard Model of unified interactions got its main technical tool. From the middle of the 70s until the present time it has remained the fundamental basis of high energy physics. For our discourse it is important to stress once again that the paper based on mathematical intuition preceded the works done in the tradition of theoretical physics.\n\nThe Standard Model did not complete the development of fundamental physics, in spite of its unexpected and astonishing experimental success. The gravitational interactions, whose geometrical interpretation is slightly different from that of the Yang\u2013Mills theory, are not included in the Standard Model. The unification of quantum principles, Lorentz\u2013Einstein relativity and Einstein gravity has not yet been accomplished. We have every reason to conjecture that modern MP and its mode of working will play the decisive role in the quest for such a unification.\n\nIndeed, the new generation of theoreticians in high energy physics have received an incomparably higher mathematical education. They are not subject to the pressure of old authorities maintaining the purity of physical thinking and\/or terminology. Furthermore, many professional mathematicians, tempted by the beauty of the methods used by physicists, moved to the position of modern mathematical physics. Let us cite from the manifesto, written by P.\u00a0MacPherson during the organization of the Quantum Field Theory year at the School of Mathematics of the Institute for Advanced Study at Princeton:\n\n> The goal is to create and convey an understanding, in terms congenial to mathematicians, of some fundamental notions of physics, such as quantum field theory. The emphasis will be on developing the intuition stemming from functional integrals.\n>\n> One way to define the goals of the program is by negation, excluding certain important subjects commonly pursued by mathematicians whose work is motivated by physics. In this spirit, it is not planned to treat except peripherally the magnificent new applications of field theory, such as Seiberg-Witten equations to Donaldson theory. Nor is the plan to consider fundamental new constructions within mathematics that were inspired by physics, such as quantum groups or vertex operator algebras. Nor is the aim to discuss how to provide mathematical rigor for physical theories. Rather, the goal is to develop the sort of intuition common among physicists for those who are used to thought processes stemming from geometry and algebra.\n\nI propose to call the intuition to which MacPherson refers that of mathematical physics. I also recommend that the reader look at the instructive drawing by P.\u00a0Dijkgraaf on the dust cover of the volumes of lectures given at the School .\n\nThe union of these two groups constitutes an enormous intellectual force. 
In the next century we will learn if this force is capable of substituting for the traditional experimental base of the development of fundamental physics and pertinent physical intuition.","meta":{"dup_signals":{"dup_doc_count":76,"dup_dump_count":36,"dup_details":{"curated_sources":2,"2022-05":1,"2021-39":1,"2021-17":1,"2021-04":1,"2020-45":1,"2020-40":1,"2020-29":1,"2020-16":1,"2020-05":1,"2019-43":1,"2019-35":1,"2019-26":1,"2019-22":1,"2019-04":2,"2018-51":2,"2018-47":4,"2018-39":4,"2018-34":2,"2018-30":4,"2018-26":3,"2018-22":4,"2018-13":4,"2018-09":5,"2018-05":1,"2017-51":5,"2017-47":2,"2017-43":6,"2017-39":2,"2017-34":2,"2017-26":2,"2017-22":2,"2017-17":1,"2017-09":1,"2022-27":1,"2017-13":2}},"filename":"out\/math-ph0002018.tex.md"},"subset":"arxiv"} +{"text":"abstract: This article describes the motivation, design, and progress of the Journal of Open Source Software (*JOSS*). *JOSS* is a free and open-access journal that publishes articles describing research software. It has the dual goals of improving the quality of the software submitted and providing a mechanism for research software developers to receive credit. While designed to work within the current merit system of science, *JOSS* addresses the dearth of rewards for key contributions to science made in the form of software. *JOSS* publishes articles that encapsulate scholarship contained in the software itself, and its rigorous peer review targets the software components: functionality, documentation, tests, continuous integration, and the license. A *JOSS* article contains an abstract describing the purpose and functionality of the software, references, and a link to the software archive. The article is the entry point of a *JOSS* submission, which encompasses the full set of software artifacts. Submission and review proceed in the open, on GitHub. Editors, reviewers, and authors work collaboratively and openly. Unlike other journals, *JOSS* does not reject articles requiring major revision; while not yet accepted, articles remain visible and under review until the authors make adequate changes (or withdraw, if unable to meet requirements). Once an article is accepted, *JOSS* gives it a digital object identifier (DOI), deposits its metadata in Crossref, and the article can begin collecting citations on indexers like Google Scholar and other services. Authors retain copyright of their *JOSS* article, releasing it under a Creative Commons Attribution 4.0 International License. In its first year, starting in May 2016, *JOSS* published 111 articles, with more than 40 additional articles under review. *JOSS* is a sponsored project of the nonprofit organization NumFOCUS and is an affiliate of the Open Source Initiative (OSI).\nauthor: Arfon M.\u00a0Smith[^1]; Kyle E.\u00a0Niemeyer; Daniel S.\u00a0Katz; Lorena A.\u00a0Barba; George\u00a0Githinji; Melissa Gymrek; Kathryn D.\u00a0Huff; Christopher R.\u00a0Madan; Abigail Cabunoc Mayes; Kevin M.\u00a0Moerman; Pjotr Prins; Karthik Ram; Ariel Rokem; Tracy K.\u00a0Teal; Roman Valls Guimera; Jacob\u00a0T.\u00a0Vanderplas\nbibliography: references.bib\ndate: January 2018\ntitle: Journal of Open Source Software (JOSS): design and first-year review\n\n# Introduction\n\nModern scientific research produces many outputs beyond traditional articles and books. Among these, research software is critically important for a broad spectrum of fields. Current practices for publishing and citation do not, however, acknowledge software as a first-class research output. 
This deficiency means that researchers who develop software face critical career barriers. The *Journal of Open Source Software* (*JOSS*) was founded in May 2016 to offer a solution within the existing publishing mechanisms of science. It is a developer-friendly, free and open-access, peer-reviewed journal for research software packages. By its first anniversary, *JOSS* had published more than a hundred articles. This article discusses the motivation for creating a new software journal, delineates the editorial and review process, and summarizes the journal's first year of operation via submission statistics. We expect this article to be of interest to three core audiences: (1) researchers who develop software and could submit their work to *JOSS*, (2) those in the community with an interest in advancing scholarly communications who may appreciate the technical details of the *JOSS* journal framework, and (3) those interested in possibilities for citing software in their own research publications.\n\nThe sixteen authors of this article are the members of the *JOSS* Editorial Board at the end of its first year (May 2017). Arfon Smith is the founding editor-in-chief, and the founding editors are Lorena A.\u00a0Barba, Kathryn Huff, Daniel Katz, Christopher Madan, Abigail Cabunoc Mayes, Kevin Moerman, Kyle Niemeyer, Karthik Ram, Tracy Teal, and Jake Vanderplas. Five new editors joined in the first year to handle areas not well covered by the original editors, and to help manage the large and growing number of submissions. They are George Githinji, Melissa Gymrek, Pjotr Prins, Ariel Rokem, and Roman Valls Guimera. (Since then, we added three more editors: Jason Clark, Lindsey Heagy, and Thomas Leeper.)\n\nThe *JOSS* editors are firm supporters of open-source software for research, with extensive knowledge of the practices and ethics of open source. This knowledge is reflected in the *JOSS* submission system, peer-review process, and infrastructure. The journal offers a familiar environment for developers and authors to interact with reviewers and editors, leading to a citable published work: a software article. The article describes the software at a high level, and the software itself includes both source code and associated artifacts such as tests, documentation, and examples. With a Crossref digital object identifier (DOI), the article is able to collect citations, empowering the developers\/authors to gain career credit for their work. *JOSS* thus fills a pressing need for computational researchers to advance professionally, while promoting higher quality software for science. *JOSS* also supports the broader open-science movement by encouraging researchers to share their software openly and follow best practices in its development.\n\n# Background and motivation\n\nA 2014 study of UK Russell Group Universities\u00a0 reports that $\\sim$90% of academics surveyed said they use software in their research, while more than 70% said their research would be impractical without it. About half of these UK academics said they develop their own software in the course of doing research. Similarly, a 2017 survey of members of the US National Postdoctoral Association found that 95% used research software, and 63% said their research would be impractical without it\u00a0.\n\nDespite being a critical part of modern research, software lacks support across the scholarly ecosystem for its publication, acknowledgement, and citation\u00a0. 
Academic publishing has not changed substantially since its inception. Science, engineering, and many other academic fields still view research articles as the key indicator of research productivity, with research grants being another important indicator. Yet, the research article is inadequate to fully describe modern, data-intensive, computational research. *JOSS* focuses on research software and its place in the scholarly publishing ecosystem.\n\n## Why publish software?\n\nMost academic fields still rely on a one-dimensional credit model where academic articles and their associated citations are the dominant factor in the success of a researcher's career. Software creators, in order to increase the likelihood of receiving career credit for their work, often choose to publish \"software articles\" that act as placeholder publications pointing to their software. At the same time, recent years have seen a push for sharing open research software\u00a0.\n\nBeyond career-credit arguments for software creators, publishing research software enriches the scholarly record. Buckheit and Donoho paraphrased Jon Claerbout, a pioneer of reproducible research, as saying: \"An article about a computational result is advertising, not scholarship. The actual scholarship is the full software environment, code and data, that produced the result.\"\u00a0. The argument that articles about computational science are not satisfactory descriptions of the work, needing to be supplemented by code and data, is more than twenty years old! Yet, despite the significance of software in modern research, documenting its use and including it in the scholarly ecosystem presents numerous challenges.\n\n## Challenges of publishing software\n\nThe conventional publishing mechanism of science is the research article, and a researcher's career progression hinges on collecting citations for published works. Unfortunately, software citation\u00a0 is in its infancy (as is data citation\u00a0). Publishing the software itself and receiving citation credit for it may be a better long-term solution, but this is still impractical. Even when software (and data) are published so that they can be cited, we do not have a standard culture of peer review for them. This leads many developers today to publish software articles.\n\nThe developer's next dilemma is where to publish, given the research content, novelty, length and other features of a software article. Since 2012, Neil Chue Hong has maintained a growing list of journals that accept software articles\u00a0. He includes both generalist journals, accepting software articles from a variety of fields, and domain-specific journals, accepting both research and software articles in a given field. For many journals, particularly the domain-specific ones, a software article must include novel results to justify publication.\n\nFrom the developer's point of view, writing a software article can involve a great deal of extra work. Good software includes documentation for both users and developers that is sufficient to make it understandable. A software article may contain much of the same content, merely in a different format, and developers may not find value in rewriting their documentation in a manner less useful to their users and collaborators. These issues may lead developers to shun the idea of software articles and prefer to publish the software itself. 
Yet, software citation is not common and the mostly one-dimensional credit model of academia (based on article citations) means that publishing software often does not \"count\" for career progression\u00a0.\n\n# The Journal of Open Source Software\n\nTo tackle the challenges mentioned above, the *Journal of Open Source Software* (*JOSS*) launched in May 2016\u00a0 with the goal of drastically reducing the overhead of publishing software articles. *JOSS* offers developers a venue to publish their complete research software wrapped in relatively short high-level articles, thus enabling citation credit for their work. In this section we describe the goals and principles, infrastructure, and business model of *JOSS*, and compare it with other software journals.\n\n## Goals and principles\n\n*JOSS* articles are deliberately short and only include an abstract describing the high-level functionality of the software, a list of the authors of the software (with their affiliations), a list of key references, and a link to the software archive and software repository. Articles are not allowed to include other content often found in software articles, such as descriptions of the API (application programming interface) and novel research results obtained using the software. The software API should already be described in the software documentation, and domain research results do not belong in *JOSS*\u2014these should be published in a domain journal. Unlike most journals, which ease discoverability of new research and findings, *JOSS* serves primarily as a mechanism for software developers\/authors to improve and publish their research software. Thus, software discovery is a secondary feature.\n\nThe *JOSS* design and implementation are based on the following principles:\n\n- Other than their short length, *JOSS* articles are conventional articles in every other sense: the journal has an ISSN, articles receive Crossref DOIs with high-quality submission metadata, and articles are appropriately archived.\n\n- Because software articles are \"advertising\" and simply pointers to the *actual* scholarship (the software), short abstract-length submissions are sufficient for these \"advertisements.\"\n\n- Software is a core product of research and therefore the software itself should be archived appropriately when submitted to and reviewed in *JOSS*.\n\n- Code review, documentation, and contributing guidelines are important for open-source software and should be part of any review. In *JOSS*, they are the focus of peer review. (While a range of other journals publish software, with various peer-review processes, the focus of the review is usually the submitted article and reviewers might not even look at the code.) 
The *JOSS* review process itself, described in \u00a7, was based on the on-boarding checklist for projects joining the rOpenSci collaboration\u00a0.\n\nAcceptable *JOSS* submissions also need to meet the following criteria:\n\n- The software must be open source by the Open Source Initiative (OSI) definition ([opensource.org](https:\/\/opensource.org)).\n\n- The software must have a research application.\n\n- The submitter should be a major contributor to the software they are submitting.\n\n- The software should be a significant new contribution to the available open-source software that either enables some new research challenge(s) to be addressed or makes addressing research challenges significantly better (e.g., faster, easier, simpler).\n\n- The software should be feature-complete, i.e., it cannot be a partial solution.\n\n## How *JOSS* works\n\n*JOSS* is designed as a small collection of open-source tools that leverage existing infrastructure such as GitHub, Zenodo, and Figshare. A goal when building the journal was to minimize the development of new tools where possible.\n\n### The *JOSS* web application and submission tool\n\nThe *JOSS* web application and submission tool is hosted at . It is a simple Ruby on Rails web application\u00a0 that lists accepted articles, provides the article submission form (see Figure\u00a0), and hosts journal documentation such as author submission guidelines. This application also automatically creates the review issue on GitHub once a submission has been pre-reviewed by an editor and accepted to start peer review in *JOSS*.\n\n### Open peer review on GitHub\n\n*JOSS* conducts reviews on the `joss-reviews` GitHub repository\u00a0. Review of a submission begins with the opening of a new GitHub issue, where the editor-in-chief assigns an editor, the editor assigns a reviewer, and interactions between authors, reviewer(s), and editor proceed in the open. Figure\u00a0 shows an example of a recent review for the (accepted) `hdbscan` package\u00a0. The actual review includes the code, software functionality\/performance claims, test suite (if present), documentation, and any other material associated with the software.\n\n### Whedon and the Whedon-API\n\nMany of the tasks associated with *JOSS* reviews and editorial management are automated. A core RubyGem library named `Whedon`\u00a0 handles common tasks associated with managing the submitted manuscript, such as compiling the article (from its Markdown source) and creating Crossref metadata. An automated bot, `Whedon-API`\u00a0, handles other parts of the review process (such as assigning editors and reviewers based on editor input) and leverages the `Whedon` RubyGem library. For example, to assign the editor for a submission, one may type the following command in a comment box within the GitHub issue: `@whedon assign @danielskatz as editor`. Similarly, to assign a reviewer, one enters: `@whedon assign @zhaozhang as reviewer` (where the reviewer and editor GitHub handles identify them). The next section describes the review process in more detail.\n\n## Business model and content licensing\n\n*JOSS* is designed to run at minimal cost with volunteer labor from editors and reviewers. The following fixed costs are currently incurred:\n\n- Crossref membership: \\$275. This is a yearly fixed cost for the *JOSS* parent entity\u2014*Open Journals*\u2014so that article DOIs can be registered with Crossref.\n\n- Crossref article DOIs: \\$1. 
This is a fixed cost per article.\n\n- *JOSS* web application hosting (currently with Heroku): \\$19 per month\n\nAssuming a publication rate of 100 articles per year results in a core operating cost of $\\sim$\\$6 per article. With 200 articles per year\u2014which seems possible for the second year\u2014the cost drops to $\\sim$\\$3.50 per article: $$\\begin{aligned}\n\\label{costs}\n(\\$275 + (\\$1 \\times 100) + (\\$19 \\times 12)) \/ 100 &= \\$6.03 \\\\\n(\\$275 + (\\$1 \\times 200) + (\\$19 \\times 12)) \/ 200 &= \\$3.51 \\;.\n\\end{aligned}$$\n\nSubmitting authors retain copyright of *JOSS* articles and accepted articles are published under a Creative Commons Attribution 4.0 International License\u00a0. Any code snippets included in *JOSS* articles are subject to the MIT license\u00a0 regardless of the license of the submitted software package under review, which itself must be licensed under an OSI-approved license (see [opensource.org\/licenses\/alphabetical](https:\/\/opensource.org\/licenses\/alphabetical) for a complete list).\n\n## Comparison with other software journals\n\nA good number of journals now accept, review, and publish software articles\u00a0, which we group into two categories. The first category of journals include those similar to *JOSS*, which do not focus on a specific domain and only consider submissions of software\/software articles: the *Journal of Open Research Software* (*JORS*, [openresearchsoftware.metajnl.com](http:\/\/openresearchsoftware.metajnl.com)), *SoftwareX* ([journals.elsevier.com\/softwarex\/](https:\/\/www.journals.elsevier.com\/softwarex\/)), and now *JOSS*. Both *JORS*\u00a0 and *SoftwareX*\u00a0 now review both the article text and the software. In *JOSS*, the review process focuses mainly on the software and associated material (e.g., documentation) and less on the article text, which is intended to be a brief description of the software. The role and form of peer review also varies across journals. In *SoftwareX* and *JORS*, the goal of the review is both to decide if the article is acceptable for publication and to improve it iteratively through a non-public, editor-mediated interaction between the authors and the anonymous reviewers. In contrast, *JOSS* has the goal of accepting most articles after improving them as needed, with the reviewers and authors communicating directly and publicly through GitHub issues.\n\nThe second category includes domain-specific journals that either accept software articles as a special submission type or exclusively consider software articles targeted at the domain. For example, *Collected Algorithms* (CALGO, [acm.org\/calgo](http:\/\/www.acm.org\/calgo\/)) is a long-running venue for reviewing and sharing mathematical algorithms associated with articles published in *Transactions on Mathematical Software* and other ACM journals. However, CALGO authors must transfer copyright to ACM and software is not available under an open-source license\u2014this contrasts with *JOSS*, where authors retain copyright and software must be shared under an open-source license. 
*Computer Physics Communications* ([journals.elsevier.com\/computer-physics-communications](https:\/\/www.journals.elsevier.com\/computer-physics-communications)) and *Geoscientific Model Development* ([geoscientific-model-development.net](https:\/\/www.geoscientific-model-development.net\/)) publish full-length articles describing application software in computational physics and geoscience, respectively, where review primarily focuses on the article. Chue Hong maintains a list of journals in both categories\u00a0.\n\n# Peer review in *JOSS*\n\nIn this section, we illustrate the *JOSS* submission and review process using a representative example, document the review criteria provided to authors and reviewers, and explain a fast-track option for already-reviewed rOpenSci contributions.\n\n## The *JOSS* process\n\nFigure\u00a0 shows a typical *JOSS* submission and review process, described here in more detail using the `hdbscan` package\u00a0 as an example:\n\n1. Leland McInnes submitted the `hdbscan` software and article to *JOSS* on 26 February 2017 using the web application and submission tool. The article is a Markdown file named `paper.md`, visibly located in the software repository (here, and in many cases, placed together with auxiliary files in a `paper` directory).\n\n2. Following a routine check by a *JOSS* administrator, a \"pre-review\" issue was created in the `joss-reviews` GitHub repository\u00a0. In this pre-review issue, an editor (Daniel S.\u00a0Katz) was assigned, who then identified and assigned a suitable reviewer (Zhao Zhang). Editors generally identify one or more reviewers from a pool of volunteers based on provided programming language and\/or domain expertise.[^2]\n\n The editor then asked the automated bot `Whedon` to create the main submission review issue via the command `@whedon start review magic-word=bananas`. (\"`magic-word=bananas`\" is a safeguard against accidentally creating a review issue prematurely.)\n\n3. The reviewer then conducted the submission review\u00a0 (see Figure\u00a0) by working through a checklist of review items, as described in \u00a7. The author, reviewer, and editor discussed any questions that arose during the review, and once the reviewer completed their checks, they notified the submitting author and editor. Compared with traditional journals, *JOSS* offers the unique feature of holding a discussion\u2014in the open within a GitHub issue\u2014between the reviewer(s), author(s), and editor. Like a true conversation, discussion can go back and forth in minutes or seconds, with all parties contributing at will. This contrasts with traditional journal reviews, where the process is merely an exchange between the reviewer(s) and author(s), via the editor, which can take months for each communication, and in practice is limited to one or two, perhaps three in some cases, exchanges due to that delay\u00a0.\n\n Note that *JOSS* reviews are subject to a code of conduct\u00a0, adopted from the Contributor Covenant Code of Conduct\u00a0. Both authors and reviewers must confirm that they have read and will adhere to this Code of Conduct, during submission and with their review, respectively.\n\n4. After the review was complete, the editor asked the submitting author to make a permanent archive of the software (including any changes made during review) with a service such as Zenodo or Figshare, and to post a link to the archive in the review thread. 
This link, in the form of a DOI, was associated with the submission via the command `@whedon set 10.5281\/zenodo.401403 as archive`.\n\n5. The editor-in-chief used the `Whedon` RubyGem library on his local machine to produce the compiled PDF, update the *JOSS* website, deposit Crossref metadata, and issue a DOI for the submission ([10.21105\/joss.00205](https:\/\/doi.org\/10.21105\/joss.00205)).\n\n6. Finally, the editor-in-chief updated the review issue with the *JOSS* article DOI and closed the review. The submission was then accepted into the journal.\n\nAuthors can also first submit a pre-submission inquiry via an issue in the main *JOSS* repository\u00a0 if they have questions regarding the suitability of their software for publication, or for any other questions.\n\n## *JOSS* review criteria\n\nAs previously mentioned, the *JOSS* review is primarily concerned with the material in the software repository, focusing on the software and documentation. We do not ask authors to use their software in a research study or include research results in their article beyond as examples; submissions focused on results rather than software should be submitted to research journals. The specific items in the reviewer checklist are:\n\n- Conflict of interest\n\n - As the reviewer I confirm that I have read the *JOSS* [conflict of interest policy](https:\/\/github.com\/openjournals\/joss\/blob\/master\/COI.md) and that there are no conflicts of interest for me to review this work.\n\n- Code of Conduct\n\n - I confirm that I read and will adhere to the [*JOSS* code of conduct](http:\/\/joss.theoj.org\/about#code_of_conduct).\n\n- General checks\n\n - **Repository**: Is the source code for this software available at the repository URL?\n\n - **License**: Does the repository contain a plain-text LICENSE file with the contents of an OSI-approved software license?\n\n - **Version**: Does the release version given match the GitHub release?\n\n - **Authorship**: Has the submitting author made major contributions to the software?\n\n- Functionality\n\n - **Installation**: Does installation proceed as outlined in the documentation?\n\n - **Functionality**: Have the functional claims of the software been confirmed?\n\n - **Performance**: Have any performance claims of the software been confirmed?\n\n- Documentation\n\n - **A statement of need**: Do the authors clearly state what problems the software is designed to solve and who the target audience is?\n\n - **Installation instructions**: Is there a clearly-stated list of dependencies? 
Ideally these should be handled with an automated package management solution.\n\n - **Example usage**: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?\n\n - **Functionality documentation**: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?\n\n - **Automated tests**: Are there automated tests or manual steps described so that the function of the software can be verified?\n\n - **Community guidelines**: Are there clear guidelines for third parties wishing to 1) contribute to the software, 2) report issues or problems with the software, and 3) seek support?\n\n- Software paper\n\n - **Authors**: Does the `paper.md` file include a list of authors with their affiliations?\n\n - **A statement of need**: Do the authors clearly state what problems the software is designed to solve and who the target audience is?\n\n - **References**: Do all archival references that should have a DOI list one (e.g., papers, datasets, software)?\n\n## Fast track for reviewed rOpenSci contributions\n\nFor submissions of software that has already been reviewed under rOpenSci's rigorous onboarding guidelines\u00a0, *JOSS* does not perform further review. The editor-in-chief is alerted with a note \"This submission has been accepted to rOpenSci. The review thread can be found at `[LINK TO ONBOARDING ISSUE]`,\" allowing such submissions to be fast-tracked to acceptance.\n\n# A review of the first year\n\nBy the end of May 2017, *JOSS* published 111 articles since its inception in May 2016, and had an additional 41 articles under consideration. Figure\u00a0 shows the monthly and cumulative publication rates; on average, we published 8.5 articles per month, with some (nonstatistical) growth over time.\n\nFigure\u00a0 shows the numbers of days taken for processing and review of the 111 published articles (i.e., time between submission and publication), including finding a topic editor and reviewer(s). Since the journal's inception in May 2016, articles spent on average 45.5 days between submission and publication (median 32 days, interquartile range 52.3 days) The shortest review took a single day, for `Application Skeleton`\u00a0, while the longest review took 190 days, for `walkr`\u00a0. In the former case, the rapid turnaround can be attributed to the relatively minor revisions needed (in addition to quick editor, reviewer, and author actions and responses). In contrast, the latter case took much longer due to delays in selecting an editor and finding an appropriate reviewer, and a multimonth delay between selecting a reviewer and receiving reviews. In other cases with long review periods, some delays in responding to requests for updates may be attributed to reviewers (or editors) missing GitHub notifications from the review issue comments. We have already taken steps to improve the ability of authors, reviewers, and editors to keep track of their submissions, including a prompt to new reviewers to unsubscribe from the main `joss-reviews` repository\u00a0 (to reduce unnecessary notifications) and a weekly digest email for *JOSS* editors to keep track of their submissions. In the future we may collect the email addresses of reviewers so we can extend this functionality to them.\n\nFigure\u00a0 shows the frequency of programming languages appearing in *JOSS* articles. Python appears the most with over half of published software articles (54), while R is used in nearly one-third of articles (29). 
We believe the popularity of Python and R in *JOSS* submissions is the result of (1) the adoption of these languages (and open-source practices) in scientific computing communities and (2) our relationship with the rOpenSci project.\n\nEach article considered by *JOSS* undergoes review by one or more reviewers. The set of 111 published articles have been reviewed by 93 unique reviewers. The majority of articles received a review by one reviewer (average of $1.11\\pm 0.34$), with a maximum of three reviewers. Based on available data in the review issues, on average, editors reached out to 1.85$\\pm$``{=html}1.40 potential reviewers (at most 8 in one case) via mentions in the GitHub review issue. This does not include external communication, e.g., via email or Twitter. Overall, *JOSS* editors contacted 1.65 potential reviewers for each actual review (based on means).\n\nInterestingly, the current reviewer list contains only 52 entries, as of this writing\u00a0. Considering the unique reviewer count of 93, we clearly have reached beyond those that volunteered to review a priori. Benefits of using GitHub's issue infrastructure and our open reviews include: 1) the ability to tag multiple people, via their GitHub handles, to invite them as potential reviewers; 2) the discoverability of the work so that people may volunteer to review without being formally contacted; 3) the ability to get additional, unprompted feedback and comments; and 4) the ability to find reviewers by openly advertising, e.g., on social media. Furthermore, GitHub is a well-known, commonly used platform where many (if not most) potential authors and reviewers already have accounts.\n\nFigure\u00a0 shows the numbers of articles managed by each of the *JOSS* editors. Editor-in-chief Arfon Smith stewarded the majority of articles published in the first year. This was somewhat unavoidable in the first three months after launch, as Smith served as the de facto sole editor for all submissions, with other members of the editorial board assisting. This strategy was not sustainable and, over time, we adopted the pre-review\/review procedure to hand off articles to editors. Also, authors can now select during submission the appropriate editor based on article topic.\n\nLastly, we analyzed the affiliations of the 286 authors associated with articles published in the first year. Figure\u00a0 shows the number of authors per country; we represented authors with multiple affiliations in different countries using their first affiliation. Authors with no affiliation, or where we could not identify the country, are shown as \"unknown.\" From the articles published in the first year, approximately 48% of authors live in the United States and approximately 40% live in Europe (including Switzerland). The remaining 12% come from the rest of the world, most notably Australia (6.6%) and Canada (2.1%). Moving forward, we hope to receive submissions from authors in more countries that even better represent who develops research software around the world; one strategy to achieve this involves continuing to expand our editorial board.\n\nIn its first year, *JOSS* also developed formal relationships with two US-based nonprofit organizations. In March 2017, *JOSS* became a community affiliate of the Open Source Initiative ([opensource.org](https:\/\/opensource.org)), the steward of the open-source definition, which promotes open-source software and educates about appropriate software licenses. 
And, in April 2017, *JOSS* became a fiscally sponsored project of NumFOCUS ([numfocus.org](https:\/\/www.numfocus.org)), a 501(c)(3) charity that supports and promotes \"world-class, innovative, open source scientific computing.\" Being associated with these two prominent community organizations increases the trust of the community in our efforts. Furthermore, as a NumFOCUS project, *JOSS* will be able to raise funding to sustain its activities and grow.\n\n# The second year for *JOSS*\n\nOur focus for the second year will be on continuing to provide a high-quality experience for submitting authors and reviewers, and making the best use of the editorial board. In our first year, we progressed from a model where the editor-in-chief handled most central functions to one with more distributed roles for the editors, particularly that of ensuring that reviews are useful and timely. Editors can now select and self-assign to submissions they want to manage, while the editor-in-chief only assigns the remaining submissions. As *JOSS* grows, the process of distributing functions across the editorial board will continue to evolve\u2014and more editors may be needed.\n\nIn the second year, we plan to complete a number of high-priority improvements to the *JOSS* toolchain. Specifically, we plan on automating the final steps for accepting an article. For example, generating Crossref metadata and compiling the article are both currently handled by the editor-in-chief on his local machine using the `Whedon` RubyGem library. In the future, we would like authors and reviewers to be able to ask the `Whedon-API` bot to compile the paper for them, and other editors should be able to ask the bot to complete the submission of Crossref metadata on their behalf. Other improvements are constantly under discussion on the *JOSS* GitHub repository ([github.com\/openjournals\/joss\/issues](https:\/\/github.com\/openjournals\/joss\/issues)). In fact, anyone is able to report bugs and suggest enhancements to the experience. And, since the *JOSS* tools are open source, we welcome contributions in the form of bug-fixes or enhancements via the usual pull-request protocols.\n\nBeyond roles and responsibilities for the editors, and improvements to the *JOSS* tools and infrastructure, we will take on the more tricky questions about publishing software, such as how to handle new software versions. Unlike traditional research articles that remain static once published, software usually changes over time, at least for maintenance and to avoid software rot\/collapse (where software stops working because of changes in the environment, such as dependencies on libraries or operating system). Furthermore, because all potential uses of the software are not known at the start of a project, the need or opportunity arises to add features, improve performance, improve accuracy, etc. After making one or more changes, software developers frequently update the software with a new version number. Over time, the culmination of these changes may result in a major update to the software, and with many new contributors a new version might correspond to a new set of authors if the software is published. However, this process may not translate clearly to *JOSS*. 
The editorial board will accept a new *JOSS* article published with each major version or even a minor version if the changes seem significant enough to the editor and reviewer(s), but we do not yet know if this will satisfy the needs of both developers and users (corresponding to *JOSS* authors and readers, respectively).\n\nThe discussion about new software versions also generally applies to software forks, where software is copied and, after some divergent development, a new software package emerges. Similar to how we handle new software versions, the *JOSS* editorial board will consider publication of an article describing a forked version of software if it includes substantial changes from a previously published version. Authorship questions may be more challenging when dealing with forks compared with new versions, since forks can retain varying amounts of code from the original projects. However, while a version control history generally makes it easy to suggest people who should be authors, deciding on authorship can be difficult and subjective, and is therefore ultimately project-dependent. We prefer to leave authorship decisions to the projects, with discussion taking place as needed with reviewers and editors.\n\n# Conclusions\n\nSoftware today encapsulates\u2014and generates\u2014important research knowledge, yet it has not entered the science publication ecosystem in a practical way. This situation is costly for science, through the lack of career progression for valuable personnel: research software developers. We founded *JOSS* in response to the acute need for an answer to this predicament. *JOSS* is a venue for authors who wish to receive constructive peer feedback, publish, and collect citations for their research software. By encouraging researchers to develop their software following best practices, and then share and publish it openly, *JOSS* supports the broader open-science movement. The number of submissions confirms the keen demand for this publishing mechanism: more than 100 accepted articles in the first year and more than 40 others under review. By the end of 2017, *JOSS* has published nearly 200 articles. Community members have also responded positively when asked to review submissions in an open and non-traditional format, contributing useful reviews of the submitted software.\n\nHowever, we are still overcoming initial hurdles to achieve our goals. *JOSS* is currently not fully indexed by Google Scholar, despite the fact that *JOSS* articles include adequate metadata and that we made an explicit request for inclusion in March 2017 (see GitHub [issue \\#130](https:\/\/github.com\/openjournals\/joss\/issues\/130)). Also, we may need to invest more effort into raising awareness of good practices for citing *JOSS* articles. That said, we have some preliminary citation statistics: according to Google Scholar, `corner.py`\u00a0 and `Armadillo`\u00a0 have been cited the most at 116 and 79 times, respectively. Crossref's Cited-by service\u2014which relies on publishers depositing reference information\u2014reports 45 and 28 citations for the same articles\u00a0. While most other articles have received no citations to-date, a few have been cited between one and five times. We have had at least two \"repeat\" submissions, i.e., submissions of a new version with major changes from a prior version. Clementi et al.\u00a0 published `PyGBe-LSPR`, a new version that added substantially new features over the original `PyGBe` of Cooper et al.\u00a0. 
Similarly, the software published by Sandersen and Curtin\u00a0 extended on (and cited) their earlier article\u00a0.\n\nThe journal cemented its position in the first year of operation, building trust within the community of open-source research-software developers and growing in name recognition. It also earned weighty affiliations with OSI and NumFOCUS, the latter bringing the opportunity to raise funding for sustained operations. Although publishing costs are low at \\$3\u20136 per article, *JOSS* does need funding, with the editor-in-chief having borne the expenses personally to pull off the journal launch. Incorporating a small article charge (waived upon request) may be a route to allow authors to contribute to *JOSS* in the future, but we have not yet decided on this change. Under the NumFOCUS nonprofit umbrella, *JOSS* is now eligible to seek grants for sustaining its future, engaging in new efforts like outreach, and improving its infrastructure and tooling.\n\nOutreach to other communities still unaware of *JOSS* is certainly part of our growth strategy. Awareness of the journal so far has mostly spread through word-of-mouth and social networking\u00a0, plus a couple of news articles\u00a0. As of August 2017, *JOSS* is also listed in the Directory of Open Access Journals (DOAJ) ([doaj.org\/toc\/2475-9066](https:\/\/doaj.org\/toc\/2475-9066)). We plan to present *JOSS* at relevant domain conferences, like we did at the 2017 SIAM Conference on Computational Science & Engineering\u00a0 and the 16th Annual Scientific Computing with Python Conference (SciPy 2017). We are also interested in partnering with other domain journals that focus on (traditional) research articles. In such partnerships, traditional peer review of the research would be paired with peer review of the software, with *JOSS* taking responsibility for the latter.\n\nFinally, the infrastructure and tooling of *JOSS* have unexpected added values: while developed to support and streamline the *JOSS* publication process, these open-source tools generalize to a lightweight journal-management system. The *JOSS* web application and submission tool, the `Whedon` RubyGem library, and the `Whedon-API` bot could be easily forked to create overlay journals for other content types (data sets, posters, figures, etc.). The original artifacts could be archived on other services such as Figshare, Zenodo, Dryad, arXiv, or engrXiv\/AgriXiv\/LawArXiv\/PsyArXiv\/SocArXiv\/bioRxiv. This presents manifold opportunities to expand the ways we assign career credit to the digital artifacts of research. *JOSS* was born to answer the needs of research software developers to thrive in the current merit traditions of science, but we may have come upon a generalizable formula for digital science.\n\n# Acknowledgements\n\nThis work was supported in part by the Alfred P.\u00a0Sloan Foundation. Work by K.\u00a0E.\u00a0Niemeyer was supported in part by the National Science Foundation (No.\u00a0ACI-1535065). Work by P.\u00a0Prins was supported by the National Institute of Health (R01 GM123489, 2017\u20132022). Work by K.\u00a0Ram was supported in part by The Leona M.\u00a0and Harry B.\u00a0Helmsley Charitable Trust (No.\u00a02016PG-BRI004). 
Work by A.\u00a0Rokem was supported by the Gordon & Betty Moore Foundation and the Alfred P.\u00a0Sloan Foundation, and by grants from the Bill & Melinda Gates Foundation, the National Science Foundation (No.\u00a01550224), and the National Institute of Mental Health (No.\u00a01R25MH112480).\n\n[^1]: Corresponding author, \n\n[^2]: Potential reviewers can volunteer via ","meta":{"dup_signals":{"dup_doc_count":22,"dup_dump_count":18,"dup_details":{"curated_sources":1,"2023-14":1,"2022-27":1,"2021-43":2,"2021-39":2,"2020-40":1,"2019-39":1,"2019-22":1,"2019-09":1,"2019-04":1,"2018-51":2,"2018-47":1,"2018-39":1,"2018-34":2,"2018-26":1,"2018-17":1,"2018-09":1,"2023-50":1}},"filename":"out\/1707.02264_extract_main.tex.md"},"subset":"arxiv"} +{"text":"author: Tommaso Urli [^1] \n[`email@example.com`](mailto:firstname.lastname@example.com)\nbibliography: paper.bib\ntitle: Technical Report<\/span> \n json2run: a tool for experiment design & analysis\n\n# Introduction\n\n**json2run** is a tool to automate the running, storage and analysis of experiments. It has been created in the first place to study different algorithms or different sets of values for algorithm parameters, but it is a general tool and can be used wherever it fits. The main advantage of **json2run** (over a home-brewed experiment suite) is that it allows to describe a set of experiments concisely as a [JSON](http:\/\/www.json.org)-formatted parameter tree, such as the following (note the presence of parameter definitions as well as logical operators to combine them)\n\nAn experiment file such as the one above describes the parameters that must be generated and passed over to the executable. We'll call a set of generated parameters a *configuration* or *parameter configuration*. Once the experiments have been described, **json2run** can parse the file and perform various operations with it, such as\n\n- printing the generated configurations as command line options,\n\n- running a batch of experiments based on the generated configurations,\n\n- store the results of the experiments in a database,\n\n- running a parameter race (see ) to find out the configurations (or configuration) that optimize a function of quality,\n\n- retrieving the results of the experiments from the database.\n\nIn the first case, the outcome of our above example would be something like this (**json2run** comes in form of a command line tool called `j2r`):\n\n $ j2r -i experiments.json\n --a foo --b1 0.0\n --a foo --b1 0.25\n ...\n --a baz --b2 8.0\n --a baz --b2 10.0\n\n**json2run** supports a number of different types of nodes in the JSON tree, including: nodes for generating parameters from a discrete set of variables, nodes for generating parameters from the content of a directory or a file, nodes for sampling values from an interval, and so on. If something cannot be expressed with simple parameter generators, a number of **post-processors** allow you to mix, merge and discard the generated parameters in extremely flexible ways. Most post-processors were created because something couldn't be expressed with simple logical operators, but **json2run** slowly converged to something complete now.\n\nThe experiments results are stored on a (MongoDB) database, in order to be accessed later for analysis. 
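\n\nAs a rough illustration of what this enables (the document layout and values below are assumptions made for this sketch, not **json2run**'s actual storage schema), two experiments from the same batch might end up stored as documents along these lines:\n\n    {\n      \"batch\": \"my_batch\",\n      \"parameters\": { \"a\": \"foo\", \"b1\": 0.25, \"instance\": \"comp01.ectt\" },\n      \"stats\": { \"cost\": 161.12, \"time\": 500 }\n    }\n\n    {\n      \"batch\": \"my_batch\",\n      \"parameters\": { \"a\": \"baz\", \"b2\": 8.0, \"instance\": \"comp01.ectt\" },\n      \"stats\": { \"cost\": 158.40, \"time\": 512 }\n    }\n\nNote how the two documents carry different parameter fields (`b1` vs. `b2`).\n\n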
The choice of MongoDB comes from the necessity of comparing algorithms which can have different (and a different number of) parameters, and a using a tabular storage (as in most relational database) would make queries much more difficult.\n\nFinally, **json2run** comes with a very general but handy R script, which allows to gather data from the database, and do whatever kind of statistical analysis over it.\n\n# Installation\n\nBeing packaged as a python module, the installation of **json2run** should be quite straightforward, just ensure that the `bson` python module is not installed on your system (if this is the case, run `sudo pip uninstall bson`) since `pymongo` comes with its own `bson.*` classes and conflicts may occur. Then, clone the (Mercurial) repository and run the python installer\n\nor, if you plan to update **json2run** often (i.e.\u00a0if you plan to customize it, or update it from the repository), run\n\n sudo python setup.py develop\n\nthis will allow you to update to the latest version by just running\n\n hg pull -u\n\nin the root directory.\n\n# Usage\n\nSince **json2run** is designed to be very flexible (its only requirements being that you expose all the parameters of your executable and that you have access to a MongoDB instance), this also means that it comes with a lot of options. We will go through them in the following sections, but if you just need a quick reference type\n\n $ j2r --help\n\nBefore running anything, however, you will need to know how to write an experiment file.\n\n## Designing experiment files\n\nAs previously mentioned, **json2run** expects a description of the experiments in JSON format. JSON (which, by the way, stands for JavaScript Object Notation) is a concise and human-readable language for describing structured data. It is also the very language MongoDB uses to store its data and to make queries, which makes for a natural integration.\n\n### Basic JSON syntax\n\nThe basic components of a JSON documents are arrays, objects and scalars. You can use array and object to group more arrays and objects, or scalars.\n\n#### Scalars\n\nJSON has a number of native types for scalars:\n\n- numbers,\n\n- strings, and\n\n- booleans.\n\nNumbers can be integers or floats, and scientific notation is also supported.\n\n#### Arrays and objects\n\nArrays are lists of scalars separated by commas, e.g.:\n\n [1, 2, 3, \"foo\", 3.14, true]\n\nObjects can be seen as named arrays (similar to C++'s map, or dictionaries if you're into Python):\n\n { \n \"name\": \"John\",\n \"surname\": \"Boags\",\n \"profession\": \"beer maker\",\n \"age\": 150\n }\n\nNote that arrays and objects use a different kind of parenthesis, `{}` vs `[``]`.\n\n#### Comments\n\nJSON doesn't support comments, and most of the time you won't need them (the JSON contents should be sufficiently explanatory), but if you really need annotations, you can add fake entries to your objects, that won't be parsed by **json2run** and will serve as comments, such as `comment` in the following example:\n\n#### Specific syntax\n\nIn particular, **json2run** assumes that the experiment file is a representation of a tree in which each node is a JSON object with at least a `type` field describing its type, e.g.:\n\n {\n \"type\": \"\",\n ... 
\n }\n\nWe'll see the supported node types and their additional fields in the following sections.\n\n### Combinations and alternatives (inner nodes)\n\nTypically, an algorithm accepts multiple parameters, and we want to be able to compare alternative combinations of these parameters. `and` and `or` nodes are the way to accomplish this in **json2run** and we call them *inner* nodes. Each inner node has a list of descendants and when they are activated, they either combine them together (in case of an `and`), or pick between them (in case of an `or`).\n\n#### `and` nodes\n\nFor instance, suppose that we have a Simulated Annealing solver that accepts an initial temperature and a cooling schedule as command line parameters, and we would like to try all the possible combinations. In **json2run** this is expressed using an `and` node that combines the two parameters (ignore for now the syntax to describe discrete parameters, we'll come to that later).\n\n#### `or` nodes\n\nSometimes you have algorithms that accept different parameters and, possibly, a different number of them. Suppose your solver can operate either using Simulated Annealing and Tabu Search. You can easily encode this in an experiment file by using an `or` node.\n\nNote that no Tabu Search parameters appear in the Simulated Annealing configurations, and vice versa. By combining several `and` and `or` node, it is possible to express quite complex experiment designs.\n\n### Leaf nodes\n\nLeaf nodes are responsible for creating values for single parameters. They all come with a `type`, a `name` for the parameter, and an array (or object) of `values` describing the possible values that the parameter can take, e.g.:\n\n {\n \"type\": \"\",\n \"name\": \"\",\n \"values\": \n }\n\n#### `discrete` nodes\n\nDiscrete nodes are the simplest kind of leaf nodes. They come with two value definition styles: an *explicit* one, where parameters are listed explicitly, e.g.:\n\n {\n \"type\": \"discrete\",\n \"name\": \"num_of_reheats\",\n \"values\": [ 1, 2 ]\n }\n\nand an *implicit* one, which allows to define discrete numeric values from a `min`, a `max` and a `step`, e.g.:\n\nDuring the processing of the tree, implicit value definitions are transformed into explicit ones, and treated as such. A `discrete` node generates all the possible parameter values in order.\n\n#### `continuous` nodes\n\nContinuous nodes allow to define parameter values that are generated by sampling continuous parameter spaces. These spaces are defined in terms of a `min` and a `max`, e.g.:\n\n {\n \"type\": \"continuous\",\n \"name\": \"start_temperature\",\n \"values\": { \"min\": 0.0, \"max\": 10.0 }\n }\n\nHowever, continuous nodes can't generate parameter values by themselves. Instead, they need to be processed later on by a *post-processor* attached to a node upper in the tree hierarchy. This might seem over complicated, but there's a use case behind it.\n\nIn particular suppose that you want to study the interaction of two parameters on the performance of an algorithm. To do a proper sampling, the generated parameters must be picked from a 2-dimensional space, in a way that is as uniform a possible. One way to do this, would be to generate an `and` node containing several `discrete` nodes with different ranges. There are two problems with this approach (which is called *full-factorial*):\n\n1. the generated points are very regular, while one usually want to sample the parameter space randomly (but uniformly),\n\n2. 
the parameter combinations are the carthesian product of the values generated for each single parameter, which makes it difficult to control how much configurations are generated.\n\nWhile this can still be done with **json2run** (and indeed is often what one wants), we would like to treat the parameter space generated by the interaction of the two parameters as a *single* space, and sample 2-dimensional points uniformly inside it. **json2run** comes with a post-processor which is able to generate the Hammersley point set in a k-dimensional space. We will see in the section about post-processors how to attach one to a node, but for the moment just accept that `continuous` leaf nodes are treated in this special way.\n\n#### `file` and `directory` nodes\n\nFile (or directory) nodes are essentially `discrete` nodes, whose values are nor defined explicitly nor implicitly, but instead are generated from the content of a file (or a directory). The typical use for this kind of nodes is to generate experiments that run on a set of instances specified in a file e.g.:\n\n {\n \"type\": \"file\",\n \"name\": \"instance\",\n \"path\": \"selected_instances.txt\",\n \"match\": \".*\"\n }\n\nwhere the `path` field specifies the location of the file to be used as input, and the `match` field restrict the generated parameters to the lines of the file matching a given regular expression[^2]. This is useful when you want to restrict to certain instances, but most frequently will just be \".\\*\" (catch-all). As for directory nodes, they follow a similar semantic, the difference being that the generated values is the list of the content of the directory, filtered by the regular expression in the `match` field.\n\n {\n \"type\":\"directory\",\n \"path\": \"..\/instances\/comp\",\n \"name\": \"instance\",\n \"match\": \".*\\\\.ectt\"\n }\n\n#### `flag` nodes\n\nFlag nodes have a single parameter (the `name` of the generated flag) and generate value-less parameters. E.g.:\n\n {\n \"type\": \"flag\",\n \"name\": \"verbose\"\n } \n\nwill just add the `--verbose` flag on the generated command lines.\n\n### Post-processors\n\nPost-processors are tools to generate more complex combinations of parameters. They can be attached to **any** inner node by adding them to a field called `postprocessors` along with the descendants, e.g.:\n\n {\n \"type\": \"and\",\n \"descendants\": [\n ... \n ],\n \"postprocessors\": [\n ...\n ]\n }\n\nPost-processors have a `type` field and a number of other fields dependent on the specific post-processor type. They all operate in the following way:\n\n1. They take the list of parameters generated in the subtree (note that each execution of a subtree gives birth to a different parameter configuration) they are attached to,\n\n2. they process the list **as a whole** (e.g.\u00a0replacing parameters, modifying values, removing parameters, and so forth), and finally\n\n3. they return the new list, which replaces the old one.\n\nIn many cases they just apply the same function over all the elements of the list, but they might be designed to do more complex things or to just update certain kind of parameters. For instance, the `hammersley` post-processor only apply to `continuous` parameters (but possibly more than one of them at a time).\n\n**Note** order matters! Post-processors are applied in the order in which they are defined in the `postprocessors` list.\n\n#### `expression` processors\n\nExpression post processors are by far the most flexible ones. 
They allow to define a new parameter (either `discrete` or `continuous`, but **not** a flag) by evaluating a python expression and using the result as the value of the parameter. The processor is defined by a `match` regular expression, which captures the operands needed by the expression, and either\n\n1. an `expression` which will be evaluated to yield the value of the generated `discrete` parameter, or\n\n2. two expressions (`min` and `max`) that will yield the values for the minimum and maximum of the generated `continuous` parameter.\n\nThe type of the parameter is inferred by the presence of the `expression` field, while its name is defined by the `result` field. An example of the two syntaxes is the following:\n\n {\n \"type\": \"expression\",\n \"match\": \"||...\"\n \"result\": \"\",\n \"expression\": \"\"\n }\n\n {\n \"type\": \"expression\",\n \"match\": \"||...\",\n \"result\": \"\",\n \"min\": \"\",\n \"max\": \"\"\n }\n\n#### Expression syntax\n\nAny valid python expression that has a return value can be used as `expression`, `min` or `max`. To access the values of the captured operands it is sufficient to postfix their name with `.value`.\n\nAs for the available operations, functions from python's `math` and `json` modules are automatically imported. For instance, to generate a new parameter *p3* which is the power of two existing ones, *p1* and *p2*, we'll write:\n\n {\n \"type\": \"expression\",\n \"match\": \"p1|p2\",\n \"result\": \"p3\",\n \"expression\": \"pow(p1.value, p2.value)\"\n }\n\nWhile to generate a parameter which takes values in *\\[0.1\\*p1, 5\\*p2\\]*, we'll do:\n\n {\n \"type\": \"expression\",\n \"match\": \"p1|p2\",\n \"result\": \"p3\",\n \"min\": \"0.1*p1.value\",\n \"max\": \"5*p2.value\"\n }\n\n#### `ignore` processors\n\nIgnore post processors can be used to remove specific parameters from the list of generated ones. Typically, the are used to discard operands of an `expression` post-processor after they have been used. Following the previous example, we might not be interested in *p1* and *p2* at all, so:\n\n {\n \"type\": \"ignore\",\n \"match\": \"p1|p2\"\n }\n\nRemember that post-processors are applied in order, thus (in this case) the `ignore` must be defined after the `expression`.\n\n#### `sorting` processors\n\nSorting allows to define an ordering for a subset of parameters. These parameters will be put (if they exist) at the beginning of the generated list of parameters, and the others will follow. The syntax of the post-processor is the following:\n\n {\n \"type\": \"sorting\",\n \"order\": [ \"\", \"\", ... ]\n }\n\nWhere `order` is an array of ordered parameter names.\n\n#### `hammersley` processors\n\nThe Hammersley post-processor generates the scaled k-dimensional Hammersley point set from a set of k `continuous` parameters, and it's the preferential (also, the only) way to sample continuous parameter spaces. The syntax is the following:\n\n {\n \"type\": \"hammersley\",\n \"points\": \n }\n\nSo, assuming that your experiments file produces *k* continuous parameters, the `hammersley` post-processor generates a k-dimensional *cube* delimited by the `min` and `max` fields of your `continuous` parameters, and will generate `n` samples inside this cube, to use as parameter values.\n\n#### The Hammersley point set\n\nThis choice of the Hammersley point set has been driven by two properties of this sequence that make it favourable to parameter tuning. 
First, points from the Hammersley set exhibit *low discrepancy*, i.e.\u00a0they are well distributed across the parameter space despite being random-like. Second, the sequence is *scalable* both with respect to the number of points (`n`) to be sampled and to the number of dimensions (`k`) of the sampling space.\n\nSo, whenever you want to explore parameter spaces, use `continuous` parameters and the `hammersley` post-processor.\n\n#### `rounding` processors\n\nWhen sampling continuous parameters or using `expression` post-processors, the resulting values can end up being floats with many decimal digits. While this in general is not an issue, often this much precision is unneeded, and it's just more convenient to operate with less precise floats. The `rounding` post-processor allows to round down a parameter's values to a specific number of decimal digits. The syntax is the following:\n\n {\n \"type\": \"rounding\",\n \"match\": \"\"\n \"decimal_digits\": \n }\n\nWhere `n` is the number of decimal digits we want to retain (**note** the numbers are rounded, not truncated, to `n` digits after the floating point), and `match` is a regular expression describing the parameters we want to round down.\n\n#### Compact syntax\n\nRounding post-processors also support a compact syntax, to group roundings in a single post-processor. The syntax is the following:\n\n {\n \"type\": \"round\",\n \"round\": [\n \"\": ,\n \"\": ,\n ...\n ]\n }\n\nWhere `regex_k` are regular expressions describing one (or more) parameter, and `n_k` are the corresponding decimal digits we want to retain.\n\n#### Forcing precision\n\nBoth syntaxes support an optional field called `force_precision`, which can be either `true` or `false`, that forces the resulting value to have the specified number of decimal digits (regardless of any rounding to zero).\n\n#### `renaming` processors\n\nRename post-processors can be used to rename parameters. The syntax, somewhat similar to `rounding`'s compact one, is the following:\n\n {\n \"type\": \"renaming\",\n \"rename\": {\n \"old_1\": \"new_1\",\n \"old_2\": \"new_2\",\n ...\n }\n }\n\nWhere `old_`$k$ are the original name of the parameters we want to rename and `new_k` are the new ones. Note that unlike `rounding`'s compact syntax, here `rename` is an object, not an array. Also, `old_k` are plain strings, not regular expressions.\n\n## The `j2r` command line tool\n\nAll of **json2run** functionalities are accessed through a command line utility called `j2r`. The tool comes with a (large) number of options, activated by `--` followed by their long names, or equivalently `-` followed by their short names. Some of them have default values, some other must be provided. (See `j2r --help` for a summary of them.)\n\n### Input\n\nMost of the functionalities of **json2run** require that you provide an input file, i.e.\u00a0a JSON file describing experiments. This file is specified through the `--input` (or `-i`) option.\n\n $ j2r -i experiments.json\n\n### Available actions\n\nThe main switch in `j2r` is the `--action` (or `-a`) option. Actions allow to specify what you want **json2run** to do for you. 
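\n\nFor instance, reusing the *experiments.json* file from the earlier examples, one could explicitly ask for CSV output instead of the default command-line list (this particular invocation is only an illustration of the switch):\n\n    $ j2r -i experiments.json -a print-csv\n\n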
The available actions are:\n\n- `print-cll` prints the generated experiments as a *command line list* (also, the default option)\n\n- `print-csv` print the generated experiments as a CSV file\n\n- `run-batch` start (or resumes) a batch for the experiments described in the input file\n\n- `run-race` starts (or resumes) a race among parameter configurations in order to find the best parameter assignment for the specified executable\n\n- `list-batches` list the batches on the database, and give summary information about their completion, machine on which they are being run, type of batch, etc.\n\n- `delete-batch` delete a batch from the database\n\n- `batch-info` provides detailed information about a batch or race, e.g.\u00a0repetitions completed, configurations that are still racing, experiment file, etc.\n\n- `rename-batch` rename a batch on the database\n\n- `show-winning` show winning configurations in a race\n\n- `set-repetitions` set the number of repetitions of the same experiments in a batch or a race\n\n- `dump-experiments` dump all the experiments data regarding a batch as a CSV file\n\n- `mark-unfinished` set a batch as unfinished, in order to restart it\n\n### Batch options\n\nSome of the actions require additional parameters, in particular, each action pertaining a batch on the database must also provide a mandatory `--batch-name` (or `-n`) to refer it (see it as a key for batches in the database).\n\n### Running options\n\nBoth `run-batch` and `run-race` have a number of additional parameters that tune the way in which the experiments are run.\n\n- `--executable` (or `-e`) specifies the executable to be run (mandatory)\n\n- `--parallel-threads` (or `-p`) specifies the maximum number of parallel processors to run the experiments onto (defaults to the number of cores on the machine where the experiments are run)\n\n- `--repetitions` (or `-r`) number of repetitions of the (exactly) same experiment to run (e.g.\u00a0to have a more reliable result)\n\n- `--greedy` (or `-g`) can be `true` or `false` and states if the batch or race can reuse experiments which are already on the database (but are possibly part of other batches and races)\n\n### Extra options for races\n\nWhen a race is run, a number of additional parameters must or can be passed.\n\n- `--instance-param` (or `-ip`) specifies the parameter which represents the instance\n\n- `--performance-param` (or `-pp`) specifies the statistic (output by the executable), that must be used to evaluate the quality of a configuration\n\n- `--seed` (or `-s`) seed to use to shuffle instances (defaults to 0)\n\n- `--confidence` confidence level for hypothesis testing (e.g.\u00a0to compare with p-values, defaults to *0.05*)\n\n**Note** **json2run** assumes that the executable outputs a valid JSON code, with a field for each statistic that we want to record. For instance, a solver could have a `cost` and a `time` statistic.\n\n {\n \"cost\": 161.12,\n \"time\": 500\n }\n\n### Database options\n\nWhen we're running batches or databases, we're implicitly assuming that we have a running and accessible MongoDB database. By default, **json2run** will look for MongoDB on the `localhost` and will try to connect to the database `j2r` with username `j2r` and password `j2r`. 
These are just convenient credentials, but one can specify its own connection parameters through the following options.\n\n- `--db-host` (or `-dh`) specifies the host onto which the MongoDB instance is running\n\n- `--db-database` (or `-dd`) specifies the database to use connecting\n\n- `--db-user` (or `-du`) specifies the username to use for connecting\n\n- `--db-pass` (or `-dx`) specifies the password to use for connecting\n\n- `--db-port` (or `-dp`) specifies the port to use for connecting\n\n### Logging info\n\nBy default `j2r` prints on the standard output most of its logging information. However this information can be redirected on a file if needed, and the log level can be set.\n\n- `--log-file` specifies the file where the log is written (default: None)\n\n- `--log-level` can be `warning`, `error`, 'info\n\n### Source code versioning\n\nAdditionally, **json2run** can record the code revision used for running a batch or a race. To enable this option one must pass the name of the source code manager of choice through the `--scm` option (currently supports `git` and `mercurial`).\n\n### Instances and configurations\n\nInstances and parameter configurations are described in the same experiments file.\n\n## Running examples\n\nHere are some of the most common operations that one can perform with **json2run**.\n\n### Running a batch of experiments\n\nRun a batch of experiments based on an experiment file (*experiments.json*) and an executable (*solver*), with 10 repetitions for each experiment and all the available cores.\n\n### Running a configuration race\n\nBased on the same file, and reckoning that the instance parameter is called *instance*, we can run a race to find out the best configuration. Suppose that the solver outputs some statistics in JSON (as in the example above) and that we want to compare the configurations based on the *cost* of the obtained solutions.\n\n### Resuming a batch or a race\n\nTo resume a previously stopped batch or race, it is sufficient to run\n\n $ j2r -a run-batch -n my_batch\n\nor\n\n $ j2r -a run-race -n my_race\n\n### Printing detailed data about a batch or race\n\nUse the `batch-info` action, passing the name of the race or batch.\n\n $ j2r -a batch-info -n my_race\n\nthe output is in JSON format (for easy parsing by other tools).\n\n### Print the list of winning (so far) configurations in a race\n\nUse the `show-winning` action, passing the name of the race or batch.\n\n $ j2r -a show-winning -n my_race\n\n### Delete a batch or a race from the database\n\nUse the `delete-batch` action, passing the name of the race or batch.\n\n $ j2r -a delete-batch -n my_race\n\n### List all the batches on the database\n\nUse the `list-batches` action.\n\n $ j2r -a list-batches\n\n## Analyzing the outcome\n\nThe outcome of a batch or race, i.e.\u00a0all the data about the experiments, can be retrieved from R by loading the R script `analysis.R` and using the following functions:\n\n source(\"analysis.R\")\n connect(\"host\") \n\n x <- getExperiments(\"my_race\", c(\"instance\")) \n\nThe `x` data frame will contain a row for each experiment in the batch or race, with information about whether the configuration was one of the winning ones (in case of a race).\n\n# Future\n\nA new major version of **json2run** is in the works. Among the upcoming features are:\n\n- launching of experiments on multiple machines,\n\n- web-service, i.e. 
RESTful, infrastructure will handle all **json2run** operations,\n\n- improved JSON syntax for all node types (fields and type of values will determine which kind of node are we dealing with), e.g.:\n\n\n\n {\n \"type\": \"and\",\n \"descendants\": [ ... ]\n }\n\nwill become:\n\n {\n \"and\": [ ... ]\n }\n\nand\n\nwill become:\n\n# Licensing\n\n**json2run** is open-source and distributed under the MIT license.\n\n# Acknowledgements\n\n**json2run** has been developed for and with the collaboration of Luca Di Gaspero, Sara Ceschia and Andrea Schaerf of the Scheduling and Time-Tabling Group of University of Udine. Thanks go to Tiago Januario from Universidade Federal de Minas Gerais, Belo Horizonte, Brasil for the many suggestions. Also, thanks go to the [Bitbucket](http:\/\/www.bitbucket.org) staff that hosts **json2run**'s code free of charge.\n\n[^1]: *Scheduling and Time-Tabling Group*, DIEGM - University of Udine, Via delle Scienze 206, 33100 \u2013 Udine (UD), Italy\n\n[^2]: **Note** regular expressions are in Python format, but strings must be escaped, e.g.\u00a0if you want to look for the .txt pattern, the string must be specified as \".\\*\\\\.txt\"","meta":{"dup_signals":{"dup_doc_count":17,"dup_dump_count":15,"dup_details":{"curated_sources":2,"2019-09":1,"2018-51":1,"2018-43":1,"2018-34":1,"2018-26":1,"2018-17":1,"2018-09":2,"2017-51":1,"2017-34":1,"2017-09":1,"2016-40":1,"2015-40":1,"2019-18":1,"2015-18":1}},"filename":"out\/1305.1112_extract_paper.tex.md"},"subset":"arxiv"} +{"text":"abstract: A clear and well-documented LaTeX document is presented as an article formatted for publication by CEUR-WS in a conference proceedings. Based on the \"ceurart\" document class, this article presents and explains many of the common variations, as well as many of the formatting elements an author may use in the preparation of the documentation of their work.\naddress: Peoples' Friendship University of Russia (RUDN University), 6 Miklukho-Maklaya St, Moscow, 117198, Russian Federation; Joint Institute for Nuclear Research, 6 Joliot-Curie, Dubna, Moscow region, 141980, Russian Federation; Vrije Universiteit Amsterdam, De Boelelaan 1105, 1081 HV Amsterdam, The Netherlands; University of Sk\u00f6vde, H\u00f6gskolev\u00e4gen 1, 541 28 Sk\u00f6vde, Sweden\nauthor: Dmitry S. Kulyabov; Ilaria Tiddi; Manfred Jeusfeld\nbibliography: sample-ceur.bib\ntitle: A better way to format your document for CEUR-WS\n\n\\[ orcid=0000-0002-0877-7063, firstname.lastname@example.com, url=https:\/\/yamadharma.github.io\/, \\]\n\n\\[ orcid=0000-0001-7116-9338, email@example.com, url=https:\/\/kmitd.github.io\/ilaria\/, \\]\n\n\\[ orcid=0000-0002-9421-8566, email@example.com, url=http:\/\/conceptbase.sourceforge.net\/mjf\/, \\]\n\n# Introduction\n\nCEUR-WS's article template provides a consistent LaTeX style for use across CEUR-WS publications, and incorporates accessibility and metadata-extraction functionality. 
This document will explain the major features of the document class.\n\nIf you are new to publishing with CEUR-WS, this document is a valuable guide to the process of preparing your work for publication.\n\nThe \"`ceurart`\" document class can be used to prepare articles for any CEUR-WS publication, and for any stage of publication, from review to final \"camera-ready\" copy with *very* few changes to the source.\n\nThis class depends on the following packages for its proper functioning:\n\n- `natbib.sty` for citation processing;\n\n- `geometry.sty` for margin settings;\n\n- `graphicx.sty` for graphics inclusion;\n\n- `hyperref.sty` optional package if hyperlinking is required in the document;\n\n- `fontawesome5.sty` optional package for bells and whistles.\n\nAll the above packages are part of any standard LaTeX installation. Therefore, the users need not be bothered about downloading any extra packages.\n\n# Modifications\n\nModifying the template \u2014 including but not limited to: adjusting margins, typeface sizes, line spacing, paragraph and list definitions, and the use of the `\\vspace` command to manually adjust the vertical spacing between elements of your work \u2014 is not allowed.\n\n# Template parameters\n\nThere are a number of template parameters which modify some part of the `ceurart` document class. This parameters are enclosed in square brackets and are a part of the `\\documentclass` command:\n\n \\documentclass[parameter]{ceurart}\n\nFrequently-used parameters, or combinations of parameters, include:\n\n- `twocolumn` : Two column layout.\n\n- `hf` : Enable header and footer[^1].\n\n# Front matter\n\n## Title Information\n\nThe titles of papers should be either all use the emphasizing capitalized style or they should all use the regular English (or native language) style. It does not make a good impression if you or your authors mix the styles.\n\nUse the `\\title` command to define the title of your work. Do not insert line breaks in your title.\n\n## Title variants\n\n`\\title` command have the below options:\n\n- `title`: Document title. This is default option.\n\n \\title[mode=title]{This is a title}\n\n You can just omit it, like as follows:\n\n \\title{This is a title}\n\n- `alt`: Alternate title.\n\n \\title[mode=alt]{This is a alternate title}\n\n- `sub`: Sub title.\n\n \\title[mode=sub]{This is a sub title}\n\n You can just use `\\subtitle` command, as follows:\n\n \\subtitle{This is a sub title}\n\n- `trans`: Translated title.\n\n \\title[mode=trans]{This is a translated title}\n\n- `transsub`: Translated sub title.\n\n \\title[mode=transsub]{This is a translated sub title}\n\n## Authors and Affiliations\n\nEach author must be defined separately for accurate metadata identification. Multiple authors may share one affiliation. Authors' names should not be abbreviated; use full first names wherever possible. Include authors' e-mail addresses whenever possible.\n\n`\\author` command have the below options:\n\n- `style` : Style of author name (chinese)\n\n- `prefix` : Prefix\n\n- `suffix` : Suffix\n\n- `degree` : Degree\n\n- `role` : Role\n\n- `orcid` : ORCID\n\n- `email` : E-mail\n\n- `url` : URL\n\nAuthor names can have some kinds of marks and notes:\n\n- affiliation mark: `\\author[]`.\n\nThe author names and affiliations could be formatted in two ways:\n\n1. Group the authors per affiliation.\n\n2. 
Use an explicit mark to indicate the affiliations.\n\nAuthor block example:\n\n \\author[1,2]{Author Name}[%\n prefix=Prof.,\n degree=D.Sc.,\n role=Researcher,\n orcid=0000-0000-000-0000,\n firstname.lastname@example.com,\n url=https:\/\/name.example.com\n ]\n\n \\address[1]{Affiliation #1}\n \\address[2]{Affiliation #2}\n\n## Abstract and Keywords\n\nAbstract shall be entered in an environment that starts with `\\begin{abstract}` and ends with `\\end{abstract}`.\n\n \\begin{abstract}\n This is an abstract.\n \\end{abstract}\n\nThe key words are enclosed in a `keywords` environment. Use `\\sep` to separate keywords.\n\n \\begin{keywords}\n First keyword \\sep \n Second keyword \\sep \n Third keyword \\sep \n Fourth keyword\n \\end{keywords}\n\nAt the end of front matter add `\\maketitle` command.\n\n## Various Marks in the Front Matter\n\nThe front matter becomes complicated due to various kinds of notes and marks to the title and author names. Marks in the title will be denoted by a star ($\\star$) mark; footnotes are denoted by super scripted Arabic numerals, corresponding author by an Conformal asterisk (\\*) mark.\n\n### Title marks\n\nTitle mark can be entered by the command, `\\tnotemark[]` and the corresponding text can be entered with the command `\\tnotetext[]{}`. An example will be:\n\n \\title{A better way to format your document for CEUR-WS}\n\n \\tnotemark[1]\n \\tnotetext[1]{You can use this document as the template for preparing your\n publication. We recommend using the latest version of the ceurart style.}\n\n`\\tnotemark` and `\\tnotetext` can be anywhere in the front matter, but should be before `\\maketitle` command.\n\n### Author marks\n\nAuthor names can have some kinds of marks and notes:\n\n- footnote mark : `\\fnmark[]`\n\n- footnote text : `\\fntext[]{}`\n\n- corresponding author mark : `\\cormark[]`\n\n- corresponding author text : `\\cortext[]{}`\n\n### Other marks\n\nAt times, authors want footnotes which leave no marks in the author names. The note text shall be listed as part of the front matter notes. Class files provides `\\nonumnote` for this purpose. The usage\n\n \\nonumnote{}\n\nand should be entered anywhere before the `\\maketitle` command for this to take effect.\n\n# Sectioning Commands\n\nYour work should use standard LaTeX sectioning commands: `\\section`, `\\subsection`, `\\subsubsection`, and `\\paragraph`. They should be numbered; do not remove the numbering from the commands.\n\nSimulating a sectioning command by setting the first word or words of a paragraph in boldface or italicized text is not allowed.\n\n# Tables\n\nThe \"`ceurart`\" document class includes the \"`booktabs`\" package \u2014 \u2014 for preparing high-quality tables.\n\nTable captions are placed *above* the table.\n\nBecause tables cannot be split across pages, the best placement for them is typically the top of the page nearest their initial cite. To ensure this proper \"floating\" placement of tables, use the environment `table` to enclose the table's contents and the table caption. 
The contents of the table itself must go in the `tabular` environment, to be aligned properly in rows and columns, with the desired horizontal and vertical rules.\n\nImmediately following this sentence is the point at which Table\u00a0 is included in the input file; compare the placement of the table here with the table in the printed output of this document.\n\n```latex\n\\begin{table*}\n \\caption{Frequency of Special Characters}\n \\label{tab:freq}\n \\begin{tabular}{ccl}\n \\toprule\n Non-English or Math&Frequency&Comments\\\\\n \\midrule\n \\O & 1 in 1,000& For Swedish names\\\\\n $\\pi$ & 1 in 5& Common in math\\\\\n \\$ & 4 in 5 & Used in business\\\\\n $\\Psi^2_1$ & 1 in 40,000& Unexplained usage\\\\\n \\bottomrule\n\\end{tabular}\n\\end{table*}\n```\n\nTo set a wider table, which takes up the whole width of the page's live area, use the environment `table*` to enclose the table's contents and the table caption. As with a single-column table, this wide table will \"float\" to a location deemed more desirable. Immediately following this sentence is the point at which Table\u00a0 is included in the input file; again, it is instructive to compare the placement of the table here with the table in the printed output of this document.\n\n| Command | A Number | Comments |\n|:--------------:|:--------:|:-----------------|\n| `'134``author` | 100 | Author |\n| `'134``table` | 300 | For tables |\n| `'134``table*` | 400 | For wider tables |\n\nSome Typical Commands\n\n# Math Equations\n\nYou may want to display math equations in three distinct styles: inline, numbered or non-numbered display. Each of the three are discussed in the next sections.\n\n## Inline (In-text) Equations\n\nA formula that appears in the running text is called an inline or in-text formula. It is produced by the `math` environment, which can be invoked with the usual `\\begin` \u2026`\\end` construction or with the short form `$` \u2026`$`. You can use any of the symbols and structures, from $\\alpha$ to $\\omega$, available in LaTeX\u00a0; this section will simply show a few examples of in-text equations in context. Notice how this equation: $ \\lim_{n\\rightarrow \\infty} \\frac{1}{n} = 0,$ set here in in-line math style, looks slightly different when set in display style. (See next section).\n\n## Display Equations\n\nA numbered display equation\u2014one set off by vertical space from the text and centered horizontally\u2014is produced by the `equation` environment. An unnumbered display equation is produced by the `displaymath` environment.\n\nAgain, in either environment, you can use any of the symbols and structures available in LaTeX; this section will just give a couple of examples of display equations in context. First, consider the equation, shown as an inline equation above: $$\\lim_{n\\rightarrow \\infty} \\frac{1}{n} = 0.$$ Notice how it is formatted somewhat differently in the `displaymath` environment. Now, we'll enter an unnumbered equation: $$S_{n} = \\sum_{i=1}^{n} x_{i} ,$$ and follow it with another numbered equation: $$\\lim_{x \\to 0} (1 + x)^{1\/x} = e$$ just to demonstrate LaTeX's able handling of numbering.\n\n# Figures\n\nThe \"`figure`\" environment should be used for figures. One or more images can be placed within a figure. If your figure contains third-party material, you must clearly identify it as such, as shown in the example below.\n\nYour figures should contain a caption which describes the figure to the reader. Figure captions go below the figure. 
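\n\nA minimal figure might be set as follows (the graphics file name here is a placeholder and not part of the class):\n\n```latex\n\\begin{figure}\n  \\centering\n  \\includegraphics[width=0.8\\linewidth]{your-image-file}\n  \\caption{A caption that describes the figure to the reader.}\n\\end{figure}\n```\n\n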
Your figures should also include a description suitable for screen readers, to assist the visually-challenged to better understand your work.\n\nFigure captions are placed below the figure.\n\n# Citations and Bibliographies\n\nThe use of BibTeX for the preparation and formatting of one's references is strongly recommended. Authors' names should be complete \u2014 use full first names (\"Donald E. Knuth\") not initials (\"D. E. Knuth\") \u2014 and the salient identifying features of a reference should be included: title, year, volume, number, pages, article DOI, etc.\n\nThe bibliography is included in your source document with these two commands, placed just before the `\\end{document}` command:\n\n \\bibliography{bibfile}\n\nwhere \"`bibfile`\" is the name, without the \"`.bib`\" suffix, of the BibTeX file.\n\n## Some examples\n\nA paginated journal article , an enumerated journal article , a reference to an entire issue , a monograph (whole book) , a monograph\/whole book in a series (see 2a in spec. document) , a divisible-book such as an anthology or compilation followed by the same example, however we only output the series if the volume number is given (so series should not be present since it has no vol. no.), a chapter in a divisible book , a chapter in a divisible book in a series , a multi-volume work as book , an article in a proceedings (of a conference, symposium, workshop for example) (paginated proceedings article) , a proceedings article with all possible elements , an example of an enumerated proceedings article , an informally published work , a doctoral dissertation , a master's thesis: , an online document \/ world wide web resource , a video game (Case 1) and (Case 2) and and (Case 3) a patent , work accepted for publication , prolific author and . Other cites might contain 'duplicate' DOI and URLs (some SIAM articles) . Multi-volume works as books and . A couple of citations with DOIs: . Online citations: .\n\n# Acknowledgments\n\nIdentification of funding sources and other support, and thanks to individuals and groups that assisted in the research and the preparation of the work should be included in an acknowledgment section, which is placed just before the reference section in your document.\n\nThis section has a special environment:\n\n \\begin{acknowledgments}\n These are different acknowledgments.\n \\end{acknowledgments}\n\nso that the information contained therein can be more easily collected during the article metadata extraction phase, and to ensure consistency in the spelling of the section heading.\n\nAuthors should not prepare this section as a numbered or unnumbered `\\section`; please use the \"`acknowledgments`\" environment.\n\n# Appendices\n\nIf your work needs an appendix, add it before the \"`\\end{document}`\" command at the conclusion of your source document.\n\nStart the appendix with the \"`\\appendix`\" command:\n\n \\appendix\n\nand note that in the appendix, sections are lettered, not numbered.\n\n# Online Resources\n\nThe sources for the ceur-art style are available via\n\n- [GitHub](https:\/\/github.com\/yamadharma\/ceurart),\n\n- [Overleaf template](https:\/\/www.overleaf.com\/latex\/templates\/template-for-submissions-to-ceur-workshop-proceedings-ceur-ws-dot-org\/pkfscdkgkhcq).\n\n[^1]: You can enable the display of page numbers in the final version of the entire collection. 
In this case, you should adhere to the end-to-end pagination of individual papers.","meta":{"dup_signals":{"dup_doc_count":14,"dup_dump_count":2,"dup_details":{"curated_sources":7,"unknown":7}},"filename":"out\/2307.09909_extract_sample-2col.tex.md"},"subset":"arxiv"} +{"text":"abstract: This is a review of the book **Quantum \\[Un\\]speakables: From Bell to Quantum Information**. Reinhold A. Bertlmann and Anton Zeilinger (editors). xxii + 483 pp. Springer-Verlag, 2002. \\$89.95.\nauthor: Nino Zangh\u0131\u0300\u00a0and Roderich Tumulka\ndate: July 7, 2003\ntitle: John Bell Across Space and Time\n\n*Ten years after his death, one of the sharpest minds in quantum physics was celebrated in a memorial conference.*\n\nJohn Stewart Bell (1928-1990) was one of the leading physicists of the 20th century, a deep and serious thinker. He worked at CERN in Geneva on the physics of particle accelerators, made a number of impressive contributions to quantum field theory, and became famous for the discovery of a phenomenon he called nonlocality. However, the most remarkable thing about him was perhaps that he was a realist.\n\nRealism is the philosophical view that the world out there actually exists, as opposed to the view that it is a mere hallucination. We are all born realists, but some of us change our minds as adults. Now it may seem to you that for physics to make any sense, a physicist would have to be, or at least pretend to be, a realist; after all, it would seem that physics is about finding out how the world out there works.\n\nBut, as a matter of fact, in the 1920s Niels Bohr, the leading quantum physicist of his time, began to advocate the idea that realism is childish and unscientific; he proposed instead what is now called the \"Copenhagen interpretation\" of quantum physics, a rather incoherent philosophical doctrine, which (according to Richard Feynman) \"nobody really understands.\" Part of this doctrine is the view that macroscopic objects, such as chairs and planets, do exist out there, but electrons and the other microscopic particles do not. Correspondingly, Copenhagen quantum theory refuses to provide any consistent story about what happens to microscopic objects, and instead prefers to make contradictory statements about them. According to the Copenhagen view, the world is divided into two realms, macro and micro, \"classical\" and \"quantum,\" logical and contradictory\u2014or, as Bell put it in one of his essays, into \"speakable\" and \"unspeakable.\"\n\nAlthough it is not clear where the border between the two realms should be, and how this duality could possibly be compatible with the fact that chairs consist of electrons and other particles, Bohr's view became the orthodoxy. That is, it became not merely the majority view among physicists, but rather the dogma. Ever since, being a realist has been rather dangerous for a quantum physicist, because it has been widely regarded as a sign of being too stupid to understand orthodox quantum theory\u2014which, as we've mentioned, nobody really understands.\n\nAlong with Albert Einstein, Erwin Schr\u00f6dinger, Louis de Broglie and David Bohm, Bell was one of the few people who felt compelled by his conscience to reject Bohr's philosophy. Bell emphasized that the empirical facts of quantum physics do not at all force us to renounce realism: There is a realist theory that accounts for all of these facts in a most elegant way\u2014Bohmian mechanics (also known as de\u00a0Broglie\u2013Bohm theory). 
It describes a world in which electrons, quarks and the like are point particles that move in a manner dictated by the wave function. It should be taught to students, Bell insisted, as a legitimate alternative to the orthodoxy. And in 1986, GianCarlo Ghirardi, Alberto Rimini, and Tullio Weber succeeded in developing a second kind of realist theory, encouraged by Bell and known as *spontaneous localization*. But overcoming prejudice and changing convictions takes more than one generation.\n\n*Quantum \\[Un\\]speakables* is the proceedings volume of a conference held at the University of Vienna in November 2000 to commemorate the 10th anniversary of Bell's death. The 30 articles written for this volume by 35 authors deal foremost with nonlocality and, of course, the meaning of quantum theory. The contributions focus very much on personal recollections and mostly presuppose that the reader is familiar with the relevant physics and mathematics. The recollections make this book a valuable source both on John Bell the man and on the history of quantum physics between 1950 and 1990. Among other things, several authors complain about the dogmatic aversion among physicists in the 1960s to even take note of Bell's nonlocality theorem.\n\n*Quantum \\[Un\\]speakables* also reflects the prevailing situation in the year 2000 in that it collects personal, diverging views about the meaning of quantum physics from a cross-section of physicist. The cross-section is biased, though, because researchers working on Bohmian mechanics, of which Bell was the leading proponent during the decades before his death, were simply not invited to the conference, and the realists are in the minority among the authors. Thus we recommend that readers be very cautious in regard to the conclusions drawn in this book about the foundations of quantum physics.\n\nThis warning concerns in particular the conclusions drawn from Bell's nonlocality theorem. Let us tell the story briefly here. Bohmian mechanics involves superluminal action-at-a-distance and thus violates the \"locality principle\" of relativity theory. This was considered, by the Copenhagen camp, an indication that Bohmian mechanics was on the wrong track. In 1964, Bell proved that any serious version of quantum theory (regardless of whether or not it is based on microscopic realism) must violate locality. This means that if nature is governed by the predictions of quantum theory, the \"locality principle\" is simply wrong, and our world is nonlocal. It also means that the nonlocality of Bohmian mechanics is not a sign of its being on the wrong track, but quite the contrary.\n\nThe Copenhagen view, in comparison, is indeed less local: It is nonlocal in cases that Bohmian mechanics can explain in a purely local way. (For example, for a particle in a quantum state that is a superposition of being in London and being in Tokyo, according to Copenhagenism there is no matter of fact about whether the particle actually is in London or in Tokyo prior to the first attempt at detection\u2014which presupposes a temporal ordering.) But it is also contradictory, vague and confusing enough for its adherents to claim it is completely local, and thus that nonlocality is a consequence of an attachment to *realism*. Therefore, so the argument goes, it was Bell who finally proved realism wrong! 
Bell, of course, emphatically rejected this incorrect interpretation of his nonlocality theorem.\n\nThe crucial experiments violating Bell's inequality and thus, according to Bell's theoretical analysis, demonstrating nonlocality have been performed many times since 1980, and have also lead to significant improvements in experimental techniques. Some of these techniques have now become valuable for quantum cryptography and the first steps towards the construction of a quantum computer. These two fields are usually summarized under the key word \"quantum information,\" and great hopes are expressed, also in *Quantum \\[Un\\]speakables*, that quantum information will provide new insights into the nature of the quantum world.\n\nBut we see no reason for such hopes. Quantum information theory is a straightforward application of the rules laid down in, for example, John von Neumann's classic 1932 book on the mathematical foundations of quantum mechanics. Any interpretation of quantum mechanics, to the extent that it succeeds in explaining these rules, also explains quantum computers and the like. And to the idea that quantum theory may after all be merely about information and nothing else, Bell responded with a crucial question: \"Information? Whose information? Information about what?\"\n\n**Reviewer information.** *Nino Zangh\u0131\u0300\u00a0is professor of theoretical physics at the Universit\u00e0 degli Studi di Genova, Italy. Roderich Tumulka is a post-doctoral research fellow at the physics department of the Universit\u00e0 degli Studi di Genova. The authors wish to thank Sheldon Goldstein for his critical reading of a draft for this article.*","meta":{"dup_signals":{"dup_doc_count":37,"dup_dump_count":9,"dup_details":{"curated_sources":1,"2017-13":2,"2015-18":2,"2015-11":2,"2015-06":2,"2014-10":2,"2013-48":1,"2013-20":2,"unknown":23}},"filename":"out\/quant-ph0309020.tex.md"},"subset":"arxiv"} +{"text":"# Abstract\n\nThe development of systemic approaches in biology has put emphasis on identifying genetic modules whose behavior can be modeled accurately so as to gain insight into their structure and function. However most gene circuits in a cell are under control of external signals and thus quantitative agreement between experimental data and a mathematical model is difficult. Circadian biology has been one notable exception: quantitative models of the internal clock that orchestrates biological processes over the 24-hour diurnal cycle have been constructed for a few organisms, from cyanobacteria to plants and mammals. In most cases, a complex architecture with interlocked feedback loops has been evidenced. Here we present first modeling results for the circadian clock of the green unicellular alga *Ostreococcus tauri*. Two plant-like clock genes have been shown to play a central role in *Ostreococcus* clock. We find that their expression time profiles can be accurately reproduced by a minimal model of a two-gene transcriptional feedback loop. Remarkably, best adjustment of data recorded under light\/dark alternation is obtained when assuming that the oscillator is not coupled to the diurnal cycle. This suggests that coupling to light is confined to specific time intervals and has no dynamical effect when the oscillator is entrained by the diurnal cycle. 
This intriguing property may reflect a strategy to minimize the impact of fluctuations in daylight intensity on the core circadian oscillator, a type of perturbation that has been rarely considered when assessing the robustness of circadian clocks.\n\n# Author Summary\n\nCircadian clocks keep time of day in many living organisms, allowing them to anticipate environmental changes induced by day\/night alternation. They consist of networks of genes and proteins interacting so as to generate biochemical oscillations with a period close to 24 hours. Circadian clocks synchronize to the day\/night cycle through the year principally by sensing ambient light. Depending on the weather, the perceived light intensity can display large fluctuations within the day and from day to day, potentially inducing unwanted resetting of the clock. Furthermore, marine organisms such as microalgae are subjected to dramatic changes in light intensities in the water column due to streams and wind. We showed, using mathematical modelling, that the green unicellular marine alga Ostreococcus tauri has evolved a simple but effective strategy to shield the circadian clock from daylight fluctuations by localizing coupling to light within specific time intervals. In our model, as in experiments, coupling is invisible when the clock is in phase with the day\/night cycle but resets the clock when it is out of phase. Such a clock architecture is immune to strong daylight fluctuations.\n\n# Introduction\n\nReal-time monitoring of gene activity now allows us to unravel the complex dynamical behavior of regulatory networks underlying cell functions\u00a0. However, understanding the collective behavior of even a few molecular actors defies intuition, as it depends not only on the topology of the interaction network but also on strengths and response times of its links\u00a0. A mathematical description of a regulatory network is thus necessary to qualitatively and quantitatively understand its dynamical behavior, but obtaining it is challenging. State variables and parameters are subject to large fluctuations\u00a0, which create artificial complexity and mask the actual network structure. Genetic modules are usually not isolated but coupled to a larger network, and a given gene can be involved in different modules and pathways\u00a0. It is thus important to identify gene circuits whose dynamical behavior can be modeled quantitatively, to serve as model circuits.\n\nOne strategy for obtaining such circuits has been to construct synthetic networks, which are isolated by design\u00a0. As recent experiments have shown, an excellent quantitative agreement can be obtained by incorporating, when needed, detailed descriptions of various biochemical processes (e.g., multimerization, transport, DNA looping, etc.)\u00a0.\n\nAnother strategy is to study natural gene circuits whose function makes them relatively autonomous and stable. The circadian clocks that drive biological processes around the day\/night cycle in many living organisms are natural candidates, as these genetic oscillators keep track of the most regular environmental constraint: the alternation of daylight and darkness caused by Earth rotation\u00a0. 
Informed by experiments, circadian clock models have progressively become more complex, evolving from single loops featuring a self-repressed gene\u00a0 to networks of interlocked feedback loops.\n\nHere we report surprisingly good agreement between the mathematical model of a single transcriptional feedback loop and expression profiles of two central clock genes of *Ostreococcus tauri*. This microscopic green alga is the smallest free-living eukaryote known to date and belongs to the Prasinophyceae, one of the most ancient groups of the green lineage. *Ostreococcus* displays a very simple cellular organization, with only one mitochondrion and one chloroplast\u00a0. Its small genome (12.6 Mbp) sequence revealed a high compaction (85% of coding DNA) and a very low gene redundancy\u00a0 (e.g., most cyclins and CDK are present as a single copy gene\u00a0).The cell division cycle of *Ostreococcus* is under control of a circadian oscillator, with cell division occurring at the end of the day in light\/dark cycles\u00a0. These daily rhythms in cell division meet the criteria characterizing a circadian clock, as they can be entrained to different photoperiods, persist under constant conditions and respond to light pulses by phase shifts that depend on internal time\u00a0.\n\nVery recently, some light has been shed on the molecular workings of *Ostreococcus* clock by Corellou *et al.* . Since the clock of closely related *Arabidopsis* has been extensively studied, they searched *Ostreococcus* genome for orthologs of higher plant clock genes and found only two, similar to *Arabidopsis* central clock genes *Toc1* and *Cca1*\u00a0. These two genes display rhythmic expression both under light\/dark alternation and in constant light conditions. A functional analysis by overexpression\/antisense strategy showed that *Toc1* and *Cca1* are important clock genes in *Ostreococcus*. Overexpression of *Toc1* led to increased levels of CCA1 while overexpression of *Cca1* resulted in lower levels of TOC1. Furthermore CCA1 was shown to bind to a conserved evening element sequence (EE) that is required for the circadian regulated activity of *Toc1* promoter. Whether *Toc1* and *Cca1* work in a negative feedback loop could not be inferred from this study since *Ostreococcus* clock appeared to rely on more than a simple *Toc1*\/*Cca1* negative feedback loop.\n\nInterestingly, *Arabidopsis* genes *Toc1* and *Cca1* were the core actors of the first plant clock model, based on a transcriptional loop where TOC1 activates *Cca1* and the similar gene *Lhy*, whose proteins dimerize to repress *Toc1*\u00a0. However, this model did not reproduce well expression peaks of *Toc1* and *Cca1* in *Arabidopsis*\u00a0 and was extended to adjust experimental data\u00a0. Current *Arabidopsis* clock models feature several interlocked feedback loops\u00a0. This led us to investigate whether the transcriptional feedback loop model where *Toc1* activates *Cca1* and is repressed by *Cca1* would be relevant for *Ostreococcus*.\n\nWe not only found that this two-gene loop model reproduces perfectly transcript profiles of *Ostreococcus* *Toc1* and *Cca1* but that excellent adjustment of data recorded under light\/dark alternation is obtained when no model parameter depends on light intensity. This counterintuitive finding suggests that the oscillator is not permanently coupled to light across the 24-hour cycle but only during specific time intervals, which is supported by numerical simulations. 
In this article, we propose that the invisibility of coupling in entrainment conditions reflects a strategy to shield the oscillator from natural fluctuations in daylight intensity.\n\n# Results\n\n## Experimental data and model adjustment\n\nTo characterize the temporal pattern of *Toc1* and *Cca1* expression in *Ostreococcus*, we used microarray data acquired in triplicate under 12:12 light\/dark cycle, as described in\u00a0 (Fig.\u00a0). One *Toc1* and two *Cca1* mRNA time courses had no aberrant point. Here, we use as target profiles the complete *Toc1* profile and the complete *Cca1* profile whose samples are obtained from the same microarray data as the *Toc1* profile. We checked that the results described in this work are robust to the biological variations observed. Corellou *et al.* have also carried out an extensive work of genetic transformation in *Ostreococcus*, leading to transcriptional and translational fusion lines allowing one to monitor transcriptional activity and protein dynamics in living cells\u00a0. However, luciferase kinetics in this organism is still not well known and we postpone the analysis of luminescence time series to a future work. Model adjustment has thus been carried out using microarray expression data, which reflect accurately the endogeneous levels of mRNA. Although seeking quantitative agreement with luminescence time series was premature at this stage, predicted protein concentration profiles were compared with data from translational fusion lines as an additional test.\n\nA minimal mathematical model of the two-gene feedback loop comprises four ordinary differential equations (Eq.\u00a0(), Methods) with 16 parameters. Since detailed models extending the basic 4-ODE model\u00a0() could only have led to better adjustment, we purposely neglected here effects such as compartmentalisation or delays due to transcription or translation so as to minimize the risk of overfitting and reliably assess the validity of the two-gene loop hypothesis.\n\nExperimental data are recorded under 12:12 Light\/Dark (LD) alternation so that the coupling which synchronizes the clock to the diurnal cycle must be hypothesized. Circadian models usually assume that some parameters depend on light intensity (e.g., a degradation rate increases in the dark), and thus take different values at day and night. Parameter space dimension then increases by the number of modulated parameters. Various couplings to light were considered, with 1 to 16 parameters depending on light intensity. We also tested adjustment to model\u00a0 with all parameters constant, which allowed us to quantify the relevance of coupling mechanisms by measuring the difference between best-fitting profiles in the coupled and uncoupled cases.\n\nThe free-running period (FRP) of the oscillator in constant day conditions was fixed at 24 hours, which was the mean value observed in experiments\u00a0, but we checked that our main results remain valid for other values of the FRP. In fact, we found that when FRP was freely adjustable, it usually converged to values close to or slightly below 24 hours. Fixing the FRP at exactly 24 hours is interesting in that coupling mechanisms are selected by adjustment only if they improve goodness of fit and not merely to achieve frequency locking.\n\n## A free-running model adjusts experimental data\n\nThe first result is that an excellent agreement between numerical and experimental profiles is obtained, with a root mean square (RMS) error of a few percent (Figs.\u00a0(A)-(B)). 
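\n\nGoodness of fit is quantified here by a root mean square error between simulated and measured mRNA profiles. The short sketch below shows one way such a criterion can be computed with NumPy; the peak normalization, the sampling times and the toy profiles are illustrative assumptions rather than the exact procedure used to produce the figures.\n\n```python\nimport numpy as np\n\ndef rms_error(model_t, model_y, data_t, data_y):\n    # Interpolate the simulated profile at the experimental sampling times.\n    y_model = np.interp(data_t, model_t, model_y)\n    # Normalize both profiles to unit peak, since microarray units are arbitrary.\n    y_model = y_model / y_model.max()\n    y_data = data_y / data_y.max()\n    # Root mean square difference, expressed as a fraction of the peak level.\n    return np.sqrt(np.mean((y_model - y_data) ** 2))\n\n# Toy usage with made-up numbers (not the actual measurements).\nt_fine = np.linspace(0.0, 24.0, 241)\nmodel = 0.5 + 0.5 * np.cos(2.0 * np.pi * (t_fine - 14.0) / 24.0)\nt_samples = np.arange(0.0, 24.0, 3.0)\ndata = 0.5 + 0.5 * np.cos(2.0 * np.pi * (t_samples - 14.5) / 24.0)\nprint('RMS error: %.3f' % rms_error(t_fine, model, t_samples, data))\n```\n\nAn overall adjustment error would combine such terms for the *Toc1* and *Cca1* mRNA profiles.\n\n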
There is no point in extending model\u00a0 to improve adjustment of microarray data, which are compatible with the hypothesis of a *Toc1*-*Cca1* feedback loop. Moreover, the corresponding protein profiles (not adjusted) correlate well with luminescence signals from CCA1:Luc and TOC1:Luc translational fusion lines (Figs.\u00a0(C)-(F)).\n\nBut more surprising is the fact that a non-coupled model, where all parameters are kept constant, adjusts experimental data (Fig.\u00a0(B), RMS error 3.6%) essentially as well as a fully coupled model where all parameters are allowed to vary between day and night (Fig.\u00a0(A), RMS error 3.3%). The corresponding parameter values are given in Table\u00a0. When only one or a few parameters were modulated, goodness of fit significantly degraded compared to the uncoupled and fully coupled cases. This indicates that besides being biologically unrealistic, the model with all parameters modulated fits data merely because of its large parameter space dimension, and cannot be considered seriously. Moreover, we simulated the transition from LD alternation to constant light (LL) or constant darkness (DD) conditions for this model and found that it still adjusted experimental data well in LL while displaying strongly damped oscillations in DD (Fig. S1). This confirms that adjustment relies on time profiles being close to free-running oscillator profiles and that adjustment by a fully coupled model is in fact accidental.\n\nOn the other hand, the uncoupled model is equally unrealistic because it cannot be entrained to the day\/night cycle, whereas it is observed experimentally that upon a phase shift of the light\/dark cycle, CCA1 and TOC1 expression peaks quickly recover their original timings in the cycle. To verify that adjustment by a free-running oscillator model does not depend on the target profile used, we generated a large number of synthetic profiles whose samples were randomly chosen inside the interval of variation observed in biological triplicates, and adjusted a free-running oscillator model to them. In each case, we found that although the RMS error slightly degraded compared to our target profile (where mCCA1 and mTOC1 samples for a given time always come from the same microarray), it remained on average near 10 %, with visually excellent adjustment (Fig. S2). Last, it should be noted that assuming a FRP of 24 hours allows frequency locking to occur without coupling, but cannot induce best adjustment in this limiting case by itself.\n\nThus the paradoxical result that data points fall almost perfectly on the temporal profiles of a free-running oscillator is counterintuitive but must nevertheless be viewed as a signature of the clock architecture. As we will see, this in fact does not imply that the oscillator is uncoupled but only that within the class of models considered so far, where parameters of the TOC1\u2013CCA1 loop take day and night values, the uncoupled model is the one approaching experimental data best. Nothing precludes that there are more general coupling schemes that adjust data equally well.\n\nBefore unveiling such models, we now discuss whether the simple negative feedback loop described by model\u00a0 is a plausible autonomous gene oscillator. With two transcriptional regulations, it is a simpler circuit than the Repressilator, where three genes repress themselves circularly\u00a0. It is known that in this topology, oscillations become more stable as the number of genes along the loop increases. 
The two-gene feedback loop described by\u00a0() could therefore seem to be a less robust oscillator than the Repressilator, and thus a poor model for the core oscillator of a circadian clock.\n\nTo address this issue, we checked robustness of adjustment with respect to parameter variations. We found that the experimental profiles can be reproduced in a wide region of parameter space around the optimum, which is quite remarkable given the simplicity of the model (Fig. S3). Moreover, a distinctive feature of the best fitting parameter sets is a strongly saturated degradation, in particular for *Cca1* mRNA, with an extremely low value of $K_{M_C}$ equal to $0.6 \\%$ of the maximal *CCa1* mRNA concentration (see table\u00a0). In this situation, the number of molecules degraded per unit time is essentially constant and does not depend on the concentration except at very small values. This is consistent with the characteristic sawtooth shape of our target profile drawn in linear scale (Fig.\u00a0(B)).\n\nThe role of post-translational interactions in gene oscillators and circadian clocks has been recently emphasized (see, e.g.,\u00a0), and in particular saturated degradation has since long been known to favor oscillations\u00a0. Recently, it has been been shown to act as a delay\u00a0 and to be essential for inducing robust oscillations in simple synthetic oscillators\u00a0 (compare Fig.\u00a0(B) with Fig. 5 of ). Thus, strongly saturated degradation is very likely also a key dynamical ingredient of the natural gene oscillator studied here.\n\n## Adjustment by a model with gated coupling\n\nCircadian models are usually coupled to diurnal cycle by changing some parameter values between day and night\u00a0. This assumes that all molecular actors involved in light input pathways have been incorporated and that their properties (e.g., degradation rates) react directly to light. Such couplings act over the entire cycle except when light-sensitive actors are present only transiently. For example, models of *Arabidopsis* clock feature an intermediary protein PIF3 that is necessary for induction of CCA1 by light but is shortly degraded after dawn so that CCA1 transcription is only transiently activated. Gating of light input has been observed in several circadian clocks and may be important for maintaining proper timing under different photoperiods\u00a0.\n\nIn our case, light\/dark alternation has no detectable signature in the dynamics of *Toc1* and *Cca1* mRNA when the clock is phase-locked to the diurnal cycle. This suggests that the actors of the two-gene loop do not sense light directly, and are driven via unknown mediators, which modify their properties inside specific temporal intervals. Since the input pathway can have complex structure and dynamics, possibly featuring separate feedback loops, the windows of active coupling may be located anywhere inside the diurnal cycle and reflect light level at other times of the cycle. Coupling activation should depend both on time of day and on the intrinsic dynamics of the light input pathway, notwithstanding a possible feedback from the circadian core oscillator\u00a0.\n\nFor simplicity, we restrict ourselves to models in which some parameters of the TOC1\u2013CCA1 feedback loop are modified between two times of the day, measured relatively to dawn (ZT0). The start and end times of coupling windows are then model parameters instead of being fixed at light\/dark transitions. 
This assumes that the input pathway tracks diurnal cycle instantaneously, without loss of generality for understanding behavior in entrainment conditions. In this scheme, resetting of the two-gene oscillator can be studied by simply shifting the oscillator phase relatively to the coupling windows. The results so obtained will be sufficient to show that there exist coupling schemes which leave no signature on mRNA profiles, and to study their properties.\n\nWhat makes our approach original is not the gated coupling to diurnal cycle, which can be found in other models, but the fact that we do not try to model the actors of the input pathway, which can be complex. This is because we focus here on the TOC1\u2013CCA1 feedback loop, which mostly behaves as an autonomous oscillator. Thus we only need to know the action of the unknown mediators on TOC1 or CCA1, the details of their dynamics being irrelevant.\n\nWe systematically scanned the coupling window start and end times, adjusting model for each pair. This revealed that many coupling schemes are compatible with experimental data. For example, TOC1 degradation rate $\\delta_{P_T}$ can be modified almost arbitrarily in a large temporal window between ZT22.5 and ZT6.5 without degrading adjustment. This is shown in Figs.\u00a0(A)-(C), where $\\delta_{P_T}=3\\delta^0_{P_T}$ inside this window (here and below, $\\delta^0_X$ denotes the uncoupled degradation rate of variable $X$). Although the coupling is active for 8 hours, this coupling scheme generates mRNA and protein profiles which are indistinguishable from those of a free-running oscillator. Indeed, modifying TOC1 stability in a window where protein level is low, as is the case for any subinterval of the ZT22.5\u2013ZT6.5 window, does not perturb the oscillator.\n\nWe also found a family of time windows of different lengths centered around ZT13.33, inside which the CCA1 degradation rate $\\delta_{P_C}$ can be decreased without significantly modifying goodness of fit. In Figs.\u00a0(D)-(F), we show the effect of having $\\delta_{P_C}=\\delta^0_{P_C}\/2$ between ZT12.8 and ZT13.95. In this coupling scheme, mRNA profiles are not affected but coupling activation has a noticeable effect on CCA1 level, which rises faster than in the uncoupled case. After the window, however, CCA1 level relaxes in a few hours to the uncoupled profile, losing memory of the perturbation. Near this time of the day, the CCA1 protein level appears to be slaved by the other variables: the perturbation induced by modified degradation does not propagate to the other variables, and when coupling is switched off, the protein level relaxes to its value in the uncoupled solution. Thus, the effect of coupling is not only small but transient. An important consequence, which we will exploit later, is that the two coupling windows shown in Fig.\u00a0 can be combined without modifying adjustment, provided the perturbation induced by one window has vanished when the other window begins.\n\nIn these examples, adjustment is sensitive to the timing of these coupling windows: when the start time is modified slightly, the end time must be changed simultaneously so as to recover good adjustment. 
On the other hand, we found that adjustment error depends little on the coupling strength (measured by the ratio between degradation rates outside and inside the window), especially for short coupling windows.\n\nFig.\u00a0(A) shows how adjustment error varies as a function of coupling strength for the two coupling windows used in Fig.\u00a0 as well as for two other windows inside which the CCA1 protein degradation is reduced, one shorter and the other longer than the window in Fig.\u00a0(B). The window of accelerated TOC1 degradation is totally insensitive to modifications of the TOC1 degradation rate, which is due to protein levels being very low in this window. Windows of CCA1 stabilization are all the more insensitive to variations in CCA1 degradation rate as they are shorter. To quantify the sensitivity of a given window, we define $r_\\text{max}$ as the largest value of the ratio $r=\\delta^0_{P_C}\/\\delta_{P_C}$ such that adjustment RMS error remains below 10 % for any value of $r$ between $1$ and $r_\\text{max}$. The associated variations in mRNA profiles are visually undetectable and below experimental uncertainties. For the windows ZT12\u2013ZT15.47, ZT12.8\u2013ZT13.95 and ZT13\u2013ZT13.65, of respective durations 3.47, 1.15 and 0.65 hours, we find that the $r_\\text{max}$ index takes the values 1.5, 2.5 and 260, respectively.\n\nTo gain better insight into the effect of a coupling window, we must take into account the fact that the induced variation in the entrained oscillations can be decomposed as a displacement along the limit cycle (resulting in a phase shift) and a displacement transversely to the limit cycle (resulting in a deformation of the limit cycle). To this end, we apply a variable phase shift to the entrained time profile and optimize this phase shift so as to minimize the adjustment error. We define the waveform error as the minimal value of the latter, and the phase error as the value of the phase shift for which it is obtained. A small waveform error indicates that we are following the same limit cycle as in the free-running case, possibly with a different phase than is observed experimentally. Waveform and phase errors for the three windows of CCA1 protein stabilization considered in Fig.\u00a0(A) are shown in Figs.\u00a0(B) and (C), respectively. It can be seen that only the largest window is associated with a deformation of the limit cycle for large values of $r$, and that it remains modest (RMS error of about 10 % for $r=20$). For the two shorter windows, degraded adjustment essentially results from a phase shift of the entrained solution as the modulation index is increased. It can also be seen that the phase error is in fact very small, approximately 7.5 and 2.5 minutes at $r=10$ for the two shorter windows. Thus it appears that for short enough windows, the effect of the light coupling mechanism can be entirely captured by studying the phase response induced by the mechanism, and that a necessary property of a coupling window is that it induces a zero phase shift of the free-running limit cycle (or a phase shift corresponding to the mismatch between the natural and forcing periods in the general case that we will consider later).\n\n## Systematic characterization of gated coupling mechanisms\n\nBesides the two specific examples shown in Fig.\u00a0, other coupling schemes are compatible with experimental data. 
In this section, we adopt a systematic approach in order to determine those coupling schemes that do synchronize the free-running model to the day\/night cycle, while leaving no signature on mRNA profiles when the phase-locking regime is achieved. To this aim, a preliminary step is to identify those coupling schemes that synchronize in the limit of weak forcing using the tools of the infinitesimal phase response curve, which can be defined in the framework of perturbation theory in the vicinity of periodic orbits\u00a0. Computation of the parametric impulse phase response curve \u00a0 ($Z_{piPRC}$) characterizing a light-coupling mechanism corresponding to parameter variation $\\mathbf{dp}$ allows one to determine time intervals specified by duration $\\tau$ and median position $t_m$ such that when the mechanism is applied in this time interval, it generates a zero phase shift and phase-locking is stable to small perturbations (see Supporting materials). Such intervals satisfy: $$\\left\\{ \\begin{array}{rll}\n\\int_{t_m-\\tau\/2}^{t_m+\\tau\/2} \\, Z_{piPRC}(u,\\mathbf{dp})\\, du=0 \\\\\n\\int_{t_m-\\tau\/2}^{t_m+\\tau\/2} \\, Z_{piPRC}'(u,\\mathbf{dp})\\, du<0\n\\end{array} \\right.\n\\label{Eq:z}$$\n\nFigure depicts the properties of various gated couplings in the case where the light-coupling mechanism is assumed to modulate specifically a single transcription-related or degradation-related kinetic parameter. For sufficiently weak positive or negative modulation of those eight parameters, a coupling window of specific width ($\\tau$) and position ($t_m$) can always be found to satisfy Eq.\u00a0() (Figs. (A)\u2013(C)), thus being compatible with experimental data. However, the adjustment of these weak coupling schemes to data is expected to deteriorate progressively when coupling strength is increased, because (i) the locking phase may change, (ii) the modulation may deviate the trajectory significantly from that of the free-running oscillator or (iii) the entrained solution may lose its stability. Numerical simulations performed at different coupling strengths indicate that only a subset of coupling schemes determined in the limit of weak coupling keep a good adjustment irrespective of the coupling strength. Fig. (D) shows window timings such that adjustment error remains below 10 % when the kinetic parameter is multiplied or divided by 1.17 or 2. Such a goodness of fit can only be obtained if limit cycle deformation remains small.\n\nAs with the examples considered in the previous section, some coupling mechanisms have robust adjustment properties in that a good adjustment is obtained at the two different coupling strengths for the same timings, which coincide with the timings computed in the weak coupling limit. In these cases, adjustment is robust to variations in the coupling strength, which suggests that for these coupling mechanisms, the weak coupling approximation remains valid up to large coupling strengths. For instance, light coupling mechanisms that temporarily increase TOC1 protein degradation ($\\delta_{P_T}$) or the CCA1 activation threshold ($P_{T0}$) in windows located during the day or the night appear to be robust couplings. Similarly, decreasing CCA1 protein degradation ($\\delta_{P_C}$) or the TOC1 repression threshold ($P_{C0}$) in windows occurring during the night are robust light-coupling mechanisms. 
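\n\nTo make the use of these conditions concrete, the sketch below scans candidate windows $(t_m,\\tau)$ for a sampled response curve and keeps those for which the integral over the window vanishes (zero net phase shift) while the curve decreases across the window, which is equivalent to the second condition since $\\int Z_{piPRC}'\\,du = Z_{piPRC}(t_m+\\tau\/2)-Z_{piPRC}(t_m-\\tau\/2)$. The sinusoidal curve used here is purely illustrative; the actual $Z_{piPRC}$ has to be computed from the model for the parameter variation $\\mathbf{dp}$ of interest.\n\n```python\nimport numpy as np\n\nPERIOD = 24.0  # hours\n\ndef z_toy(t):\n    # Illustrative response curve; replace with the piPRC computed from the\n    # model for the light-coupling mechanism of interest.\n    return np.sin(2.0 * np.pi * t / PERIOD)\n\ndef window_integral(z, t_m, tau, n=400):\n    u = np.linspace(t_m - 0.5 * tau, t_m + 0.5 * tau, n)\n    return np.mean(z(u)) * tau   # simple quadrature on a uniform grid\n\ndef admissible_windows(z, tol=1e-3):\n    # Keep windows with (i) zero net phase shift and (ii) a response curve\n    # that decreases across the window, so that phase locking is stable.\n    hits = []\n    for t_m in np.arange(0.0, PERIOD, 0.25):\n        for tau in np.arange(0.5, 12.25, 0.25):\n            zero_shift = abs(window_integral(z, t_m, tau)) < tol * tau\n            stable = z(t_m + 0.5 * tau) < z(t_m - 0.5 * tau)\n            if zero_shift and stable:\n                hits.append((t_m, tau))\n    return hits\n\nprint(admissible_windows(z_toy)[:5])\n```\n\nFor this toy curve, the retained windows are all centred on the descending zero crossing of the response curve, as expected.\n\n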
Some other mechanisms do not display the same robustness because either the window timings corresponding to good adjustment depend sensitively on coupling strength (e.g., for positive modulation of mTOC1 degradation rate) or because no good adjustment can be found except for very short windows (e.g., modulation of mCCA1 degradation rate). Other robust coupling mechanisms can be identified in Fig. S4, in which the coupling mechanisms not considered in Fig.\u00a0 are characterized.\n\nFigure\u00a0 provides a complementary illustration of the robustness of adjustment for models with gated modulation of CCA1 or TOC1 protein degradation rate. In these plots, the window center is kept fixed at the time determined from Eq.\u00a0() and shown in Fig.\u00a0(C) while coupling strength and window duration are freely varied. It can be seen that this timing is compatible with adjustment in a wide range of coupling strengths and window durations.\n\nOur analysis shows that several coupling mechanisms are compatible with the experimental data and that discriminating them requires more experimental data. In particular, monitoring gene expression in transient conditions will probably be crucial since the coupling mechanism leaves apparently no signature in the experimental data in entrainement conditions. For simplicity, we restrict ourselves in the following to models in which half-lives of TOC1 or CCA1 proteins are modified during a specific time interval that is determined in Fig (D).\n\n## Resetting\n\nOne may wonder about the purpose of coupling schemes with almost no effect on the oscillator. The key point is that our data have been recorded when the clock was entrained by the diurnal cycle and phase-locked to it. A natural question then is: how do such couplings behave when clock is out of phase and resetting is needed? We found that while the two mechanisms shown in Fig.\u00a0 have poor resetting properties when applied separately (Fig. S5), a combination of both can be very effective. In Fig.\u00a0(A)-(B), we show how the two-gene oscillator recovers from a sudden phase-shift of 12 hours using a two-window coupling scheme. As described above, we assume for simplicity that the two coupling windows remain fixed with respect to the day\/night cycle. The 12-hour phase shift is induced by initializing at dawn the oscillator state with the value it takes at dusk in the entrained regime. Figs.\u00a0(A)-(B) show that most of the lag is absorbed in the first 24 hours and the effect of the initial perturbation is hardly detectable after 48 hours.\n\nTo design this coupling, we utilized the fact that modifying coupling strengths inside windows hardly affects adjustment. We could therefore choose their values so as to minimize the maximal residual phase shift after three days for all possible initial lags (Fig.\u00a0(C)). Interestingly, we found that the best resetting behavior is obtained when the start time of the window of modified TOC degradation coincides with dawn. Phase locking in this example is globally stable. However, resetting becomes slow when the residual phase shift is under an hour and the residual phase shift is variable (RMS phase error after 5 days is 25 minutes and maximum phase error is 1 hour), and (Fig.\u00a0(C)). This inefficiency results in fact from the limitations of a model where the two parameters are modulated by a rectangular profile with fixed timing. 
Indeed, we will see later that impressive adjustment and resetting behavior can be simultaneously obtained when parameters are modulated with smooth profiles. Our numerical results thus show that a coupling scheme can at the same time be almost invisible when the oscillator is in phase with its forcing cycle and effective enough to ensure resetting when the oscillator is out of phase. By invisible, we mean that the time profile remains in a close neighborhood of the uncoupled one, so that the only effect of coupling is to fix the phase of the oscillation with respect to the day\/night cycle.\n\n## Robustness to daylight fluctuations\n\nWhy would it be beneficial for a circadian oscillator to be minimally affected by light\/dark alternation in normal operation? A tempting hypothesis is that while daylight is essential for synchronizing the clock, its fluctuations can be detrimental to time keeping and that it is important to shield the oscillator from them. If the entrained temporal profile remains close to that of an uncoupled oscillator at different values of the coupling parameter, then it will be naturally insensitive to fluctuations in this parameter. To gain insight into this fundamental question, we subjected the fully coupled and occasionally coupled clock models to fluctuating daylight.\n\nWith the light input pathway unknown, we must allow for the fact that light fluctuations may be strongly attenuated upon reaching the *Toc1*-*Cca1* loop. For example, the light signal could be transmitted through an ultrasensitive signaling cascade with almost constant output above an input threshold close to daylight intensities at dawn. The core oscillator would then be subjected to a driving cycle much closer to a perfect square wave than the intensity profile. We thus considered varying modulation depths for the core oscillator parameters to reflect this possible attenuation.\n\nAlthough the two types of model adjust experimental data equally well when subjected to a regular alternation, they have completely different responses to daylight fluctuations. In Fig.\u00a0, we assume that light intensity is constant throughout a given day but varies randomly from day to day. For almost zero modulation, the fully coupled model of Fig.\u00a0(B) maintains relatively regular oscillations of varying amplitude (Fig.\u00a0(B)). When parameter values are modulated by only a few percent, however, this model behaves erratically: oscillations stop for a few days, expression peaks occur a few hours in advance,... (Fig.\u00a0(C)). A circadian clock similarly built would be adversely affected by fluctuations in daylight intensity even with very strong attenuation in the input pathway.\n\nIn contrast to this, the two occasionally coupled oscillators of Fig.\u00a0 keep time perfectly even for extreme fluctuations (Figs.\u00a0(D)-(E)) and generate oscillations that are indistinguishable from those of the free-running oscillator which adjusts experimental data recorded under strictly periodic light\/dark alternation. Obviously, this extends to models combinining the two windows, such as the one used in Fig.\u00a0. This simple model thus describes a robust clock that is both sensitive to phase shifts in the forcing cycle and insensitive to fluctuations in intensity.\n\nWe also studied the effect of fluctuations at shorter time scales. When light intensity was varied randomly each hour, but with the same mean intensity each day, the permanently coupled model was still affected but much less than in Fig.\u00a0 (Fig. 
S6).\n\n## Influence of free-running period\n\nThe results described above may seem to rely on the FRP being equal to 24 hours. When the FRP is smaller or larger, coupling is required to achieve frequency locking and pull the oscillation period to 24 hours. To investigate this more general case, we scaled kinetic constants of the free-running model used in Fig.\u00a0(B) to shift the FRP to 25 or 23.5 hours. In both cases (short FRP and long FRP), we could find models with gated coupling that adjust perfectly the experimental data with a period of 24 hours (Fig.\u00a0). These models are very similar to those shown in Fig.\u00a0, the only notable difference being that coupling windows are shifted so that the induced resetting corrects for the period mismatch. Interestingly, the coupling windows for a FRP of 25 hours are located near the light\/dark and dark\/light transitions. We found that these coupling schemes were also very robust to daylight fluctuations (Fig. S7), indicating that the modulation ratio (equal to 3 for the two windows) is not critical. We also found that without taking adjustment into account, the free running oscillator is entrained by the coupling windows shown in Fig.\u00a0) within a wide range of modulation ratios, from a lower threshold of 1.05 (resp. 1.25) for the FRP equal to 23.5 hours (resp. 25 hours) to an upper threshold of 13 for both FRPs. With a modulation ratio of 3, free-running oscillators with FRPs ranging from 22 to 29 hours could be entrained.\n\n## Gating by smooth profiles\n\nGating of light input by rectangular profiles does not reflect the fact that the concentration of the mediators modulating the oscillator typically vary in a gradual way. The existence of nested coupling windows such that models with shorter windows can adjust data with larger parameter modulation (see Fig.\u00a0) suggests investigating the action of smooth gating profiles, with maximal parameter modulation near the center of the window. To this end, we considered 24-hour periodic, Gaussian-shaped, modulation profiles defined by: $1\/r_C(t)=\\delta^0_{P_C}\/\\delta_{P_C}(t)=\n1+k_C\\exp\\left(-\\frac{\\sin(\\pi(t-t_C)\/24)^2}{\\sigma_C^2}\\right)$ and $r_T(t)=\\delta_{P_T}(t)\/\\delta^0_{P_T}=1+k_T\\exp\\left(-\\frac{\\sin(\\pi(t-t_T)\/24)^2}{\\sigma_T^2}\\right)$, which are parameterized by the times of maximal modulation $t_{C}$, $t_{T}$, the coupling durations $\\sigma_C$, $\\sigma_T$ and the modulation depths $k_C$ and $k_T$. To assess whether good data adjustment and resetting behavior could be obtained simultaneously, these six parameters were chosen so as to minimize the RMS residual phase error 5 days after an initial random phase shift ranging from -12 to 12 hours (see Methods). Note that this naturally forces adjustment to experimental RNA profiles.\n\nThe behavior of the model using the optimized modulation profiles (Figs.\u00a0(A)-(B)) confirms the findings obtained with rectangular profiles (Fig.\u00a0). The entrained RNA and protein time profiles shadow that of the reference free-running oscillator, with little evidence of the coupling (Figs.\u00a0(C)-(E)). Phase resetting in response to a phase shift is excellent (Fig.\u00a0(F)): RMS (resp. maximum) residual phase shift after 5 days is 2.4 min (resp., 10 min). This is all the more remarkable as the Gaussian shape of the modulation profile is artificial, which shows that the dynamical mechanism exploited here is robust and relatively insensitive to the shape of the modulation profile. 
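\n\nFor concreteness, these modulation profiles can be written down directly; the sketch below implements the resulting time-dependent degradation rates $\\delta_{P_C}(t)$ and $\\delta_{P_T}(t)$. The numerical values of $t_C$, $t_T$, $\\sigma_C$, $\\sigma_T$, $k_C$ and $k_T$, as well as the baseline rates, are placeholders chosen for illustration and are not the optimized values used in the figures.\n\n```python\nimport numpy as np\n\ndef gaussian_gate(t, t0, sigma, k, period=24.0):\n    # 24-hour periodic, Gaussian-shaped bump of height k centred at t0 (hours).\n    return 1.0 + k * np.exp(-np.sin(np.pi * (t - t0) / period) ** 2 / sigma ** 2)\n\ndef delta_pc(t, delta0_pc, t_c, sigma_c, k_c):\n    # 1/r_C(t) = delta0_PC / delta_PC(t) = 1 + k_C exp(...), i.e. CCA1 is\n    # transiently stabilized around t_C.\n    return delta0_pc / gaussian_gate(t, t_c, sigma_c, k_c)\n\ndef delta_pt(t, delta0_pt, t_t, sigma_t, k_t):\n    # r_T(t) = delta_PT(t) / delta0_PT = 1 + k_T exp(...), i.e. TOC1 is\n    # transiently destabilized around t_T.\n    return delta0_pt * gaussian_gate(t, t_t, sigma_t, k_t)\n\n# Placeholder values for illustration only (hours, arbitrary rate units).\nt = np.linspace(0.0, 48.0, 481)\ndpc = delta_pc(t, delta0_pc=1.0, t_c=13.0, sigma_c=0.1, k_c=2.0)\ndpt = delta_pt(t, delta0_pt=1.0, t_t=2.0, sigma_t=0.2, k_t=2.0)\n```\n\nThese rates simply replace the constant $\\delta_{P_C}$ and $\\delta_{P_T}$ in the model equations when simulating the gated oscillator.\n\n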
Moreover, the oscillator is extremely resistant to daylight fluctuations (Fig.\u00a0(F)). In spite of its simplicity, the two gene-oscillator studied here thus fulfills key requirements for a circadian oscillator when modulated with the right timing.\n\n# Discussion\n\nOur findings illustrate how mathematical modeling can give insight into the architecture of a genetic module. Not only can expression profiles of two *Ostreococcus* clock genes be reproduced accurately by a simple two-gene transcriptional feedback loop model, but furthermore excellent adjustment of mRNA data is provided by a free-running model. This counterintuitive result can be explained if coupling to the diurnal cycle occurs during specific temporal windows, where unidentified mediators interact with the TOC1-CCA1 oscillator in such a way that it experiences negligible forcing when it is in phase with the day\/night cycle, and strong resetting when it is out of phase. We could exhibit many coupling schemes compatible with experimental mRNA temporal profiles, differing by the coupling mechanism or by the window timing. This indicates that identification of the actual light input pathway will require additional experimental data. Our analysis strongly supports the conjecture that *Ostreococcus* genes *Cca1* and *Toc1* are the molecular components of an oscillator at the core of *Ostreococcus* clock but does not exclude that other coupled oscillators or feedback loops exist.\n\nWhy would a circadian oscillator decouple from the day\/night cycle when in phase with it so as to generate quasi-autonomous oscillations? A natural hypothesis is that this protects the clock against daylight fluctuations, which can be important in natural conditions\u00a0. In a vast majority of numerical simulations and experiments on circadian clocks reported in the literature, the day\/night cycle is taken into account through a perfect alternation of constant light intensity and darkness. However, this is somehow idealized, as the primary channel through which clocks get information about Earth rotation, namely daylight, is variable.\n\nIn nature, the daylight intensity sensed by an organism depends not only on time of day but also on various factors such as sky cover or, for marine organisms such as *Ostreococcus*, the distance to sea surface and water turbidity, which can affect perceived intensity much more than atmosphere. Therefore, the light intensity reaching a circadian clock can vary several-fold not only from one day to the next but also between different times of the day. A clock permanently coupled to light is also permanently subjected to its fluctuations. Depending on the coupling scheme, keeping time may become a challenge when fluctuations induce phase resettings and continuously drive the clock away from its desired state. Indeed, we found that a mathematical model with properly timed coupling windows was insensitive to strong light intensity fluctuations while a permanently coupled model became erratic even for very small coupling strengths. For simplicity, we only tested the robustness of a model with modulated TOC1 and CCA1 protein degradation. However, it should be stressed that all other light-coupling mechanisms that have been found to be robust with respect to adjustment (see Figs.\u00a0 and S4) are naturally also robust with respect to daylight fluctuations. Indeed they adjust the experimental data for varying coupling strengths at fixed window timings. 
This indicates that the limit cycle is insensitive to variations in the coupling strength, which is the key to the robustness to daylight fluctuations. Another interesting result from our numerical simulations is that the most disruptive fluctuations are the variations in intensity from one day to the other, since their time scale matches the oscillator period. Indeed, faster or slower fluctuations are easily filtered out.\n\nThese results lead to enquire whether similar designs exist in other circadian clocks. Although the importance of this problem was noted some time ago\u00a0, the robustness of circadian clocks to daylight fluctuations and how this constraint shapes their molecular architecture have been little studied until very recently. The discussion on how genetic oscillators can keep daytime has essentially focused on the most important sources of noise under constant conditions : temperature variations\u00a0 or fluctuations in concentration due to small numbers of molecules\u00a0. However, an operating clock is naturally subjected to an external forcing cycle, which is yet another source of fluctuations.\n\nWe thus conjecture that a circadian clock must be built so as to be insensitive to daylight intensity fluctuations when entrained by the day\/night cycle, just as it is insentitive to molecular or temperature fluctuations, and that this can be achieved by keeping the oscillator as close to the free-running limit cycle as possible, scheduling coupling at a time when the oscillator is not responsive. An important consequence of this principle is that it allows us to discriminate between different possible coupling mechanisms for a given model, as our analysis revealed dramatic differences in the ability of different parametric modulations to buffer fluctuations. It also allows us to determine the preferred timing for a given coupling mechanism, which may prove very helpful when trying to identify the molecular actors which mediate the light information to the clock.\n\nWhen the FRP is close to 24 hours, as in much of our analysis it is easy to understand why robustness to daylight fluctuations requires that the forced oscillation shadows the free-running solution. Robustness manifests itself in the time profile remaining constant when subjected to random sequences of daylight intensity. This includes strongly fluctuating sequences as well as sequences of constant daylight intensity at different levels. Thus, the oscillator response should be the same at high and low daylight intensities, which implies that the solution must remain close to the free-running one as forcing is increased from zero. Note that this only holds in entrainment conditions, where coupling is not needed. When the clock is out of phase, strong responses to forcing are expected, with resetting being faster as forcing is stronger.\n\nWhen the natural and external periods are significantly different, the problem may seem more complex as coupling is required to correct the period mismatch. There is a minimal coupling strength under which the oscillator is not frequency-locked and entrainment cannot occur. Nevertheless, we showed that properly timing the coupling windows is as effective for oscillators with FRP of 23.5 and 25 hours as for the 24-hour example we had considered. Again, the forced solution remains close to the free-running limit cycle even if proceeding at a different speed to correct the period mismatch. 
This also shows that FRP is not a critical parameter for adjustment of the experimental data used here.\n\nA consequence of the small deviation of the limit cycle from the free-running one when coupling strength is varied is that oscillations should vary little upon a transition from LD to LL or DD conditions (see, e.g., Figs.\u00a0(G)-(H)). We searched the literature for examples of such behavior. Ref.\u00a0 provides an interesting comparison of models for the *Drosophila* and *Neurospora* circadian clocks which is illustrative for our discussion. In this study, the variation in amplitude is much less pronounced for the *Drosophila* model than for the *Neurospora* one (see Fig.\u00a02 of\u00a0). Concurrently, the sensitivity of the phase of the entrained oscillations to variations in the light-controlled parameter is much smaller for the *Drosophila* model (see Fig.\u00a03 of ), which is a necessary condition for robustness to daylight fluctuations. Another interesting comparison involves the one-loop and two-loop models of the *Arabidopsis* clock. The one-loop model clearly modifies its behavior upon entering DD conditions from LD (see Fig.\u00a05 of\u00a0) while the two-loop model preserves its average waveform when transitioning from LD to LL, except for the disappearance of the acute response to light at dawn (see Fig.\u00a06 of ). Thus, the two-loop model not only reproduces experimental data better but also seems more robust.\n\nThe *Drosophila* and *Neurospora* clock models analyzed in\u00a0 also differ in their response to forcing when their FRP is close to 24 hours\u00a0. A number of circadian models cannot be entrained when their FRP is too close to 24 hours because complex oscillations, period-doubled or chaotic ones, are easily observed for moderate to strong forcing. Indeed, it is expected that near resonance between the forcing and natural periods, the strong response exacerbates nonlinearities and favors complex behavior. Again, the *Drosophila* clock model appears to be more robust in this respect\u00a0. We stress that making the coupling invisible in entrainment conditions naturally addresses this issue. Dynamically uncoupling the oscillator from the diurnal cycle in entrainment conditions makes it immune both to fluctuations in daylight intensity and to destabilization in the face of strong forcing.\n\nAn important problem is how a clock with occasional coupling can adjust to different photoperiods so as to anticipate daily events throughout the year. We can only briefly touch on this question here as it requires understanding how the temporal profile of the coupling windows changes with photoperiod and thus a detailed description of the unknown light input pathways and additional feedback loops that control the timing of these windows. The key point is that the phase of the entrained oscillations is controlled by the position of the coupling windows. Thus the role of light input pathways and additional feedback loops, whose internal dynamics will typically be affected by input from photoreceptors and feedback from the TOC1\u2013CCA1 oscillator, is to time the coupling windows as needed for each photoperiod so that the correct oscillation timing is generated\u00a0. 
This question will be addressed in future work, together with the analysis of the luminescence time series recorded for different photoperiods.\n\nOur results also bring some insight into the recent observation that a circadian clock may require multiple feedback loops to maintain proper timing of expression peaks in response to noisy light input across the year\u00a0. We have shown here that a single two-gene loop can display impressive robustness to daylight fluctuations when its parameters are modulated with the right timing. As noted when discussing the response to different photoperiods, this requires the presence of additional feedback loops to generate the biochemical signal needed to drive the core oscillator appropriately, which we have not yet identified and modeled in *Ostreococcus*. Robustness to fluctuations thus implies a minimal level of complexity.\n\nFinally, robustness to intensity fluctuations may explain why it is important to have a self-sustained oscillator at the core of the clock, as a forced damped oscillator permanently needs forcing to maintain its amplitude, and is thereby vulnerable to amplitude fluctuations. Confining the dynamics near the free-running limit cycle allows the core oscillator to follow a pure phase dynamics, uncoupled from intensity fluctuations. Understanding how to construct such an oscillator will require taking into account the sensitivity of the free-running oscillator to perturbations across its cycle\u00a0.\n\nA simple organism such as *Ostreococcus* can apparently combine mathematical simplicity with the complexity of any cell. The low genomic redundancy of *Ostreococcus* is certainly crucial for allowing accurate mathematical modeling, leading to better insight into the workings of the clock. *Ostreococcus* therefore stands as a very promising model for circadian biology, but also more generally for systems biology.\n\n# Materials and Methods\n\nA minimal mathematical model of the transcriptional loop, where *Toc1* activates *Cca1* which in turn represses *Toc1*, consists of the following four differential equations:\n\n$$\\label{eq:model}\n \\begin{eqnarray}\n \\dot{M_T} &=& \\mu_T + \\frac{\\lambda_T}{1+(P_C\/P_{C0})^{n_C}} -\n \\delta_{M_T} \\frac{K_{M_T} M_T}{K_{M_T} + M_T}\\\\\n \\dot{P_T} &=& \\beta_T M_T -\n \\delta_{P_T} \\frac{K_{P_T} P_T}{K_{P_T} + P_T}\\\\\n \\dot{M_C} &=& \\mu_C + \\frac{\\lambda_C (P_T\/P_{T0})^{n_T}}{1+(P_T\/P_{T0})^{n_T}} -\n \\delta_{M_C} \\frac{K_{M_C} M_C}{K_{M_C} + M_C}\\\\\n \\dot{P_C} &=& \\beta_C M_C -\n \\delta_{P_C} \\frac{K_{P_C} P_C}{K_{P_C} + P_C}\n \\end{eqnarray}$$ Eqs\u00a0() describe the time evolution of mRNA concentrations $M_C$ and $M_T$ and protein concentrations $P_C$ and $P_T$ for the *Cca1* and *Toc1* genes, as they result from mRNA synthesis regulated by the other protein, translation and enzymatic degradation. *Toc1* transcription rate varies between $\\mu_T$ at infinite CCA1 concentration and $\\mu_T+\\lambda_T$ at zero CCA1 concentration according to the usual gene regulation function with threshold $P_{C0}$ and cooperativity $n_C$. Similarly, *Cca1* transcription rate is $\\mu_C$ (resp., $\\mu_C+\\lambda_C$) at zero (resp., infinite) TOC1 concentration, with threshold $P_{T0}$ and cooperativity $n_T$. Translation of TOC1 and CCA1 occurs at rates $\\beta_T$ and $\\beta_C$, respectively. 
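For readers who wish to experiment with the model, the following minimal Python sketch integrates the four equations above with a generic stiff solver. The parameter values are illustrative placeholders only, not the fitted values reported in the Tables, and the solver choice is an assumption (the authors' own adjustment pipeline is described in the remainder of this section).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameter set (placeholders, NOT the fitted values of the Tables).
p = {"mu_T": 0.005, "lam_T": 0.5, "P_C0": 10.0, "n_C": 2,
     "d_MT": 0.05, "K_MT": 5.0, "beta_T": 0.02, "d_PT": 0.02, "K_PT": 10.0,
     "mu_C": 0.005, "lam_C": 0.1, "P_T0": 50.0, "n_T": 2,
     "d_MC": 0.05, "K_MC": 5.0, "beta_C": 0.05, "d_PC": 0.02, "K_PC": 30.0}

def rhs(t, y, p):
    """Right-hand side of the TOC1-CCA1 loop: regulated mRNA synthesis,
    translation, and saturating enzymatic degradation."""
    M_T, P_T, M_C, P_C = y
    dM_T = p["mu_T"] + p["lam_T"] / (1.0 + (P_C / p["P_C0"]) ** p["n_C"]) \
        - p["d_MT"] * p["K_MT"] * M_T / (p["K_MT"] + M_T)
    dP_T = p["beta_T"] * M_T - p["d_PT"] * p["K_PT"] * P_T / (p["K_PT"] + P_T)
    hill = (P_T / p["P_T0"]) ** p["n_T"]
    dM_C = p["mu_C"] + p["lam_C"] * hill / (1.0 + hill) \
        - p["d_MC"] * p["K_MC"] * M_C / (p["K_MC"] + M_C)
    dP_C = p["beta_C"] * M_C - p["d_PC"] * p["K_PC"] * P_C / (p["K_PC"] + P_C)
    return [dM_T, dP_T, dM_C, dP_C]

t_end = 10 * 24 * 60.0  # ten days, time in minutes
sol = solve_ivp(rhs, (0.0, t_end), [1.0, 10.0, 1.0, 10.0],
                args=(p,), method="LSODA", max_step=10.0)
M_T, P_T, M_C, P_C = sol.y  # concentration time series
```

Light coupling can then be sketched by letting selected entries of `p` (for instance the protein degradation rates) depend on time through a gating profile, in the spirit of the modulation schemes discussed above.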
For each species $Y$, the Michaelis-Menten degradation term is written so that $\\delta_Y$ is the low-concentration degradation rate and $K_Y$ is the saturation threshold.\n\nModel\u00a0() has 16 free continuously varying parameters besides the cooperativities $n_C$ and $n_T$ which can be set to the integer values 1 or 2 by the adjustment procedure. mRNA concentrations are determined experimentally only relative to a reference value and protein profiles are not adjusted. Therefore, two solutions of Eqs.\u00a0() that have the same waveforms up to scale factors are equivalent. Therefore, we can eliminate four parameters by scaling Eqs.\u00a0(), with only 12 free parameters controlling adjustment when parameters do not vary in time, which optimizes parameter space exploration. Then parameters are rescaled so that the maximum value of protein profiles is 100 nM, the maximum value of *Cca1* mRNA profile is 10 nM and the *Toc1* and *Cca1* mRNA maximum values are in the same proportion as in microarray data. This makes it easier to compare regulation thresholds and degradation saturation thresholds relative to the maximum values of the four concentrations. When the number of modulated parameters is $m$, parameter space is $(12+m)$-dimensional.\n\nAdjustment was carried out by using a large number of random parameter sets as starting points for an optimization procedure based on a Modified Levenberg\u2013Marquardt algorithm (routine LMDIF of the MINPACK software suite\u00a0). Goodness of fit for a given parameter set was estimated by the root mean square (RMS) error between experimental and numerical mRNA levels, in logarithmic scale. Numerical integration was performed with the SEULEX algorithm\u00a0. Adjustment was carried out with 14 (resp. 2) Quad-Core Intel Xeon processors at 2.83 GHz during 72 hours for the 28-dimensional (resp. 12-dimensional) parameter space. Convergence was checked by verifying that the vicinity of the optimum was well sampled. In the uncoupled case, the ODE system is invariant under time translation so that its solutions are defined up to an arbitrary phase. An additional routine was then used to select the best-fitting phase.\n\nTo study the effect of daylight fluctuations, parameters were modulated as follows. $L(t) \\in \\left[0,1\\right]$ is the randomly varying light intensity, with $L^{\\text{ref}}=0.5$ the reference level. We define the reference modulation depth of the $Y$ parameter taking value $Y_L$ at standard light level and $Y_D$ in dark as $k^{\\text{ref}}_Y=\\left(Y_L-Y_D \\right)\/\\left(Y_L+Y_D \\right)$. $L(t)$ modifies modulation depth according to $k_Y=k_Y^{\\text{ref}}\\left[1 +\n \\beta\\;\\left(L-L^{\\text{ref}}\\right)\\right]$, where $\\beta$ quantifies sensitivity to light variation. The modified modulation depth fixes a new value for the day value, the dark value being unchanged. For models with occasional coupling, we use similar definitions with dark and light parameter values replaced by parameter values respectively outside and inside of the coupling window. The CCA1 stability modulation inside the window starting after dusk depends on the intensity of the previous day.\n\nThe parameters of the Gaussian-shaped modulation profiles were determined by optimizing resetting. For all possible variable initial time lag ranging from -12 to 12 hours, the effect of the coupling scheme based on the two profiles modulating TOC1 degradation and CCA1 degradation was characterized as follows. 
The time lag was applied to the free-running cycle adjusting the experimental data. Then, the coupling scheme was applied for one or five days. Finally, the coupling was switched off and the residual phase error was measured after two days. The set of six parameters defining the modulation profiles was obtained as the one which minimizes the RMS residual phase error across the 24-hour interval.\n\n# Acknowledgments\n\nWe thank Bernard Vandenbunder for his helpful guidance in the early stages of this work, and Constant Vandermoere for assistance with data analysis.\n\n# References\n\n# Figure Legends\n\n# Tables\n\n| Symbol | Description | FC (day) | FC (night) | FR |\n|:---|:---|---:|---:|---:|\n| $\\mu_T$ | Minimal *Toc1* transcription rate (nM\/min) | 0.0017 | 0.0016 | 0.0065 |\n| $\\lambda_T$ | CCA1-dependent *Toc1* transcription rate (nM\/min) | 0.93 | 0.29 | 0.67 |\n| $P_{C_0}$ | CCA1 level at *Toc1* repression threshold (nM) | 1.47 | 0.00 | 1.04 |\n| $n_C$ | Cooperativity of CCA1 | 2 | 2 | 2 |\n| $1\/\\delta_{M_T}$ | mTOC1 half-life (min) | 13.8 | 22.0 | 5.08 |\n| $K_{M_T}$ | mTOC1 degradation saturation threshold (nM) | 8.85 | 18.3 | 1.25 |\n| $\\beta_T$ | TOC1 translation rate (1\/min) | 0.013 | 0.023 | 0.016 |\n| $1\/\\delta_{P_T}$ | TOC1 half-life (min) | 29.9 | 29.0 | 3.58 |\n| $K_{P_T}$ | TOC1 degradation saturation threshold (nM) | 3.85 | 9.78 | 0.76 |\n| $\\mu_C$ | Minimal *Cca1* transcription rate (nM\/min) | 0.0075 | 0.017 | 0.052 |\n| $\\lambda_C$ | TOC1-dependent *Cca1* transcription rate (nM\/min) | 0.12 | 0.047 | 0.060 |\n| $P_{T_0}$ | TOC1 level at *Cca1* activation threshold (nM) | 100.4 | 1.49 | 44.1 |\n| $n_T$ | Cooperativity of TOC1 | 2 | 2 | 2 |\n| $1\/\\delta_{M_C}$ | mCCA1 half-life (min) | 13.3 | 52.2 | 0.82 |\n| $K_{M_C}$ | mCCA1 degradation saturation threshold (nM) | 0.56 | 3.76 | 0.063 |\n| $\\beta_C$ | CCA1 translation rate (1\/min) | 0.056 | 0.046 | 0.075 |\n| $1\/\\delta_{P_C}$ | CCA1 half-life (min) | 55.5 | 92.3 | 54.7 |\n| $K_{P_C}$ | CCA1 degradation saturation threshold (nM) | 32.4 | 36.0 | 46.0 |\n\n**Model parameter values**\n\n**Supporting Information**\n\n## Gated-coupling design in the weak modulation limit\n\nThe free oscillator model was shown to adjust remarkably well the RNA microarray data from LD12:12 experiments. A tempting hypothesis is that the synchronization of the free-running oscillator to the day-night cycle involves a light-dependent gated-coupling mechanism that has a restricted effect on the RNA traces when phase-locked. We develop here a systematic method to catalogue the coupling schemes that synchronize the free oscillator to the diurnal cycle while preserving the adjustment score obtained in the absence of coupling. For weak enough coupling strength, any coupling scheme that achieves the correct locking phase preserves the adjustment score. Those coupling schemes can be found in the framework of perturbation theory in the vicinity of a periodic orbit \\[1,2,3\\], assuming that the driving force period is close enough to the internal clock period. We consider the state vector of a nonlinear oscillator, which represents the concentrations of the molecular clock components. In constant dark conditions, the concentration vector $\\mathbf{X}$ evolves according to: $$d\\mathbf{X}\/dt=\\mathbf{F}(\\mathbf{X},\\mathbf{p_0})\n\\label{eq:co}$$ Eq. has a periodic solution $\\mathbf{X_{\\gamma}}(t)$ corresponding to a stable limit cycle of period $T$ close to $24$ hours. 
We assume that the coupling between the light and the circadian oscillator is mediated by a set of $N$ components ($k$ is the index), which modulate the parameter vector in the direction of $\\mathbf{dp}_k$: $$\\mathbf{p}(t)=\\mathbf{p_0}+\\sum_{k=1,N} L_k(t,\\tau_k,(t_m)_k) \\mathbf{dp}_k$$ where the $24h$-periodic scalar function $L_k(t,\\tau_k,(t_m)_k)$ represents the temporal profile of activation (rectangular- or gaussian-shaped profiles in the present paper) of the light-dependent component $k$ with $\\tau_k$ and $(t_m)_k$ characterizing the effective coupling window duration and center ($t=0$ correspond to the night-day transition or CT0).\n\nA small enough parametric impulse perturbation applied at phase $u$ induces an infinitesimal change of the circadian oscillator phase defining a $T$-periodic scalar function $Z_{piPRC}(u,\\mathbf{dp})$ called infinitesimal phase response curve \\[2\\] or, to be more precise, parametric impulse phase response curve \\[3\\]: $$Z_{piPRC}(u,\\mathbf{dp}_k)=(\\mathbf{dp}_k)^T.\\mathbf{Z_p}(u)$$ where $$\\mathbf{Z_p}(u)=[\\frac{\\partial \\mathbf{F}(\\mathbf{X}_{\\gamma}(u))}\n{\\partial \\mathbf{p}}]^T \\frac\n{\\partial\\phi(\\mathbf{X}_{\\gamma}(u))}{\\partial{\\bf X}}$$\n\nThen, the phase change induced by the light sensed during the daytime can be derived from the convolution of the temporal profile of the light-sensing components with the piPRC: $$\\Delta\\phi=\\sum_{k=1,N} \\int_0^{T} L_k(u,\\tau_k,(t_m)_k)\nZ_{piPRC}(u+\\phi,\\mathbf{dp}_k) du$$ where $\\phi$ is the phase of the oscillator at CT0. A stable entrainment state requires that the scalar functions $L$ and $Z_{ipPRC}$ satisfies: $$\\left\\{ \\begin{array}{rll}\n\\sum_{k=1,N} \\int_0^{T} L_k(u,\\tau_k,(t_m)_k) \\,\nZ_{piPRC}(u+\\phi^*,\\mathbf{dp}_k) du=\\delta \\phi^* \\\\ \n\\sum_{k=1,N} \\int_0^{T} L_k(u,\\tau_k,(t_m)_k) \\,\nZ_{piPRC}'(u+\\phi^*,\\mathbf{dp}_k) du<0 \n\\end{array} \\right. \n\\label{eq-gp}$$ where $\\phi^*$ is the locked phase (relative to CT0) and $\\delta\\phi^*$ is the phase change induced by the period mismatch between the free oscillator and the day-night period, which is assumed to be small with respect to $T$.\n\nFor any modulated parameter set $\\mathbf{dp}$ whose $Z_{piPRC}$-function is equal to $\\delta \\phi^*$, one can always find $\\tau_k$ and $(t_m)_k$, that satisfies Eqs. above. In the case where there is a unique coupling scheme ($N=1$) with a rectangular profile, the coupling interval satisfies: $$\\left\\{ \\begin{array}{rll}\n\\int_{t_m-\\tau\/2}^{t_m+\\tau\/2} \\, Z_{piPRC}(u,\\mathbf{dp}) du=\\delta \\phi^*\\\\\n\\int_{t_m-\\tau\/2}^{t_m+\\tau\/2} \\, Z_{piPRC}'(u,\\mathbf{dp}) du<0\n\\end{array} \\right.\n\\label{eq-square}$$ Figures 5 and S4 show the numerical solutions of this equation with $\\delta\n\\phi^*$ equal to 0 (the FRP being equal to 24 hours), which determine the coupling intervals (compatible with experimental data) for positive and negative modulation of the 16 parameters of the model.\n\n# References\n\nKramer MA, Rabitz H, Calo JM (1984) Sensitivity analysis of oscillatory systems. Appl Math Model. 8: 328-340.\n\nRand DA, Shulgin BV, Salazar D, Millar AJ (2004) Design principles underlying circadian clocks. J R Soc Interface. 1: 119-30.\n\nTaylor SR, Gunawan R, Petzold LR, Doyle FJ 3rd (2008) Sensitivity Measures for Oscillating Systems: Application to Mammalian Circadian Gene Network. IEEE Trans Automat Contr. 
53: 177-188.\n\n\"image\"\n\nFigure S1: **Transition from light\/dark alternation (LD) to constant light (LL) and constant darkness (DD) for the fully coupled model.** Time evolution of mRNA concentrations for the fully coupled model shown in Fig.\u00a02(A) for various light protocols: LD alternation (dashed, black), one LD period from ZT0 to ZT24 then constant light (in red) and one LD period from ZT0 to ZT24 then darkness (in blue). *Cca1* and *Toc1* mRNA concentrations are shown in the top and bottom frame, respectively.\n\n\"image\"\n\nFigure S2: **Influence of experimental errors on adjustement of a free running oscillator model to data**. Alternate target profiles with samples randomly chosen inside the interval of variation observed are generated and adjusted. Each random target corresponds to a slightly different parameter set and to a different adjustment RMS error (A) RMS error distribution; (B) The five target profiles most distant from each other have been selected and are associated with different colors. Crosses (resp. circles) indicate the *Cca1* (resp *Toc1*) mRNA target samples, the solid line is the numerical solution of the adjusting model.\n\n\"image\"\n\nFigure S3: **Probability distribution for parameter values in parameter sets with adjustment RMS error below 10%**. Parameters are determined as explained in Methods. The percentage of occurrence is evaluated for bins of width 0.2 in $\\log_{10}$. The probability distributions of parameter values for the model with all parameters modulated are shown in red and blue for the day and night values, respectively. The probability distribution of parameter values for the model with all parameters constant is shown in black.\n\n\"image\"\n\nFigure S4: **Characterization of coupling schemes.** (A) iPRC characterizing the phase change induced by an infinitesimal perturbation of parameters $\\lambda_X$, $\\beta_X$ and $K_X$. (B) Characterization of time position, $tm$, and duration $\\tau$ of couplings with a rectangular gating profile satisfying Eq. (1). Parameters are modulated either positively (red) or negatively (blue). (C) Characterization of time position and duration of couplings with a rectangular gating profile adjusting experimental data with a RMS error below $10\\%$ for four different levels of coupling strength (blue: $p\/p_0=1.17$; cyan: $p\/p_0=1.17$; red: $p\/p_0=0.85$; orange: $p\/p_0=0.5$; $p\/p_0$ being the ratio between the parameter values within and outside the coupling window)\n\n\"image\"\n\nFigure S5: **Resetting of the clock model of Fig.4 in response to a phase shift of the day\/night cycle**. Solid curves display the residual phase shift of the clock after 1 (black) and 5 (blue) day\/night cycles as a function of the initial phase shift. (A) TOC1 degradation rate is multiplied by 2.1 between ZT0 and ZT6.5. (B) CCA1 degradation rate is multiplied by 0.6 between ZT12.8 and ZT13.95. (C) Figure 6C is reproduced here for convenience. TOC1 (resp. CCA1) is multiplied by 2.1 (resp. 0.6) between ZT0 and ZT6.5 (resp. ZT12.8 and ZT13.95), which results in uniform convergence to phase-locking. Phase RMS error after 5 day\/night cycles is 25\u00a0min while the maximum error is 1\u00a0hour.\n\n\"image\"\n\nFigure S6: **Response of the fully coupled and occasionally coupled clock models to fluctuations in daylight intensity occurring on a time scale of one hour**. 
The figure is otherwise similar to Fig\u00a05.\n\n\"image\"\n\nFigure S7: **Response of the two occasionally coupled clock models of Fig.\u00a08 to fluctuations in daylight intensity**. (a) Light intensity varying randomly from day to day. The time evolution of TOC1 protein concentration is shown for: (b) the clock model with a FRP of 23.5h; (c) the clock model with a FRP of 25h. The figure is otherwise similar to Fig\u00a05.","meta":{"dup_signals":{"dup_doc_count":16,"dup_dump_count":7,"dup_details":{"curated_sources":2,"2017-13":3,"2015-18":3,"2015-11":2,"2014-10":1,"2024-30":1,"unknown":4}},"filename":"out\/1001.5258_extract_thommen-robustness-clocks.tex.md"},"subset":"arxiv"} +{"text":"# Abstract\n\nThe number of people using online social networks in their everyday life is continuously growing at a pace never seen before. This new kind of communication has an enormous impact on opinions, cultural trends, information spreading and even on the commercial success of new products. More importantly, online social networks have emerged as a fundamental organizing mechanism in recent country-wide social movements. In this paper, we provide a quantitative analysis of the structural and dynamical patterns emerging from the activity of an online social network around the ongoing May 15th (15M) movement in Spain. Our network is made up of users that exchanged tweets in a time period of one month, which includes the birth and stabilization of the 15M movement. We characterize in depth the growth of such a dynamical network and find that it is scale-free with communities at the mesoscale. We also find that its dynamics exhibits typical features of critical systems such as robustness and power-law distributions for several quantities. Remarkably, we report that the patterns characterizing the spreading dynamics are asymmetric, giving rise to a clear distinction between information sources and sinks. Our study represents a first step towards the use of data from online social media to comprehend modern societal dynamics.\n\n# Introduction\n\nModern online socio-technological systems are producing a deep change in our traditional networking paradigms and in the way we communicate with each other. At the same time, online social media nowadays constitute efficient and fast means to group together many social agents around a common issue. In this way, new types of economic, financial and social phenomena are arising. An example of the latter is given by the so-called Arab revolts, which have materialized thanks to these new communication platforms. The protests have been mediated by the use of social networks such as Facebook, Twitter and YouTube, which have been critical for the birth and consolidation of campaigns involving strikes, demonstrations, marches and rallies.\n\nOn the other hand, online social networks not only modify in a radical way the dynamics of information and opinion spreading, but are also making our world even more global. More importantly, these platforms generate an enormous amount of time-stamped data, making it possible for the first time to study the fast dynamics associated with different spreading processes at a system-wide scale. These novel and rich data niches allow testing different social dynamics and models that would otherwise be highly elusive with traditional data-gathering methods. Additionally, the availability of data enables the study of phenomena that take place on time scales ranging from a few minutes or hours to a year-long duration. 
An example of the former kind of fast dynamics is given by large sport or cultural events, whereas cooperative content production such as the case of Wikipedia typically occurs in months or even years, thus, in a much slower time scale.\n\nIn this paper, we study the structural and dynamical patterns of the network made up by twitter users who have been involved in a social phenomenon that is currently taking place in Spain: the so-called May 15th movement (henceforth referred to as 15M). This movement-in-the-making had been brewing for a while in the social media, but took off on May 15th when the first demonstrators camped in a central square in Madrid, Spain. From that moment on, the protests and camps spread throughout the country. As many of the adherents are online social media users, the growth and stabilization of the movement was closely reflected in time-stamped data of twitter messages, which we have gathered and analyzed. This will allow us to elucidate the mechanisms driving the emergence of this kind of social phenomenon, and whether it shares dynamical and structural features with other natural, social and technological processes . Additionally, on more general scientific grounds, a social phenomenon like the 15M movement is an excellent opportunity to understand network formation processes and online spreading dynamics. The ultimate aim is to further advance our understanding of this kind of dynamics and eventually be able to make predictions based on real time data monitoring.\n\nIn what follows, we present the results of our analysis. On the one hand, we statistically characterize the structural patterns of the network of users who sent or received tweets containing keywords related to the 15M movement. We find that this network displays the typical features of other networks in Nature such as scale-free degree distributions, a community structure at the mesoscale and high structural robustness . On the other hand, we have also analyzed the dynamical patterns characterizing the spreading of information over the 15M network. Our results show that the 15M diffusion dynamics is highly asymmetric. Admittedly, a relative large fraction of the system is actively trafficking, but a great part of the overall traffic is delivered to a few users that do not pass them anymore, thus constituting a sort of information sinks. We round off our analysis by comparing our results with those reported in the literature for other kinds of online dynamical processes.\n\n# Methods\n\nThe data used in this study is a set of messages (tweets) that were publicly exchanged through *www. twitter.com*. The whole time-stamped data collected comprises a period of one month (between April 25th, 2011 at 00:03:26 and May 26th, 2011 at 23:59:55) and it was archived by a local start-up company, *Cierzo Development Ltd* using the SMMART Platform. This platform is evolving into a new concept called \"Open Social CRM,\" which combines concepts in monitoring tools, CRM tools, social tools and a philosophy of open innovation. The company restricts its collection to messages in Spanish language that come preferentially from users within or related to Spain. 
The internals of data collection are private to the company, but basically 23 hours of data are homogeneously collected each day, always leaving the same timeframe (16:00 to 17:00 CET) to readjust the database due to the introduction of new Spanish nodes, the purging of the non-Spanish related ones, etc.\n\nTo filter the whole sample and keep only those messages related to the 15M movement, we selected 70 keywords (*hashtags*, see *Supplementary Information*) which were systematically used by the adherents to the demonstrations and camps. Next, the extracted sample was examined for missing hashtags $-$ of the top ten, only one was not in the selected set, which is likely related to its bilingual nature (*$\\#$acampadabcn*). The filtered data set appears to be representative enough of the total traffic related to the 15M movement produced during the period analyzed. As a matter of fact, a comparison with other databases, such as *topsy.com*, which aims to collect the whole set of twitter messages, shows that for the same period there were about 390,000 messages with the word \"*acampadasol*\" and 190,000 for the hashtag \"*$\\#$nolesvotes*\". Our sample is made up of 189,000 and 66,000 messages and hashtags, respectively, i.e., roughly above a third of the total number of messages.\n\nOnce this process is finished, the final sample consists of 581,749 tweets, out of which 46,557 were identified as *retweets* of unknown origin, and therefore were discarded. In turn, these tweets were produced by 85,851 different users. To complete the data set, we located the references to other users inside each message. These references are marked in the system by an at sign, \"$@$username\". A user receives a notification, usually via email, each time a mention happens, and the messages having mentions are also copied to a special tab in the user interface. The total number of messages having at least one reference was 151,222. In some cases, the tweet is addressed to more than one user, so that the total number of (source, target) pairs extracted from these messages was actually higher: 206,592. This is the number of directed arrows in our network. We stress again that our network is a dynamical instance of a larger underlying network (i.e., that made up of followers and followees on Twitter).\n\nFinally, although not directly related to the study presented here but important for complementary studies, data for all the involved users were scraped directly from *twitter.com* using a cloud of 128 different nodes of a subnet. The scraping was successful for 84,229 users, for whom we also obtained their official list of followers. Moreover, about half of them can be associated with a location (city), which was then translated into geographical coordinates via a standard geo-localization service from *Yahoo*. It is worth remarking that the extraction of followers gave a list on the order of 3 million users, which roughly coincides with the order of the audience estimated by *Twitter* in Spain.\n\n# Results and Discussion\n\nThe availability of time-stamped data allows us to closely track the birth and development of the network made up of users who exchanged tweets related to the 15M movement during the period analyzed. In this network, every node represents a user while a link between two nodes is established whenever they exchange a message. 
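As an illustration of how such a mention network can be assembled, the following Python sketch extracts "@username" references from a list of tweet records and accumulates them into a weighted, directed graph. The field names and the use of NetworkX are assumptions for the sake of the example; the actual collection pipeline described above is proprietary.

```python
import re
import networkx as nx

MENTION_RE = re.compile(r"@(\w+)")

def build_mention_network(tweets):
    """tweets: iterable of dicts with (hypothetical) keys 'user' and 'text'.
    Returns a weighted, directed graph with an arc u -> v for each mention of v by u."""
    g = nx.DiGraph()
    for tw in tweets:
        source = tw["user"].lower()
        for target in MENTION_RE.findall(tw["text"]):
            target = target.lower()
            if target == source:
                continue  # ignore self-mentions
            if g.has_edge(source, target):
                g[source][target]["weight"] += 1
            else:
                g.add_edge(source, target, weight=1)
    return g

# Toy example with two messages
g = build_mention_network([
    {"user": "alice", "text": "@acampadasol nos vemos en Sol #15m"},
    {"user": "bob", "text": "RT @alice: @acampadasol nos vemos en Sol #15m"},
])
print(g.number_of_nodes(), g.number_of_edges())
```

Because the analysis works with accumulated data, the same graph object can simply be updated as new tweets arrive, with snapshots stored at the desired time resolution.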
We have made a movie (see the *Supplementary Information* for a low resolution version or go to *http:\/\/15m.bifi.es\/index.php* for a high resolution one with downloading options) that reproduces the temporal evolution of the networks and the dynamics of message exchange during the period analyzed. The reader will note the highly dynamic character of the network as well as the dimension of the social phenomenon being analyzed.\n\nThe network constructed as described above is weighted and directed, i.e., a link from $i$ to $j$ means that $i$ sent at least one message to $j$ and the weight of the link $i\\rightarrow j$ stands for the actual number of such messages. Therefore, the adjacency matrix of the network is not symmetric ($j$ does not necessarily send a message to $i$). Moreover, it is worth noticing that we always work with accumulated data, such that the network at a certain time $t$ includes every message (link) produced at any time $t' \\le t$, i.e., once a link is established, it connects the two end nodes forever. Once the network is built, we are able to characterize it from a topological point of view, within the framework of complex network theory . In doing so, we discuss local and global descriptors, as well as the network structure at the mesoscale level. Additionally, we also analyze the dynamics of information spreading over the 15M network and compare our results with those already reported for other dynamical processes that are mediated by the Web 2.0.\n\n## Network Growth and Structure\n\nThe first point of interest concerns the structural growth pattern. We wonder whether a collective mobilization of thousands of agents demands a slow, progressive increase in size, or whether social networking platforms rather enable an abrupt emergence. In Figure (top) we present three snapshots of the system for different days, relative to day $D$ (May 15th). Colors stand for the \"age\" of the node: early active users are coded in yellow, while those that joined the network in successive days are coded in green, red, etc. Black is left for the latest adopters (people whose activity began at $D+10$). Besides, the size of the nodes has been made proportional to their activity, taking into account both incoming and outgoing tweets (however, for the sake of clarity, such proportion has been truncated at $k_{in}+k_{out}=200$ in the networks displayed in the figure). Even this simple representation of the evolution of the 15M network is already indicative of the growth in the number of agents once the movement took off and as time goes on.\n\nThe results depicted in the bottom panel of Figure further illustrate the way in which the network evolves by gaining adherents. The figure represents the proportion of active nodes at time $t$ (with a resolution of 12 hours) in the giant component relative to the total number of users in the network at the end of the growth process. As we can see from the figure, the formation of the network and its later increase in size does not proceed gradually but in a sequence of bursts concentrated in just a few days (from day $D$ to day $D+7$). Obviously this process is driven by the events surrounding the movement: as mentioned, on day $D$ the protesters decided to camp at the *Puerta del Sol* square in Madrid, which in turn elicited huge attention from the media and made the difference as far as the spread of the movement to a country-wide scale is concerned. Besides, from our data, it appears that the number of active users saturates after $D+7$. 
It is interesting to note that on May 21st ($D+6$), the day preceding local and regional elections, more than 80% of the network was already formed.\n\nBeyond structural growth, a second key aspect of the network under study concerns the distributions of strengths. The strength $s$ of a given node $i$ is defined, as usual, by the sum of the weights of the links incoming to and outgoing from node $i$. In our case, it is also important, for the discussion that will follow, to further divide this magnitude into two contributions. On the one hand, we have the strength derived from the weights of links incident to the node, $s_{in}$. This magnitude accounts for the total traffic (number of tweets) that a given node receives from its neighbors. Conversely, $s_{out}$ represents the sum of the traffic generated at a node, i.e., the number of tweets this user sends out. Additionally, let $P(s_{in})$ and $P(s_{out})$ be the cumulative distributions of both strengths, which can be measured at different instants $t$ of the network development.\n\nFigure shows the cumulative distributions of the previous quantities at several times. As can be seen, even before the occurrence of the events that triggered public protests on day $D$, both $P(s_{in})$ and $P(s_{out})$ follow power-laws $P(s)\\sim s^{-\\gamma}$, but with different exponents ($\\gamma_{in}=1.1$ and $\\gamma_{out}=2.3$, respectively, as measured at $D+10$). Similar plots for the degree of the nodes exhibit the same behavior. It is well-known that the statistical properties of these variables in other technological, social and natural systems are also heterogeneously distributed. Therefore, the fat-tailed distributions that characterize the topology of the 15M network are not unique, but are rather ubiquitous in Nature. Nonetheless, the fact that the 15M network is scale-free has deep consequences regarding a number of relevant issues including its origin, complexity, robustness and, from a dynamical point of view, the way in which information flows over the system. As the network obtained comes from the activity of the nodes, the heavy-tailed distribution of both nodes' degrees and strengths suggests a dynamics lacking any typical or characteristic scale.\n\nOn the other hand, the dynamical asymmetry between incoming and outgoing degrees or strengths is not surprising either. Indeed, individual behavior, which ultimately determines the resulting (out) dynamics, is an intended social action, but the emergent properties of the collective behavior of agents are unintended . Essentially, subjects decide when and to whom a given message is sent. Therefore, the aggregate behavior of all agents and their popularity (i.e., how many incoming links a node has) result from individual choices. This is what is reflected in the in and out distributions. As a matter of fact, the exponent of the power law characterizing the degree probability distribution $p(k)$ lies in the interval $(2,3]$, as usually found in most real-world networks. Interestingly, spreading dynamics such as rumor and disease propagation processes are most efficient for scale-free networks whose exponent is precisely in this range . Finally, the strength distribution for the tweets sent, $p(s_{out})$, also resembles a power law function with an exponent larger than 3, although in this case the distribution exhibits an exponential cut-off. 
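The in- and out-strengths and their empirical cumulative distributions can be obtained directly from the weighted, directed graph sketched earlier. The snippet below is a rough illustration; the crude log-log slope is only a stand-in for the exponent, whereas a proper analysis would rely on dedicated power-law fitting methods.

```python
import numpy as np

def ccdf(values):
    """Empirical complementary cumulative distribution P(S >= s)."""
    v = np.sort(np.asarray(values, dtype=float))
    return v, 1.0 - np.arange(len(v)) / len(v)

# g is the weighted, directed mention network from the previous sketch
s_in = np.array([s for _, s in g.in_degree(weight="weight")], dtype=float)
s_out = np.array([s for _, s in g.out_degree(weight="weight")], dtype=float)

x_in, P_in = ccdf(s_in[s_in > 0])
x_out, P_out = ccdf(s_out[s_out > 0])

# Rough estimate of the cumulative-distribution exponents (log-log slopes)
gamma_in = -np.polyfit(np.log(x_in), np.log(P_in), 1)[0]
gamma_out = -np.polyfit(np.log(x_out), np.log(P_out), 1)[0]
```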
This might be due to the fact that sending messages has an associated cost in terms of bandwidth availability, the cognitive capacity to produce different messages and ultimately an unavoidable physical limitation to type them .\n\nAnother aspect of capital interest regards how the overall traffic is generated and delivered. One of the main consequences of the functional form of the strength distributions is presented in Figure . The emergence of hubs, namely, the signaling feature of scale-free networks, leads to a predictable oligopoly in the way information is spread. In Figure , we observe that the number of tweets sent grows with the number of active users of the network. The curves corresponding to different days (i.e., instances of the network) nearly collapse into a single one. This means that as users join the network, the traffic generated scales accordingly. Moreover, the figure indicates that, for instance, roughly the 10% of active subjects generate the 52% of the total traffic. This is another indication of the dynamical robustness of the network to random failures but at the same time of its fragility to attacks directed towards that 10% of users. More remarkably, the results depicted in the figure are in sharp contrast with the activity patterns corresponding to received tweets. In this latter case, as time goes on, the number of in-strength hubs decreases. As shown in the figure, by $D+10$, less than 1% of users receive more than 50% of the information. As we will show later on, these nodes correspond to authorities or mass media, which the adherents identify as main receptors (government) of or potential spreaders (mass media) for their messages. However, what at a priori seems to be a good choice, turns out to be harmful for the process of information spreading. As a matter of fact, we have checked that these hubs, which we call *information sinks* do receive a lot of messages but rarely act as spreaders within the network. As a consequence, almost all messages that arrive to those nodes are not redelivered and hence lost. In this sense, our results show that while the delivering of information is shared by a relative large number of users that keep the \"social temperature\" of the movement, most of this information is simply directed towards a few highly connected targets that might not pass the voice any longer (i.e., they are not active spreaders). Nonetheless, the information exchanged is public and users can therefore access it. This would however imply an individual action (to check a given user's timeline) that is not captured in our twitter data.\n\n## Community Structure\n\nThe modular structure is pervasive in many natural, social and technological networks. Generally speaking, modules are islands of highly connected nodes separated by a relatively small number of links. This meso-level skeleton is likely to be relevant to understanding dynamical processes in networked systems. Agents in social networks tend to gather with those who share cultural traits (homophily) or professional interests , and more specifically, political communication networks tend to exhibit a clustered structure along political opinion lines .\n\nWe have analyzed the community structure of the 15M network once its size stabilizes, i.e., at $t=D+10$. We have applied a random walk-based algorithm that optimizes a map equation on a network structure . 
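One possible way to reproduce this step is sketched below with python-igraph's Infomap implementation; the paper does not specify the software actually used, so this is only a stand-in, and ranking modules by their total node strength is a rough proxy for the flow-based ranking discussed next.

```python
import igraph as ig

# Convert the NetworkX mention network g (see the earlier sketch) and run
# Infomap, a random-walk / map-equation community detection method.
g_ig = ig.Graph.from_networkx(g)
clusters = g_ig.community_infomap(edge_weights="weight", trials=10)

# Rank modules by total node strength (a proxy for random-walker activity)
# and tag each of them with its strongest node.
strength = g_ig.strength(weights="weight", mode="all")
top = sorted(clusters, key=lambda c: -sum(strength[v] for v in c))[:30]
for community in top:
    hub = max(community, key=lambda v: strength[v])
    print(len(community), g_ig.vs[hub]["_nx_name"])
```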
Although alternative community detection algorithms are at hand, we chose the previous strategy because it is suited for networks, as is actually our case, in which the dynamics of information flow is relevant. These types of algorithms rely on the intuitive idea that, if communities exist, a random walker tends to get trapped in them due to their dense within-connectivity . The output of such an information-theoretic algorithm is a partition made up of 6388 modules. Most of these communities have fewer than ten nodes. We focus our analysis on the 30 most important modules from a dynamical perspective, i.e. those which concentrate most of the random walker's activity. These modules do not necessarily coincide with the first 30 communities ranked according to their size, but all of them contain over 100 nodes. Figure shows these 30 communities in a compact view (each node represents a community) . Furthermore, each community is assigned a tag, corresponding to the most central node in that community. Again, these nodes have been identified as being dynamically dominant within their modules, thus they play an outstanding role in the dynamics of information. Our results show that modules are highly hierarchical and that nodes that are central to their communities, i.e., *local hubs*, are mostly hubs at the global scale as well.\n\nThe mesoscale structure allows us to get deeper insights into social aspects of our case study. First, the tags identifying the 30 largest communities are highly heterogeneous. 6 of these modules correspond to important mass media (newspapers and television), which points to the otherwise intuitive fact that users rely on these agents to amplify their opinion. The same can be said of 3 modules corresponding to famous journalists. More interestingly, 7 modules correspond to on-line activists and\/or veteran bloggers. These agents are unknown to most people, but they are present in the network from its birth and enjoy a solid reputation that facilitates their being considered a reference in the movement. Remarkably, 7 modules are formed by camps in 7 different cities. Madrid is of course the main one, as the movement began there (*acampadasol*, which comprises over 3000 nodes). Other cities are Barcelona, Granada, Zaragoza, Valencia, Seville and Pamplona.\n\nThe fact that communities are geographically defined suggests some additional conclusions: (i) the mesoscale reflects the autonomy of each of the assemblies throughout the Spanish geography. Each of these modules hardly connects to any other, indicating low communication between them; (ii) the exception to the previous point is Madrid: each minor camp holds a strong communication interchange with the community represented by *acampadasol*. Taking points (i) and (ii) together, it can be safely said that the movement is highly centralized, because in most cases a peripheral settlement is only influenced by Madrid and one or two minor ones. Finally, (iii) despite the potential of Web 2.0 communication platforms, the data indicate that these media are mostly used to communicate with geographically close people. In other words, the network is *global*, but communication is mainly *local*. This is further verified in Table , where we have summarized the percentage of people whose geolocated information coincides with that of the module (city).\n\n## Popularity Evolution\n\nThe Web 2.0 has brought to network science the challenge to deal with highly dynamic, changing structures. 
Besides source and target nodes, and a (perhaps weighted) link between them, one must now consider a new ingredient: time. In this context, an interesting issue is related to the evolution of particular nodes: understanding how an element (be it a Wikipedia entry or a novel trend in social networks) comes into existence (appears in the network) and develops. Of further interest is to elucidate how a subset of the network's components ends up as a \"popular\" entity. This is a key aspect in network development, as popular agents eventually have an impact on other agents' opinions, acting as a referent, be those opinions related to politics, culture or business.\n\nTo capture the dynamics of popularity, we follow the framework recently proposed in . The natural quantity to measure popularity in a communication network is the number of messages that arrive at a node, which corresponds to that node's in-strength $s_{in}$, and the rate at which $s_{in}$ changes. Hence, a way to grasp how the activity of a node evolves is to consider its logarithmic derivative $[\\Delta s\/s]_{t} = (s_{t}-s_{t-1}) \/ s_{t-1}$, i.e., the relative variation of strength in a time unit (we omit the subindex \"in\" for clarity). Figure displays the evolution of the latter variable for some arbitrarily chosen nodes among those that are information sinks. Beyond an initial surge typically observed in many nodes, the time series of the logarithmic derivative exhibit a bursty behavior. Fluctuations depend on exogenous events, in strong parallelism with the external circumstances that drive the whole network's strong changes. It is noteworthy that these patterns closely resemble other, less conflictive, examples of popularity evolution in Wikipedia or the Web .\n\nOn the other hand, Figure shows how bursts are distributed according to their magnitude for two different time intervals but the same time granularity (1 day). The observed pattern, which is the same regardless of the time intervals under consideration, clearly shows heavy-tailed distributions, again in close resemblance to results already reported for other web-mediated dynamics and a variety of critical phenomena in physical, economic and social systems. As a matter of fact, a simple model can account for the observed burst distribution. The so-called *rank model* is specially conceived for networks in which prestige, rather than degree-based preferential attachment, plays a central role in determining nodes' connectivity . The rank model depends on a prestige measure that is used to rank nodes. In this model, the probability that a new node that joins the network at $t+1$ connects to an older one $j$ is given by $$p(t+1 \\rightarrow j) = \\frac{R_{j}^{-\\alpha}}{\\sum_{i=1}^{t} R_{i}^{-\\alpha}}$$ where $R_{j}$ is the rank of node $j$ and $\\alpha > 0$ determines the exponent $\\gamma$ of the resulting power-law degree distribution $p(k)$, such that $\\gamma = 1 + \\frac{1}{\\alpha}$. Figure compares the burst distributions resulting from the data and from the model. The results shown correspond to the case in which nodes are ranked according to their age and $\\alpha$ has been set to $0.9091$ (for this value we obtained the best fit to the actual degree distribution of the empirical network). 
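A minimal sketch of the rank model with age-based ranking is given below. It assumes, for simplicity, that exactly one new node joins the network per time step and attaches a single link; how the synthetic growth is matched to the empirical, non-steady growth is described right after this sketch.

```python
import numpy as np

def rank_model(n_nodes, alpha=0.9091, seed=0):
    """Grow a network in which the node arriving at step t+1 attaches to an
    existing node j with probability proportional to R_j**(-alpha), where the
    rank R_j is simply the arrival order (age-based ranking)."""
    rng = np.random.default_rng(seed)
    in_strength = np.zeros(n_nodes)
    for t in range(1, n_nodes):
        ranks = np.arange(1, t + 1, dtype=float)  # R_j = arrival index of node j
        prob = ranks ** (-alpha)
        prob /= prob.sum()
        j = rng.choice(t, p=prob)                 # target of the new link
        in_strength[j] += 1
    return in_strength

s = rank_model(10_000)
# The resulting degree distribution is a power law of exponent ~ 1 + 1/alpha;
# recording snapshots of in_strength at regular intervals also allows the
# logarithmic derivative (burst magnitude) defined above to be computed.
```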
Moreover, we note that in order to simulate the non steady growth of the network, we have considered for the synthetic case that 1 day has gone by when the size of the network being generated coincides with that of the empirically assembled network for that time.\n\nAs we can see from the figure, even under the simplest rank rule, the burst magnitude distribution is nicely reproduced. As for our specific social context, the previous results do not imply that the driving mechanism behind the evolution of the 15M network is that simple, but illustrate that bursty activity of this sort can be produced by generic mechanisms. Given that this self-organized activity is widespread in nature, there are no reasons to consider that the network has its origin in external actions. Nonetheless, it must be pointed out that although the rank model grasps the main observed trends, both regarding structural characteristics and dynamical facts, yet other important aspects extracted from the data are not captured by the model. These limitations demand model refinements beyond the scope of the present study.\n\n# Conclusions\n\nThe social phenomenon here presented as a case study is a collective endeavor that is expressed at many levels, ranging from public demonstrations and camps to the presence of news in the mass media. In this work we have analyzed data from time-stamped, online activity in a specific social networking site during the formation and stabilization of this social movement. From a scientific point of view, these data represent a challenge as most network studies typically deal with a static structure.\n\nUndoubtedly, there are many facts that can be easily identified as the grounds of the 15M movement. Among them, the world-wide economic crisis and the impact it has had on society. Nonetheless, the particular events that triggered the growth of the whole movement remain unknown and are beyond the scope of this work. Addressing these questions would probably require an in-depth semantic analysis of the contents of interchanged messages. From its onset, our statistical characterization of the communication network built from tweets exchanged between adherents (and opponents) reveals a strong resemblance to well-known phenomena in natural and manmade systems, which are admittedly self-organized.\n\nAdditionally, the 15M movement also raises relevant questions with sociological consequences. We argue that information centralization (Figure ), as well as patterns of popularity growth (Figures and ) are indicative of a tendency towards a hierarchical structure. Opinion leaders emerge spontaneously and minor actants devote much energy to communicate with them (be it to have their ideas echoed, or to influence such leaders). This proclivity is coherent with economy of attention , i.e., the system tends to avoid the overabundance of opinions to prevent scarcity of attention, but raises doubts about the possibility of converging to an egalitarian social system in which information flows and is received in an efficient manner. As far as our analysis concerns, we have shown that in a dynamics such that the one at work for our case, a relative large number of information sources exist, which is behind the robust functioning of the system. Conversely, communication sinks, where information is lost, are also generated.\n\nOn the other hand, our analysis of the community structure reveals some interesting facts. Geo-centered modules are abundant, but ideological or fame-related ones are also remarkable. 
It is important to keep in mind that time-stamped data is the dynamic (or functional) result of the activity on top of a more stable underlying structure, that of \"following\" and \"followers\" in the social networking site. For this reason, the communication network we observe is ever-changing and we argue that modules within it have not a straightforward interpretation.\n\nFinally, we have studied the patterns by which nodes become increasingly more visible. Results indicate that popularity growth in the context of political conflict does not display significant differences from other, less fashionable examples. Popularity is dominated by a fluctuating behavior, and popularity burst distributions lack of a characteristic scale. This fact connects the dynamics of popularity with other critical phenomena in many natural and artificial systems.\n\nIn summary, online social networks and the Web 2.0 provide new challenges to network theory. Events in the real world, ranging from economic phenomena to political protest, stand as the driving forces leading to the emergence of complex, time-evolving communication patterns. In this scenario, network theory stands as a suitable tool to unfold the structural and dynamical facets of such emergent systems.\n\n# Acknowledgments\n\nWe are indebted to Beatriz Antol\u00ed, Guillermo Losilla, Rub\u00e9n Valles and Isabel Vidal for their help and assistance at several stages of this study. J.B.-H is supported by the Government of Arag\u00f3n (DGA) through a grant to FENOL and by Spanish MICINN through project FIS2008-01240. D.I. is supported by the Government of Arag\u00f3n through a Fundaci\u00f3n ARAID contract. Y. M. was partially supported by the FET-Open project DYNANETS (grant no. 233847) funded by the European Commission, by Spanish MICINN through projects FIS2008-01240 and FIS2009-13364-C02-01 and by Comunidad de Arag\u00f3n (Spain) through the project FMI22\/10. We also acknowledge the Spanish MICINN for financial support through the project FIS-164-50.\n\n# Figure Legends\n\n# Tables\n\n| **community tag** | **area** | **fraction of users from same area** |\n|:-----------------:|:---------:|:------------------------------------:|\n| @acampadasol | Madrid | 54% |\n| @acampadabcn | Barcelona | 81% |\n| @acampadavlc | Valencia | 63% |\n| @acampadazgz | Zaragoza | 82% |\n| @acampadagranada | Granada | 53% |\n| @acampadasevilla | Sevilla | 83% |\n| @15MPamplona | Pamplona | 71% |\n\n**Geographic origin of nodes in region-based communities**","meta":{"dup_signals":{"dup_doc_count":25,"dup_dump_count":20,"dup_details":{"curated_sources":4,"2023-23":1,"2021-49":1,"2020-29":1,"2020-05":1,"2019-43":1,"2019-18":1,"2019-04":1,"2018-47":2,"2018-13":2,"2017-30":1,"2017-26":1,"2017-22":1,"2017-09":1,"2017-04":1,"2016-36":1,"2016-30":1,"2016-26":1,"2016-22":1,"2023-40":1}},"filename":"out\/1107.1750_extract_15MDRAFT02.tex.md"},"subset":"arxiv"} +{"text":"abstract: We here present a model of the dynamics of extremism based on opinion dynamics in order to understand the circumstances which favour its emergence and development in large fractions of the general public. Our model is based on the bounded confidence hypothesis and on the evolution of initially anti-conformist agents to extreme positions. Numerical analyses demonstrate that a few anti-conformist are able to drag a large fraction of conformists agents to their position provided that they express their views more often than the conformists. 
The most influential parameter controlling the outcome of the dynamics is the uncertainty of the conformist agents; the higher their uncertainty, the higher the influence of the anti-conformists. Systematic scans of the parameter space show the existence of two regime transitions, one following the conformists' uncertainty parameter and the other following the anti-conformism strength.\n\n**Keywords:** Extremism, opinion dynamics, bounded confidence, clustering, anti-conformism.\nauthor: G\u00e9rard\u00a0Weisbuch \nLaboratoire de Physique Statistique \net Centre de Recherche sur l'Environnement et la Soci\u00e9t\u00e9 \nde l'Ecole Normale Sup\u00e9rieure, \n24 rue Lhomond, F 75231 Paris Cedex 5, France. \n*email*:firstname.lastname@example.com \nbibliography: biblio.bib\ntitle: From anti-conformism to extremism\n\n# Introduction\n\nThe present paper discusses the dynamics of extremism in a democratic setting. A probably over-optimistic view of democracy is that when opinions are openly expressed, some consensus opinion would emerge and citizens would vote in favour of a government whose actions would be in accordance with the views of a large majority of citizens. This utopia is shared by many writers, but History has consistently shown us that National Consensus is a dream that could at best come true in wartime, hardly a desirable situation.\n\nAt least one would expect that the elected government would be close enough to a centrist position satisfying the largest proportion of citizens, like the ice cream seller who chooses to put his stand near the middle of a linear beach ().\n\nOnce again, History since the Eighteenth century Enlightenment period in Western Europe contradicts these simple views and we are not observing a smooth evolution towards more consensus nor towards the success of centrist parties. We have rather observed an alternation between regimes of dominance of centrist political parties and regimes of strong ideological fights between more extremist parties, sometimes leading to de facto dictatorship, depending on the time period and world region. The present paper is an attempt to model possible evolutions of public opinion leading to different opinion aggregation landscapes, which form the basis of political entities corresponding to parties. We here develop a model of opinion dynamics in order to answer such questions as:\n\n- How come rational[^1] people choose extremism?\n\n- How does an initially low proportion of anti-conformists influence (or fail to influence) a large fraction of the general population to aggregate in powerful extremist clusters?\n\n- What characterises political clusters in terms of the number of agents in the cluster and their distance to a middle opinion? More precisely, what are the regions in the parameter space of the model which lead to the different outcomes of the dynamics?\n\nThe simulations presented in this paper are based on opinion dynamics: agents exchange their views on the occasion of encounters, and they might update their opinion as a result of these exchanges. We are well aware that opinion formation in politics involves many processes other than encounters and discussions among individuals: media, political parties, the government and other political institutions are involved as well. 
For the sake of clarity, we postpone the discussion of the robustness of our results with respect to these other factors to the last section of the paper.\n\nThe earliest models of opinion dynamics were binary opinion models, where opinions could take only two discrete values, e.g. -1 and +1 in the so-called voter models as described in , and summarised in .\n\nWe here present a model based on continuous opinions, more adapted to the discussion of the assets and liabilities of political choices among agents, and to the traditional right\/left axis of political analysts, than binary opinions. It is inspired by two approaches, the bounded confidence model of and the anti-conformism model of . Since these two models are used as building blocks of our model, we will first summarise their main aspects.\n\nThe rest of the paper is divided into three sections:\n\n- Short reminders of the previous models.\n\n - The bounded confidence model, including its application to extremism.\n\n - The Smaldino and Epstein model of anti-conformism.\n\n- Our synthetic model is developed and its results presented.\n\n- Conclusions and discussion.\n\nDisclaimer\n\nThe present paper should not be interpreted as normative: we rather try to describe the evolution of opinions and political choices. One can certainly give examples, such as Civil Rights, in which opinions initially considered extremist were later largely accepted by the public, and other cases in which the consequences of extremism turned out to be dramatic.\n\n# Essentials of former models\n\nIn order to achieve consistency in notations and hypotheses, we use our own notation throughout the paper, which sometimes differs from that of the original papers, and make appropriate scale changes.\n\n## Bounded confidence\n\nThe bounded confidence model is based on a major cognitive bias, the confirmation bias (): we are mostly influenced by opinions close to ours and tend to reject opinions too far away. The mathematical model was independently introduced by and by . It follows the spirit of Axelrod's earlier model of dissemination of cultures. In Axelrod's model, cultures are described by strings of integers. Pairs of agents interact if their cultures are already close enough, in which case one of them adjusts one feature of its culture string to match that of the other agent's culture.\n\nIn bounded confidence models, opinions are represented by real numbers. When opinion differences are lower than a confidence threshold, agents adjust their opinions by decreasing this difference. In the Deffuant et al. model, pairs of agents are randomly chosen in the population and they adjust their opinions if the confidence condition is met. Another pair is randomly chosen, and so on. Such an iteration mode is called random sequential.\n\nBy contrast, others apply the same opinion updating equation but use parallel iteration: all opinions are updated simultaneously. Their choice is well adapted to discussions in committees, for instance.\n\nWe will consistently use random sequential iteration in this paper.\n\n## Deffuant et al. bounded confidence model\n\nThe bounded confidence model was introduced to model situations in which actors have to take decisions involving cost\/benefit analysis in terms of money. Such was the case when the Common Agricultural Policy was modified in 1992: it was proposed that farmers change their former practices in favour of more environmentally friendly ones, e.g. by reducing fertiliser and pesticide use, in exchange for financial aid.
But optimising new practices involved a lot of financial uncertainties, and surveys demonstrated that farmers would have many social interactions discussing the pros and cons of the environmental contracts before taking any decision.\n\nThe Deffuant et al. model can be simply described:\n\nOpinions are represented by a **continuous variable** $x$.\n\nTwo randomly chosen agents with opinions $x$ and $x'$ interact if, and only if, $|x-x'| < u$, where $u$ is the confidence threshold (uncertainty); when the condition is met, each agent moves its opinion towards the other's.\n\n\u2026 exceeds $2.5$, and this is an indication of a change in dynamical regime: anti-conformists lose their strong influence on conformists. The increase in \"attractiveness\" with the conformists' uncertainty $u$ also reflects the series of transitions in the number of clusters with $u$.\n\n# Discussion and conclusions\n\nLet us first summarise our results.\n\n- Anti-conformism of a small fraction of the agent population can result in the emergence of large extremist clusters, provided that the anti-conformists express their views more often than conformists.\n\n- This influence exists whatever the conformist uncertainty, and it is larger when uncertainty increases. Two distinct dynamical regimes are observed according to the value of uncertainty. For lower values, anti-conformists drag important fractions of conformist agents to their own extreme position. For higher values of uncertainty, consensus is restored, but along a much wider range of positions which can be centered far away from the initial center of gravity of initial opinions.\n\n- Obviously the anti-conformist influence increases with their number and the frequency of their interventions. By contrast, one observes a transition in the anti-conformist influence when anti-conformists position themselves too far away from the center; they then lose influence and are unable to drag large fractions of conformists.\n\n- Early intervention of anti-conformists increases their influence, and the early steps of the dynamics are responsible for the large deviations in peak positions.\n\n- The results concerning the number of peaks in the opinion distribution as a function of the uncertainty parameter and their approximate position are robust. The exact position of the peak cannot be predicted accurately, due to the susceptibility of the probabilistic dynamics to initial samplings.\n\nLet us now discuss how these conclusions might be modified by the other players in the political game: media, parties, and other institutions such as elections, the government, etc.\n\nIn fact, media and political parties reinforce the influence of non-conformists. Journals, newspapers or television compete for readership and audience. Journalists fight for impact, notoriety and reputation. In this market for information, or cognitive market as proposed by , the motivations are the same as for the anti-conformists of . Impact is achieved by taking simple, extreme and fast positions. The tendency is increased by the use of the Internet, from which journalists often take their views. The fast communication procedures on social networks also favour the extremes, as observed in tweets and readers' reactions to articles in the press. To maximise audience, societal and political debates on television are dramatised: they oppose extreme views and seldom result in consensus. As a matter of fact, the media contribute largely to the high value of the relative frequency factor $f$ used in our simulations.
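The opinion dynamics discussed in this section can be made concrete with a short simulation. The following Python sketch is an illustration only, not the paper's code: it assumes a standard Deffuant-style pairwise adjustment with a convergence parameter `mu`, and a simplified, Smaldino-Epstein-flavoured rule in which anti-conformist agents drift towards a position a distance $\delta$ beyond the population average. The symbols $u$, $\delta$ and $f$ are borrowed from the paper, while the parameter values, the specific update rules and the final peak measurement are assumptions made here for illustration.

```python
import numpy as np

# Illustrative parameters; u, delta and f follow the paper's notation,
# but the numerical values and update rules below are assumptions.
N, N_ANTI = 200, 10        # conformist population plus a small anti-conformist minority
u = 0.5                    # conformists' uncertainty (confidence threshold)
mu = 0.3                   # convergence parameter of the pairwise adjustment (assumed)
delta = 1.5                # anti-conformism strength: preferred distance beyond the average
f = 10.0                   # anti-conformists express their views f times more often
STEPS = 200_000
rng = np.random.default_rng(0)

x = rng.uniform(-1.0, 1.0, N)          # initial opinions
anti = np.arange(N) < N_ANTI           # flag the anti-conformist agents

# Anti-conformists are drawn as "speakers" f times more often than conformists.
speak_p = np.where(anti, f, 1.0)
speak_p /= speak_p.sum()

for _ in range(STEPS):
    i = int(rng.choice(N, p=speak_p))  # speaker
    j = int(rng.integers(N))           # randomly chosen listener
    if i == j:
        continue
    if anti[j]:
        # Assumed anti-conformist reaction: drift towards a position
        # delta beyond the current population average.
        m = x.mean()
        direction = 1.0 if x[j] >= m else -1.0
        x[j] += mu * (m + delta * direction - x[j])
    elif abs(x[i] - x[j]) < u:
        # Bounded-confidence rule: a conformist listener within threshold u
        # of the speaker moves part of the way towards the speaker's opinion.
        x[j] += mu * (x[i] - x[j])

# Rough "rightmost peak" statistics, in the spirit of the measurements described
# in the paper's footnotes: find the last empty histogram bin and report the
# fraction, mean position and width of the opinions to its right.
hist, edges = np.histogram(x, bins=50)
empty = np.flatnonzero(hist == 0)
right = x[x >= edges[empty.max() + 1]] if empty.size else x
print(f"rightmost cluster: fraction {right.size / N:.2f}, "
      f"position {right.mean():+.2f}, width (2 sd) {2 * right.std():.2f}")
```

A sketch like this only shows where $u$, $\delta$ and $f$ enter a random sequential update loop; it is not intended to reproduce the paper's quantitative results.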
In that respect, the growing role of the media and especially of the Internet will not automatically lead to a better understanding of challenges and options, but might on the contrary favour the expression of extremist views.\n\nThe same mechanisms can be observed during the political debate inside parties, before elections. Party members are competing to get positions inside the party or to represent the party in future elections. They also want to make clear that they are faithful to their party by strongly opposing other parties' views. For them too, a simple ideological position is easier to express and defend than balancing between the contradictory constraints faced in the choice of a policy adapted to societal challenges.\n\nSo both the media and political parties' internal discussions reinforce the influence of extremists.\n\nThe dynamics might be different during elections and at the government level. On the occasion of national elections for instance, parties have to adapt their program to the electorate and make alliances to win support. In principle they should move towards the center for this. But when the electorate comprises strong extremist clusters, as often observed in our simulations, they may choose to position themselves clearly on one side of the political chessboard, especially under the influence of their members, who are biased with respect to the 'rational' position of optimising support from the general population.\n\nThe government itself has to navigate between general support and the support from inside the parties of the alliance which brought it to power.\n\nTo conclude this discussion, the dynamical processes inside the media and the parties are in agreement with our hypothesis of a stronger expression of anti-conformist positions. This reinforces the conclusions of our model.\n\nOn the other hand, other aspects of politics concerning general elections or government positions necessitate further analysis.\n\nWhat we tried to demonstrate is that evolution towards extremism does not automatically imply coercion, strategic plots or the control of the media by a single agent. Simple human cognitive processes such as anti-conformism, cognitive biases and uncertainty of agents can favour its emergence and its influence on the constituency.\n\nThe results of our simulations were interpreted in terms of politics, but they could also provide some insight into other social phenomena involving the dynamics of extreme choices:\n\n- in markets for luxury goods: for instance, why do people buy fast cars or SUVs when they have little use for these products? How is the market driven by these extreme choices?\n\n- in Fashion and in the Arts, where anti-conformism is the rule driving the perpetual motion of expressed realisations;\n\n- in the propagation of imaginary dangers related to new technologies in the media and the Internet ().\n\n**Acknowledgements** We thank Joshua Epstein and Paul Smaldino for sending their preprint prior to publication and the participants in the \"s\u00e9minaire inattendu\" for their comments. We thank anonymous referees for their corrections and for raising interesting issues.\n\n[^1]: Rational here does not refer to economists' full rationality but rather to its common-sense meaning: people able to practice some form of reasoning.\n\n[^2]: Many extensions of the bounded confidence model were proposed, as described in the review of .
Some take into account the possibility of repulsion among agents, such as : agents can be attracted to each other for small differences in opinions, but can also have repulsive interactions when their difference is larger than another, upper threshold. Other models, , consider two populations of interacting agents, some having only attractive interactions, others also having repulsive interactions.\n\n[^3]: Their paper does not explicitly refer to politics but they quote several references about politics.\n\n[^4]: Other authors have introduced anti-conformist agents in the simulation of binary opinion dynamics . In the context of binary opinions, say 0 or 1, anti-conformists have opinions **opposed** to the opinion of their neighbours. In the model, as in the present paper, the anti-conformists choose opinions **further** than those of other agents. Their position can be described as 'plus royaliste que le Roi' or in English 'more catholic than the Pope'. Hence, the dynamics of 'our' mixed population are quite different from those described in\n\n[^5]: The paper covers more situations than reported here, including heterogeneity of $\\delta$, the anti-conformist strength. It shows, e.g., that the above conclusions on convergence of the dynamics remain true provided that such heterogeneity is limited: the standard deviation of the $\\delta$ distribution should be less than 1.\n\n[^6]: We have chosen not to move the anti-conformist, although the alternative choice, moving according to the SE rule, could have been made. Anyway, differences in behaviour between the two choices would not have qualitatively changed the dynamics for our choice of parameters.\n\n[^7]: This simple implementation does not cause any practical problem for opinion values close to zero since anti-conformists move away very soon from the average, as observed in figures 5 and 7.\n\n[^8]: A word of caution: such histograms could be interpreted either as histograms of the positions of single isolated peaks, as observed in figures 5 and 7, or as the aggregation of wider peaks. We confirm that only the first interpretation is correct from many direct observations of asymptotic histograms of single iteration processes. Furthermore, wide peaks would not be stable under the bounded confidence process.\n\n[^9]: Isolating the rightmost peak for these measurements was done by checking the histograms for a gap to the left of the peak and taking measurements on the remaining bins to the right of the gap; for figure 10, e.g., the empty bin at opinion 0.65 can be used to start collecting the statistics.\n\n[^10]: These three quantities correspond to standard measurements of peak characteristics in spectra: the area under the peak (fraction), the peak position with respect to the origin (average deviation), and the peak width (twice the standard deviation). For figure 10, e.g.,
the fraction of opinions in the rightmost peak is 33%, the average position is 0.91, and the standard deviation is 0.10.\n\n[^11]: In the next five figures, the fraction of opinions, the standard deviation and the attractiveness are given by the scale on the left, in red, and the average deviation by the scale on the right, in green.\n\n[^12]: Larger values of $u$ were previously investigated in the section on the consensus regime.\n\n[^13]: We can only conjecture about this balance; we have no explanation for it, nor for why changes in attractiveness or its derivatives go along with transitions in dynamical regimes.","meta":{"dup_signals":{"dup_doc_count":13,"dup_dump_count":3,"dup_details":{"curated_sources":1,"2017-13":1,"unknown":11}},"filename":"out\/1503.04799_extract_article.tex.md"},"subset":"arxiv"} +{"text":"abstract: Accumulating observational evidence suggests an intimate connection between rapidly expanding insect populations, deforestation, and global climate change. We review the evidence, emphasizing the vulnerability of key planetary carbon pools, especially the Earth's forests, which link the micro-ecology of insect infestation to climate. We survey current research regimes and insect control strategies, concluding that at present they are insufficient to cope with the problem's present regional scale and its likely future global scale. We propose novel bioacoustic interactions between insects and trees as key drivers of infestation population dynamics and the resulting wide-scale deforestation. The bioacoustic mechanisms suggest new, nontoxic control interventions and detection strategies.\nauthor: David Dunn; James P. Crutchfield\nbibliography: ref.bib\ndate: 2024-09-29\ntitle: Insects, Trees, and Climate: \n The Bioacoustic Ecology of Deforestation and \n Entomogenic Climate Change\n\n# Introduction\n\nForest ecosystems result from a dynamic balance of soil, insects, plants, animals, and climate. The balance, though, can be destabilized by outbreaks of tree-eating insects. These outbreaks in turn are sensitive to climate, which controls precipitation. Drought stresses trees, rendering them vulnerable to insect predation. The net result is increased deforestation driven by insects and modulated by climate.\n\nFor their part, many predating insects persist only to the extent they successfully reproduce, which they do by consuming and living within trees. Drought-stressed trees are easier to infest than healthy trees, which have more robust defenses against attack. To find trees suitable for reproduction, insects track relevant environmental indicators, including chemical signals and, possibly, bioacoustic ones emitted by stressed trees. At the level of insect populations, infestation dynamics are sensitive to climate via seasonal temperatures. Specifically, insect populations increase markedly each year when winters are short and freezes less severe. The net result is rapidly changing insect populations whose dynamics are modulated by climate.\n\nThus, via temperature and precipitation, climate sets the context for tree growth and insect reproduction and also for the interaction between trees and insects. At the largest scale, climate is driven by absorbed solar energy and controlled by relative fractions of atmospheric gases. The amount of absorbed solar energy is determined by cloud and ground cover. Forests are a prime example, as an important ground cover that absorbs, uses, and re-radiates solar energy in various forms.
At the same time forests are key moderators of atmospheric gases. Trees exhaust oxygen and take up carbon dioxide in a process that sequesters in solid form carbon from the atmosphere. As plants and trees evolved, in fact, they altered the atmosphere sufficiently that earth's climate, once inhospitable, changed and now supports a wide diversity of life.\n\nThere are three stories here: the trees', the insects', and the climate's. They necessarily overlap since the phenomena and interactions they describe co-occur in space and in time. Their overlap hints at an astoundingly complicated system, consisting of many cooperating and competing components; the health of any one depending on the health of others. (Figure 1 gives a schematic view of these components and their interactions.) How are we to understand the individual views as part of a larger whole? In particular, what can result from interactions between the different scales over which insects, trees, and climate adapt?\n\nTaking the stories together, we have, in engineering parlance, a *feedback loop*: Going from small to large scale, one sees that insects reproduce by feeding on trees, forests affect regional solar energy uptake and atmospheric gas balance, and, finally, energy storage and atmospheric gases affect climate. Simultaneously, the large scale (climate) sets the context for dynamics on the small scale: temperature modulates insect reproduction and precipitation controls tree growth. The feedback loop of insects, trees, and climate means that new kinds of behavior can appear\u2014dynamics not due to any single player, but to their interactions. Importantly, such feedback loops can maintain ecosystem stability or lead to instability that amplifies even small effects to large scale.\n\nHere we give a concrete example of the dynamic interaction between insects, trees, and climate. We focus on the role that bark beetles (Scolytidae or, more recently, Curculionidae: Scolytinae) play in large-scale deforestation and consequently in climate change. Bark beetles are emblematic of many different insect species that now participate in rapid deforestation. Likewise, we primarily focus upon the North American boreal forests for their unique characteristics but also as representative of the vulnerability of all types of forest ecosystems. And so, the picture we paint here is necessarily incomplete. Nonetheless, their cases serve to illustrate the complex of interactions that are implicated in the feedback loop and also the current limits to human response. Although they are not alone, bark beetles appear to be an example of a novel player in climate change. Unlike the climatic role that inanimate greenhouse gases are predicted to play in increasing global temperature over the next century, bark beetles represent a biotic agent that actively adapts on the time scale of years but that, despite the short time scale, still can cause effects, such as deforestation, at large spatial scales. To emphasize the novelty of this kind of biological, non-human agent, we refer to the result as *entomogenic climate change*.\n\nIn analyzing the relationship between the feedback loop components, one important conclusion is that we understand relatively little about the interactions between insects and trees and the dynamics of infestation. 
In particular, we see that there is a need to expand on the success of, and to acknowledge the limitations of, the dominant chemical ecology model of insect infestation.\n\nA detailed analysis of the problem of entomogenic climate change leads us to make a number of constructive suggestions for increased attention to relatively less familiar domains of study, including micro-ecological symbiosis and its nonlinear population dynamics, and insect social organization. Here we emphasize in particular the role that bark beetle bioacoustic behavior must have in their evolving multiple survival adaptations which, it appears, fill in significant gaps in the explanatory model of infestation dynamics. One goal is to stimulate interdisciplinary research that is appropriate to the complex of interactions implicated in deforestation and appropriate to discovering effective control strategies.\n\n# Forest Health and Climate: A recent snapshot\n\nThe Earth's three great forest ecosystems\u2014tropical, temperate, and boreal\u2014are of irreplaceable importance to its self-regulating balance. Their trees help to regulate its climate, provide essential timber resources, and create a diversity of habitat and nutrients that support other forms of life, including millions of people. Forests contribute to global climate dynamics through a carbon cycle in which atmospheric carbon dioxide is converted into an immense carbon pool. At any one point in time, the Earth's forest ecosystems together hold a majority of the Earth's carbon stocks with the boreal forests comprising 49 percent of the total carbon pool contained within these three types of forest ecosystems . That carbon is then slowly released back into the atmosphere through complicated decomposition processes.\n\nAll forms of deforestation, human and natural, directly impact climatic conditions by attenuating or delaying the carbon cycle. In concert with well-documented greenhouse gas effects that drive global atmospheric change, the potential loss of large areas of these forests, combined with accelerating deforestation of tropical and temperate regions, may have significant future climate impacts beyond already dire predictions. Ice core studies have revealed that the Earth's climate has varied cyclically over the past 450,000 years. Temperatures have been closely tied to variations in atmospheric carbon dioxide in a cyclic change that recurs on the time scale of millennia. Vegetation has been forced to adapt. The boreal forests are, in fact, highly vulnerable to these climate shifts. Examination of fossil pollen and other fossil records shows that, in response to temperature variations over the past millennia, North American boreal forests have radically changed many times . The unique sensitivity of these forests' tree species to temperature suggests that the predicted warmer climate will cause their ecological niches to shift north faster than the forests can migrate. Researchers believe that, in addition to other deforestation factors, the boreal forests may eventually be substantially reduced to just half their current size over the next century .\n\nOne major consequence of boreal deforestation is increasing fire risk. Even though forests require fire for reproduction and rejuvenation, a warmer climate will most likely push an otherwise natural disturbance to an extreme frequency and scale. Over the next half-century, the Siberian and Canadian boreal forests will most likely see as much as a 50 percent increase in burnt trees . 
One of the major sources fueling these fires will be dead and dying trees killed by various opportunistic insect species and their associated microorganisms.\n\nParalleling the concerns about the boreal forests, in recent years there has been a growing awareness of extensive insect outbreaks in various regional forests throughout the western United States. At first, local and national media reported on these outbreaks, and the surprising devastation to forested areas, as the result of regional drought conditions that encouraged various species of bark beetle to thrive. As consecutive summers of unprecedented forest fires consumed the dead and dying trees, though, a new concern emerged: insect-driven deforestation is a much larger threat connected to global climate change. In fact, climate experts, forestry personnel, and biologists, have all observed that these outbreaks are an inevitable consequence of a climatic shift to warmer temperatures .\n\nBiologists are now voicing concern that the problem exceeds any of the earlier projections. Evidence from diverse research sources suggests we are entering an unprecedented planetary event: forest ecology is rapidly changing due to exploding plant-consuming (*phytophagous*) insect populations. In 2004, NASA's Global Disturbances project analyzed nineteen years of satellite data ending in 2000. It revealed rapid defoliation over a brief period (1995 to 2000) of a vast region that extends from the US-Canadian border in western Canada to Alaska. The conclusion was that the devastation resulted from two different insects, the mountain pine beetle (*Dendroctonus ponderosae*) and the western spruce budworm (*Choristoneura occidentalis*) . Ecologist Chris Potter (NASA Ames Research Center) said at the time : \"This looks like something new happening on a huge scale. It's a sudden shift into a new kind of forest condition.\"\n\nNow, two years later we know of even further damage. In Alaska, spruce bark beetles (*Dendroctonus rufipennis*) have killed 4.4 million acres of forest . This damage results from only one such insect. Alaska is also witnessing population explosions of many others, including the western spruce budworm, the black-headed budworm (*Acleris gloverana Walsingham*), the amber-marked birch leaf miner (*Profenusa thomsoni*), and the aspen leaf miner (*Phyllocnistis populiella*). In British Columbia the mountain pine beetle has infested 21 million acres and killed 411 million cubic feet of trees. This is twice the annual take by all the loggers in Canada. The general consensus is that beetles will soon take 80 percent of the pines in the central forest of British Columbia. The Canadian Forest Service now calls the beetle invasion of Canada the largest known insect infestation in North American history .\n\nJesse Logan (USDA Forest Service) and James Powell (Utah State University, Logan) discussed the serious implications that a continuing warming trend will have on the range expansion of the mountain pine beetle into both higher elevations and more northern latitudes . At the time, one concern was that the beetles would breach the Canadian Rockies and expand into the great boreal forests of Canada. Historically, these forests have been immune to beetles due to predictably severe winter conditions that greatly attenuate beetle populations. 
Since much of Canada has seen mean winter temperature increases as high as $\\ensuremath{4^\\circ}$C in the last century, and even faster changes recently, the conditions for the beetles are improving rapidly. As British Columbia forestry officer Michael Pelchat recently said : \"We are seeing this pine beetle do things that have never been recorded before. They are attacking younger trees, and attacking timber in altitudes they have never been before.\"\n\nIt is now well established that mountain pine beetles have slipped through mountain passes from the Peace River country in northern British Columbia to Alberta, the most direct corridor to the boreal forests. If the beetle is successful at adapting to and colonizing Canada's jack pine, there will be little to stop it moving through the immense contiguous boreal forest, all the way to Labrador and the North American east coast. It then will have a path down into the forests of eastern Texas. Entomologist Jesse Logan describes this as \"a potential geographic event of continental scale with unknown, but potentially devastating, ecological consequences.\"\n\nContinental migration aside, if the beetles infest the high-elevation conifers, the so-called five-needle pines, of the western United States, this will reduce the snow-fence effect that these alpine forests provide. Snow fences hold windrows of captured snow that are crucial to the conservation and distribution of water from the Rocky Mountains. This is one of the primary origins of water that sources several major river systems in North America . The Rocky Mountains and the Southwest have seen massive die off of ponderosa and pinion pines numbering in the millions due to bark beetle infestation. Every western state is contending with various rates of unprecedented insect infestation not only by many different species of Scolytidae, but also by other plant-eating insects.\n\nWhile the conifers of the boreal forests have been the most dramatically affected, many temperate forest tree species are also struggling. The emerald ash borer (*Agrilus planipennis*) has killed over 20 million ash trees in Michigan, Ohio, and Indiana. In 2006 it was observed to have moved into northern Illinois and Ontario, Canada . A large, wood-boring conifer pest, the sirex woodwasp (*Sirex noctilio*)\u2014native to Europe and Asia\u2014has now entered several New York counties and southern Canada. It has recently devastated millions of pine trees in Australia, South America, and South Africa . These and other rising populations of phytophagous insects are now becoming recognized as a global problem and one of the most obvious and fast emerging consequences of global climate change. Over the past fifteen years there have been reports of unusual and unprecedented outbreaks occurring on nearly every continent.\n\n# What Drives Infestations?\n\nSeveral well-understood factors underlie how climate change impacts insect populations. The two dominant environmental factors are changes in temperature and moisture. Changing insect-host relationships and nonhost species impacts, such as predation and disease, also play essential roles.\n\nSince insects are cold-blooded (*poikilothermic*), they are extremely sensitive to temperature, being more active at higher temperatures. As winter temperatures increase, there are fewer freezing conditions that keep insect populations in check than in the past. 
Shortened winters, increasing summer temperatures, and fewer late-spring frosts correlate to increased insect feeding, faster growth rates, and rapid reproduction.\n\nMoisture availability and variability are also major determinants of insect habitat\u2014forest health and boundaries. Drought creates many conditions that are favorable to increased insect reproduction. Many drought-induced plant characteristics are attractive to insects. Higher plant surface temperatures, leaf yellowing, increased infrared reflectance, biochemical changes, and possibly stress-induced cavitation acoustic emissions, may all be positive signals to insects of host vulnerability. Drought also leads to increased food value in plant tissues through nutrient concentration, while reducing defensive compounds. These last factors may in turn increase the efficacy of insect immune systems and therefore enhance their ability to detoxify remaining plant defenses. Higher temperatures and decreased moisture may also decrease the activity of insect diseases and predator activity while optimizing conditions for mutualistic microorganisms that benefit insect growth .\n\nOne of the most frequently noted impacts of global climate change is the desynchronization of biotic developmental patterns\u2014such as as the inability of forests to migrate as quickly as other aspects of their ecological niches\u2014that have remained coherent for millennia. This de-coupling between various elements of an ecosystem is one of the most unpredictable and disruptive results of abrupt climate change. As environmental scientists, Jeff Price (California State Univ., Chico) and Terry Root (Stanford) state it, when discussing the impact of mean temperature increase :\n\n> As many tree species are long-lived and migrate slowly they would be expected to slowly colonize to the north of their range (or up in elevation), while at the southern edge of their range their rate of reproduction slows and finally stops. Even once a species stopped reproducing, the habitat may not undergo much compositional change until the existing community dies out. Thus, it could take decades to centuries for species in some vegetative communities to be replaced by others. As increased temperatures and drought stress plants, they become more susceptible to fires and insect breaks. These disturbances will likely play a large role in the conversion of habitats from one type to another. There could very well be instances where the existing plant communities are lost to disturbance but climatic conditions and migration rates limit the speed by which a new vegetative community replaces the original. Thus, some areas may transitionally be replaced by grasslands, shrublands and, especially, by invasive species. The probability of these transitional habitats may very well be increased with abrupt climate change.\n\nUnfortunately, insects respond to changes in their thermal environment much faster than their hosts, either through migration, adaptation, or evolution. Under the stress of abrupt climate change the only short-term limit on their increasing populations may be their near total elimination of suitable hosts. In short, trees only adapt slowly to changing conditions, while insects can disperse widely and adapt much faster to abrupt environmental changes. One conclusion is that the static, architectural view of Fig. 1 needs to be augmented to indicate the wide range of time scales involved. 
Table 1 gives rough estimates of the times over which various feedback loop components and their interactions adapt.\n\n| Component | Character | Years |\n|:------------|:------------------------:|:-------------------:|\n| Climate | Season | $1$ |\n| | Temperature | $10^2$ |\n| | Glaciation | $10^4$ |\n| Forest | Migration | $10^2-10^3$ |\n| Tree | Infestation response | $10^{-2}$ |\n| | Death due to infestation | $10^{-1}$ |\n| | Life cycle | $10^2$ |\n| | Evolution | $10^4-10^5$ |\n| Bark Beetle | Tree nesting | $10^{-2}$ |\n| | Migration | $10^{-2} - 10^{-1}$ |\n| | Life cycle | $1$ |\n| | Adaptation | $10$ |\n| | Evolution | $10^2$ |\n\nTime Scales in Entomogenic Climate Change.\n\n# The Tree's Perspective\n\nWhile it is clear that under extreme conditions phytophagous insects and their associated microorganisms can quickly gain the advantage against host trees, it is also true that trees have evolved effective defense mechanisms. For example, in their defense against bark beetles there are two recognized components: the *preformed resin system* and the *induced hypersensitivity response*. Once a beetle bores through the outer tree bark into the inner tissues, resin ducts are severed and resin flow begins. A beetle contends with the resin flow by removing resin from its entrance hole. Trees that are sufficiently hydrated often manage to \"pitch-out\" the invader through sufficient flow of resin. In some conifer species with well-defined resin-duct systems, resin is stored and available for beetle defense. The *monoterpenes* within the resin also have antibiotic and repellent properties to defend against beetle-associated fungi .\n\nThe induced hypersensitivity response is usually a secondary defense system; it is also known as *wound response*. It produces secondary resinosis, cellular desiccation, tissue necrosis, and wound formation\u2014essentially a tree's attempt to isolate an invading organism and deprive it of nutrition. In species without well-defined resin-duct systems it is often a primary defense mechanism. In both cases these defense strategies are very susceptible to variations in temperature and available moisture. Their efficacy also varies with different beetle species .\n\nA series of extremely warm summers in Alaska, starting in 1987, resulted in the dispersal of spruce bark beetles on the Kenai Peninsula when trees were water stressed and greater beetle brood sizes survived the winter. This also halved beetle development times from a two-year to a one-year cycle. The result was a major increase in beetle activity that grew every year until most of the mature trees were dead . Since winter survivability and the number of eggs laid by bark beetles are directly correlated with ambient temperature , it is no surprise that similar increases in yearly beetle population cycles have been observed throughout the western states and provinces as warming and local drought conditions have persisted . As Table 1 makes clear, the relative time scales for increased infestation rates, and subsequent adaptive tree response, can put host trees at a serious disadvantage with regard to even the short-term effects of climatic warming.
There are several essential questions that dominate the study of bark beetles, many of which have been pursued in an attempt to define viable infestation control strategies. These are:\n\n1. *The Pioneer Beetle*: Exactly how does a new beetle generation find new suitable host trees? Is the process merely random or are host-mediated attractants involved? Is one adaptation universal to all Scolytidae or are different processes used by different species?\n\n2. *Communication*: How do bark beetles communicate in order to mate, defend territory, coordinate tree-attack, and reduce competition within the host?\n\n3. *Beetles, Trees, and Microorganisms*: What controls the symbiotic micro-ecology between host trees, beetles, and the microorganisms that mediate between them?\n\nWhile humans have always been in a competitive interaction with insects, this has largely been a stalemate. Insects have readily adapted to many of our control strategies and some of our most effective defenses have had a tendency to backfire. In the case of the current North American bark beetle invasions, attempts at intervention are proving mostly negligible. The Canadian Forest Service, in response to the mountain pine beetle invasion of British Columbia, thinned healthy forests, cut down and burned infested trees, and set out beetle traps while hoping for a deep freeze that never came. Micheal Pelchat (Canadian Forest Service) describes what happened: \"We lost. They built up into an army and came across .\" Though ineffective, it is sobering to realize that such measures still constitute our only defensive arsenal as the beetles move into the boreal forests.\n\n## Chemical Micro-ecology\n\nOver the past thirty years many hundreds of scientific papers have been published on bark beetles. Among reported observations, the majority focused on beetle chemical ecology. A much smaller percentage addressed alternative aspects of beetle biology and their relationship to the environment. In some ways this has been, for good reason, due to the growth of the field. Chemical ecology has been one of the major successes in 20$^{\\mathrm{th}}$ century entomology. As Edward O. Wilson (Harvard) states :\n\n> The discipline that came into existence as a result of the pointillist studies by Eisner, Jerrold Meinwald, and a few other pioneers was chemical ecology. Its importance arises from the fact that the vast majority of organisms\u2014surely more than 99 percent of all species when plants, invertebrates, and microorganisms are thrown in\u2014orient, communicate, subdue prey and defend themselves by chemical means.\n\nAs this emphasizes, much of the living world communicates through chemical signaling\u2014intentional or inadvertent\u2014especially through those compounds exchanged between members of the same species, called *pheromones*, or through those chemical cues emitted by a prey source for the benefit of the predator, called *kairomones*. Chemical ecology is the study of these compounds that attempts to unravel and map this extensive chemical language through analysis of both chemical compounds and observation of the behavior of living organisms correlated to them. It also seeks to discover how these compounds are created and how to synthesize them for possible manipulation of their creators\u2014such as bark beetles .\n\nThe conventional and widely held chemical ecology model for bark beetle-tree interactions is easily summarized. 
Like many other insects, bark beetles manufacture communicative pheromones from molecular constituents that they draw from host trees. Some species of beetle are specialists that prefer a single species of tree while others have adapted to a range of different tree species. Different species can also favor different host conditions: live trees, weak or dying trees, or recently fallen timber. The breeding site within a host also varies, with different species taking up residence in either the lower or upper trunk, with still others preferring the crown. Presumably, this localization evolved to reduce competition between species, allowing a diversity of species to co-exist in a single tree. Many of the vast number of different Scolytidae species have evolved to maintain a non-lethal relationship with their hosts. Others have evolved to colonize dead or dying trees at normally low population densities, but can colonize living trees when populations reach high levels. Finally, there are the primary bark beetle species that normally kill their hosts. These species, which are usually the most destructive, can stage a mass attack and use aggregation pheromones between beetles to trigger this behavior.\n\n## Pioneer Beetle: Infestation linchpin\n\nAn attack begins with the pioneer beetle, which locates, by means not yet elucidated, a suitable host and lands upon it. Others join this beetle, all soon boring through the outer bark into the phloem and cambium layers where eggs are laid after mating. Within the resulting galleries that house the adult beetles and their eggs, the larvae hatch, pupate, and undergo metamorphosis into adulthood. In this way, they spend the largest fraction of their life-cycle (anywhere from two months to two years, depending upon species and geographic location) inside a tree. This new generation emerges from the bark and flies away to seek new host trees. The widely held hypothesis is that the pioneer also attracts other beetles to the host through a pheromone signal. In some species the pioneer is male and, in others, female. Each new beetle that is attracted to the host subsequently contributes to the general release of the aggregation pheromone. It is also theorized that the aggregation pheromone has an upper limit beyond which attracted beetles will land on adjacent trees rather than the initial host, since high concentrations would indicate over-use of the available host resources. Resident bacteria within the beetles may facilitate the production of aggregation pheromones. The aggregation pheromone of one species also tends to repel other species .\n\nAs new research has filled in the gaps of the chemical ecology model, it has become clearer that it is often over-simplified. There are a large number of nonchemical mediating factors, some independent of pheromone signaling and some that directly affect its role. The model tends to emphasize the chemical and olfactory mechanisms of bark beetles and downplays, or simply ignores, a large array of other factors. In some ways, regarding its dominance in the study of bark beetles, the chemical paradigm has become a victim of its own success.
While proving to be a seemingly inexhaustible well of new hypotheses about the chemical intricacies of these creatures and their relationship to host trees, it has failed to answer many of the central questions about their reproductive and mating behavior and so their infestation dynamics.\n\nOne hope has been that understanding bark beetle chemical ecology would lead to its manipulation and eventually to a viable forestry management tool. Much to our loss, nothing of the sort has been forthcoming. This largely derives from the sheer complexity of the insect-tree micro-ecology and how far away we are from a sufficient understanding of mechanisms and interactions. The two major contributions of chemical ecology research to control measures have been those of pesticides and pheromone trapping. Most biologists appreciate that pesticides have a very limited role in controlling insect infestations at the scales in question. Pheromone traps are one of the essential tools of field research in entomology, but adapting them for large-scale control has been controversial at best; see Borden 1997 for an overview. Some researchers and chemical manufacturers assert that they have a positive effect in collecting, condensing, and re-directing of beetle populations . Other researchers, however, claim that these effects are inconclusive and, worse, in some cases may exacerbate negative conditions . In any case, effective traps and synthetic pheromone production are costly and their toxicity is undetermined.\n\nThese issues are illustrated by the only large-scale study of the effectiveness of pheromone trapping. In 1979 a massive outbreak of the European spruce bark beetle (*Ips typographus*) spread through the forests of Norway and Sweden. A large trap-out program was implemented in an attempt to counter the invasion of the Scandinavian spruce forests. 600,000 baited pheromone traps were placed throughout the infested forest. It yielded a \"capture\" of three billion beetles in 1979 and four billion in 1980. Despite what appears to be substantial numbers of captured beetles, ultimately the infestation was devastating. It appeared to have simply run its course. Trap effectiveness could not easily be evaluated. No consensus conclusions were drawn as to whether the deforestation would have been worse or better without them. Notably, despite increased infestation, no pheromone intervention of this scale has been attempted since. Unfortunately, there is still no clear picture of what was accomplished .\n\nGiven the differing scientific opinions and the as-yet undemonstrated benefits, pheromone trapping, the control strategy that derives from the chemical ecology paradigm, remains controversial. The apparent ineffectiveness of large-scale pheromone trapping, though, illustrates one of the central unanswered questions of the pioneer beetle. An underlying assumption of chemical ecology is that pheromones are the primary attractant for beetles seeking new hosts, but this remains a hypothesis. While many researchers believe that attraction is olfactory, others propose that visual cues are key for some species . Importantly, forestry management policy is based largely on the chemical ecology hypothesis that olfaction is dominant. It has never been definitively proven, however, and, for a number of reasons, it is unlikely to be. Stated simply, foraging insects most likely use whatever cues are the most accurate and easily assessed under varying circumstances. 
To assume otherwise is to go against the common logic that living systems evolve multiple survival strategies to cope with environmental complexity.\n\n## More Pieces of the Puzzle\n\nOther aspects of bark beetle behavior also remain mysterious. One of the most curious observations concerns the role lightning-damaged trees play in sustaining populations of some bark beetle species. For example, lightning plays an essential role in the ecology of southern pine beetle (*Dendroctonus frontalis*) populations in the Gulf Coastal Plain. Notably, infestations are observed to begin in the spring when beetle populations are usually low and lightening is frequent. Some years, records show that 75 percent of beetle infestations were associated with lightning strikes. In addition, the number of beetles in a struck tree averages nearly four times that of an unstruck, but infested tree. While there are obvious and dramatic chemical changes that occur after such a strike, none explain this extraordinary behavior, especially how beetles of several different species can be attracted from a substantial distance to a single struck tree. It appears that no substantial hypothesis has been put forward . This is one aspect of bark beetle behavior that begs for a novel perspective and, therefore, a new explanatory mechanism.\n\nAnother more recent curious observation concerns how some bark beetle infestations are associated with certain rock and tree stand types. A recent study, using landscape-level geographic analysis of the Panhandle National Forests of northern Idaho and eastern Washington, shows a correlation of Douglas-Fir beetle (*Dendroctonus pseudotsugae*) infestation rates with certain geologic strata and forest stand types. While the authors of this study speculate that perhaps the correlation is due to variations in nutrient stress that impact tree vulnerability, to date no research has verified this . In fact, the hypothesis that this effect can be correlated to nutrient stress says nothing about the nature of the tree vulnerability cues that might be communicated to beetles.\n\nOne of the most exciting areas of bark beetle research analyzes the complex micro-ecological dynamics between beetles, various forms of fungi, and diverse species of mites. For example, we now know that almost every state and federal information website gives an incomplete description of how bark beetles kill conifers. The story is that it is a fungus, carried by the beetle, which infects a tree's vascular system, choking off the flow of nutrients. While it is true that a compromised vascular system can ultimately kill a tree, the websites describe the relationship between blue-stain fungus (genus *Ophiostoma*) and the beetles as if there is only one organism and only one such species of fungus involved. The truth is that there is no clear consensus about what actually kills host trees. The emerging picture is a very complex one that involves many participatory agents. In fact, many different species of *Ophiostoma* and many other genera of fungi\u2014including *Entomocorticium*, *Ceratocystiopsis*, *Trichosporium*, *Leptographium*, and *Ceratocystis*\u2014are involved . There are probably other interspecific interactions between different species of beetles that are also significant . Moreover, mites that live on the beetles also carry the fungi .\n\nThe resulting picture is of a constantly shifting dynamic that includes fungi, mites, beetles, and trees. 
Moreover, the relationships involve different modes of symbiosis: parasitism, mutualism, and commensalism. Each of these, in turn, affects infestation dynamics in different ways. This might seem like a hopelessly complicated micro-ecology. However, it is this very diversity that leads to the hope that unraveling its complexity may contribute towards new biological control regimes .\n\nSince these mysteries and micro-ecological dynamics cannot be adequately explained by the familiar aspects of the chemical ecology paradigm, there is a need to look for other explanatory mechanisms that might bridge the gaps in our understanding or offer novel insights.\n\n## The Bioacoustic Ecology Hypothesis\n\nThe motivation to search out novel control regimes is clearly a response to the serious limitations that chemical control strategies have faced. This has necessitated both a search for entirely new areas of investigation, such as the previously mentioned micro-ecological dynamics, and a resurgence in areas of research that have received minimal attention in the past.\n\nOne of the more neglected research domains regarding bark beetles concerns their remarkable bioacoustic abilities. The sound producing mechanism in many bark beetles is most likely a *pars stridens* organ that functions as a friction-based grating surface. In *Ips confusus* beetles it is located on the back of the head and stroked by a *plectrum* on the under side of the dorsal anterior edge of the prothorax. In other species (*Dendroctonus*) the pars stridens is located on the surface under the elytra and near the apices and sutural margins. Another is found in some species on the underside of the head. All three of these sound generating organs produce a variety of chirps that range from simple single-impulse clicks to a range of different multi-impulse chirps. These also differ between genders of the same species and between different species probably due to subtle differences in the sound producing mechanisms. Collectively, all of the sounds and their associated mechanisms are referred to as *stridulation*, the most common form of sound production made by various forms of beetle .\n\nIn monogamous species of bark beetles, such as those of the genus *Dendroctonus*, the female is the pioneer and does not possess the complex pars stridens that the male uses. The opposite is true of the polygamous *Ips* genus where the male is the pioneer and the female the one with a pars stridens organ. In both cases, however, the pioneer is also known to produce simpler forms of communicative signaling using other, less understood sound generating mechanisms. We do know that there are at least 14 different types of stridulatory organs amongst adult beetles in thirty different families . So the potential for undiscovered varieties of such mechanisms among Scolytidae is high.\n\nPast research suggested that sound making and perception in bark beetles was of low significance compared to their chemical-signaling mechanisms. In fact, of the studies that dealt with their acoustic behavior, over half concentrated on the relationship of sound generation to chemical signaling. These include the role stridulation sound-making has in controlling attack spacing between entry points in the host or the triggering of pheromone release between genders . 
The resulting view is that bark beetles use a combination of chemical and acoustic signals to regulate aggression, attack on host trees, courtship, mating behavior, and population density.\n\nWhile the dual behavioral mechanisms of scent and sound are largely inseparable, it is usually assumed that bark beetles use chemical messages for communication at a distance while reserving acoustic signals for close-range communication. However, this distinction remains hypothetical. We do not have a definitive understanding of how far either their pheromones or sound signals can travel, let alone a full appreciation of the diverse forms of acoustic signaling that they may employ. We do know that both communication mechanisms are used after beetles have aggregated on a host and that one form of signaling can evoke the other.\n\nBark beetles exhibit an amazing array of complex social behaviors\u2014such as, group living, coordination of mass attack, the necessity for mass infestation to effectively counter host defenses, signaling to reduce intraspecific competition, and the collective occupation of nuptial chambers by polygamous species. This complexity implies that the coupling between communication mechanisms is significant. Despite the importance that their acoustic communication must have for overall survival and environmental fitness, there are still no published studies on their sound reception mechanisms or any identification of hearing organs.\n\nThe broad neglect of bark-beetle bioacoustic behavior has also led to a lack of follow-up on the proposal that host trees themselves produce acoustic cues that also attract pioneer beetles. Perhaps the earliest proposal dates back to 1987, when William Mattson and Robert Haack (USDA, Forest Service) speculated that cavitation events in trees might produce acoustic signals audible to plant-eating insects . Cavitation occurs in trees by breaking of the water columns in the conducting xylem tissue of leaves, stems, and trunks. The assumption has been that the sounds are vibrations coming from individual cells collapsing, which is due to gradual dehydration and prolonged water stress. While cavitation produces some acoustic emissions in the audible range, most occur in the ultrasound range. In fact, counting ultrasonic acoustic emissions from cavitating xylem tissues is a widely accepted monitoring practice used by botanists to measure drought stress in trees. Despite its common usage in botany, there has been very little study as to the actual generating mechanism. For the most part, it is merely a statistical measuring tool and the correlation between the incidence of cavitations and drought stress, an accepted fact .\n\nThis proposal requires, of course, that bark beetles be able to perceive the drought-stressed tree's ultrasonic acoustic emissions. At the present time, while insect sound-making mechanisms are fairly well understood, this is not the case with their auditory organs. Nonetheless, every year the list of insect species shown to have ultrasonic hearing grows. It now includes many species of butterflies and moths (*Lepidoptera*), mantids (*Mantodea*), grasshoppers and crickets (*Orthoptera*), flies (*Diptera*), and net-veined insects (*Neuroptera*).\n\nThere has also been increasing investigation of interspecific sensing in the ultrasonic range, such as the influence of bat echolocation on the evolution of moths and butterflies. 
In fact, it appears that much of the evolution of ultrasonic hearing in flying insects has been driven by this essential predator-prey relationship. These insects have evolved a startle response to the presence of echolocating bat chirps and take avoidance measures by suddenly dropping in flight .\n\nInterestingly, despite their being the largest insect order, there have been only two kinds of beetles (Coleoptera) discovered to have tympanum-hearing organs: scarabs (Scarabaeidae) and tiger beetles (Cicindelidae). This appears to be more a matter of lack of study than a general characteristic of the order. Researchers believe that many other ultrasound-sensitive beetles will soon be discovered .\n\n## Testing the Hypothesis\n\nRecent fieldwork by one of us (DDD) focused on sound production by the pinion engraver beetle (*Ips confusus*). Sounds were recorded within the interior phloem layer of the trees, often adjacent to beetle nuptial chambers. A rich and varied acoustic ecology was documented\u2014an ecology that goes beyond the previously held assumptions about the role of sound within this species . Another important observation was that much of the sound production by this species has a very strong ultrasonic component. Since communication systems seldom evolve through investing substantial resources into portions of the frequency spectrum that an organism cannot both generate and perceive , this raised the question of whether or not bark beetles have a complementary ultrasonic auditory capability.\n\nRecent laboratory investigations by Jayne Yack (Neuroethology Lab, Carleton University) have also revealed ultrasound components in some bark beetle signals and indirect evidence that the beetles might possess sensory organs for hearing airborne sounds. One possible implication that arises from the combination of these laboratory and field observations is that various bark beetle species may possess organs capable of hearing ultrasound for conspecific communication and are therefore preadapted for listening to diverse auditory cues from trees .\n\nIf further evidence of ultrasonic perception can be verified in this and other bark beetle species, then a number of interesting possibilities arise. It has been a working assumption among entomologists studying Scolytidae that bark beetles are not under predation pressure from insectivorous bats. The claim is that bark beetles do not fly at night. This would mean that the most likely explanation for bark beetles evolving an ultrasonic hearing capability is not applicable since it would be, in familiar evolutionary terms, an unnecessary adaptation. Thus, it would appear that such an adaptation must have evolved to monitor environmental sound cues, such as cavitation acoustic emissions, or a previously unknown intraspecific signaling system in the ultrasonic range. If verified, this would contribute substantially to an improved understanding of the role that sonic communication plays in the development, organization, and behavior of bark beetles\u2014a key and previously unsuspected role.\n\n## Multimodal Sensing, Communication, and Social Organization\n\nIn the overall scheme of entomogenic climate change, the complex feedback loop appears to turn critically on the bioacoustic and chemical-mediated interactions between beetles and trees. 
Given this, where else might we look for control methods?\n\nWhile receiving little or no concerted attention, there is one area of possible bark beetle research that warrants discussion since it could have important impacts on the design of new control methods. While bark beetles appear to have a complex communication system that uses both chemical and acoustic forms of signaling, the question of how complex their social organization might be has seldom been asked. Again, behaviors such as group living, coordination of mass attack, the necessity for mass infestation to effectively counter host defenses, signaling to reduce intraspecific competition, and the collective occupation of nuptial chambers by polygamous species, seem to imply that some level of rudimentary social awareness is implicated in bark beetle behavior and necessary for survival of the various species. How far this resembles more familiar forms of insect eusocial behavior remains an open question.\n\nIn recent years the important role of insect communication through vibrational substrates has become clear. Despite the dominance of chemical ecology, one now reads that of insect species using mechanical communication (sound in air, ripples on water, and so on) : \"92 percent use substrate vibrations alone or in concert with other forms of mechanical signaling.\" Reginald Cocroft (University of Missouri-Columbia) has hypothesized that for many group-living insects that feed on plants, substrate vibrational signaling is an essential aspect of how they exploit environmental resources. He suggests that there are at least three different kinds of challenges to these insects that are met by communication through plant substrates: locating and remaining in a conspecific group, locating food, and avoiding predators .\n\nWe are most familiar with the complex eusocial systems and related (and essential) communication systems of bees, wasps, ants, and termites. The above acoustic fieldwork has led us to conclude that there must be a larger range of forms of insect sociality and therefore means of organizational communication. More precise understanding of these forms of social organization may improve our ability to design better control systems, whether these are chemical, acoustic, or biological.\n\nIn investigating the sound communication of pinion engraver beetles, one conclusion became inescapable. The phloem and cambium layers of pinion trees are an amazingly effective medium for acoustic communication. Individual stridulations can carry for several feet within the tree bark interior, most of which are inaudible to an outside (human) listener or to sensitive recording apparatus. Moreover, the diverse sounds made by the pinion engraver beetles appear to effectively match the combination of cellulose, fluids, and air that comprise these tissue layers. In communication theory terms, there is an effective impedance match between sound generator and the transmission medium. These layers are also an appropriate acoustic medium for several other invertebrate sound makers. The field recordings reveal that the tree interior is a rich and teaming world of sound \u2014its own bioacoustic ecology.\n\nThese observations raise an important issue not addressed by previous bark beetle bioacoustic research. A very diverse range of sound signaling persists well after the putatively associated behaviors\u2014host selection, coordination of attack, courtship, territorial competition, and nuptial chamber excavations\u2014have all taken place. 
In fully colonized trees the stridulations, chirps, and clicks can go on continuously for days and weeks, long after most of these other behaviors will have apparently run their course. These observations suggest that these insects have a more sophisticated social organization than previously suspected\u2014one that requires ongoing communication through sound and substrate vibration.\n\nThe results in both bioacoustics and chemical ecology strongly suggest bark beetle communication is largely multimodal and that both pheromone and mechanical signaling are interwoven. A growing appreciation in many fields of biology has emerged that animal signals often consist of multiple parts within or across sensory modalities. Insects are not only an example of this observation, but they possess some of the most surprising examples of multicomponent and multimodal communication systems . Sometimes these different components or modalities signal different information and sometimes they are redundant. For example, it was assumed that bee communication, through either the famous \"waggle dance\" or associated sounds from wing vibrations, communicated different informational content during display. More recently, experiments with robot bees determined that these systems are largely redundant, most likely a strategy for reducing transmission errors . Collectively, these observations reinforce the opinion voiced many years ago by entomologist Philip Callahan (University of Florida) that: \"as long as sound is studied in one corner of the lab and scent in another, the mechanisms of these sound-modulated scent molecules will not be understood ...\".\n\n# Conclusion: Closing the Loop\n\nThe eventual impact that insect-driven deforestation and global climate change will have on the Earth's remaining forests ultimately depends on the rate at which temperatures increase. The impacts will be minimized if that rate is gradual, but increasingly disruptive if the change is abrupt. Unfortunately, most climate projections now show that a rapid temperature increase is more likely . The current signs of increasing insect populations at this early stage of warming does not portend well for forest health in the near future. The concern is exacerbated, since we have limited countermeasures under development.\n\nOne conclusion appears certain. Extensive deforestation by insects will convert the essential carbon pool provided by the Earth's forests into atmospheric carbon dioxide. Concomitantly, the generation of atmospheric oxygen by trees will decrease. Most immediately, though, as millions of trees die, they not only cease to participate in the global carbon cycle, but become potential fuel for more frequent and increasingly large-scale fire outbreaks. These fires will release further carbon dioxide into the atmosphere and do so more rapidly than the natural cycle of biomass decay. The interaction between these various components and the net effect is complicated at best\u2014a theme that runs through links in the entire feedback loop.\n\nAn example of this is how boreal forest fires affect climate . A constellation of substantially changed components (lost forest, sudden release of gases, and the like) leads, it is claimed, to no net climate impact. The repeated lesson of complex, nonlinear dynamical systems, though, is that the apparent stability of any part can be destabilized by its place in a larger system. 
Thus, one needs to evaluate the lack of boreal fire-climate effects in the context of the entire feedback loop.\n\nTaken alone, the potential loss of forests is of substantial concern to humans. When viewing this system as a feedback loop, though, the concern is that the individual components will become part of an accelerating positive feedback loop of sudden climatic change. Such entomogenic change, given the adaptive population dynamics of a key player (insects), may happen on a very short time scale. This necessitates a shift in the current characterization of increasing insect populations as merely symptomatic of global climate change to a concern for insects as a significant generative agent.\n\nWhile current research programs will continue to contribute important insights on chemical communication and associated behavior of plant-eating insects, hard-won experience suggests it is increasingly less plausible that the chemical ecology paradigm alone will be the source for effective intervention strategies, as originally hoped. We believe that alternative approaches will contribute fresh insights and suggest innovative mechanisms for detection, monitoring, and control. Most importantly, we conclude from the complexity of the constituents and interactions in the feedback loop that there must be greater support for interdisciplinary approaches. At a minimum, the problem we described requires a more comprehensive understanding of insect multimodal and multicomponent communication and its rich ecological context. These then must be evaluated in the larger frame of entomogenic climate change.\n\nIn addition to concerted research in bioacoustics, micro-ecological symbiosis and dynamics, and insect social organizations, these areas, in conjunction with the field of chemical ecology, must be integrated into a broader view of multiscale population, evolutionary, and climate dynamics. In this sense, the birth of chemical ecology serves as an inspiration. It grew out of an interdisciplinary collaboration between biology and chemistry. It is precisely this kind of intentional cooperation between disciplines\u2014but now over a greater range of scales\u2014that will most likely lead to new strategies for monitoring and defense against what seems to be a growing threat to the world's forests and ultimately to humanity itself.\n\n# Acknowledgments\n\nThe authors thank Dawn Sumner, Jim Tolisano, Richard Hofstetter, Jayne Yack, Reagan McGuire, and Bob Harrill for helpful discussions. 
This work was partially supported by the Art and Science Laboratory via a grant from the Delle Foundation and the Network Dynamics Program, funded by Intel Corporation, at UCD and the Santa Fe Institute.","meta":{"dup_signals":{"dup_doc_count":57,"dup_dump_count":48,"dup_details":{"curated_sources":3,"2023-14":2,"2022-49":1,"2021-25":1,"2020-40":1,"2020-34":1,"2020-16":1,"2019-35":1,"2019-13":1,"2019-04":1,"2018-51":1,"2018-47":1,"2018-26":1,"2018-05":1,"2017-43":1,"2017-34":1,"2017-26":1,"2017-17":1,"2017-09":1,"2017-04":1,"2016-50":1,"2016-44":1,"2016-40":1,"2016-36":1,"2016-30":1,"2016-22":1,"2016-18":1,"2016-07":1,"2015-48":1,"2015-40":1,"2015-35":1,"2015-32":1,"2015-27":1,"2015-22":1,"2015-14":1,"2014-52":1,"2014-49":2,"2014-42":3,"2014-41":2,"2014-35":2,"2014-23":1,"2014-15":2,"2015-18":1,"2015-11":1,"2014-10":1,"2013-48":1,"2013-20":1,"2017-13":1}},"filename":"out\/q-bio0612019_extract_itc.tex.md"},"subset":"arxiv"} +{"text":"abstract: We analyze the language learned by an agent trained with reinforcement learning as a component of the ActiveQA system\u00a0. In ActiveQA, question answering is framed as a reinforcement learning task in which an agent sits between the user and a black box question-answering system. The agent learns to reformulate the user's questions to elicit the optimal answers. It probes the system with many versions of a question that are generated via a sequence-to-sequence question reformulation model, then aggregates the returned evidence to find the best answer. This process is an instance of *machine-machine* communication. The question reformulation model must adapt its language to increase the quality of the answers returned, matching the language of the question answering system. We find that the agent does not learn transformations that align with semantic intuitions but discovers through learning classical information retrieval techniques such as tf-idf re-weighting and stemming.\nauthor: Christian Buck \n`firstname.lastname@example.com` Jannis Bulian \n`firstname.lastname@example.com` Massimiliano Ciaramita \n`firstname.lastname@example.com` Wojciech Gajewski \n`email@example.com` Andrea Gesmundo \n`firstname.lastname@example.com` Neil Houlsby \n`firstname.lastname@example.com` Wei Wang \n`email@example.com` \n \nGoogle\nbibliography: nips.bib\ntitle: Analyzing Language Learned by an \n Active Question Answering Agent\n\n# Introduction\n\npropose a reinforcement learning framework for question answering, called *active question answering* (ActiveQA), that aims to improve answering by systematically perturbing input questions (cf.\u00a0). Figure\u00a0 depicts the generic agent-environment framework. The agent (AQA) interacts with the environment (E) in order to answer a question ($q_0$). The environment includes a question answering system (Q&A), and emits observations and rewards. A state $s_t$ at time $t$ is the sequence of observations and previous actions generated starting from $q_0$: $s_t=x_0,u_0,x_1,\\ldots,u_{t-1},x_t$, where $x_i$ includes the question asked ($q_{i}$), the corresponding answer returned by the QA system ($a_i$), and possibly additional information such as features or auxiliary tasks. The agent includes an action scoring component (U), which produced and action $u_t$ by deciding whether to submit a new question to the environment or to return a final answer. Formally, $u_t\\in \\mathcal{Q}\\cup \\mathcal{A}$, where $\\mathcal{Q}$ is the set of all possible questions, and $\\mathcal{A}$ is the set of all possible answers. 
The agent relies on a question reformulation system (QR), that provides candidate follow up questions, and on an answer ranking system (AR), which scores the answers contained in $s_t$. Each answer returned is assigned a reward. The objective is to maximize the expected reward over a set of questions.\n\npresent a simplified version of this system with three core components: a question reformulator, an off-the-shelf black box QA system, and a candidate answer selection model. The question reformulator is trained with policy gradient\u00a0 to optimize the F1 score of the answers returned by the QA system to the question reformulations in place of the original question. The reformulator is implemented as a sequence-to-sequence model of the kind used for machine translation\u00a0. When generating question reformulations, the action-space is equal to the size of the vocabulary, typically $16k$ sentence pieces.[^1] Due to this large number of actions we warm start the reformulation policy with a monolingual sequence-to-sequence model that performs generic paraphrasing. This model is trained using the *zero-shot* translation technique\u00a0 on a large multilingual parallel corpus\u00a0, followed by regular supervised learning on a smaller monolingual corpus of questions\u00a0.\n\nThe reformulation and selection models form a trainable agent that seeks the best answers from the QA system. The reformulator proposes $N$ versions $q_i$ of the input question $q_0$ and passes them to the environment, which provides $N$ corresponding answers, $a_i$. The selection model scores each triple $(q_0,q_i,a_i)$ and returns the top-scoring candidate.[^2]\n\nCrucially, the agent may only query the environment with natural language questions. Thus, ActiveQA involves a machine-machine communication process inspired by the human-machine communication that takes place when users interact with digital services during information seeking tasks. For example, while searching for information on a search engine users tend to adopt a keyword-like *'queryese'* style of questioning. The AQA agent proves effective at reformulating questions on SearchQA\u00a0, a large dataset of complex questions from the *Jeopardy!* game. For this task BiDAF is chosen for the environment\u00a0, a deep network built for QA which has produced state-of-the-art results. Compared to a QA system that forms the environment using only the original questions, AQA outperforms this baseline by a wide margin, 11.4% absolute F1, thereby reducing the gap between machine (BiDAF) and human performance by 66%.\n\nHere we perform a qualitative analysis of this communication process to better understand what kind of language the agent has learned. We find that while optimizing its reformulations to adapt to the language of the QA system, AQA diverges from well structured language in favour of less fluent, but more effective, classic information retrieval (IR) query operations. These include term re-weighting (tf-idf), expansion and morphological simplification\/stemming. We hypothesize that the explanation of this behaviour is that current machine comprehension tasks primarily require ranking of short textual snippets, thus incentivizing relevance more than deep language understanding.\n\n# Analysis of the Agent's Language\n\nWe analyze input questions and reformulations on the $12k$ example development partition of the SearchQA dataset. Our goal is to gain insights on how the agent's language evolves during training via policy gradient. 
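The statistics reported below for questions and rewrites (length, mean term frequency per question, median document frequency of question terms, and the presence of repeated stems) can be computed along the following lines. This is a minimal illustrative sketch, not the code used in the study: the tokenization, the choice of stemmer, and the way answer contexts are supplied are all assumptions.

```python
from collections import Counter
from statistics import mean, median


def question_stats(question_tokens, context_doc_freq):
    # question_tokens: lower-cased tokens of one question.
    # context_doc_freq: placeholder mapping from a term to the number of
    # answer contexts (documents) that contain it.
    counts = Counter(question_tokens)
    length = len(question_tokens)
    # Mean term frequency: average number of occurrences per distinct term
    # (a value of 1.0 means no repeated terms).
    mean_tf = mean(counts.values())
    # Median document frequency of the question terms; the median limits the
    # influence of very frequent outliers such as punctuation.
    med_df = median(context_doc_freq.get(t, 0) for t in question_tokens)
    return length, mean_tf, med_df


def has_repeated_stem(question_tokens, stem):
    # True if two distinct tokens of the question share the same stem.
    # `stem` can be any stemming function, e.g. nltk.stem.PorterStemmer().stem
    # (an assumption; any reasonable stemmer serves for this check).
    seen = {}
    for tok in question_tokens:
        s = stem(tok)
        if s in seen and seen[s] != tok:
            return True
        seen[s] = tok
    return False
```

Aggregating these per-question values over the development partition yields the kind of distributions discussed in the remainder of this section.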
It is important to note that in the SearchQA dataset the original *Jeopardy!* clues have been preprocessed by lower-casing and stop word removal. The resulting preprocessed clues that form the sources (inputs) for the sequence-to-sequence reformulation model resemble more keyword-based search queries than grammatical questions. For example, the clue *Gandhi was deeply influenced by this count who wrote \"War and Peace\"* is simplified to *gandhi deeply influenced count wrote war peace*.\n\n## The Language of SearchQA Questions\n\nFigure\u00a0 summarizes statistics of the questions and rewrites which may shed some light on how the language changes. The (preprocessed) SearchQA questions contain 9.6 words on average. They contain few repeated terms, computed as the mean term frequency (TF) per question. The average is 1.03, but for most of the queries TF is 1.0, i.e.\u00a0no repetitions. We also compute the median document frequency (DF) per query, where a document is the context from which the answer is selected.[^3] DF gives a measure of how informative the question terms are.\n\n## The Language of the Base NMT Model\n\nWe first consider the top hypothesis generated by the pre-trained NMT reformulation system, before reinforcement learning (Base-NMT). This system is trained with full supervision, using a large multilingual and a small monolingual dataset. The Base-NMT rewrites differ greatly from their sources. They are shorter, 6.3 words on average, and have even fewer repeated terms (1.01). Interestingly, these reformulations are mostly syntactically well-formed questions. For example, the clue above becomes *Who influenced count wrote war?*.[^4] Base-NMT improves structural language quality by properly reinserting dropped function words and wh-phrases. We also verified the increased fluency by using a large language model and found that the Base-NMT rewrites are 50% more likely than the original questions. The bottom right hand plot in Figure\u00a0 summarizes the language model distributions (LM\u00a0WordLogP). The plot shows the average per-token language model negative log probabilities; a lower score indicates greater fluency. Although the distributions overlap to a great extent due to the large variance across questions, the differences in means are significant.\n\nWhile more fluent, the Base-NMT rewrites involve rarer terms, as indicated by the decrease in DF. This is probably due to a domain mismatch between SearchQA and the NMT training corpus.\n\n## The Language of the AQA Agent\n\nWe next consider the top hypothesis generated by the AQA question reformulator (AQA-QR) after the policy gradient training. The AQA-QR rewrites are those whose corresponding answers are evaluated as *AQA Top Hyp.* in\u00a0. Note, these single rewrites alone outperform the original SearchQA queries by a small margin (+2% on test). We analyze the top hypothesis instead of the final output of the full AQA agent to avoid confounding effects from the answer selection step. These rewrites look different from both the Base-NMT and the SearchQA ones. For the example above AQA-QR's top hypothesis is *What is name gandhi gandhi influence wrote peace peace?*. Surprisingly, 99.8% start with the prefix *What is name*. The second most frequent is *What country is* (81 times), followed by *What is is* (70) and *What state* (14). This is puzzling as it happens only for 9 Base-NMT rewrites, and never in the original SearchQA questions. 
We speculate it might be related to the fact that virtually all answers involve names, of named entities (Micronesia) or generic concepts (pizza). AQA-QR's rewrites are visibly less fluent than both the SearchQA and the Base-MT counterparts. In terms of language model probability they are less likely than both SearchQA and Base-NMT.[^5] However, they have more repeated terms (1.2 average TF), are significantly longer (11.9) than the Base-NMT initialization and contain more informative context terms (lower DF) than SearchQA questions.\n\nAdditionally, AQA-QR's reformulations contain morphological variants in 12.5% of cases. The number of questions that contain multiple tokens with the same stem doubles from SearchQA to AQA-QR. Singular forms are preferred over plurals. Morphological simplification is useful because it increases the chance that a word variant in the question matches the context.\n\n# Conclusions: Rediscovering IR?\n\nRecently, trained chatbots that negotiate via language utterances in order to complete a task. They report that the agent's language diverges from human language if there is no incentive for fluency in the reward function. Our findings seem related. The fact that the questions reformulated by AQA do not resemble natural language is not due to the keyword-like SearchQA input questions, because Base-NMT is capable of producing fluent questions from the same input.\n\nAQA learns to re-weight terms by focusing on informative (lower DF) terms while increasing term frequency (TF) via duplication. At the same time it learns to modify surface forms in ways akin to stemming and morphological analysis. Some of the techniques seem to adapt also to the specific properties of current deep QA architectures such as character-based modelling and attention. Sometimes AQA learns to generate semantically nonsensical, novel, surface term variants; e.g., it might transform the adjective *dense* to *densey*. The only justification for this is that such forms can be still exploited by the character-based BiDAF question encoder. Finally, repetitions can directly increase the chances of alignment in the attention components.\n\nWe hypothesize that there is no incentive for the model to use human language due to the nature of the task. AQA learns to ask BiDAF questions by optimizing a language that increases the likelihood of BiDAF *extracting* the right answer. argue that reading comprehension systems are not capable of significant language understanding and fail easily in adversarial settings. We suspect that current machine comprehension tasks involve mostly simple pattern matching and relevance modelling. As a consequence deep QA systems behave as sophisticated ranking systems trained to sort snippets of text from the context. As such, they resemble document retrieval systems which incentivizes the (re-)discovery of IR techniques that have been successful for decades\u00a0.\n\n# Examples\n\n```latex\n\\begin{footnotesize}\n\\begin{table*}[h!]\n \\centering\n \\renewcommand{\\arraystretch}{1.0}\n \\begin{tabular}{l|p{12cm}}\n \\toprule\n Jeopardy! 
& People of this nation AKA Nippon wrote with a brush, so painting became the preferred form of artistic expression \\\\\n SearchQA & people nation aka nippon wrote brush , painting became preferred form artistic expression \\\\\n Base-NMT & Aka nippon written form artistic expression?\\\\\n AQA-QR & What is name did people nation aka nippon wrote brush expression?\\\\\n AQA-full & people nation aka nippon wrote brush , painting became preferred form artistic expression \\\\\n \\midrule\n Jeopardy! & Michael Caine \\& Steve Martin teamed up as Lawrence \\& Freddy, a couple of these, the title of a 1988 film\\\\\n SearchQA & michael caine steve martin teamed lawrence freddy , couple , title 1988 film \\\\\n Base-NMT & Who was lawrence of michael caine steve martin? \\\\\n AQA-QR & What is name is name is name michael caine steve martin teamed lawrence freddy and title 1988 film?\\\\\n AQA-full & What is name is name where name is name michael caine steve martin teamed lawrence freddy and title 1988 film key 2000 ? \\\\\n \\midrule\n Jeopardy! & Used underwater, ammonia gelatin is a waterproof type of this explosive \\\\\n SearchQA & used underwater , ammonia gelatin waterproof type explosive\\\\\n Base-NMT & Where is ammonia gelatin waterproof?\\\\\n AQA-QR & What is name is used under water with ammonia gelatin water waterproof type explosive? \\\\\n AQA-full & used underwater , ammonia gelatin waterproof type explosive \\\\\n \\midrule\n Jeopardy! & The Cleveland Peninsula is about 40 miles northwest of Ketchikan in this state\\\\\n SearchQA & cleveland peninsula 40 miles northwest ketchikan state \\\\\n Base-NMT & The cleveland peninsula 40 miles? \\\\\n AQA-QR & What is name is cleveland peninsula state northwest state state state?\\\\\n AQA-full & What is name are cleveland peninsula state northwest state state state ?\\\\\n \\midrule\n Jeopardy! & Tess Ocean, Tinker Bell, Charlotte the Spider \\\\\n SearchQA & tess ocean , tinker bell , charlotte spider \\\\\n Base-NMT & What ocean tess tinker bell? \\\\\n AQA-QR & What is name tess ocean tinker bell link charlotte spider?\\\\\n AQA-full & What is name is name tess ocean tinker bell spider contain charlotte spider contain hump around the world winter au to finish au de mon moist \\\\\n \\midrule\n Jeopardy! & During the Tertiary Period, India plowed into Eurasia \\& this highest mountain range was formed\\\\\n SearchQA & tertiary period , india plowed eurasia highest mountain range formed \\\\\n Bas-NMT & What is eurasia highest mountain range? \\\\\n AQA-QR & What is name were tertiary period in india plowed eurasia? \\\\\n AQA-full & tertiary period , india plowed eurasia highest mountain range formed\\\\\n \\midrule\n Jeopardy! & The melody heard here is from the opera about Serse, better known to us as this \"X\"-rated Persian king\\\\\n SearchQA & melody heard opera serse , better known us x rated persian king\\\\\n Base-NMT& Melody heard opera serse thing?\\\\\n AQA-QR & What is name melody heard opera serse is better persian king?\\\\\n AQA-full & What is name is name melody heard opera serse is better persian king persian K ?\\\\\n \\midrule\n Jeopardy! & A type of humorous poem bears the name of this Irish port city \\\\\n SearchQA & type humorous poem bears name irish port city \\\\\n Base-NMT & Name of humorous poem bears name? \\\\\n AQA-QR & What is name is name humorous poem poem bear city city city? \\\\\n AQA-full & What is name is name were humorous poem poem bears name city city city ? 
\\\\\n \\bottomrule\n \\end{tabular}\n \\label{app1}\n\\end{table*}\n\n\n\n\\end{footnotesize}\n```\n\n[^1]: \n\n[^2]: For more details see\u00a0.\n\n[^3]: We use the median instead of the mean to reduce the influence of frequent outliers, such as commas, on the statistics. The mean DF is 460.\n\n[^4]: More examples can be found in Appendix\u00a0.\n\n[^5]: To compute meaningful language model scores we remove the prefix \"What is name\" from all queries, because it artificially inflates the fluency measure, due to the high frequency unigrams and bigrams.","meta":{"dup_signals":{"dup_doc_count":11,"dup_dump_count":2,"dup_details":{"curated_sources":2,"unknown":9}},"filename":"out\/1801.07537_extract_arxiv.tex.md"},"subset":"arxiv"} +{"text":"abstract: Phagocytosis is the fundamental cellular process by which eukaryotic cells bind and engulf particles by their cell membrane. Particle engulfment involves particle recognition by cell-surface receptors, signaling and remodeling of the actin cytoskeleton to guide the membrane around the particle in a zipper-like fashion. Despite the signaling complexity, phagocytosis also depends strongly on biophysical parameters, such as particle shape, and the need for actin-driven force generation remains poorly understood. Here, we propose a novel, three-dimensional and stochastic biophysical model of phagocytosis, and study the engulfment of particles of various sizes and shapes, including spiral and rod-shaped particles reminiscent of bacteria. Highly curved shapes are not taken up, in line with recent experimental results. Furthermore, we surprisingly find that even without actin-driven force generation, engulfment proceeds in a large regime of parameter values, albeit more slowly and with highly variable phagocytic cups. We experimentally confirm these predictions using fibroblasts, transfected with immunoreceptor Fc$\\gamma$RIIa for engulfment of immunoglobulin G-opsonized particles. Specifically, we compare the wild-type receptor with a mutant receptor, unable to signal to the actin cytoskeleton. Based on the reconstruction of phagocytic cups from imaging data, we indeed show that cells are able to engulf small particles even without support from biological actin-driven processes. This suggests that biochemical pathways render the evolutionary ancient process of phagocytic highly robust, allowing cells to engulf even very large particles. The particle-shape dependence of phagocytosis makes a systematic investigation of host-pathogen interactions and an efficient design of a vehicle for drug delivery possible.\nauthor: Sylvain Tollis$^{1, 2,}$, Anna. E. Dart$^{2, 3}$, George Tzircotis$^{2, 3}$, and Robert G.\u00a0Endres$^{1, 2, *}$\ntitle: The zipper mechanism in phagocytosis: energetic requirements and variability in phagocytic cup shape\n\n# Background\n\nPhagocytosis is the ancient, evolutionarily conserved process by which eukaryotic cells bind, engulf, and destroy particles and cells larger than 0.5$\\mu$m in diameter . The importance of phagocytosis is derived from its two main functions: (1) a feeding mechanism in single-cell organisms , and (2) the clearance of pathogens, apoptotic and senescent cells from our body by immune cells . As part of our immune defense, phagocytosis is mainly performed by professional phagocytes, including macrophages, neutrophils, and dendritic cells. Initiation of phagocytosis occurs with recognition of the target particle either directly or via an opsonising molecule. 
For instance the Fc portion of immunoglobulin G (IgG) is recognized by the cell-surface receptor Fc$\\gamma$RIIa . Ligand-receptor binding triggers intracellular signaling , resulting in remodeling of the actin cytoskeleton and coherent growth of cell membrane around the particle to form the phagocytic cup . Eventually, the leading edge of the growing cup closes, and a membrane vesicle enclosing the particle (phagosome) moves inside the cell. Subsequently, the phagosome fuses with vesicles containing enzymes , acids , and oxygen radicals to destroy the particle. \nThe biochemical pathways involved in phagocytosis are complex. Dozens of cell-surface receptors contribute to the recognition of a large variety of ligand molecules and subsequent particle engulfment . The Fc$\\gamma$ receptor (Fc$\\gamma$R) and complement receptor 3 (CR3) of the integrin receptor family are the most widely studied and understood receptors involved in phagocytosis. Fc$\\gamma$R-mediated phagocytosis proceeds through membrane protrusions and leads to thin cups , whereas in CR3-mediated phagocytosis, particles appear to sink into the cell . Spreading of the cell membrane over the particle involves actin-driven cell-shape changes similar to the processes involved in cell migration and adhesion . Specifically for Fc$\\gamma$R, binding to an IgG-opsonized particle results in receptor clustering and recruitment of small GTPases of the Rho family, which, via proteins of the WASP family, activate the Arp2\/3 complex . The latter promotes branching of actin filaments, leading to an increase in the number of uncapped ends and to an isotropic growth of the actin network . Additionally, the phagocytic cup has been shown to be enriched in gelsolin , coronin , and other regulators of actin polymerization. All in all, this complex signaling pathway involves 100-1000 different types of molecules , rendering mathematical modeling at the molecular level impossible. \nDespite the huge biochemical complexity, the engulfment process shows a strong dependence on simple biophysical parameters. First, it relies on the availability of extra membrane at the phagocytic cup , provided by delivery of membrane vesicles or unwrinkling of membrane folds . Second, completion of phagocytic uptake depends on the shape of the particle and, interestingly, on the initial orientation of the particle on the cell surface . For instance, experiments demonstrate that elongated spheroid polystyrene particles coated with IgG are more efficiently engulfed when presented to the phagocyte with their tip first. Third, a recent study by one of the authors demonstrates that the biophysical requirements for phagocytosis lead to either complete phagocytosis or stalled cups due to the presence of a mechanical bottleneck . Interestingly, the same study shows that engulfment appears to even proceed in cells treated with (modest amounts of) cytochalasin D, an inhibitor of actin polymerization, indicating that biochemical pathways may not always be necessary for this initial stage of phagocytosis. \nThe mechanism of phagocytosis is only partially understood, with key insights provided more than three decades ago. In the 1970's, Griffin and his collaborators demonstrated that incomplete coating of particles with ligand results in only partial uptake. This indicated that phagocytic uptake occurs via successive zipper-like ligand-receptor binding (Figure 1A), and not by an all-or-nothing mechanism triggered at the onset of phagocytosis. 
The zipper mechanism is the underlying assumption in a number of recent modeling works in phagocytosis and endocytosis , mainly addressing the influence of the cell-membrane tension and ligand-receptor bond density on engulfment. Despite the general acceptance of the zipper mechanism, many of its biophysical requirements are insufficiently understood. Questions, so far unanswered, include what the energetic requirements of the zipper mechanism are, specifically what role actin polymerization plays in its progression during phagocytosis, and also whether the zipper mechanism can explain the particle-shape dependence of phagocytosis. Previous models were unable to fully address the particle-shape dependence, as they assume rotational symmetry around the axis connecting cell and particle. Additionally, large particle-to-particle variation in cup growth and cell-to-cell variation in the related process of endocytosis point towards the importance of stochasticity during the uptake, not captured in previous deterministic approaches. \nRecent experiments provide new insights into the biophysical mechanism for driving the membrane around the particle, suggesting a ratchet-type mechanism. Once started, phagocytosis progresses unidirectionally and irreversibly . This irreversible membrane progression is further supported by the loss of lipid and protein mobility at the phagocytic cup, observed using fluorescence recovery after photobleaching (FRAP) . While several models proposed mechanisms of force generation by actin polymerization (see and references therein), recent experiments based on fluorescent speckle microscopy demonstrate that actin does not directly push the membrane outwards. Instead, by filling gaps provided by membrane fluctuations (or other types of membrane movement), actin polymerization prevents the membrane from moving backwards like a ratchet . The relevance of such Brownian ratchets in biology has previously been emphasized . The question is if a ratchet mechanism, together with energetic restrictions in membrane bending and stretching, can naturally lead to phagocytic uptake and account for the shape-dependence of phagocytosis. \n\nIn this work, we propose a ratchet-like biophysical model for the zipper mechanism. This model differs from previous works in that it is, to our knowledge, the first fully three-dimensional stochastic model of phagocytic engulfment. Specifically, thermal membrane fluctuations, assumed to play a major role in our model, provide the energy source to locally deform the membrane and to build further ligand-receptor bonds for zippering the membrane around the particle. Actin polymerization makes ligand-receptor bonds effectively *irreversible*, *i.e.* reinforced and stabilized for a significant amount of time. To investigate the role of actin, we compare cup progression for the regular *active* zipper with a *passive* zipper model in which ligand-receptor binding remains specific and strong but *reversible* due to the absence of actin polymerization. \nInterestingly, we find that the passive zipper also leads to engulfment of small particles, rendering phagocytosis highly robust. However, such passive engulfment is generally slower and produces much more variable phagocytic cups than the active zipper. Furthermore, our computer simulations lead to successful phagocytic engulfment in a broad range of parameters values, including different particle sizes. 
For non-spherical particles, completion of engulfment depends strongly on particle shape and orientation. Our model further predicts that cup shape invariably depends on membrane biophysical parameters, in particular surface tension and cell-volume constraint. \nTo test the predicted difference between the active and passive zippers, we experimentally implement the two different types of zippers using COS-7 fibroblasts which, after transfection with GFP-tagged Fc$\\gamma$ receptor, phagocytoze IgG-coated polystyrene particles. Specifically, we performed phagocytic assays under three different *conditions*: (1) cells expressing wild-type Fc$\\gamma$R for the active zipper (WT-Fc$\\gamma$R), (2) cells expressing a signaling-dead mutant receptor (Y282F\/Y298F-Fc$\\gamma$R), which specifically binds IgG ligand but is unable to signal to the actin cytoskeleton , and (3) cells expressing WT-Fc$\\gamma$R and treated with cytochalasin D (WT-Fc$\\gamma$R$+$CytoD). The last two conditions represent two versions of the passive zipper due to the absence of actin polymerization in phagocytic cups. To compare with our model, we systematically analyze confocal microscopy images, and quantitatively estimate cup variability for the three different conditions using *small* (1.5 $\\mu m$ radius) and *large* (3 $\\mu$m radius) particles. Consistently with our simulations, phagocytic cups develop more slowly and are significantly more variable in the absence of actin polymerization. Our results provide new insights into the robustness of phagocytosis, as well as the role of bacterial cell shape in host-pathogen interactions.\n\n# Results and Discussion\n\n## Ratchet model for the zipper mechanism\n\nOur model is based on the following experimental observations. Engulfment of quasi-spherical particles by neutrophils progresses continuously without significant pause or reversal, indicating that ligand-receptor binding is essentially irreversible . This irreversibility is further supported by FRAP and single-molecule experiments, which show that lipids and proteins in phagocytic cups, as well as ligand-bound Fc-receptors are immobilized in an actin-dependent fashion . Additional support for the notion of irreversible uptake was recently determined in a related context . Fluorescent speckle microscopy of actin flow and image analysis during cell migration show that the membrane at the leading edge protrudes first, followed by actin polymerization to fill the gap between the membrane and the actin cortex. Such actin polymerization is mainly restricted to the leading edge due to signaling by receptors and\/or localization of small GTPases of the Rho family . The role of actin polymerization in phagocytosis is hence to stabilize ligand-receptor bonds and to rectify membrane movements in a ratchet-like fashion, leading to unidirectional movement of the leading edge of the engulfing cell. \nFigure 1A introduces the general concept of the zipper mechanism in phagocytosis, and Figure 1B summarizes our ratchet model for this mechanism. The cell membrane and the actin cortex are described by a Helfrich-type energy function , including contributions from ligand-receptor binding, membrane bending and stretching, as well as a cell-volume constraint. Chosen membrane parameters effectively describe the cell plasma membrane with its underlying actin cortex. The model was implemented using finite-temperature Monte Carlo simulations of the discretized cell membrane (see *Methods* for details). 
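For orientation, an energy functional with the contributions listed above can be written generically as

$$E \;=\; \int_{\mathcal{S}} \left( 2\kappa H^{2} + \sigma \right)\mathrm{d}A \;+\; \lambda_{V}\left( V - V_{0} \right)^{2} \;-\; \epsilon_{b} N_{b},$$

where $\kappa$ is the bending rigidity, $H$ the local mean curvature, $\sigma$ the surface tension, $\lambda_{V}$ the strength of the cell-volume constraint with target volume $V_{0}$, $\epsilon_{b}$ the energy gained per ligand-receptor bond, and $N_{b}$ the number of bonds. This generic form is given only to fix ideas; the precise discretization and the parameter values used in the simulations are those specified in *Methods*.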
Briefly, the algorithm proposes random, thermally generated membrane fluctuations (trial moves), which are either accepted or rejected depending on the change in total energy during the move. When a random membrane fluctuation brings the cell membrane in contact with the particle, this fluctuation is likely accepted due to energetically favorable ligand-receptor binding. Once accepted, this fluctuation is made irreversible as a result of signaling and actin polymerization. In contrast, a membrane fluctuation far away from the particle is less likely to be accepted. Even if accepted, the fluctuation is not made irreversible and hence may retract at a later time (see Supplementary Fig. S1). Hence, in our model the actin network only supports membrane fluctuations which lead to progression of the engulfing zipper as a result of signaling.\n\n## Dependence of phagocytic cup shape on membrane biophysical parameters\n\nUsing our model for the zipper mechanism, we have successfully simulated phagocytic engulfment in a broad range of parameter values (see Figure 2). Figure 2A shows two different characteristic cup shapes we obtained. Low surface tension (*i.e.* low energy cost for stretching the membrane and underlying actin cortex), and tight cell-volume constraint (*i.e.* high energy cost for increasing the cell volume), lead to a thin phagocytic cup since a thin cup requires extra membrane but little extra volume. In contrast, weak volume constraint and\/or high surface tension produce a broad cup. Based on the parameters explored, we chose intermediate values for both surface tension and cell-volume constraint as our Standard Parameters (SP) for the remainder of our simulations in order to produce realistic cup shapes (see *Methods* for details). Figure 2B shows that most parameter values can be changed independently by at least one order of magnitude, without negatively affecting engulfment completion. Note that changing simultaneously several parameters may affect engulfment more drastically. Our simulations also show that cup shape depends on the kinetics of engulfment, determined by membrane fluctuations and therefore temperature (see Supplementrary Fig. S2). Additionally, preventing thermal fluctuations (by setting the temperature to zero Kelvin) during a simulation stops cup progression. This indicates that in our model membrane fluctuations are indeed required to bring receptors in close contact with ligand molecules on the particle, emphasizing their important role in the ratchet mechanism.\n\n## Active versus passive uptake and the role of actin\n\nAlthough phagocytosis generally involves active processes such as actin polymerization in the cup (active zipper) , recent reports indicate that phagocytosis may still work in an actin-independent manner. Indeed, phagocytic uptake was observed despite treating phagocytes with (modest amounts of) cytochalasin D . Hence, ligand-receptor binding may be sufficient in guiding the cell membrane around the particle under certain conditions (passive zipper). To investigate the energetic requirements of the zipper mechanism, we implemented simulations of the passive zipper. In these simulations, ligand-receptor bonds are not stabilized by actin polymerization and can unbind at later times, *i.e.* remain reversible. Hence, engulfment may still progress if the energetic cost of stretching and deforming the membrane is offset by the ligand-receptor binding energy in the presence of thermal membrane fluctuations. 
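The contrast between the two zipper variants can be made concrete with a minimal sketch of a Metropolis-type acceptance step consistent with the description above. Everything here is illustrative: the object `membrane`, its methods, and the function name are placeholders standing in for the discretized membrane model of *Methods*, not the actual simulation code.

```python
import math
import random


def metropolis_step(membrane, particle, kT, active_zipper):
    # One trial move of the discretized membrane (illustrative sketch only).
    move = membrane.propose_fluctuation()  # random, thermally generated fluctuation

    if active_zipper and membrane.breaks_frozen_bonds(move):
        return False  # the ratchet: bonds stabilized by actin cannot be undone

    # Energy change of the trial move, including bending, stretching, the
    # cell-volume constraint, and ligand-receptor bonds gained or lost.
    dE = membrane.energy_change(move, particle)

    # Metropolis criterion: downhill moves are always accepted, uphill moves
    # with Boltzmann probability exp(-dE / kT).
    if dE <= 0.0 or random.random() < math.exp(-dE / kT):
        membrane.accept(move)
        if active_zipper:
            # Signaling and actin polymerization make newly formed
            # ligand-receptor bonds irreversible (active zipper).
            membrane.freeze(membrane.new_bonds(move, particle))
        # Passive zipper: bonds remain reversible and may unbind in later moves.
        return True

    membrane.reject(move)
    return False
```

In the limit $kT \rightarrow 0$ only energy-lowering moves are accepted, consistent with the observation above that suppressing thermal fluctuations stalls cup progression.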
\nFigure 3 (*left*) shows that engulfment of small ($1.5\\mu$m radius) particles by the passive zipper leads to more variable phagocytic cup shapes than engulfment by the active zipper. For the active zipper, random membrane fluctuations are rectified by irreversible ligand-receptor binding due to actin polymerization. This leads to uniform progression of the cell membrane all around the particle at approximately the same speed (Figure 3A). In contrast, engulfment by the passive zipper occurs through binding of large membrane ruffles which eventually enclose the particle (Figure 3C). The variability of the phagocytic cup may be a measure of the respective contributions of active and passive processes in engulfment progression. \nFigure 3 (*right*) compares time courses of the membrane energy and progression of uptake for the active and passive zippers. We found that in both cases, membrane energy decreases rapidly at the very beginning of the uptake process due to energetically favorable ligand-receptor binding without large, energetically unfavorable deformations of the cell membrane. After this short initial period, the total energy increases with simulation time. This increase is much more pronounced for the active zipper, which stabilizes energetically unfavorable random membrane fluctuations by actin polymerization. In contrast, for the passive zipper such high-energy membrane deformation may not last over time. The slower increase in energy for the passive zipper correlates with a slower engulfment. For the simulations shown, engulfment for the active zipper is approximately twice to three times as fast as for the passive zipper, although the latter eventually engulfs the particle. However, the difference between active and passive engulfment depends on biophysical parameters, and may be reduced for particular choices of the parameters values, *e.g.* lower surface tension and\/or stronger ligand-receptor binding.\n\n## Particle size matters for passive, not for active zipper\n\nExperiments show that phagocytosis is relatively insensitive to particle size . Using our model for the active zipper, we simulated engulfment of spherical particles with different radii ranging from $1.2$ to $3.8\\mu$m. Figure 2C shows that engulfment progresses normally for small and large particles. Hence the active zipper mechanism is sufficiently robust to allow engulfment of differently sized particles using the same set of biophysical parameters, although engulfment of large particles requires more time. Noticeably, large particles (in general, with radius larger than $2.5\\mu$m) were taken up via more regular phagocytic cups than small particles (with radius $1.5\\mu$m or smaller), indicating that active processes may be required for engulfment of large particles. To confirm this observation we have simulated engulfment of large $3\\mu$m-radius particles by both the active and the passive zipper, shown in Supplementary Fig. S5. While the active zipper resulted in complete uptake of the particle, the passive zipper only engulfed a few percent of the particle's surface area. 
Thus the difference in phagocytic efficiency between the two zipper types was exacerbated for large particles, reflecting the importance of actin polymerization for engulfment of large particles.\n\n## Experimental test of model predictions\n\nTo test our model predictions and to specifically compare active with passive engulfment for small and large particles, we transfected COS-7 cells with either wild-type Fc$\\gamma$R (WT-Fc$\\gamma$R) or a signaling-dead mutant receptor (Y282F\/\u00a0Y298F-Fc$\\gamma$R). Cells expressing the wild-type receptor are expected to perform active engulfment, whereas cells expressing the signaling-dead mutant receptor are expected to perform passive engulfment. As a control, passive engulfment is additionally implemented by treating cells expressing WT-Fc$\\gamma$R with $0.2\\mu$M of cytochalasin D (WT-Fc$\\gamma$R$+$CytoD), which prevents actin polymerization (see *Methods*). Synchronized phagocytosis assays using small (1.5 $\\mu$m radius) and large (3 $\\mu$m radius) IgG-opsonized polystyrene particles were carried out, and, after fixation, receptor localization in phagocytic cups was visualized by fluorescence confocal microscopy. Cells were imaged at different time points during phagocytosis, and at each time point, three to eight imaged cells were each engulfing simultaneously between four and twenty particles (Figure 4A). Consequently, for each condition we analyzed at least seventy particles. \n\nTo test whether passive engulfment leads to more variable cups than active engulfment, we developed an image-analysis method illustrated in Figure 4B-F. The cup shape varibility was quantified by the standard deviation of the distribution of cell-membrane (Fc$\\gamma$R-GFP fluorescence) height around the particle, divided by the square root of the average membrane height. The unit of membrane height is given by the distance ($0.4\\mu$m) between consecutive confocal image planes (see *Methods*). Figure 5A shows that for small particles engulfed between 20 and 40% of their surfaces, cup variability increases from cells transfected with wild-type receptor to cells transfected with signaling-dead mutant receptor to WT-Fc$\\gamma$+CytoD cells. The lowest variability, found for cells expressing wild-type receptor, is statistically significant against both passive zipper types (Student's t-test, p-value $<0.001$). This result is consistent with model predictions: Figure 5A, *inset* shows the cup variability from simulations, revealing that the active zipper leads to significantly less variable cups. In contrast, for the ranges of engulfment between 40 and 60% (Figure 5B) and between 60 and 100% (see Supplementary Fig. S7) we observed no noticeable difference in cup variability between the three experimental conditions, while our model consistently predicts more variable cups for the passive zipper (Figure 5B, *inset*). This discrepancy may indicate that active processes such as contraction by myosin motor proteins become important at later stages of engulfment, limiting our model's full validity to the early events in phagocytosis (see *Conclusion* section). \nTo confirm that our results are independent of the specifics of the analysis method used, we also analyzed phagocytic cups with an alternative, albeit less accurate, method (see Supplementary Fig. S9). 
This method consists of determining the distribution of Fc$\\gamma$R-GFP fluorescence intensity around the particle at its equator plane, restricting the analysis to approximately half taken up particles ($30-70\\%$). The standard deviation of this distribution provides an alternative measure of the cup variability. We arrived at the same conclusion, confirming our result of the difference in cup variability (see Supplementary Fig. 10). Finally, note that temperature-induced synchronization is imperfect and may lead to variability in cup growth . However, our measures of the variability in cup shape are independent of such an effect since we include all time points together in the analysis (except for plots showing the time dependence of engulfment). \nOur experiments further show that cup shape ranges from regular to variable for all three experimental conditions, but that the frequency of different cup shapes depends on the condition. Figure 5C plots the repartition of phagocytic cups for different conditions into both regular and variable cups. Note that a cup was identified as regular if its variability was below the variability averaged over all experimental conditions. In contrast, a cup was identified as variable if its variability was above the overall average. This plot shows that in our experiments a regular cup is most likely produced by a cell expressing wild-type receptor, whereas a variable cup is most likely to be produced by a cytochalasin-D treated cell. Hence, the cup shape has universal features independent of biochemical details. Examples of a regular and a variable cup are provided in Figures 4E and F, respectively. Both cups were taken from a cytochalasin-D treated cell, confirming that a regular cup can occur under any of our experimental conditions. \nOur model also predicts that uptake by the active zipper is significantly faster than with the passive zipper (see Figures 3B and D). We experimentally tested this prediction by determining the percentage of engulfed surface area for each particle for different time points after initiation of phagocytosis, and comparing this result with our simulations, in which simulation time was matched to actual time. Figure 6A shows that cells transfected with the wild-type receptor (active zipper) engulf significantly faster (three to four times) than cells under the other two conditions (passive zippers). This result is in quantitative accordance with our model predictions (Figure 6B). Furthermore, we determined the time dependence of phagocytic uptake for large particles. The active zipper, although slower for large than for small particles, still engulfs regularly, both in experiments (Figure 6C) and simulations (Figure 6D). Note that predicted and measured time courses are in very good agreement without rescaling the time axis of the large-particle simulation. Furthermore, Figures 6C and 6D demonstrate the inability of the passive zipper to take up large particles, in both experiments and simulations. After more than 10 minutes, the average engulfed surface area remains below 20%. \n\nNote that for the time points measured, the average uptake does not exceed 50%, even for WT Fc$\\gamma$R. This is caused by the fact that some particles are not engulfed irrespective of the condition (see Supplementary Fig. S6), reducing the average percentage of engulfment. 
The proportion of almost completely engulfed particles (with engulfed surface area larger than 70%) beyond 6 minutes is represented in the inset of Figure 6A, showing that completion of engulfment is possible even for cells without actin polymerization. Note that long phagocytic assays (12, 14, and 45 minutes) were performed for cytochalasin-D treated cells only, explaining why the difference in complete engulfment with cells expressing the wild-type receptor is smaller than one may expect. \nFrom these results, we conclude that functional actin polymerization is required for fast and regular engulfment in phagocytosis. Nevertheless, in line with our model predictions, cells showing deficient actin polymerization at cups are still able to take up small particles, although more slowly and with more variable cups.\n\n## Active zipper reproduces particle-shape dependence of phagocytosis\n\nPreviously published experiments show that phagocytosis depends strongly on particle shape. In particular, elongated particles (similar to rod-shaped bacteria) are only taken up when presented to the phagocyte with their tip first . To test the particle-shape dependence of phagocytosis we conducted simulations of the active zipper using particles of different shapes while varying the initial orientation of the particle on the cell surface. Figure 2D shows the uptake of a prolate spheroid for two different orientations on the cell membrane after the same elapsed simulation time. In accordance with experimental observations , uptake is more advanced for the spheroid particle engulfed with its tip first (about 80% engulfed surface area) than for the particle attached along its major axis (about 50% engulfed in the same amount of time). This suggests a strong inhibitory effect of high local curvature on uptake. In our model this is readily attributed to the energetic cost of bending the membrane around the two highly curved ends of an elongated particle placed horizontally on the cell membrane. In line with this explanation and experiments , the spiral-shaped particle in Figure 2E is not engulfed after a duration sufficient for the engulfment of a large spherical particle of twice its volume. Hence, our simulations demonstrate that particle shape and orientation are indeed important biophysical parameters for phagocytosis.\n\n# Conclusion\n\nIn this work, we studied the biophysical requirements of the zipper mechanism, in particular the role of receptor-induced actin polymerization, and the effect of particle shape on uptake. In our model, the underlying biophysical mechanism of the zipper is an actin-driven thermal ratchet, which renders random membrane fluctuations irreversible close to the particle (Figure 1). This mechanism is supported by several recent experiments . Previously, such Brownian ratchet models were successfully applied to explain force generation by actin polymerization and motility of the pathogen *Listeria* in host cells . Our fully stochastic simulations can address for the first time the variability in particle uptake, recently noticed in phagocytic cup growth and completion of endocytosis . Implementation of our model in simulations indeed led to phagocytic engulfment for a broad range of values of membrane parameters (Figure 2), indicating exquisite robustness of the phagocytic process. However, phagocytic cup shape depends on parameter values, specifically on the ratio between surface tension and cell-volume constraint, as well as on the kinetics of engulfment.
Cells with low surface tension and\/or tight volume constraint develop thin cups (Figure 2A, *left*), characteristic of Fc$\\gamma$R-mediated phagocytosis . In contrast, cells with high surface tension produce broad cups (Figure 2A, *right*). The latter cup shape is more reminiscent of CR3-mediated phagocytosis, although for this type of phagocytosis particles are believed to sink into cells without protrusive cups . Using our model we were able to address the question whether the zipper mechanism requires an active driving force, such as provided by actin polymerization. For this purpose, we compared the regular active zipper with a passive version of the zipper. In the passive zipper, ligand-receptor bonds are as strong as for the active zipper (based on experimental observation ) but are reversible, *i.e.* are not supported by actin polymerization. We demonstrated that the passive zipper also leads to engulfment of small particles (of radius $1.5\\mu$m), although cup progression is slower and more variable (see Figure 3). Our active zipper can also reproduce the independence of uptake on particle size, in line with experimental observations . In contrast, large particles (of radius $3\\mu$m) are poorly phagocytozed by the passive zipper. We subsequently confirmed these predictions with experiments by transfecting COS-7 fibroblasts with wild-type Fc$\\gamma$R and signaling-dead mutant Y282F\/Y298F Fc$\\gamma$R (see Figures 5 and 6). While the wild-type receptor represents the active zipper, the passive zipper is implemented through the use of signaling-dead mutant receptor or treatment with cytochalasin D. Both prevent actin polymerization in the cups. Our study may indicate that ancient forms of phagocytosis were driven by physical (passive) principles, and only later in evolution biochemical regulatory pathways were added for further support and robustness. Passive phagocytosis may also become important when energy sources are scarce. Despite the robustness of phagocytosis to particle size, there appears to be a mechanical bottleneck around half-engulfment, recently observed by imaging and also predicted by our model. For slight variations in some of the parameters, our simulations of the active zipper produce either complete or significantly incomplete uptake (see Supplementary Fig. S6). Indeed, when the cup grows, deforming the membrane costs more and more energy per surface area engulfed. Beyond half-engulfment, the surface tension energy is twice as high as at the beginning of engulfment due to the membrane folding back onto itself. If this energy cannot be provided by the zipper, then cup progression stalls and the particle remains incompletely engulfed. Alternatively, the experimental data on incomplete particle uptake may be the result of particle attachment on cell-membrane areas, unable to phagocytose due to other reasons, such as unfavorable local cell-surface curvature, proximity to cell edge or nucleus, or missing proteins, lipids, and smaller molecules belonging to key signaling pathways. Further studies will be required, ideally using live-cell imaging to avoid the need for conserving the cell's internal structures by fixation . \nOur model for the zipper mechanism can also explain the strong particle-shape dependence observed in phagocytosis. Experiments show that an elongated spheroid is rapidly engulfed if the particle attaches to the cell membrane with its tip, but not if the particle attaches along its major axis . 
Furthermore, spiral-shaped particles are not phagocytosed . The strong particle-shape dependence of phagocytosis is likely of biological relevance. On the one hand, it may increase the rate of infection of host cells by pathogenic bacteria. Indeed, recent experiments show that *Mycobacterium tuberculosis* and *M. marinum* are efficiently taken up with the bacteria's tip first and later released for spreading of the infection . On the other hand, the highly curved shapes of some bacteria, *e.g.* the spiral-shaped *Helicobacter* and *Campylobacter* species, may prevent their uptake by macrophages , although injection of effector proteins can also be used by pathogens to hijack the immune or host cell's phagocytic response . Furthermore, the particle-shape dependence of phagocytosis may be exploited to improve drug delivery by enclosing active drugs in particles, whose shape prevents uptake and destruction by macrophages . \nWhile our biophysical model for the zipper mechanism is readily accessible for analysis and interpretation, the small number of model parameters makes a direct comparison with measured parameter values difficult. First, our membrane parameters such as surface tension and bending stiffness are an order of magnitude smaller than reported bulk membrane parameters (see *Methods*). This reduction is not surprising as cells regulate these parameters locally for efficient uptake . Such regulation may include lowering of the surface tension by local membrane delivery through vesicles and unfolding of membrane wrinkles , as well as changes in the lipid and protein composition in the phagocytic cup . Second, our description of ligand-receptor interaction assumes that ligand and receptor distributions are continuous and homogeneous, while experiments indicate the formation of receptor micro-clusters , possibly as part of lipid rafts . Third, our mechanism for actin polymerization based on membrane fluctuations neglects the role of the motor proteins (myosin I and II), whose role in membrane deformations has been established . Consequently, our simulations describe well the dynamics of engulfment during the first two thirds of the uptake. At later stages of uptake, experiments show that phagocytic cups close rapidly with a thin membrane protrusion , while our simulations show slow cup closure. However, taking into account myosin-driven contraction is beyond the scope of the current work. \nOur model may also be applicable to other biological systems in which a zipper-like mechanism is involved. One such example is sporulation of *Bacillus subtilis* during starvation. After asymmetric cell division, the larger mother cell engulfs the smaller forespore for spore maturation. Interestingly, the mother cell even engulfs the forespore when the cell wall is artificially removed. This process occurs in a fast, zipper-like fashion without known sources of energy . Importantly, forespore engulfment is subject to high variation. About 60% of the cells successfully complete forespore engulfment, while 40% do not complete it at all, similar to the observation of the mechanical bottleneck in phagocytosis. Other examples of engulfment may not be driven by a zipper mechanism. For instance, the penetration of red blood cells by the malaria parasite *Plasmodium* merozoite is driven by an elaborate actin machinery of the particle (the parasite), which devotes all its resources to wrapping the cell membrane around itself.
In stark contrast to phagocytosis, the engulfing host cell is completely passive .\n\n# Methods\n\n## Theoretical techniques\n\nThe cell membrane is described by a two dimensional elastic sheet , which includes both its lipid bilayer and associated actin cortex. The particle is assumed to be rigid and immobile. Moreover, we account for ligand-receptor binding and include a constraint on the cell volume. Therefore, the total free energy is given by $$E=E_{\\text{m}}+E_{\\text{vol}}+E_{\\text{LR}} \\label{eq:free_energy_total} \\text{,}$$ where $$E_{\\text{m}}=\\int_{m} d^{2}\\mathbf{r}\\left[\\frac{\\kappa_{b}}{2} C^2(\\mathbf{r})+\\sigma\\right] \\label{eq:free_energy_membrane}$$ with $C^2(\\mathbf{r})=C{_1}^{2}(\\mathbf{r})+C{_2}^{2}(\\mathbf{r})$ the square curvature obtained from the minimal ($C{_1}$) and maximal ($C{_2}$) curvatures at point $\\mathbf{r}$. Note that the term corresponding to the product $C{_1}(\\mathbf{r})C{_2}(\\mathbf{r})$ is independent of the actual shape of the membrane as long as the overall topology is conserved, and therefore is ignored (Gauss-Bonnet theorem). Bending stiffness $\\kappa_{b}$ reflects the energy cost of bending, and surface tension $\\sigma$ reflects the energy cost of stretching the membrane with underlying actin cortex. Furthermore, expanding or shrinking the cytosol locally costs the energy $$E_{\\text{vol}}=E_{\\text{cell}}(V)-E_{\\text{cell}}(V_0)=\\kappa_{P}(V-V_0)^2 \\label{eq:free_energy_volume} \\text{,}$$ where the quadratic dependence on actual volume $V$ comes from the lowest order Taylor expansion of the cell energy around local steady state volume $V_0$, and $\\kappa_{P}=\\frac{1}{2}\\left( \\frac{\\partial^2 E_{cell}}{\\partial V^2 }\\right)_{V=V_0}$. Taylor expansion is justified by the fact that our simulations and experiments use particles significantly smaller than the cell (at least 10-20 times in volume). Finally, in our model ligand-receptor binding is not described explicitly at the molecular scale, but is accounted for by a membrane-particle contact potential $V_{LR}(\\mathbf{r})$, where $V_{LR}(\\mathbf{r})=-V_{LR}^{0}$ if a membrane patch is within a distance $R_0$ of the particle and zero if further away. Specifically, the associated energy is given by $$E_{\\text{LR}}=\\int_{m} d^{2}\\mathbf{r}\\text{ }V_{\\text{LR}}(\\mathbf{r}) \\label{eq:free_energy_LR}\\text{,}$$ where the integral is performed over the cell-membrane area. Effectively, $V_{LR}^{0}$ is given by the product of the individual ligand-receptor binding energy and the density of ligand-receptor bonds, divided by the density of vertices on the model membrane (see below). The width of the square potential $R_0$ is chosen to be very small compared to the other length-scales of the model $R_0<0.1R$ and does not influence the results.\n\nSimulations of phagocytic engulfment were implemented by discretizing the cell and particle surfaces using the Surface Evolver software . This software is designed to perform energy minimization on flexible surfaces, and is freely available from http:\/\/www.susqu.edu\/facstaff\/\u00a0b\/brakke\/evolver. The software includes a built-in programming language, which we used to implement a Monte Carlo algorithm (see below). The cell membrane is approximated by a finite number of vertices, used to create a triangular mesh. The software computes the local energy density at each vertex and sums up the energy contributions from all the surface elements to obtain the total free energy Eq. 
1.\n\nOur model uses four tunable biophysical parameters. Unless otherwise specified, we have used the set of Standard Parameters (SP), chosen according to experimental measurements when possible (see Supplementary Information, section 1), but ultimately to produce realistic cup shapes (see Figure 2). This set of parameters includes: the cell membrane bending rigidity $\\kappa_b$ and surface tension $\\sigma$, respectively set to $1.3 \\times 10^{-2}\\text{ pN}\\text{}\\mu \\text{m}$ and $6.2 \\times 10^{-6}\\text{ mNm}^{-1}$, *i.e.* slightly below the experimentally measured values since local changes in chemical composition of the cups membrane may reduce these parameters . The third parameter is the total binding energy density $\\epsilon=58.5\\text{ pN}\\text{}\\mu \\text{m}^{-1}$. This value was estimated from measurements of the individual Fc$\\gamma$R-IgG binding free energy $\\Delta F_{LR} \\approx 20k_{B}T$ , the average density $d_{LR}=270-435\\text{ }\\mu$m$^{-2}$ of IgG-Fc$\\gamma$R bonds , and the fact that in response to diffusion and trapping or signaling, receptors may cluster at the cup. Finally, the local constraint on cell-volume has been chosen $\\kappa_{P}=2.56 \\times 10^{-5}\\text{ pN}\\text{}\\mu \\text{m}^{-5}$ to allow 20 percent volume variation in line with observation .\n\nThe Surface Evolver was only used to obtain a triangular mesh (vertices connected by edges) of the cell membrane, and to resample the membrane as the uptake progresses. The cell-membrane evolution was implemented using finite-temperature Monte Carlo Metropolis simulations . Details of the simulation can be found in the Supplementary Information, section 2. Briefly, our algorithm calculates the total energy of the initial membrane configuration, then randomly selects a point to be the center of a membrane fluctuation, and a random direction and lateral extension of the fluctuation. Subsequently, the energy of the new membrane configuration is calculated, and compared to the initial energy. If the membrane fluctuation decreased the energy, the trial fluctuation is accepted and the procedure is reiterated starting from the new membrane configuration. To the contrary, if the membrane energy increased with the trial fluctuation, the latter may be rejected with some probability depending on the configuration's energy difference. In this case, a new fluctuation is attempted from the initial configuration. Between trial fluctuations, the cell-membrane vertices are examined. For the active zipper, the vertices within the closed neighborhood of the particle are immobilized for the remainder of the simulation. For the passive zipper, every membrane fluctuation may be reversed at a later time.\n\n(1) The amplitude of a membrane fluctuation. This parameter is set to $0.5R_0$ in all the simulations. (2) The mesh size, describing the maximal distance between two neighboring vertices. This parameter is set to $R_0$. (3) Minimal width of a membrane fluctuation. To ensure that a minimum number of vertices is involved in each fluctuation, this parameter is set to $4R_0$. (4) Mesh refinement range. Our simulation script automatically refines the mesh locally around a previously immobilized vertex within this range. 
This parameter is set to the particle radius $R$, leading to reasonably smooth cup shapes in a reasonably short calculation time (1-2 days for complete uptake using a Intel(R) Core(TM)2 Quad CPU working at 2.50GHz and run by the RedHat EL5 linux distribution with 4GB RAM).\n\n## Experimental techniques\n\nCOS-7 fibroblast cells were obtained from American Type Culture Collection (ATCC) and cultured in Dulbecco's Modified Eagle's Medium (DMEM), supplemented with 10% foetal bovine serum (FBS) and penicillin\/streptomycin (Invitrogen). cDNAs encoding wild-type and Y282F\/Y298F human Fc$\\gamma$RIIa from pRK5-Fc$\\gamma$RIIa and pRK5-Y282F\/Y298F-Fc$\\gamma$RIIa were subcloned into pEGFP-N1 (Clontech) using primers 5'-ggtccaactgcacctcggt-3' and 5'-ccccccgaattctgttattactgttgacatggtc-3'. The cytoplasmic tail truncation mutant 239-Fc$\\gamma$RIIa was generated from the pRK5-Fc$\\gamma$RIIa template using primers 5'-ggtccaactgcacctcggt-3' and 5'-gggggggaattctcctgcagtagatcaaggccact-3'. Rabbit anti-bovine serum albumin (BSA) serum was purchased from Sigma-Aldrich. Alexa-conjugated secondary antibodies and phalloidin were purchased from Invitrogen.\n\nCOS-7 cells were transfected with GFP-tagged Fc$\\gamma$RIIa constructs using an Amaxa Nucleofector and Nucleofector cell line kit R following the manufacturer instructions. For phagocytosis assays, transfected cells were seeded onto glass coverslips in 24-well plates at a density of 15,000 cells\/coverslip and incubated at 37$\\,^{\\circ}$C for 72 h. 1 hour before commencement of phagocytosis assays, cells were incubated for 1 hour at 37$\\,^{\\circ}$C with serum-free DMEM plus 10 mM Hepes (Invitrogen). $1.5\\mu$m- and $3\\mu$m-radius latex-polystyrene particles (Sigma-Aldrich) were opsonized by first incubating overnight at 4$\\,^{\\circ}$C with 3% BSA fraction V in PBS (Sigma-Aldrich) followed by incubation with 1:100 dilution of rabbit anti-BSA in PBS for 1 hour at room temperature. Particles were re-suspended in ice-cold serum-free DMEM plus 10 mM Hepes at a concentration of 1.5x10$^6$ particles\/ml and 500 $\\mu$l added to each coverslip. Plates were incubated on ice for 10 minutes to allow binding of particles. Medium was then replaced with pre-warmed serum-free DMEM plus 10 mM Hepes and plates were incubated at 37$\\,^{\\circ}$C, then processed for scoring or microscopy as described below. Experiments carried out on cells treated with cytochalasin D were conducted as above with the addition of a 20 min pre-incubation step with 0.2 $\\mu$M cytochalasin D in serum-free DMEM plus 10 mM Hepes at 37$\\,^{\\circ}$C immediately before incubation of cells with opsonized particles. This concentration of cytochalasin D was included in all further incubation steps.\n\nPlates were placed on ice after a 20 minute incubation at 37$\\,^{\\circ}$C and medium replaced with a 1:500 dilution of anti-rabbit Alexa 488 in 3% BSA\/PBS at 4$\\,^{\\circ}$C for 5 min. Cells were fixed after incubation at 37$\\,^{\\circ}$C for the appropriate amount of time with ice-cold 4% paraformaldehyde\/PBS, permeabilised and labelled with goat anti-rabbit Alexa 633, and phalloidin Alexa 555 for visualizing F-actin at room temperature for 30 min. Z-series image stacks were acquired on a Zeiss LSM-510 confocal microscope using a step size of 0.4 $\\mu$m.\n\nTwo fluorescence channels (IgG, Fc$\\gamma$R-GFP) were acquired and analyzed using MATLAB (MathWorks). 
During the acquisition process, the Fc$\\gamma$R-GFP fluorescence intensity was set to zero at any pixel where the IgG intensity is null. The fluorescence intensity distribution of IgG was used to determine the coordinates of the centers of particles with their corresponding radii, using an automated search based on the Hough transform , available online at www.mathworks.com\/matlabcentral\/fileexchange\/4985 within each 2D image. The percentage of engulfed particle surface area was calculated by comparing the local Fc$\\gamma$R-GFP (cell membrane) and IgG (particle surface) intensity distributions within a sphere $S_0$ of radius $4R\/3$ whose center coincides with the particle center. Two methods were used to quantify the variability of the cups. The method used in the main text cuts the three-dimensional image (and hence the circular projections of particles within each imaging plane) in twenty-four angular segments, and finds the highest plane in which Fc$\\gamma$R-GFP fluorescence intensity is detected in the immediate neighborhood of the particle for each segment (see Figure 4). This analysis produces a distribution of membrane height *versus* angular segment index, for which we compute the average and the mean-square deviation. The average height reached by cell membrane is roughly proportional to the surface engulfed (see Supplementary Fig. S10), and the mean-square deviation divided by the square root of the average height quantifies the variability of the phagocytic cup, excluding a trivial size dependence on the variability. The less accurate alternative method determines the angular distribution of Fc$\\gamma$R-GFP fluorescence intensity within the particle's equator plane in the particle's immediate neighborhood, keeping only particles whose uptake level comprises between $30$ and $70\\%$ (roughly half-engulfed particles). Specifically, this region is cut into twenty-four identical angular segments, for which the total Fc$\\gamma$R fluorescence intensity is calculated. Then the average intensity per segment is calculated, as well as the standard deviation. The higher the standard deviation, the more variable the cup.\n\n# Accession Numbers\n\nThe Fc$\\gamma$RIIa receptor is referenced in protein database Genbank (http:\/\/www.ncbi.nlm.nih.gov\/Genbank) under the accession number CAA01563. Signaling-dead mutant receptor Y282F\/Y298F-Fc$\\gamma$RIIa is obtained by replacing tyrosines (Y) with phenylalanines (F) at positions 282 and 298.\n\n# Authors contributions\n\nST participated to the conception of the model, and designed the simulation and image analysis algorithms. He carried out the simulations, image analysis, and statistical analysis of data as well as drafted the manuscript. AD and GT carried out the fluorescence imaging experiments and participated in writing the manuscript. RGE participated to the conception of the model and the interpretation of both theoretical and experimental results, and contributed to the writing of the manuscript. All authors read and approved the final manuscript.\n\n# Acknowledgements\n\nThis paper is dedicated to the memory of Emmanuelle Caron, who tragically passed away in 2009. We acknowledge Micah Dembo and G\u00fcnther Gerisch for helpful discussions, and Ken Brakke for help with the Surface Evolver software. We thank Vania Braga, Tony Magee and Brian Robertson for careful reading of the manuscript, and Suhail Islam for computational support. 
All authors would like to acknowledge funding from the Center for Integrative Systems Biology at Imperial College (CISBIC). RGE was additionally supported by the Biotechnology and Biological Sciences Research Council grant BB\/G000131\/1.","meta":{"dup_signals":{"dup_doc_count":29,"dup_dump_count":23,"dup_details":{"curated_sources":2,"2022-33":1,"2022-05":1,"2021-49":1,"2021-43":1,"2021-10":1,"2020-50":1,"2020-34":1,"2020-24":1,"2015-40":1,"2015-27":1,"2015-22":1,"2015-14":1,"2014-52":2,"2014-49":1,"2014-42":3,"2014-35":1,"2023-40":1,"2015-06":1,"2014-10":2,"2013-48":2,"2013-20":1,"2024-18":1}},"filename":"out\/1011.0370_extract_Endres-Tollis2010.tex.md"},"subset":"arxiv"} +{"text":"author: G. E. Hinton$^\\ast$, N. Srivastava, A. Krizhevsky, I. Sutskever and R. R. Salakhutdinov \nDepartment of Computer Science, University of Toronto, \n6 King's College Rd, Toronto, Ontario M5S 3G4, Canada \n \n$^\\ast$To whom correspondence should be addressed; E-mail: email@example.com\nbibliography: dropout.bib\ntitle: Improving neural networks by preventing co-adaptation of feature detectors\n\n> **When a large feedforward neural network is trained on a small training set, it typically performs poorly on held-out test data. This \"overfitting\" is greatly reduced by randomly omitting half of the feature detectors on each training case. This prevents complex co-adaptations in which a feature detector is only helpful in the context of several other specific feature detectors. Instead, each neuron learns to detect a feature that is generally helpful for producing the correct answer given the combinatorially large variety of internal contexts in which it must operate. Random \"dropout\" gives big improvements on many benchmark tasks and sets new records for speech and object recognition.**\n\nA feedforward, artificial neural network uses layers of non-linear \"hidden\" units between its inputs and its outputs. By adapting the weights on the incoming connections of these hidden units it learns feature detectors that enable it to predict the correct output when given an input vector . If the relationship between the input and the correct output is complicated and the network has enough hidden units to model it accurately, there will typically be many different settings of the weights that can model the training set almost perfectly, especially if there is only a limited amount of labeled training data. Each of these weight vectors will make different predictions on held-out test data and almost all of them will do worse on the test data than on the training data because the feature detectors have been tuned to work well together on the training data but not on the test data.\n\nOverfitting can be reduced by using \"dropout\" to prevent complex co-adaptations on the training data. On each presentation of each training case, each hidden unit is randomly omitted from the network with a probability of 0.5, so a hidden unit cannot rely on other hidden units being present. Another way to view the dropout procedure is as a very efficient way of performing model averaging with neural networks. A good way to reduce the error on the test set is to average the predictions produced by a very large number of different networks. The standard way to do this is to train many separate networks and then to apply each of these networks to the test data, but this is computationally expensive during both training and testing. Random dropout makes it possible to train a huge number of different networks in a reasonable time. 
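To make the random omission described above concrete, here is a minimal sketch (ours, not the authors' code) of a forward pass through one hidden layer in which a fresh 50% binary mask is sampled for every presentation of every training case; the layer sizes and variable names are chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared weights for one hidden layer (sizes are illustrative only).
W = rng.normal(0.0, 0.01, size=(784, 800))
b = np.zeros(800)

def hidden_activations(x, train=True, p_drop=0.5):
    """Forward pass through one hidden layer with dropout.

    During training a fresh binary mask is sampled for every presentation
    of every case, so each case is effectively processed by a different
    sub-network, while all sub-networks share W and b.
    """
    h = np.maximum(0.0, x @ W + b)            # hidden unit activities
    if train:
        mask = rng.random(h.shape) > p_drop   # keep each unit with prob 1 - p_drop
        return h * mask
    # At test time every unit is present; its outgoing weights would be
    # halved in the next layer to compensate for the doubled activity.
    return h

x = rng.random((1, 784))      # one hypothetical training case
h_a = hidden_activations(x)   # one sampled sub-network
h_b = hidden_activations(x)   # a different sub-network, same shared weights
```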
There is almost certainly a different network for each presentation of each training case but all of these networks share the same weights for the hidden units that are present.\n\nWe use the standard, stochastic gradient descent procedure for training the dropout neural networks on mini-batches of training cases, but we modify the penalty term that is normally used to prevent the weights from growing too large. Instead of penalizing the squared length (L2 norm) of the whole weight vector, we set an upper bound on the L2 norm of the incoming weight vector for each individual hidden unit. If a weight-update violates this constraint, we renormalize the weights of the hidden unit by division. Using a constraint rather than a penalty prevents weights from growing very large no matter how large the proposed weight-update is. This makes it possible to start with a very large learning rate which decays during learning, thus allowing a far more thorough search of the weight-space than methods that start with small weights and use a small learning rate.\n\nAt test time, we use the \"mean network\" that contains all of the hidden units but with their outgoing weights halved to compensate for the fact that twice as many of them are active. In practice, this gives very similar performance to averaging over a large number of dropout networks. In networks with a single hidden layer of $N$ units and a \"softmax\" output layer for computing the probabilities of the class labels, using the mean network is exactly equivalent to taking the geometric mean of the probability distributions over labels predicted by all $2^N$ possible networks. Assuming the dropout networks do not all make identical predictions, the prediction of the mean network is guaranteed to assign a higher log probability to the correct answer than the mean of the log probabilities assigned by the individual dropout networks . Similarly, for regression with linear output units, the squared error of the mean network is always better than the average of the squared errors of the dropout networks.\n\nWe initially explored the effectiveness of dropout using MNIST, a widely used benchmark for machine learning algorithms. It contains 60,000 28x28 training images of individual hand written digits and 10,000 test images. Performance on the test set can be greatly improved by enhancing the training data with transformed images or by wiring knowledge about spatial transformations into a convolutional neural network or by using generative pre-training to extract useful features from the training images without using the labels . Without using any of these tricks, the best published result for a standard feedforward neural network is 160 errors on the test set. This can be reduced to about 130 errors by using 50% dropout with separate L2 constraints on the incoming weights of each hidden unit and further reduced to about 110 errors by also dropping out a random 20% of the pixels (see figure\u00a0).\n\nDropout can also be combined with generative pre-training, but in this case we use a small learning rate and no weight constraints to avoid losing the feature detectors discovered by the pre-training. The publically available, pre-trained deep belief net described in got 118 errors when it was fine-tuned using standard back-propagation and 92 errors when fine-tuned using 50% dropout of the hidden units. 
When the publicly available code at URL was used to pre-train a deep Boltzmann machine five times, the unrolled network got 103, 97, 94, 93 and 88 errors when fine-tuned using standard backpropagation and 83, 79, 78, 78 and 77 errors when using 50% dropout of the hidden units. The mean of 79 errors is a record for methods that do not use prior knowledge or enhanced training sets (For details see Appendix\u00a0).\n\nWe then applied dropout to TIMIT, a widely used benchmark for recognition of clean speech with a small vocabulary. Speech recognition systems use hidden Markov models (HMMs) to deal with temporal variability and they need an acoustic model that determines how well a frame of coefficients extracted from the acoustic input fits each possible state of each hidden Markov model. Recently, deep, pre-trained, feedforward neural networks that map a short sequence of frames into a probability distribution over HMM states have been shown to outperform traditional Gaussian mixture models on both TIMIT and a variety of more realistic large vocabulary tasks .\n\nFigure\u00a0 shows the frame *classification* error rate on the core test set of the TIMIT benchmark when the central frame of a window is classified as belonging to the HMM state that is given the highest probability by the neural net. The input to the net is 21 adjacent frames with an advance of 10ms per frame. The neural net has 4 fully-connected hidden layers of 4000 units per layer and 185 \"softmax\" output units that are subsequently merged into the 39 distinct classes used for the benchmark. Dropout of 50% of the hidden units significantly improves classification for a variety of different network architectures (see figure\u00a0). To get the frame *recognition* error rate, the class probabilities that the neural network outputs for each frame are given to a decoder which knows about transition probabilities between HMM states and runs the Viterbi algorithm to infer the single best sequence of HMM states. Without dropout, the recognition error rate is $22.7$% and with dropout this improves to $19.7$%, which is a record for methods that do not use any information about speaker identity.\n\nCIFAR-10 is a benchmark task for object recognition. It uses 32x32 downsampled color images of 10 different object classes that were found by searching the web for the name of the class (e.g. dog) or its subclasses (e.g. Golden Retriever). These images were labeled by hand to produce 50,000 training images and 10,000 test images in which there is a single dominant object that could plausibly be given the class name (see figure ). The best published error rate on the test set, without using transformed data, is 18.5% . We achieved an error rate of 16.6% by using a neural network with three convolutional hidden layers interleaved with three \"max-pooling\" layers that report the maximum activity in local pools of convolutional units. These six layers were followed by one locally-connected layer (For details see Appendix\u00a0). Using dropout in the last hidden layer gives an error rate of 15.6%.\n\nImageNet is an extremely challenging object recognition dataset consisting of thousands of high-resolution images of thousands of object classes . In 2010, a subset of 1000 classes with roughly 1000 examples per class was the basis of an object recognition competition in which the winning entry, which was actually an average of six separate models, achieved an error rate of 47.2% on the test set.
The current state-of-the-art result on this dataset is 45.7% . We achieved comparable performance of 48.6% error using a single neural network with five convolutional hidden layers interleaved with \"max-pooling\" layers followed by two globally connected layers and a final 1000-way softmax layer. All layers had L2 weight constraints on the incoming weights of each hidden unit. Using 50% dropout in the sixth hidden layer reduces this to a record 42.4% (For details see Appendix\u00a0).\n\nFor the speech recognition dataset and both of the object recognition datasets it is necessary to make a large number of decisions in designing the architecture of the net. We made these decisions by holding out a separate validation set that was used to evaluate the performance of a large number of different architectures and we then used the architecture that performed best with dropout on the validation set to assess the performance of dropout on the real test set.\n\nThe Reuters dataset contains documents that have been labeled with a hierarchy of classes. We created training and test sets each containing 201,369 documents from 50 mutually exclusive classes. Each document was represented by a vector of counts for 2000 common non-stop words, with each count $C$ being transformed to $\\log(1+C)$. A feedforward neural network with 2 fully connected layers of 2000 hidden units trained with backpropagation gets 31.05% error on the test set. This is reduced to 29.62% by using 50% dropout (Appendix\u00a0).\n\nWe have tried various dropout probabilities and almost all of them improve the generalization performance of the network. For fully connected layers, dropout in all hidden layers works better than dropout in only one hidden layer and more extreme probabilities tend to be worse, which is why we have used 0.5 throughout this paper. For the inputs, dropout can also help, though it is often better to retain more than 50% of the inputs. It is also possible to adapt the individual dropout probability of each hidden or input unit by comparing the average performance on a validation set with the average performance when the unit is present. This makes the method work slightly better. For datasets in which the required input-output mapping has a number of fairly different regimes, performance can probably be further improved by making the dropout probabilities be a learned function of the input, thus creating a statistically efficient \"mixture of experts\" in which there are combinatorially many experts, but each parameter gets adapted on a large fraction of the training data.\n\nDropout is considerably simpler to implement than Bayesian model averaging which weights each model by its posterior probability given the training data. For complicated model classes, like feedforward neural networks, Bayesian methods typically use a Markov chain Monte Carlo method to sample models from the posterior distribution . By contrast, dropout with a probability of $0.5$ assumes that all the models will eventually be given equal importance in the combination but the learning of the shared weights takes this into account. At test time, the fact that the dropout decisions are independent for each unit makes it very easy to approximate the combined opinions of exponentially many dropout nets by using a single pass through the mean net (sketched below).
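The following minimal sketch (ours, with made-up sizes and weights) compares that single pass through the weight-halved mean network with a brute-force average over sampled dropout networks. As noted earlier, for a single hidden layer feeding a softmax the mean network corresponds to a geometric rather than arithmetic mean of the sampled networks' predictions, so the two results below are close but not identical.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# A tiny single-hidden-layer network with arbitrary weights (illustration only).
W1 = rng.normal(0, 0.5, size=(20, 30))    # input -> hidden
W2 = rng.normal(0, 0.5, size=(30, 10))    # hidden -> softmax output
x = rng.random(20)
h = np.maximum(0.0, x @ W1)               # hidden activities (no input dropout here)

# Mean network: keep every hidden unit but halve its outgoing weights.
p_mean = softmax(h @ (0.5 * W2))

# Brute-force Monte Carlo average over many sampled dropout networks.
samples = []
for _ in range(2000):
    mask = rng.random(30) > 0.5           # drop each hidden unit with prob 0.5
    samples.append(softmax((h * mask) @ W2))
p_avg = np.mean(samples, axis=0)

print(np.round(p_mean, 3))
print(np.round(p_avg, 3))                 # close to the mean-network prediction
```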
This is far more efficient than averaging the predictions of many separate models.\n\nA popular alternative to Bayesian model averaging is \"bagging\" in which different models are trained on different random selections of cases from the training set and all models are given equal weight in the combination . Bagging is most often used with models such as decision trees because these are very quick to fit to data and very quick at test time. Dropout allows a similar approach to be applied to feedforward neural networks which are much more powerful models. Dropout can be seen as an extreme form of bagging in which each model is trained on a single case and each parameter of the model is very strongly regularized by sharing it with the corresponding parameter in all the other models. This is a much better regularizer than the standard method of shrinking parameters towards zero.\n\nA familiar and extreme case of dropout is \"naive Bayes\" in which each input feature is trained separately to predict the class label and then the predictive distributions of all the features are multiplied together at test time. When there is very little training data, this often works much better than logistic classification which trains each input feature to work well in the context of all the other features.\n\nFinally, there is an intriguing similarity between dropout and a recent theory of the role of sex in evolution\u00a0. One possible interpretation of the theory of mixability articulated in\u00a0 is that sex breaks up sets of co-adapted genes and this means that achieving a function by using a large set of co-adapted genes is not nearly as robust as achieving the same function, perhaps less than optimally, in multiple alternative ways, each of which only uses a small number of co-adapted genes. This allows evolution to avoid dead-ends in which improvements in fitness require co-ordinated changes to a large number of co-adapted genes. It also reduces the probability that small changes in the environment will cause large decreases in fitness \u2013 a phenomenon which is known as \"overfitting\" in the field of machine learning.\n\n# Experiments on MNIST\n\n## Details for dropout training\n\nThe MNIST dataset consists of 28 $\\times$ 28 digit images - 60,000 for training and 10,000 for testing. The objective is to classify the digit images into their correct digit class. We experimented with neural nets of different architectures (different numbers of hidden units and layers) to evaluate the sensitivity of the dropout method to these choices. We show results for 4 nets (784-800-800-10, 784-1200-1200-10, 784-2000-2000-10, 784-1200-1200-1200-10). For each of these architectures we use the same dropout rates - 50% dropout for all hidden units and 20% dropout for visible units. We use stochastic gradient descent with minibatches of size 100 and a cross-entropy objective function. An exponentially decaying learning rate is used that starts at the value of 10.0 (applied to the average gradient in each minibatch). The learning rate is multiplied by 0.998 after each epoch of training. The incoming weight vector corresponding to each hidden unit is constrained to have a maximum squared length of $l$. If, as a result of an update, the squared length exceeds $l$, the vector is scaled down so as to make it have a squared length of $l$. Using cross validation we found that $l = 15$ gave the best results. Weights are initialized to small random values drawn from a zero-mean normal distribution with standard deviation 0.01.
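A minimal sketch of the max-norm renormalization just described, applied after a weight update (our own code; the incoming weights of hidden unit $j$ are taken to be column $j$ of the weight matrix, and the variable names are ours):

```python
import numpy as np

def renormalize_columns(W, max_sq_len=15.0):
    """If the incoming weight vector of a hidden unit (a column of W)
    exceeds the maximum squared length l, scale it back down so that its
    squared length is exactly l; columns within the limit are unchanged."""
    sq_len = (W ** 2).sum(axis=0)                        # one value per hidden unit
    scale = np.sqrt(max_sq_len / np.maximum(sq_len, max_sq_len))
    return W * scale

rng = np.random.default_rng(2)
W = rng.normal(0, 2.0, size=(784, 800))                  # deliberately over-sized weights
W = renormalize_columns(W)
assert np.all((W ** 2).sum(axis=0) <= 15.0 + 1e-9)
```

Because the constraint only caps the norm rather than shrinking it towards zero, it is compatible with the very large initial learning rate described below.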
Momentum is used to speed up learning. The momentum starts off at a value of 0.5 and is increased linearly to 0.99 over the first 500 epochs, after which it stays at 0.99. Also, the learning rate is multiplied by a factor of (1-momentum). No weight decay is used. Weights were updated at the end of each minibatch. Training was done for 3000 epochs. The weight update takes the following form: $$\\begin{aligned}\n\\Delta w^{t} & = & p^{t}\\Delta w^{t-1} - (1-p^{t})\\epsilon^{t}\\langle \\nabla_w L \\rangle \\\\\nw^{t} & = & w^{t-1} + \\Delta w^{t},\n\\end{aligned}$$ where, $$\\begin{aligned}\n\\epsilon^{t} & = & \\epsilon_{0}f^{t}\\\\\np^{t} &=& \\begin{cases} \n (1-\\frac{t}{T})p_{i} + \\frac{t}{T}p_{f} & t < T \\\\\n p_{f} & t \\geq T\n \\end{cases}\n\\end{aligned}$$ with $\\epsilon_0 = 10.0$, $f = 0.998$, $p_{i} = 0.5$, $p_{f} = 0.99$, $T = 500$. While using a constant learning rate also gives improvements over standard backpropagation, starting with a high learning rate and decaying it provided a significant boost in performance. Constraining the incoming weight vectors to have a bounded length prevents weights from increasing arbitrarily in magnitude irrespective of the learning rate. This gives the network a lot of opportunity to search for a good configuration in the weight space. As the learning rate decays, the algorithm is able to take smaller steps and finds the right step size at which it can make learning progress. Using a high final momentum distributes gradient information over a large number of updates making learning stable in this scenario where each gradient computation is for a different stochastic network.\n\n## Details for dropout finetuning\n\nApart from training a neural network starting from random weights, dropout can also be used to finetune pretrained models. We found that finetuning a model using dropout with a small learning rate can give much better performance than standard backpropagation finetuning.\n\n*Deep Belief Nets* - We took a neural network pretrained using a Deep Belief Network\u00a0. It had a 784-500-500-2000 architecture and was trained using greedy layer-wise Contrastive Divergence learning [^1]. Instead of fine-tuning it with the usual backpropagation algorithm, we used the dropout version of it. The dropout rate was the same as before: 50% for hidden units and 20% for visible units. A constant small learning rate of 1.0 was used. No constraint was imposed on the length of incoming weight vectors. No weight decay was used. All other hyper-parameters were set to be the same as before. The model was trained for 1000 epochs with stochastic gradient descent using minibatches of size 100. While standard backpropagation gave about 118 errors, dropout decreased the errors to about 92.\n\n*Deep Boltzmann Machines* - We also took a pretrained Deep Boltzmann Machine [^2] (784-500-1000-10) and finetuned it using dropout-backpropagation. The model uses a 1784-500-1000-10 architecture (the extra 1000 input units come from the mean-field activations of the second layer of hidden units in the DBM, see\u00a0 for details). All finetuning hyper-parameters were set to be the same as the ones used for a Deep Belief Network. We were able to get a mean of about 79 errors with dropout whereas usual finetuning gives about 94 errors.\n\n## Effect on features\n\nOne reason why dropout gives major improvements over backpropagation is that it encourages each individual hidden unit to learn a useful feature without relying on specific other hidden units to correct its mistakes.
In order to verify this and better understand the effect of dropout on feature learning, we look at the first level of features learned by a 784-500-500 neural network without any generative pre-training. The features are shown in Figure\u00a0. Each panel shows 100 random features learned by each network. The features that dropout learns are simpler and look like strokes, whereas the ones learned by standard backpropagation are difficult to interpret. This confirms that dropout indeed forces the discriminative model to learn good features which are less co-adapted and leads to better generalization.\n\n# Experiments on TIMIT\n\nThe TIMIT Acoustic-Phonetic Continuous Speech Corpus is a standard dataset used for evaluation of automatic speech recognition systems. It consists of recordings of 630 speakers of 8 dialects of American English each reading 10 phonetically-rich sentences. It also comes with the word and phone-level transcriptions of the speech. The objective is to convert a given speech signal into a transcription sequence of phones. This data needs to be pre-processed to extract input features and output targets. We used Kaldi, an open source code library for speech [^3], to pre-process the dataset so that our results can be reproduced exactly. The inputs to our networks are log filter bank responses. They are extracted for 25 ms speech windows with strides of 10 ms.\n\nEach dimension of the input representation was normalized to have mean 0 and variance 1. Minibatches of size 100 were used for both pretraining and dropout finetuning. We tried several network architectures by varying the number of input frames (15 and 31), number of layers in the neural network (3, 4 and 5) and the number of hidden units in each layer (2000 and 4000). Figure\u00a0 shows the validation error curves for a number of these combinations. Using dropout consistently leads to lower error rates.\n\n## Pretraining\n\nFor all our experiments on TIMIT, we pretrain the neural network with a Deep Belief Network\u00a0. Since the inputs are real-valued, the first layer was pre-trained as a Gaussian RBM. Visible biases were initialized to zero and weights to random numbers sampled from a zero-mean normal distribution with standard deviation 0.01. The variance of each visible unit was set to 1.0 and not learned. Learning was done by minimizing Contrastive Divergence. Momentum was used to speed up learning. Momentum started at 0.5 and was increased linearly to 0.9 over 20 epochs. A learning rate of 0.001 on the average gradient was used (which was then multiplied by 1-momentum). An L2 weight decay of 0.001 was used. The model was trained for 100 epochs.\n\nAll subsequent layers were trained as binary RBMs. A learning rate of 0.01 was used. The visible bias of each unit was initialized to $\\log(p\/(1-p))$ where $p$ was the mean activation of that unit in the dataset. All other hyper-parameters were set to be the same as those we used for the Gaussian RBM. Each layer was trained for 50 epochs.\n\n## Dropout Finetuning\n\nThe pretrained RBMs were used to initialize the weights in a neural network. The network was then finetuned with dropout-backpropagation. Momentum was increased from 0.5 to 0.9 linearly over 10 epochs. A small constant learning rate of 1.0 was used (applied to the average gradient on a minibatch). All other hyperparameters are the same as for MNIST dropout finetuning. The model needs to be run for about 200 epochs to converge. 
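For concreteness, the linear momentum ramp used in this finetuning schedule can be sketched as follows (our own helper, with the TIMIT values of 0.5 rising to 0.9 over 10 epochs; the function name is hypothetical):

```python
def momentum_at_epoch(t, p_i=0.5, p_f=0.9, ramp_epochs=10):
    """Momentum increases linearly from p_i to p_f over ramp_epochs,
    then stays at p_f for the rest of training."""
    if t >= ramp_epochs:
        return p_f
    return p_i + (p_f - p_i) * t / ramp_epochs

print([round(momentum_at_epoch(t), 2) for t in range(12)])
```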
The same network was also finetuned with standard backpropagation using a smaller learning rate of 0.1, keeping all other hyperparameters the same.\n\nFigure\u00a0 shows the frame classification error and cross-entropy objective value on the training and validation sets. We compare the performance of dropout and standard backpropagation on several network architectures and input representations. Dropout consistently achieves lower error and cross-entropy. It significantly controls overfitting, making the method robust to choices of network architecture. It allows much larger nets to be trained and removes the need for early stopping. We also observed that the final error obtained by the model is not very sensitive to the choice of learning rate and momentum.\n\n# Experiments on Reuters\n\nReuters Corpus Volume I (RCV1-v2) is an archive of 804,414 newswire stories that have been manually categorized into 103 topics[^4]. The corpus covers four major groups: corporate\/industrial, economics, government\/social, and markets. Sample topics include Energy Markets, Accounts\/Earnings, Government Borrowings, Disasters and Accidents, Interbank Markets, Legal\/Judicial, Production\/Services, etc. The topic classes form a tree which is typically of depth three.\n\nWe took the dataset and split it into 63 classes based on the 63 categories at the second level of the category tree. We removed 11 categories that did not have any data and one category that had only 4 training examples. We also removed one category that covered a huge chunk (25%) of the examples. This left us with 50 classes and 402,738 documents. We divided the documents into equal-sized training and test sets randomly. Each document was represented using the 2000 most frequent non-stopwords in the dataset.\n\nWe trained a neural network using dropout-backpropagation and compared it with standard backpropagation. We used a 2000-2000-1000-50 architecture. The training hyperparameters are the same as those used for MNIST dropout training (Appendix\u00a0). Training was done for 500 epochs.\n\nFigure\u00a0 shows the training and test set errors as learning progresses. We show two nets - one with a 2000-2000-1000-50 architecture and another with a 2000-1000-1000-50 architecture, each trained with and without dropout. As with all the previous datasets, we obtain significant improvements here too. The learning not only results in better generalization, but also proceeds smoothly, without the need for early stopping.\n\n# Tiny Images and CIFAR-10\n\nThe Tiny Images dataset contains 80 million $32\\times32$ color images collected from the web. The images were found by searching various image search engines for English nouns, so each image comes with a very unreliable label, which is the noun that was used to find it. The CIFAR-10 dataset is a subset of the Tiny Images dataset which contains 60000 images divided among ten classes[^5]. Each class contains 5000 training images and 1000 testing images. The classes are *airplane*, *automobile*, *bird*, *cat*, *deer*, *dog*, *frog*, *horse*, *ship*, and *truck*. The CIFAR-10 dataset was obtained by filtering the Tiny Images dataset to remove images with incorrect labels. The CIFAR-10 images are highly varied, and there is no canonical viewpoint or scale at which the objects appear.
The only criteria for including an image were that the image contain one dominant instance of a CIFAR-10 class, and that the object in the image be easily identifiable as belonging to the class indicated by the image label.\n\n# ImageNet\n\nImageNet is a dataset of millions of labeled images in thousands of categories. The images were collected from the web and labelled by human labellers using Amazon's Mechanical Turk crowd-sourcing tool. In 2010, a subset of roughly 1000 images in each of 1000 classes was the basis of an object recognition competition, a part of the Pascal Visual Object Challenge. This is the version of ImageNet on which we performed our experiments. In all, there are roughly 1.3 million training images, 50000 validation images, and 150000 testing images. This dataset is similar in spirit to the CIFAR-10, but on a much bigger scale. The images are full-resolution, and there are 1000 categories instead of ten. Another difference is that the ImageNet images often contain multiple instances of ImageNet objects, simply due to the sheer number of object classes. For this reason, even a human would have difficulty approaching perfect accuracy on this dataset. For our experiments we resized all images to $256\\times256$ pixels.\n\n# Convolutional Neural Networks\n\nOur models for CIFAR-10 and ImageNet are deep, feed-forward convolutional neural networks (CNNs). Feed-forward neural networks are models which consist of several layers of \"neurons\", where each neuron in a given layer applies a linear filter to the outputs of the neurons in the previous layer. Typically, a scalar bias is added to the filter output and a nonlinear activation function is applied to the result before the neuron's output is passed to the next layer. The linear filters and biases are referred to as *weights*, and these are the parameters of the network that are learned from the training data.\n\nCNNs differ from ordinary neural networks in several ways. First, neurons in a CNN are organized topographically into a bank that reflects the organization of dimensions in the input data. So for images, the neurons are laid out on a 2D grid. Second, neurons in a CNN apply filters which are local in extent and which are centered at the neuron's location in the topographic organization. This is reasonable for datasets where we expect the dependence of input dimensions to be a decreasing function of distance, which is the case for pixels in natural images. In particular, we expect that useful clues to the identity of the object in an input image can be found by examining small local neighborhoods of the image. Third, all neurons in a bank apply the same filter, but as just mentioned, they apply it at different locations in the input image. This is reasonable for datasets with roughly stationary statistics, such as natural images. We expect that the same kinds of structures can appear at all positions in an input image, so it is reasonable to treat all positions equally by filtering them in the same way. In this way, a bank of neurons in a CNN applies a convolution operation to its input. A single layer in a CNN typically has multiple banks of neurons, each performing a convolution with a different filter. These banks of neurons become distinct input channels into the next layer. The distance, in pixels, between the boundaries of the receptive fields of neighboring neurons in a convolutional bank determines the *stride* with which the convolution operation is applied. Larger strides imply fewer neurons per bank. 
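To illustrate why larger strides imply fewer neurons per bank, here is a minimal sketch (ours; no padding is assumed and the numbers are purely illustrative, loosely inspired by the 11x11 first-layer filters described later):

```python
def neurons_per_bank(input_size, filter_size, stride):
    """Number of output positions along one spatial dimension for a
    convolutional bank when the filter is applied without padding."""
    return (input_size - filter_size) // stride + 1

# Hypothetical 224-pixel input with an 11x11 filter at several strides.
for stride in (1, 2, 4):
    n = neurons_per_bank(224, 11, stride)
    print(f"stride {stride}: {n} x {n} neurons per bank")
```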
Our models use a stride of one pixel unless otherwise noted.\n\nOne important consequence of this convolutional shared-filter architecture is a drastic reduction in the number of parameters relative to a neural net in which all neurons apply different filters. This reduces the net's representational capacity, but it also reduces its capacity to overfit, so dropout is far less advantageous in convolutional layers.\n\n## Pooling\n\nCNNs typically also feature \"pooling\" layers which summarize the activities of local patches of neurons in convolutional layers. Essentially, a pooling layer takes as input the output of a convolutional layer and subsamples it. A pooling layer consists of pooling units which are laid out topographically and connected to a local neighborhood of convolutional unit outputs from the same bank. Each pooling unit then computes some function of the bank's output in that neighborhood. Typical functions are maximum and average. Pooling layers with such units are called max-pooling and average-pooling layers, respectively. The pooling units are usually spaced at least several pixels apart, so that there are fewer total pooling units than there are convolutional unit outputs in the previous layer. Making this spacing smaller than the size of the neighborhood that the pooling units summarize produces *overlapping pooling.* This variant makes the pooling layer produce a coarse coding of the convolutional unit outputs, which we have found to aid generalization in our experiments. We refer to this spacing as the *stride* between pooling units, analogously to the stride between convolutional units. Pooling layers introduce a level of local translation invariance to the network, which improves generalization. They are the analogues of *complex cells* in the mammalian visual cortex, which pool activities of multiple simple cells. These cells are known to exhibit similar phase-invariance properties.\n\n## Local response normalization\n\nOur networks also include response normalization layers. This type of layer encourages competition for large activations among neurons belonging to different banks. In particular, the activity $a_{x,y}^{i}$ of a neuron in bank $i$ at position $(x,y)$ in the topographic organization is divided by $$\\left(1+\\alpha\\sum_{j=i-N\/2}^{i+N\/2}(a_{x,y}^{j})^{2}\\right)^{\\beta}$$ where the sum runs over $N$ \"adjacent\" banks of neurons at *the same position in the topographic organization*. The ordering of the banks is of course arbitrary and determined before training begins. Response normalization layers implement a form of lateral inhibition found in real neurons. The constants $N,\\alpha$, and $\\beta$ are hyper-parameters whose values are determined using a validation set.\n\n## Neuron nonlinearities\n\nAll of the neurons in our networks utilize the max-with-zero nonlinearity. That is, their output is computed as $a_{x,y}^{i}=\\max(0,z_{x,y}^{i})$ where $z_{x,y}^{i}$ is the total input to the neuron (equivalently, the output of the neuron's linear filter added to the bias). This nonlinearity has several advantages over traditional saturating neuron models, including a significant reduction in the training time required to reach a given error rate. This nonlinearity also reduces the need for contrast-normalization and similar data pre-processing schemes, because neurons with this nonlinearity do not saturate \u2013 their activities simply scale up when presented with unusually large input values. 
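The two layer types just described can be sketched as follows (our own code, not the `cuda-convnet` implementation; the default constants $N=9$, $\alpha=0.001$, $\beta=0.75$ are the values quoted later for the CIFAR-10 model, edge banks are simply clipped, and the array layout is an assumption):

```python
import numpy as np

def max_with_zero(z):
    """The neuron nonlinearity described above (now commonly called ReLU)."""
    return np.maximum(0.0, z)

def response_normalize(a, N=9, alpha=0.001, beta=0.75):
    """Divide each activity a[i, x, y] by
    (1 + alpha * sum of squared activities of the N adjacent banks at the
    same position) ** beta.  The ordering of banks is arbitrary."""
    banks = a.shape[0]
    out = np.empty_like(a)
    for i in range(banks):
        lo = max(0, i - N // 2)
        hi = min(banks, i + N // 2 + 1)
        denom = (1.0 + alpha * (a[lo:hi] ** 2).sum(axis=0)) ** beta
        out[i] = a[i] / denom
    return out

rng = np.random.default_rng(3)
z = rng.normal(0, 1, size=(64, 8, 8))   # hypothetical pre-activations: (banks, height, width)
a = max_with_zero(z)
a_norm = response_normalize(a)
```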
Consequently, the only data pre-processing step which we take is to subtract the mean activity from each pixel, so that the data is centered. So we train our networks on the (centered) raw RGB values of the pixels.\n\n## Objective function\n\nOur networks maximize the multinomial logistic regression objective, which is equivalent to minimizing the average across training cases of the cross-entropy between the true label distribution and the model's predicted label distribution.\n\n## Weight initialization\n\nWe initialize the weights in our model from a zero-mean normal distribution with a variance set high enough to produce positive inputs into the neurons in each layer. This is a slightly tricky point when using the max-with-zero nonlinearity. If the input to a neuron is always negative, no learning will take place because its output will be uniformly zero, as will the derivative of its output with respect to its input. Therefore it's important to initialize the weights from a distribution with a sufficiently large variance such that all neurons are likely to get positive inputs at least occasionally. In practice, we simply try different variances until we find an initialization that works. It usually only takes a few attempts. We also find that initializing the biases of the neurons in the hidden layers with some positive constant (1 in our case) helps get learning off the ground, for the same reason.\n\n## Training\n\nWe train our models using stochastic gradient descent with a batch size of 128 examples and momentum of 0.9. Therefore the update rule for weight $w$ is $$\\begin{aligned}\nv_{i+1} & = & 0.9 v_{i}+\\epsilon<\\frac{\\partial E}{\\partial w_{i}}>_{i}\\\\\nw_{i+1} & = & w_{i}+v_{i+1}\n\\end{aligned}$$ where $i$ is the iteration index, $v$ is a momentum variable, $\\epsilon$ is the learning rate, and $<\\frac{\\partial E}{\\partial w_{i}}>_{i}$ is the average over the $i^{th}$ batch of the derivative of the objective with respect to $w_{i}$. We use the publicly available `cuda-convnet` package to train all of our models on a single NVIDIA GTX 580 GPU. Training on CIFAR-10 takes roughly 90 minutes. Training on ImageNet takes roughly four days with dropout and two days without.\n\n## Learning rates\n\nWe use an equal learning rate for each layer, whose value we determine heuristically as the largest power of ten that produces reductions in the objective function. In practice it is typically of the order $10^{-2}$ or $10^{-3}$. We reduce the learning rate twice by a factor of ten shortly before terminating training.\n\n# Models for CIFAR-10\n\nOur model for CIFAR-10 without dropout is a CNN with three convolutional layers. Pooling layers follow all three. All of the pooling layers summarize a $3\\times3$ neighborhood and use a stride of 2. The pooling layer which follows the first convolutional layer performs max-pooling, while the remaining pooling layers perform average-pooling. Response normalization layers follow the first two pooling layers, with $N=9$, $\\alpha=0.001$, and $\\beta=0.75$. The upper-most pooling layer is connected to a ten-unit softmax layer which outputs a probability distribution over class labels. All convolutional layers have 64 filter banks and use a filter size of $5\\times5$ (times the number of channels in the preceding layer).\n\nOur model for CIFAR-10 with dropout is similar, but because dropout imposes a strong regularization on the network, we are able to use more parameters. 
Therefore we add a fourth weight layer, which takes its input from the third pooling layer. This weight layer is *locally-connected but not convolutional.* It is like a convolutional layer in which filters in the same bank do not share weights. This layer contains 16 banks of filters of size $3\\times3$. This is the layer in which we use 50% dropout. The softmax layer takes its input from this fourth weight layer.\n\n# Models for ImageNet\n\nOur model for ImageNet with dropout is a CNN which is trained on $224\\times224$ patches randomly extracted from the $256\\times256$ images, as well as their horizontal reflections. This is a form of data augmentation that reduces the network's capacity to overfit the training data and helps generalization. The network contains seven weight layers. The first five are convolutional, while the last two are globally-connected. Max-pooling layers follow the first, second, and fifth convolutional layers. All of the pooling layers summarize a $3\\times3$ neighborhood and use a stride of 2. Response-normalization layers follow the first and second pooling layers. The first convolutional layer has 64 filter banks with $11\\times11$ filters which it applies with a stride of 4 pixels (this is the distance between neighboring neurons in a bank). The second convolutional layer has 256 filter banks with $5\\times5$ filters. This layer takes two inputs. The first input to this layer is the (pooled and response-normalized) output of the first convolutional layer. The 256 banks in this layer are divided arbitrarily into groups of 64, and each group connects to a unique random 16 channels from the first convolutional layer. The second input to this layer is a subsampled version of the original image ($56\\times56$), which is filtered by this layer with a stride of 2 pixels. The two maps resulting from filtering the two inputs are summed element-wise (they have exactly the same dimensions) and a max-with-zero nonlinearity is applied to the sum in the usual way. The third, fourth, and fifth convolutional layers are connected to one another without any intervening pooling or normalization layers, but the max-with-zero nonlinearity is applied at each layer after linear filtering. The third convolutional layer has 512 filter banks divided into groups of 32, each group connecting to a unique random subset of 16 channels produced by the (pooled, normalized) outputs of the second convolutional layer. The fourth and fifth convolutional layers similarly have 512 filter banks divided into groups of 32, each group connecting to a unique random subset of 32 channels produced by the layer below. The next two weight layers are globally-connected, with 4096 neurons each. In these last two layers we use 50% dropout. Finally, the output of the last globally-connected layer is fed to a 1000-way softmax which produces a distribution over the 1000 class labels. We test our model by averaging the prediction of the net on ten $224\\times224$ patches of the $256\\times256$ input image: the center patch, the four corner patches, and their horizontal reflections. Even though we make ten passes of each image at test time, we are able to run our system in real-time.\n\nOur model for ImageNet without dropout is similar, but without the two globally-connected layers which create serious overfitting when used without dropout.\n\nIn order to achieve state-of-the-art performance on the validation set, we found it necessary to use the very complicated network architecture described above. 
Fortunately, the complexity of this architecture is not the main point of our paper. What we wanted to demonstrate is that dropout is a significant help even for the very complex neural nets that have been developed by the joint efforts of many groups over many years to be really good at object recognition. This is clearly demonstrated by the fact that using non-convolutional higher layers with a lot of parameters leads to a big improvement with dropout but makes things worse without dropout.\n\n[^1]: For code see http:\/\/www.cs.toronto.edu\/~hinton\/MatlabForSciencePaper.html\n\n[^2]: For code see http:\/\/www.utstat.toronto.edu\/~rsalakhu\/DBM.html\n\n[^3]: http:\/\/kaldi.sourceforge.net\n\n[^4]: The corpus is available at http:\/\/www.ai.mit.edu\/projects\/jmlr\/papers\/volume5\/lewis04a\/lyrl2004_rcv1v2_README.htm\n\n[^5]: The CIFAR dataset is available at http:\/\/www.cs.toronto.edu\/$\\sim$kriz\/cifar.html.","meta":{"dup_signals":{"dup_doc_count":13,"dup_dump_count":3,"dup_details":{"curated_sources":2,"2024-18":1,"unknown":10}},"filename":"out\/1207.0580_extract_dropout.tex.md"},"subset":"arxiv"} +{"text":"abstract: Traffic congestion is one of the most notable problems arising in worldwide urban areas, significantly compromising human mobility and air quality. Current technologies to sense real-time data about cities, and its open distribution for analysis, allow the advent of new approaches for improvement and control. Here, we propose an idealized model, the Microscopic Congestion Model, based on the critical phenomena arising in complex networks, that allows us to analytically predict congestion hotspots in urban environments. Results on real cities' road networks, considering, in some experiments, real-traffic data, show that the proposed model is capable of identifying susceptible junctions that might become hotspots if mobility demand increases.\nauthor: Albert Sol\u00e9-Ribalta; Sergio G\u00f3mez; Alex Arenas\ntitle: A model to identify urban traffic congestion hotspots in complex networks\n\n# Introduction\n\nUrban life is characterized by huge, mainly motorized, mobility. Amidst the complex urban management problems there is a prevalent one: traffic congestion. Several approaches exist to efficiently design road networks and routing strategies; however, the establishment of collective actions, given the complex behavior of drivers, to prevent or ameliorate urban traffic congestion is still at its dawn. Usually, congestion is not homogeneously distributed across the whole city area; rather, it settles at salient locations. We call these locations congestion hotspots. These hotspots usually correspond to junctions and are problematic for the efficiency of the network as well as for the health of pedestrians and drivers. It has been shown that drivers queued in a traffic jam are the individuals most affected by the inhalation of car exhaust pollution. In addition, these hotspots are usually located in the city center, magnifying the problem. Assuming that congestion is an inevitable consequence of urban motorized areas, the challenge is to develop strategies towards a sustainable congestion regime at which delays and pollution are under control. The first step to confront congestion is the modelling and understanding of the congestion phenomenon.\n\nThe modeling of traffic flows has been a prevalent hot topic since the late 70's, when the Gipps' model appeared .
The Gipps' model and other car-following models have evidenced the necessity of modeling traffic flows to improve road network efficiency and have also shown how congestion severely affects the traffic flows. Over the last ten years, the complex networks community has also proposed stylized models to analyze the problem of traffic congestion in networks and design optimal topologies to avoid it. The focus of attention of the previous works was the onset of congestion, which corresponds to a critical point in a phase transition, and how it depends on the topology of the network and the routing strategies used. However, the proper analysis of the system after congestion has remained analytically slippery. It is known that when a transportation network reaches congestion, the system becomes highly non-linear, large fluctuations exist, and the travel time and the number of vehicles queued at a junction diverge. This phenomenon is equivalent to a phase transition in physics, and its modeling is challenging. Here, we propose an idealized model to predict the behaviour of transportation networks after the onset of congestion. The presented model is analytically tractable and can be iteratively solved up to convergence. To the best of our knowledge, this is the first analytical model that is able to give predictions beyond the onset of congestion. We present the model in terms of road transportation networks, but it could also be applied to analyze other types of transportation networks such as computer networks, business organizations or social networks.\n\n# Results\n\nTo identify congestion hotspots in urban environments we propose a model based on the theory of critical (congestion) phenomena on complex networks. The model, which we call *Microscopic Congestion Model* (MCM), is a mechanistic (yet simple) and analytically tractable model. It is based on assuming that the growth of vehicles observed in each congested node of the network is constant. This usually happens in real transportation networks at the stationary state. The assumption allows us to describe, with a set of balance equations (one for each node), the increment of vehicles in the junction queues and the number of vehicles arriving at or traversing each junction from neighboring junctions. Mathematically, the increment of the vehicles per unit time at every junction $i$ of the city, $\\Delta{q}_{i}$, satisfies the following balance equation: $$\\Delta{q}_{i} = g_{i} + \\sigma_{i} - d_{i},\n \\label{BE}$$ where $g_{i}$ is the average number of vehicles entering junction $i$ from the area surrounding $i$, $\\sigma_{i}$ is the average number of vehicles that arrive at junction $i$ from the adjacent links of that junction, and $d_{i} \\in [0,\\tau_i]$ corresponds to the average number of vehicles that actually finish in junction $i$ or traverse towards other junctions. Note that the value of $d_i$ is upper-bounded by the maximum number of vehicles $\\tau_i$ that can traverse junction $i$ in a time step. This simulates the physical constraints of the road network.
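As a purely schematic illustration of this balance equation and of the capacity constraint (variable names are ours, not the paper's):

```python
# Schematic illustration of the balance equation above (names are ours).
# Given the inflow from neighbouring junctions (sigma_i), the locally
# generated vehicles (g_i) and the maximum routing rate (tau_i), the number
# of processed vehicles d_i is capped by tau_i and the queue grows by
# whatever cannot be processed.
def junction_update(g_i, sigma_i, tau_i):
    inflow = g_i + sigma_i
    d_i = min(tau_i, inflow)      # d_i is upper-bounded by tau_i
    delta_q_i = inflow - d_i      # Delta q_i = g_i + sigma_i - d_i
    return d_i, delta_q_i

print(junction_update(g_i=0.2, sigma_i=0.5, tau_i=1.0))  # uncongested: (0.7, 0.0)
print(junction_update(g_i=0.6, sigma_i=0.9, tau_i=1.0))  # congested:   (1.0, 0.5)
```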
A graphical explanation of the variables of the model is shown in Fig.\u00a0.\n\nThe system of Eqs.\u00a0 defined for every node $i$, is coupled through the incoming flux variables $\\sigma_{i}$, that can be expressed as $$\\sigma_{i} = \\sum^{S}_{j=1} P_{ji} p_{j} d_{j},\n \\label{sigma}$$ where $P_{ji}$ accounts for the routing strategy of the vehicles (probability of going from $j$ to $i$), $p_j$ stands for the probability of traversing junction $j$ but not finishing at $j$ and $S$ is the number of nodes in the network (see *Materials and Methods* for a detailed description of the MCM).\n\nFor each junction $i$, the onset of congestion is determined by $d_i=\\tau_i$, meaning that the junction is behaving at its maximum capability of processing vehicles. Thus, for any flux generation ($g_i$), routing strategy ($P_{ij}$) and origin-destination probability distribution, Eqs.\u00a0 can be solved using an iterative approach (see *Materials and Methods*) to predict the increase of vehicles per unit time at each junction of the network ($\\Delta{q}_{i}$). The only hypothesis we use is that the system dynamics has reached a stationary state in which the growth of the queues is constant. It is worth commenting here that the MCM model considers a fixed average of new vehicles entering the system $g_i$. However, $g_i$ certainly changes during day time, with increasing values in rush hours and lower values during off-peak periods. MCM can easily consider evolving values of $g_i$ provided the time scale to reach the stationary state in the MCM (which is usually of the order of minutes in real traffic systems) is shorter than the rate of change in the evolution of $g_i$ (which is usually of the order of hours for the daily peaks).\n\n## Validation on synthetic networks\n\nTo validate MCM we conducted experiments on several synthetic networks and with two different routing strategies: local search strategy and shortest path strategy. In both routing strategies we assume, for simplicity, that all vehicles randomly choose the starting and ending junctions of their journey uniformly within all junctions of the network. Thus, each junction generates new vehicles with the same rate $g_i = \\rho$. For shortest path strategy, vehicles follow a randomly selected shortest path towards the destination. Without loss of generality we fix $\\tau=1$ and analyze the performance of MCM for different values of $\\rho$.\n\nFigure\u00a0 shows the accuracy on predicting the values of the order parameter $\\eta=\\frac{\\sum \\Delta q_i}{\\rho S}$ and $d_i$ for shortest paths routing strategy. As in refs.\u00a0, this order parameter $\\eta$ corresponds to the ratio between in-transit and generated vehicles. All experiments show that the MCM achieves high accuracy in predicting the macroscopic and microscopic variables of the stylised transportation dynamics.\n\n## Application to real scenarios\n\nINRIX Traffic Scorecard (http:\/\/www.inrix.com\/) reports the rankings of the most congested countries worldwide in 2014. US, Canada and most of the European countries are in the top 15, with averages that range from 14 to 50 hours per year wasted in congestion, with their corresponding economical and environmental negative consequences. 
To demonstrate that the MCM model can be applied to real scenarios to obtain real predictions, in the following we apply the MCM model to the nine most congested cities according to the INRIX Traffic Scorecard (see Table\u00a0).\n\nWe first focus on the city of Milan, the city with the largest INRIX value. To evaluate the outcome of the MCM model, we first gather data about the road network topology using Open Street Map (OSM). OSM data represents each road (or way) with an ordered list of nodes which can either be road junctions or simply changes of the direction of the road. We have obtained the required abstraction of the road network by building a simplified version of the OSM data which only accounts for road junctions (nodes). Then, for each pair of adjacent junctions we have queried the real travel distance (i.e.\u00a0following the road path) using the API provided by Google Maps. The resulting network corresponds to a spatial weighted directed network where the driving directions are represented and the weight of each link indicates the expected traveling time between two adjacent junctions.\n\nSecond, we build up the dynamics of the model analyzing real traffic data provided by Telecom Italia for their Big Data Challenge. The data provides, for every car entering the cordon pricing zone in Milan during November and December 2013, an encoding of the car's plate number, time and gate of entrance (a total of 9183475 records). This allows us to obtain the (hourly) average incoming and outgoing traffic flow for each gate of the cordon taxed area.\n\nGiven the previous topology and traffic information, we generated traffic compatible with the observations, and evaluated the outcome of the MCM model. Specifically, the simulated dynamics is as follows: for each vehicle entering the Area-C we fix a randomly selected location as destination and use the shortest path route towards it. After the vehicle has arrived at its destination, it randomly chooses an exit door and travels to it also using the shortest path route. This is similar to the well-known *Home-to-Work* travel pattern (see details in *Materials and Methods*). Figures\u00a0 and\u00a0 show the obtained results. Figure\u00a0 displays the predicted congestion hotspots on a map of Milan, while panel **A** of the same figure shows a real traffic situation obtained with Google Maps. We see that the predicted congestion hotspots are located in the circular roads of Milan as well as on the arterial roads of the city; this agrees with the real traffic situation shown in panel **A**. Figure\u00a0 shows the distribution of the mean increments each junction has to deal with. This might be a good indicator to decide on future planning actions to improve city mobility. However, differently from what is described in , the improvement of the throughput of a single junction might not be enough to improve city mobility since this might end up with the collapse of neighbouring junctions (their incoming rate $\\sigma_i$ will increase). This situation is similar to Braess' paradox . Figure\u00a0 shows the mean increment of vehicles (in vehicles per minute) for each hour of the weekday.
The figure clearly shows the morning and evening rush hours as well as the lunch time.\n\n```latex\n\\begin{figure*}[!ht]\\begin{tabular}{ll}\n {\\bf A} & {\\bf B}\\\\\n \\includegraphics[width=0.90\\columnwidth]{fig03a.png} ~~~ &~~~\n \\includegraphics[width=0.90\\columnwidth]{fig03b.png}\n \\end{tabular}\n \\caption{Congestion hotspot analysis of the city of Milan. Panel {\\bf A} shows the typical situation around 9 a.m. for a week day. The image and the data have been obtained with Google Maps. Google Maps displays traffic information considering historical data and real-time car velocity reported by smartphones \\cite{barth2009googlemapstraffic}. Panel {\\bf B} shows the prediction of the MCM model considering the real road topology obtained using Open Street Map and real traffic data provided by Telecom Italia for their Big Data Challenge. For every congestion hotspot the model has predicted, we show its mean increment of the queue size, $\\langle \\Delta q_{i} \\rangle$.}\n \\label{fig:milano_hotspot_map}\n\\end{figure*}\n```\n\nFor the other top nine congested cities, we do not have prior traffic information, neither about the real flux of vehicles nor about the vehicle source and destination distributions (to obtain a fair comparison between all the analysed cities we have not considered the Telecom traffic data for Milan here). Thus, for each city, we consider homogeneously distributed source and destination locations and the required road traffic to obtain an order parameter $\\eta$ compatible with the congestion effects recorded by INRIX sensing of real traffic. By relating the INRIX value and $\\eta$, we are assuming that there exists a relation between the fraction of global congestion and the fraction of extra time wasted reported by INRIX. The obtained results are summarized in Table\u00a0, which shows that the number of hotspots is correlated with the INRIX value. This provides evidence that the percentage increase in the average travel time to commute between two city locations is related to the number of congestion hotspots and to the excess of vehicles within the city.\n\n```latex\n\\begin{threeparttable}\n \\begin{tabular}{lcccc}\n City & INRIX\\tnote{a} & hotspots & nodes & links\\\\\n \\hline\n Milano & 36.2 & 108 & 6924 & 14315 \\\\\n London & 32.4 & 93 & 6378 & 14662 \\\\\n Los Angeles & 32.2 & 57 & 6799 & 19368 \\\\\n Brussels & 30.5 & 50 & 6645 & 15624 \\\\\n Antwerpen & 28.6 & 44 & 6530 & 15252 \\\\\n San Francisco & 27.9 & 45 & 8854 & 25530 \\\\\n Stuttgart & 21.9 & 34 & 8330 & 19946 \\\\\n Nottingham & 21.6 & 28 & 7337 & 16723 \\\\\n Karlsruhe & 21.3 & 19 & 4257 & 10379 \\\\\n \\hline\n \\end{tabular}\n \\begin{tablenotes}\n \\item[a] The INRIX index is the percentage increase in the average travel time of a commute above free-flow conditions during peak hours, e.g.\\ an INRIX index of~30 indicates a 40-minute free-flow trip will take 52~minutes. Each city has been mapped to a graph with the indicated numbers of nodes and links.\n \\end{tablenotes}\n \\end{threeparttable}\n```\n\n# Discussion\n\nThe previous results show that the MCM (Microscopic Congestion Model) can be used to predict the local congestion before and beyond the onset of congestion of a transportation network. To the best of the authors' knowledge, this is the first analytical model that is able to give predictions beyond the onset of congestion, where the system is highly non-linear, large fluctuations exist and the number of vehicles in transit diverges with respect to time.
Our model is based on assuming that the growth of vehicles observed in each congested node of the network is constant, which allowed us to derive a set of balance equations that can accurately predict microscopic, mesoscopic and macroscopic variables of the transportation network.\n\nTraffic congestion is a common and open problem whose negative impacts range from wasted time and unpredictable travel delays to a waste of energy and an uncontrolled increase of air pollution. A first step towards the understanding and mitigation of congestion and its related consequences is the analytical modelling of the congestion phenomenon. Here, we have shown that the MCM model is detailed enough to give real predictions considering real traffic data and topology. These results pave the way to a new generation of stylized physical models of traffic on networks in the congestion regime, which could be very valuable to assess and test new traffic policies in urban areas in a computer simulated scenario.\n\n# Materials and Methods\n\n## Microscopic Congestion Model\n\nLet node $i$ denote a road junction, edge $a_{ij}$ the road segment between junctions $i$ and $j$, $N^{\\mbox{\\scriptsize in}}_{i}$ and $N^{\\mbox{\\scriptsize out}}_{i}$ the sets of ingoing and outgoing neighbouring junctions of junction $i$ respectively, and $S$ the number of junctions in the road network of the city. Incoming vehicles to node $i$ at each time step can be of two types: those coming from other junctions $N^{\\mbox{\\scriptsize in}}_{i}$ and those that start their trip with node $i$ as their first crossed junction. We consider this second type of incoming vehicles as generated in node $i$. Our Microscopic Congestion Model (MCM) describes the increment of the vehicles per unit time at every junction $i$ of the city, $\\Delta{q}_{i}$, as: $$\\label{oneLayerMMC}\n \\Delta{q}_{i} = g_{i} + \\sigma_{i} - d_{i}\\,,$$ where $g_{i}$ is the average number of vehicles generated in node $i$, $\\sigma_{i}$ is the average number of vehicles that arrive at node $i$ from $N^{\\mbox{\\scriptsize in}}_{i}$ junctions, and $d_{i} \\in [0,\\tau_i]$ corresponds to the average number of vehicles that actually finish in it, or traverse this junction towards neighboring nodes in $N^{\\mbox{\\scriptsize out}}_{i}$. Parameter $\\tau_i$ represents the maximum routing rate of junction $i$. As described in the main text, we decompose the incoming flux of vehicles $\\sigma_i$ to node $i$ as $$\\label{sigmadef}\n \\sigma_{i} = \\sum_{j \\in N^{\\mbox{\\scriptsize in}}_{i}} P_{ji} p_{j} d_{j}\\,,$$ where $p_i$ is the probability that a vehicle waiting in node $i$ has not arrived at its destination (i.e., it is going to visit at least one more junction in the next step) and $P_{ji}$ is the probability that a vehicle crossing node $j$ goes to node $i\\in N^{\\mbox{\\scriptsize out}}_{j}$ in its next step.\n\nSince vehicles just generated in a certain node are not affected by the congestion in the rest of the network, we separate their contributions in the computation of probabilities $p$ and $P$.
Thus, we decompose $p_i$ as $$\\begin{aligned}\n\\label{smallP}\n p_{i} & = & p^{\\mbox{\\scriptsize gen}}_{i}p^{\\mbox{\\scriptsize loc}}_{i} + (1-p^{\\mbox{\\scriptsize gen}}_{i}) p^{\\mbox{\\scriptsize ext}}_{i}\\,,\n\\end{aligned}$$ where the first term accounts for vehicles generated in node $i$ ($p^{\\mbox{\\scriptsize gen}}_{i}$) whose destination is not $i$ ($p^{\\mbox{\\scriptsize loc}}_{i}$) and the second term accounts for vehicles not generated in $i$ whose destination is not $i$ ($p^{\\mbox{\\scriptsize ext}}_{i}$). Supposing trips consist in traveling through two or more junctions we have that $p^{\\mbox{\\scriptsize loc}}_{i}=1$. Probability $p^{\\mbox{\\scriptsize gen}}_{i}$ is equal to the fraction of vehicles generated in $i$ with respect to the total amount of incoming vehicles: $$\\begin{aligned}\n p^{\\mbox{\\scriptsize gen}}_{i} = \\frac{g_i}{g_i+\\sigma_i}\\,.\n\\end{aligned}$$ Considering the distribution of origins, destinations, the routing strategy and the congestion in the network, probability $p^{\\mbox{\\scriptsize ext}}_{i}$ can be expressed in terms of the effective node betweenness $\\tilde{B}_i$ and the effective vehicle arrivals $\\tilde{e}_i$ (the amount of vehicles with destination node $i$ that arrive to node $i$ at each time step): $$\\begin{aligned}\n p^{\\mbox{\\scriptsize ext}}_{i}=\\frac{\\tilde{B}_i}{\\tilde{B}_i + \\tilde{e}_i} \\,.\n\\end{aligned}$$ The effective betweenness $\\tilde{B}_i$ of a node $i$ accounts for the expected amount of vehicles each node $i$ receives per unit time considering the routing algorithm and the overall congestion of the network. See *Materials and Methods* subsection *Effective betweenness in congested transportation networks* for an extended description and computation of the effective node betweenness $\\tilde{B}_i$ and the effective vehicle arrivals $\\tilde{e}_i$.\n\nIn the same spirit, we decompose the probability $P_{ji}$ that a vehicle waiting in node $j$ goes to node $i$ as: $$\\begin{aligned}\n\\label{BigP}\n P_{ji} &=& p^{\\mbox{\\scriptsize rgen}}_{j}P^{\\mbox{\\scriptsize loc}}_{ji} + (1-p^{\\mbox{\\scriptsize rgen}}_{j})P^{\\mbox{\\scriptsize ext}}_{ji}\\,.\n\\end{aligned}$$ The first term corresponds to the routed vehicles generated in node $j$ ($p^{\\mbox{\\scriptsize rgen}}_{j}$) that go to node $i$ ($P^{\\mbox{\\scriptsize loc}}_{ji}$) and the second term to the routed vehicles not generated in $j$ that go to node $i$ ($P^{\\mbox{\\scriptsize ext}}_{ji}$). 
Similarly as before, $p^{\\mbox{\\scriptsize rgen}}_{j}$ can be expressed as the rate between the vehicles generated in $j$ and the total amount of routed vehicles: $$\\begin{aligned}\n p^{\\mbox{\\scriptsize rgen}}_{j} = \\frac{g_j}{g_j+\\sigma_j p^{\\mbox{\\scriptsize ext}}_{j}}\\,,\n\\end{aligned}$$ and, $P^{\\mbox{\\scriptsize loc}}_{ji}$ and $P^{\\mbox{\\scriptsize ext}}_{ji}$ can be computed in terms of the normalized effective edge betweenness of the network: $$\\begin{aligned}\n P^{\\mbox{\\scriptsize loc}}_{ji} &=& \\frac{\\tilde{E}^{\\mbox{\\scriptsize loc}}_{ji}}{\\displaystyle\\sum_{k=1}^S \\tilde{E}^{\\mbox{\\scriptsize loc}}_{jk}} \\,, \\\\\n P^{\\mbox{\\scriptsize ext}}_{ji} &=& \\frac{\\tilde{E}^{\\mbox{\\scriptsize ext}}_{ji}}{\\displaystyle\\sum_{k=1}^S \\tilde{E}^{\\mbox{\\scriptsize ext}}_{jk}} \\,, \\label{oneLayerMMC_lastEq}\n\\end{aligned}$$ where the computation of $\\tilde{E}^{\\mbox{\\scriptsize loc}}_{ji}$ only considers paths that start on node $j$ and $\\tilde{E}^{\\mbox{\\scriptsize ext}}_{ji}$ only considers paths that do not start on node $j$. Equivalently to the effective node betweenness $\\tilde{B}_i$, computation of $\\tilde{E}^{\\mbox{\\scriptsize loc}}_{ji}$ and $\\tilde{E}^{\\mbox{\\scriptsize ext}}_{ji}$ consider, if required, all congested junctions in the network, as described in a later section, as well as the distribution of the vehicle sources and destinations. Note that the sum of $E^{\\mbox{\\scriptsize loc}}_{ji}$ and $E^{\\mbox{\\scriptsize ext}}_{ji}$ corresponds to the classical edge betweenness. Moreover, $P_{ji}$ is an exact expression before and after the onset of congestion.\n\nEventually, the MCM is composed by a set of $S$ equations ($\\Delta{q}_{i} = g_{i} + \\sigma_{i} - d_{i}$), one for each junction, and, in principle, a set of $2S$ unknowns, $\\Delta{q}_{i}$ and $d_i$ for each junction. To see that the system is indeed determined we need to note that for congested junctions $\\Delta q_{i} > 0$ and, thus, after the transient state, $d_{i}=\\tau_i$. For the non-congested junctions we have that $\\Delta q_{i}=0$ and consequently $d_i = g_i + \\sigma_i$. In conclusion, for any node $i$, either $d_i=\\tau_i$ or $d_i=g_i + \\sigma_i$ which reduces the amount of unknowns to $S$.\n\nTo solve the model given a fixed generation rate $g_i$, we start by considering that no junction is congested and we solve the set of equations Eqs.\u00a0\u2013 by iteration. It is possible that some nodes exceed their maximum routing rate. If this is the case, we set the node with maximum $d_i$ as congested and we solve the system again. This process is repeated until no new junction exceeds its maximum routing rate.\n\n## Onset of congestion using the Microscopic Congestion Model\n\nMost of the works that consider static routing strategies assume that the generation rate of vehicles is the same for all nodes, $g_i=\\rho$. In that case, it is possible to compute the critical generation rate $\\rho_c$ such that for any generation rate $\\rho > \\rho_c$ the network will not be able to route or absorb all the traffic. After this point is reached, the amount of vehicles $Q(t)$ in the network will grow proportionally with time, $Q(t) \\propto t$, since some of the vehicles get stacked in the queues of the nodes. 
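The iterative solution procedure described above (start with no congested junctions, iterate the balance equations, mark the node with the largest $d_i$ as congested whenever some node exceeds its maximum routing rate, and repeat) can be sketched as follows. This is a heavily simplified illustration, ours and not the authors' code: the routing probabilities $P_{ji}$ and $p_j$ are treated here as fixed inputs, whereas in the full MCM they are recomputed self-consistently from the effective betweenness of the congested network.

```python
import numpy as np

# Heavily simplified sketch of the iterative MCM solution scheme (names are
# ours).  g, tau, p are length-S arrays; P[j, i] is the probability that a
# vehicle routed at junction j continues to junction i.
def solve_mcm(g, tau, P, p, n_iter=500):
    S = len(g)
    congested = np.zeros(S, dtype=bool)
    while True:
        sigma = np.zeros(S)
        for _ in range(n_iter):                      # fixed-point iteration for sigma
            d = np.where(congested, tau, g + sigma)  # d_i = tau_i or g_i + sigma_i
            sigma = (P * (p * d)[:, None]).sum(axis=0)
        d = np.where(congested, tau, g + sigma)
        over = (d > tau + 1e-12) & ~congested
        if not over.any():                           # no new junction exceeds tau_i
            break
        worst = np.argmax(np.where(over, d, -np.inf)) # node with maximum d_i
        congested[worst] = True                       # mark it congested, solve again
    delta_q = g + sigma - d
    return d, delta_q, congested

# Example: a 3-node ring with uniform generation and uniform routing.
P = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])
d, dq, cong = solve_mcm(g=np.full(3, 0.4), tau=np.ones(3), P=P, p=np.full(3, 0.5))
```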
This transition to the congested state is characterized using the following order parameter: $$\\label{oneLayerOrderParameter}\n \\eta(\\rho)= \\lim_{t\\rightarrow\\infty}\\frac{\\langle{\\Delta Q}\\rangle}{\\rho S}\\,,$$ where $\\langle\\Delta Q\\rangle$ represents the average increment of vehicles per unit of time in the stationary state. Basically, the order parameter measures the ratio between in-transit and generated vehicles.\n\nIn the non-congested phase, the amount of incoming and outgoing vehicles for each node can be computed in terms of the node's algorithmic betweenness $B_i$, see ref.\u00a0. In particular, $$\\label{oneLayerSigma}\n \\sigma_{i} = \\rho\\left(\\frac{B_i}{S-1} + 1\\right)\\,,$$ where the second term inside the parentheses accounts for the fact that, in our model, vehicles are also queued at the destination node, unlike in ref.\u00a0. When no junction is congested we have that $\\Delta{q}_{i} = 0$ for all nodes and consequently $$\\label{oneLayerD^i}\n d_i = \\rho + \\sigma_i = \\rho\\left(\\frac{B_i}{S-1} + 2\\right)\\,.$$ A node $i$ becomes congested when it is required to process more vehicles than its maximum processing rate, $d_i > \\tau$. Thus, the critical generation rate at which the first node, and so the system, reaches congestion is: $$\\label{rhocFirstNode}\n \\rho_{c} = \\min_i \\frac{\\tau\\left(S-1\\right)}{B_i + 2(S-1)}\\,.$$\n\nThis is one of the most important analytical results on transportation networks with static routing strategies. In the following, we show that we can recover Eq.\u00a0() before the onset of congestion using our MCM approach. After substitution of the expression of the probabilities in Eq.\u00a0: $$\\sigma_i = \\sum\\limits_j \\frac{\\rho(B_j + S-1) P^{\\mbox{\\scriptsize loc}}_{ji} + \\sigma_j B_j P^{\\mbox{\\scriptsize ext}}_{ji}}{(\\rho+\\sigma_j)(B_j + S-1)} d_j \\,,$$ and, given we do not have congestion (i.e., $d_j = \\rho + \\sigma_j$), it simplifies to $$\\label{eq:sigmaMCM}\n \\sigma_i = \\sum\\limits_j \\frac{\\rho(B_j + S-1) P^{\\mbox{\\scriptsize loc}}_{ji} + \\sigma_j B_j P^{\\mbox{\\scriptsize ext}}_{ji}}{B_j + S-1}\\,.$$ Equation () in matrix form becomes $$(I - M)\\boldsymbol{\\sigma} = \\rho\\boldsymbol{\\pi} \\,,$$ where $$\\begin{aligned}\n M_{ij} &=& \\frac{B_j P^{\\mbox{\\scriptsize ext}}_{ji}}{B_j + S - 1}\\,,\\\\\n \\pi_i &=& \\sum_j P^{\\mbox{\\scriptsize loc}}_{ji}\\,,\n\\end{aligned}$$ and then $$\\boldsymbol{\\sigma} = \\rho(I-M)^{-1}\\boldsymbol{\\pi}\\,.$$ This expression can be shown to be equivalent to Eq.\u00a0() by using the following relationship between node and edge betweenness: $$\\label{nodeEdgeBWRelation}\n B_i + (S-1)= \\sum_{j} \\left(B_{j} P^{\\mbox{\\scriptsize ext}}_{ji} + (S-1)P^{\\mbox{\\scriptsize loc}}_{ji}\\right)\\,.$$ The right hand side corresponds to the accumulated fractions of paths that pass through the neighbors of node $i$ and then go to $i$. Each neighbor contributes with two terms, the paths that go through $j$ coming from other nodes, and the paths that start in $j$.\n\n## Effective betweenness in congested transportation networks\n\nThe effective betweenness $\\tilde{B}_i$ of a node $i$, as defined in ref.\u00a0, accounts for the expected amount of vehicles each node $i$ receives per unit time. When the network is not congested and the vehicle generation rate $g_i$ is equal for all nodes, $g_i = \\rho$, the number of vehicles each node receives can be obtained using Eq.\u00a0. 
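Before turning to the congested regime, here is a small sketch (ours) of the non-congested-phase expressions above: it computes $\rho_c$ from the algorithmic betweenness of a toy graph. networkx is assumed, and since networkx counts each unordered node pair once for undirected graphs, the betweenness is doubled to match the ordered origin-destination pairs used in the text.

```python
import networkx as nx

# Sketch (ours) of the critical generation rate derived above,
#   rho_c = min_i tau * (S - 1) / (B_i + 2 * (S - 1)),
# for homogeneous generation and shortest-path routing.  B_i is the
# unnormalized algorithmic betweenness over ordered origin-destination pairs;
# networkx counts each unordered pair once, hence the factor of 2.
def critical_rate(G, tau=1.0):
    S = G.number_of_nodes()
    bet = nx.betweenness_centrality(G, normalized=False)
    return min(tau * (S - 1) / (2.0 * B + 2 * (S - 1)) for B in bet.values())

G = nx.erdos_renyi_graph(200, 0.05, seed=1)
print(critical_rate(G))
```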
However, if the network is congested, the traffic dynamics becomes highly non-linear and the value of $\\sigma_i$ computed in Eq.\u00a0 becomes a poor approximation.\n\nSuppose we focus on a particular congested node $j^{\\ast}$ of the network. For $j^{\\ast}$, being congested means that it is receiving more vehicles than it can process and route. In particular, of the $\\sigma_{j^{\\ast}}+g_{j^{\\ast}}$ vehicles that arrive at the node, only $\\tau_{j^{\\ast}}$ can be processed at each time step.\n\nTherefore, the contribution to the effective betweenness $\\tilde{B}_i$ of the paths from a source\/destination pair, $(s,t)$, that traverse the congested node $j^{\\ast}$ before reaching $i$, must be rescaled by the fraction of processable vehicles: $$\\label{rescalingEffBetweenness}\n s_{j^{\\ast}} = \\frac{\\tau_{j^{\\ast}}}{\\sigma_{j^{\\ast}} + g_{j^{\\ast}}}\\,.$$ When a path traverses multiple congested nodes ${j^{\\ast}},{k^{\\ast}},\\dots$, the remaining fraction of paths that will reach the target node will be the result of the application of the multiple re-scalings $s_{x^{\\ast}}$.\n\nThe computation of $s_{j^{\\ast}}$ is not straightforward. In general, $\\sigma_i$ is not known after the onset of congestion and depends on the effective betweenness, which in turn requires knowing the $s_{j^{\\ast}}$ fraction for all congested nodes. Thus, an iterative calculation is needed to fit all the parameters at the same time, as we do in our Microscopic Congestion Model.\n\nThe effective arrivals $\\tilde{e}_i$ account for the number of vehicles with destination node $i$ that arrive at node $i$ at each time step. This value in the non-congested phase can be obtained, considering homogeneous source and destination nodes, as $$\\label{effectiveArrivals}\n e_i = \\rho(S-1)\\,.$$ However, congestion affects the variable $e_i$ as well, and it needs to be corrected accordingly using the same procedure presented above.\n\n## Traffic Dynamics\n\nTo simulate the traffic dynamics of the road network, we assign a first-in-first-out queue to each junction that simulates the blocking time of vehicles before they are allowed to cross it and continue their trip. We suppose these queues have infinite capacity and a maximum processing rate that simulates the physical constraints of the junction. Vehicle origins and destinations may follow any desired distribution. In this work, we have considered two distributions: a random uniform distribution for the synthetic experiments, and one obtained considering the ingoing and outgoing flux of vehicles of the city of Milan. At each time step (of 1 minute duration) vehicles are generated and arrive at their first junction. During the following time steps, vehicles navigate towards their destination following any routing strategy. Here, we have used two different routing strategies: shortest-path and random local search.\n\nFor the particular case of simulating traffic in the city of Milan we assume a traffic dynamics similar to the \"Home-to-Work\" travel pattern where vehicles arrive from the outskirts of the city, go to the city center and then return to the outskirts. Specifically, in our simulation, traffic is generated in the peripheral junctions of the network, goes to a randomly selected junction within the city and then returns to a randomly selected peripheral junction.
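A schematic sketch (ours; networkx is assumed, and `peripheral` is a hypothetical list of boundary junctions) of this Home-to-Work trip generation:

```python
import random
import networkx as nx

# Schematic sketch of the "Home-to-Work" trip pattern described above: a trip
# enters at a peripheral junction, reaches a random internal junction by
# shortest path, then leaves through another random peripheral junction.
def home_to_work_route(G, peripheral, weight="travel_time"):
    entry, exit_ = random.sample(list(peripheral), 2)
    internal = random.choice([n for n in G if n not in peripheral])
    inbound = nx.shortest_path(G, entry, internal, weight=weight)
    outbound = nx.shortest_path(G, internal, exit_, weight=weight)
    return inbound + outbound[1:]   # full sequence of junctions crossed
```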
We do not consider trips with origin and destination inside the city center since public transportation systems (e.g., train or subway) usually constitute a better alternative than private vehicles for those trips.\n\nThe maximum crossing rate of each junction $\\tau_i$ accounts, among others, for the existence of traffic lights governing the junction, the width of the street as well as its traffic. We have not been able to get this information for the studied cities, and consequently we cannot set to each junction its precise value. Instead, without loss of generality and for the sake of simplicity, we set to all junctions the same maximum crossing rate, $\\tau_i = 15$ (an estimation of the average of their real values).\n\n# Acknowledgements\n\nThis work has been supported by Ministerio de Econom\u0131\u0301a y Competitividad (Grant FIS2015-71582-C2-1) and European Comission FET-Proactive Projects MULTIPLEX (Grant 317532). A.A.\u00a0also acknowledges partial financial support from the ICREA Academia and the James S. McDonnell Foundation.","meta":{"dup_signals":{"dup_doc_count":18,"dup_dump_count":16,"dup_details":{"curated_sources":1,"2018-26":1,"2018-13":1,"2018-05":1,"2017-47":1,"2017-39":1,"2017-34":1,"2017-30":2,"2017-26":1,"2017-22":1,"2017-17":1,"2017-09":1,"2017-04":1,"2016-50":1,"2016-44":2,"2018-39":1}},"filename":"out\/1604.07728_extract_MicroscopicCongestionModel.tex.md"},"subset":"arxiv"} +{"text":"**Quantum information processing, science of**<\/span>[^1] - The theoretical, experimental and technological areas covering the use of quantum mechanics for communication and computation. Quantum information processing includes investigations in quantum information theory, quantum communication, quantum computation, quantum algorithms and their complexity, and quantum control. The science of quantum information processing is a highly interdisciplinary field. In the context of mathematics it is stimulating research in pure mathematics (e.g. coding theory, $*$-algebras, quantum topology) as well as requiring and providing many opportunities for applied mathematics.\n\nThe science of quantum information processing emerged from the recognition that usable notions of information need to be physically implementable. In the 1960s and 1970s researchers such as R.\u00a0Landauer, C.\u00a0Bennett, C.\u00a0Helstrom and A.\u00a0Holevo realized that the laws of physics give rise to fundamental constraints on the ability to implement and manipulate information. Landauer repeatedly stated that \"information is physical\", providing impetus to the idea that it should be possible to found theories of information on the laws of physics. This is in contrast to the introspective approach which led to the basic definitions of computer science and information theory as formulated by A.\u00a0Church, A.\u00a0Turing, C.\u00a0Shannon and others in the first half of the 20th century.\n\nEarly work in studying the physical foundations of information focused on the effects of energy limitations and the need for dissipating heat in computation and communication. Beginning with S.\u00a0Wiesner's work on applications of quantum mechanics to cryptography in the late 1960s, it was realized that there may be intrinsic advantages to using quantum physics in information processing. Quantum cryptography and quantum communication in general were soon established as interesting and non-trivial extensions of classical communication based on bits. 
That quantum mechanics may be used to improve the efficiency of algorithms was first realized when attempts at simulating quantum mechanical systems resulted in exponentially complex algorithms compared to the physical resources associated with the system simulated. In the 1980s, P.\u00a0Benioff and R.\u00a0Feynman introduced the idea of a quantum computer for efficiently implementing quantum physics simulations. Models of quantum computers were developed by D.\u00a0Deutsch, leading to the formulation of artificial problems that could be solved more efficiently by quantum than by classical computers. The advantages of quantum computers became widely recognized when P.\u00a0Shor (1994) discovered that they can be used to efficiently factor large numbers \u2014 a problem believed to be hard for classical deterministic or probabilistic computation and whose difficulty underlies the security of widely used public key encryption methods. Subsequent work established principles of quantum error-correction to ensure that quantum information processing was robustly implementable. See\u00a0 for introductions to quantum information processing and a quantum mechanics tutorial.\n\nIn the context of quantum information theory, information in the sense of C.\u00a0Shannon is referred to as *classical* information. The fundamental unit of classical information is the *bit*, which can be understood as an ideal system in one of two states or configurations, usually denoted by $0$ and $1$. The fundamental units of quantum information are qubits (short for \"quantum bits\"), whose states are identified with all \"unit superpositions\" of the classical states. It is common practice to use the bra-ket conventions for denoting states. In these conventions, the classical configurations are denoted by $|{0}\\rangle$ and $|{1}\\rangle$, and superpositions are formal sums $\\alpha|{0}\\rangle+\\beta|{1}\\rangle$, where $\\alpha$ and $\\beta$ are complex numbers satisfying $|\\alpha|^2+|\\beta|^2 = 1$. The states $|{0}\\rangle$ and $|{1}\\rangle$ represent a standard orthonormal basis of a two-dimensional Hilbert space. Their superpositions are unit vectors in this space. The state space associated with $n>1$ qubits is formally the tensor product of the Hilbert spaces of each qubit. This state space can also be obtained as an extension of the state space of $n$ classical bits by identifying the classical configurations with a standard orthonormal basis of a $2^n$ dimensional Hilbert space.\n\nAccess to qubit states is based on the postulates of quantum mechanics with the additional restriction that they are *local* in the sense that elementary operations apply to one or two qubits at a time. Most operations can be expressed in terms of standard measurements of a qubit and two-qubit *quantum gates*. The standard qubit measurement has the effect of randomly projecting the state of the qubit onto one of its classical states; this state is an output of the measurement (accessible for use in a classical computer if desired). For example, using the tensor product representation of the state space of several qubits, a measurement of the first qubit is associated with the two projection operators $P^{(1)}_0=P_0\\otimes I\\otimes\\ldots$ and $1-P^{(1)}_0$, where $P_0|{0}\\rangle = |{0}\\rangle$ and $P_0|{1}\\rangle = 0$. 
If $\\mathbf{\\psi}$ is the initial state of the qubits, then the measurement outcome is $0$ with probability $p_0=\\|P_0\\mathbf{\\psi}\\|^2$, in which case the new state is $P_0\\mathbf{\\psi}\/\\sqrt{p_0}$, and the outcome is $1$ with probability $1-p_0=\\|P_1\\mathbf{\\psi}\\|^2$ with new state $P_1\\mathbf{\\psi}\/\\sqrt{1-p_0}$. This is a special case of a *von\u00a0Neumann measurement*. A general two-qubit quantum gate is associated with a unitary operator $U$ acting on the state space of two qubits. Thus $U$ may be represented by a $4\\times 4$ unitary matrix in the standard basis of two qubits. The quantum gate may be *applied* to any two chosen qubits. For example, if the state of $n$ qubits is $\\mathbf{\\psi}$ and the gate is applied to the first two qubits, then the new state is given by $(U\\otimes I\\otimes\\ldots\n)\\mathbf{\\psi}$. Another important operation of quantum information processing is preparation of the $|{0}\\rangle$ state of a qubit, which can be implemented in terms of a measurement and subsequent applications of a gate depending on the outcome.\n\nMost problems of theoretical quantum information processing can be cast in terms of the elementary operations above, restrictions on how they can be used and an accounting of the physical *resources* or *cost* associated with implementing the operations. Since classical information processing may be viewed as a special case of quantum information processing, problems of classical information theory and computation are generalized and greatly enriched by the availability of quantum superpositions. The two main problem areas of theoretical quantum information processing are quantum computation and quantum communication.\n\nIn studies of quantum computation (cf. **quantum computation**) one investigates how the availability of qubits can be used to improve the efficiency of algorithmic problem solving. Resources counted include the number of quantum gates applied and the number of qubits accessed. This can be done by defining and investigating various types of quantum automata, most prominently quantum Turing machines, and studying their behavior using approaches borrowed from the classical theory of automata and languages. It is convenient to combine classical and quantum automata, for example by allowing a classical computer access to qubits as defined above, and then investigating the complexity of algorithms by counting both classical and quantum resources, thus obtaining trade-offs between the two.\n\nMost of the complexity classes for classical computation have analogues for quantum computation, and an important research area is concerned with establishing relationships between these complexity classes. Corresponding to the classical class $\\mathbf{P}$ of polynomially decidable languages is the class of languages decidable in bounded error quantum polynomial time, $\\mathbf{BQP}$. While it is believed that $\\mathbf{P}$ is properly contained in $\\mathbf{BQP}$, whether this is so is at present an open problem.
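Returning briefly to the elementary operations defined above, the following sketch (ours, not part of the article; NumPy is assumed) makes the standard measurement and a two-qubit gate concrete for two qubits:

```python
import numpy as np

# Illustrative sketch of the elementary operations for n = 2 qubits: the
# standard measurement of the first qubit and the application of a two-qubit
# gate U (here the controlled-NOT).
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
P0 = np.outer(ket0, ket0)                      # projector onto |0>
I = np.eye(2)

psi = np.kron(ket0 + ket1, ket0) / np.sqrt(2)  # (|00> + |10>) / sqrt(2)

# Standard measurement of the first qubit.
P0_first = np.kron(P0, I)                      # P0 (x) I
p0 = np.linalg.norm(P0_first @ psi) ** 2       # outcome 0 with probability p0
post = P0_first @ psi / np.sqrt(p0)            # renormalized post-measurement state

# A two-qubit gate: the controlled-NOT, a 4x4 unitary acting on both qubits.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
entangled = CNOT @ psi                         # (|00> + |11>) / sqrt(2)
```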
$\\mathbf{BQP}$ is known to be contained in the class $\\mathbf{P}^{\\raisebox{.5pt}{\\#}\\mathbf{P}}$ (languages decidable in classical polynomial time given access to an oracle for computing the permanent of $0$-$1$ matrices), but the relationship of $\\mathbf{BQP}$ to the important class of nondeterministic polynomial time languages $\\mathbf{NP}$ is not known.\n\nIn quantum communication one considers the situation where two or more entities with access to local qubits can make use of both classical and quantum (communication) channels for exchanging information. The basic operations now include the ability to send classical bits and the ability to send quantum bits. There are two main areas of investigation in quantum communication. The first aims at determining the advantages of quantum communication for solving classically posed communication problems with applications to cryptography and to distributed computation. The second is concerned with establishing relationships between different types of communication resources, particularly with respect to noisy quantum channels, thus generalizing classical communication theory.\n\nEarly investigations of quantum channels focused on using them for transmitting classical information by encoding a source of information (cf. **information, source of**) with uses of a quantum channel (cf. **quantum communication channel**). The central result of these investigations is A.\u00a0Holevo's bound (1973) on the amount of classical information that can be conveyed through a quantum channel. Asymptotic achievability of the bound (using block coding of the information source) was shown in the closing years of the twentieth century. With some technical caveats, the bound and its achievability form a quantum information-theoretic analogue of Shannon's capacity theorem for classical communication channels.\n\nQuantum cryptography, distributed quantum computation and quantum memory require transmitting (or storing) quantum states. As a result it is of great interest to understand how one can communicate quantum information through quantum channels. In this case, the source of information is replaced by a source of quantum states, which are to be transmitted through the channel with high *fidelity*. As in the classical case, the state is encoded before transmission and decoded afterwards. There are many measures of fidelity which may be used to evaluate the quality of the transmission protocol. They are chosen so that a good fidelity value implies that with high probability, quantum information processing tasks behave the same using the original or the transmitted states. A commonly used fidelity measure is the Bures-Uhlmann fidelity, which is an extension of the Hilbert space norm to probability distributions of states (represented by *density operators*). In most cases, asymptotic properties of quantum channels do not depend on the details of the fidelity measure adopted.\n\nTo improve the reliability of transmission over a noisy quantum channel, one uses *quantum error-correcting codes* to encode a state generated by the quantum information source with multiple uses of the channel. The theory of quantum codes can be viewed as an extension of classical coding theory. Concepts such as minimum distance and its relationship to error-correction generalize to quantum codes. 
Many results from the classical theory, including some linear programming upper bounds and the Gilbert-Varshamov lower bounds on the achievable rates of classical codes, have their analogues for quantum codes. In the classical theory, linear codes are particularly useful and play a special role. In the quantum theory, this role is played by the *stabilizer* or *additive* quantum codes, which are in one-to-one correspondence with self-dual (with respect to a specific symplectic inner product) classical $\\mathrm{GF}_2$-linear codes over $\\mathrm{GF}_4$ (cf. **finite fields**).\n\nThe capacity of a quantum channel with respect to encoding with quantum codes is not as well understood as the capacity for transmission of classical information. The exact capacity is known only for a few special classes of quantum channels. Although there are information theoretic upper bounds, they depend on the number of channel instances, and whether or not they can be achieved is an open problem. A further complication is that the capacity of quantum channels depends on whether one-way or two-way classical communication may be used to restore the transmitted quantum information\u00a0.\n\nThe above examples illustrate the fact that there are many different types of information utilized in quantum information theory, making it a richer subject than classical information theory. Another physical resource whose properties appear to be best described by information-theoretic means is *quantum entanglement*. A quantum state of more than one quantum system (e.g. two qubits) is said to be entangled if the state can not be factorized as a product of states of the individual quantum systems. Entanglement is believed to play a crucial role in quantum information processing, as demonstrated by its enabling role in effects such as quantum key distribution, superdense coding, quantum teleportation, and quantum error-correction. Beginning in 1995 an enormous amount of effort has been devoted to understanding the principles governing the behavior of entanglement. This has resulted in the discovery of connections between quantum entanglement and classical information theory, the theory of positive maps\u00a0 and majorization\u00a0.\n\nThe investigation of quantum channel capacity, entanglement, and many other areas of quantum information processing involves various quantum generalizations of the notion of entropy, most notably the von\u00a0Neumann entropy. The von\u00a0Neumann entropy is defined as $H(\\rho)=-\\mbox{tr}\\rho\\log_2(\\rho)$ for density operators $\\rho$ ($\\rho$ is positive Hermitian and of trace $1$). It has many (but not all) of the properties of the classical information function $H(\\cdot)$ (cf. **information, amount of**). Understanding these properties has been crucial to the development of quantum information processing (see\u00a0 for reviews). Probably the most powerful known result about the von\u00a0Neumann entropy is the strong subadditivity inequality. Many of the bounds on quantum communication follow as easy corollaries of strong subadditivity. Whether still more powerful entropic inequalities exist is not known.\n\nAn important property of both classical and quantum information is that although it is intended to be physically realizable, it is abstractly defined and therefore independent of the details of a physical realization. It is generally believed that qubits encapsulate everything that is finitely realizable using accessible physics.
This belief implies that any information processing implemented by available physical systems using resources appropriate for those systems can be implemented as efficiently (with at most polynomial overhead) using qubits. It is noteworthy that there is presently no proof that information processing based on quantum field theory (cf. **quantum field theory**) is not more efficient than information processing with qubits. Furthermore, the as-yet unresolved problem of combining quantum mechanics with general relativity in a theory of quantum gravity prevents a fully satisfactory analysis of the information processing power afforded by fundamental physical laws.\n\nMuch effort in the science of quantum information processing is being expended on developing and testing the technology required for implementing it. An important task in this direction is to establish that quantum information processing can be implemented robustly in the presence of noise. At first it was believed that this was not possible. Arguments against the robustness of quantum information were based on the apparent relationship to analogue computation (due to the continuity of the amplitudes in the superpositions of configurations) and the fact that it seemed difficult to observe quantum superpositions in nature (due to the rapid loss of phase relationships called *decoherence*). However, the work on quantum error-correcting codes rapidly led to the realization that provided the physical noise behaves locally and is not too large, it is at least in principle possible to process quantum information fault tolerantly. Research in how to process quantum information reliably continues; the main problems is improving the estimates on the maximum amount of tolerable noise for general models of quantum noise and for the types of noise expected in specific physical systems. Other issues include the need to take into consideration restrictions imposed by possible architectures and interconnection networks.\n\nThere are many physical systems that can potentially be used for quantum information processing\u00a0. An active area of investigation involves determining the general mathematical features of quantum mechanics required for implementing quantum information. More closely tied to existing experimental techniques are studies of specific physical systems. In the context of communication, optical systems are likely to play an important role, while for computation there are proposals for using electrons or nuclei in solid state, ions or atoms in electromagnetic traps, excitations of superconductive devices etc. In all of these, important theoretical issues arise. These issues include how to optimally use the available means for controlling the quantum systems (*quantum control*), how to best realize quantum information (possibly indirectly), what architectures can be implemented, how to translate abstract sequences of quantum gates to physical control actions, how to interface the system with optics for communication, refining the theoretical models for how the system is affected by noise and thermodynamic effects, and how to reduce the effects of noise.\n\n| |\n|:--------------|\n| E. H. Knill |\n| M. A. Nielsen |\n\nAMS 2000 Subject Classification: 81P68, 68Q05<\/span>\n\n[^1]: Article by E.\u00a0H.\u00a0Knill and M.\u00a0A.\u00a0Nielsen accepted for Supplement III, Encyclopaedia of Mathematics (publication expected Summer 2001). See also \"http:\/\/www.wkap.nl\/series.htm\/ENM\". 
E.\u00a0H.\u00a0Knill is with the Los Alamos National Laboratory, MS B265, Los Alamos NM 87545, USA, and M.\u00a0A.\u00a0Nielsen is with the Center for Quantum Computer Technology, Department of Physics, University of Queensland 4072, Australia.","meta":{"dup_signals":{"dup_doc_count":11,"dup_dump_count":4,"dup_details":{"curated_sources":1,"2024-18":1,"2024-22":1,"unknown":8}},"filename":"out\/quant-ph0010058.tex.md"},"subset":"arxiv"} +{"text":"abstract: Are there parallels between the furthest reaches of our universe, and the foundations of thought, awareness, perception, and emotion? What are the connections between the webs and structures that define both? What are the differences? \"As Above As Below\" was an exhibition that examined these questions. Conceptualized, curated, and produced by Esther Mallouh, it consisted of six artworks, each of them the product of a collaboration that included at least one artist, astrophysicist, and neuroscientist. The installations explored new parallels between intergalactic and neuronal networks through media such as digital projection, virtual reality, and interactive multimedia, and served to illustrate diverse collaboration practices and ways to communicate across very different fields.\nauthor: Mark Neyrinck, Tamira Elul, Michael Silver, Esther Mallouh,; Miguel Arag\u00f3n-Calvo, Sarah Banducci, Cory Bloyd, Thea Boodhoo, Benedikt Diemer,; Bridget Falck, Dan Feldman, Yoon Chung Han, Jeffrey Kruk, Soo Jung Kwak,; Yagiz Mungan, Miguel Novelo, Rushi Patel, Purin Phanichphant, Joel Primack,; Olaf Sporns, Forest Stearns, Anastasia Victor, David Weinberg, Natalie M. Zahr\nbibliography: refs.bib; refs_res.bib\ndate: SciArt Magazine, February 2020 \ntitle: Exploring Connections Between Cosmos & Mind \n Through Six Interactive Art Installations in \n \"As Above As Below\"\n\nA short video walkthrough of the exhibition is at , and a panel discussion at the opening, featuring many of the authors, is at . See for even more information about the project.\n\n# Introduction\n\nThe visual similarities between cosmic and neural webs are striking. Fig.\u2006A shows a single mouse hippocampal neuron with an elaborate dendritic arbor stemming from a central cell body (large green blob); Fig.\u2006B shows a cluster from a cosmological simulation, in which a whole galaxy like ours would be but a middling yellow dot.\n\nHowever, many networks in the universe exhibit these kinds of visual similarities. For example, other biological systems, such as circulatory and respiratory networks in animals, and branching structures in plants and fungi, appear individually to have nearly maximal fractal dimension, and therefore look similar . These features are also evident in networks created by humans, such as traffic in road and air-travel systems, and non-spatial networks such as the Internet. See for more about the network-science properties of the cosmic web.\n\nWith so many types of similar spatial networks to pick from, why focus on cosmic and neural webs? One compelling reason is that they are each fascinating on their own: one is the network that is able to reflect on and study itself, and the scale of the other is nearly as big as the observable Universe. It is also curious that they are of similar complexity, with numbers of nodes that are within a couple of orders of magnitude of each other .\n\nThe term \"cosmic web\" was first coined by . 
It refers to the network of filaments and walls of dark matter and gas that connect nearby galaxies, in the standard cosmological paradigm. The cosmic web exists on multiple spatial scales; there are also filaments of galaxies that connect clusters of galaxies. The cosmic web, and structural trussworks like spiderwebs and trees, also share a geometry that has been a source of artistic inspiration . To anthropomorphize: gravity acts like a haunted house explorer, clearing away flexible cobwebs from cosmic voids and gathering structural elements together into thicker filaments . The cosmic web moves at an aeonically slow pace in the expanding Universe, set in motion just after the Big Bang and nearly deterministically forming since then.\n\nThe cosmic web is not a pathway of communication \u2013 the distances are far too great. But the whole point of the neural net is communication, on highways under constant construction and remodeling as the brain processes information and learns. In fact, interneuronal communication is likely to be the driving force in the evolution of neural webs. Like cosmic webs, neural webs exist on various scales: nervous systems are composed of neurons and clusters of neurons that exhibit network organization , dendrites and axons of individual neurons have complex branching structures (Fig.\u2006A), and each neuron contains a molecular cytoskeleton that is made up of a network of microfilaments.\n\nThe principles underlying the development of specific neuron connectivity patterns in the neural web, hypothesized originally by the Spanish neuroanatomist Santiago Ramon y Cajal (and still relevant for theoretical and experimental neuroscientific studies today) are maximum efficiency of communication and meeting the metabolic challenge of making and supporting new connections . This leads to neurons that develop morphologies that approximate minimal spanning trees that contain a set of nodes that are branch points and\/or synapse locations (Cuntz, 2012). Neurons have also inspired many artistic interpretations and visualizations .\n\nThere are yet more differences between neural and cosmic webs. Neurons have objective physical membrane boundaries, while the boundaries of the filaments and walls of the cosmic web are subject to definition . Also, each neuron has a defined polarity (dendritic input and axonal output). In the cosmic web, galaxy clusters can locally look like neurons, but there is no analogous polarity, and no objective boundaries separating galaxy clusters. Another difference is that neurons have objective centers (somas, containing cell nuclei), but the Universe does not, nor do patches of the cosmic web. Any given region of the Universe on really large scales looks pretty much like any other large region of the Universe \u2013 the large scale architecture of the observable Universe doesn't have any special regions. 
However, our brains are organized very differently \u2013 vertebrate brains are networks with clearly identified specialized regions.\n\n# Art Installations\n\n## *Chamber of (In)finite Potential*\n\nAn interactive installation by Purin Phanichphant (artist), Benedikt Diemer (astrophysicist), and Natalie Zahr (neuroscientist).\n\nIn this piece, we consider a fundamental similarity between the cosmic web of dark matter in the Universe and the complex connections of the human brain: the Universe has the potential to form a nearly limitless number of structures (such as galaxies); brain cells have the potential to form a nearly limitless number of connections. However, both the Universe and the brain have \u2013 at this point in time \u2013 achieved only a fraction of their potential.\n\nFurthermore, both fields of study are in the midst of lively debates as to whether the current understanding fully explains reality. While we tend to think of the Universe as infinite, the growth of structure within it is not: as dark energy accelerates cosmic expansion, gravity will eventually be unable to overcome the stretching, and structures will be frozen. Similarly, while the abundance of neurons in the brain is limited (at birth, the brain has almost all the neurons that it will ever have), the number of connections is potentially infinite; nevertheless, the number of connections reaches a peak and then begins to decay with aging.\n\nThese considerations lead to fundamental questions about the nature of the two systems. Will there be a countable number of galaxies in the Universe at the end of its life? Is the capability of our minds fundamentally limited by a finite number of neurons or their connections? These are the kinds of questions we seek to explore in our installation.\n\nThe seeming contradiction between the infinite and finite nature is expressed through a six-foot wide geodesic dome that is finite and graspable, but with reflective surfaces within that create an infinite visual field. When the viewer enters the dome, his\/her perspective is from the center of this perceptual field, metaphorically at the center of the Universe and self-awareness. The artist aims to reveal the infinite potential that exists inside each and every one of us, and that our existence, both in the neural and cosmic realms, are one and the same.\n\n## *Natural Science*\n\nAn interactive installation by Thea Boodhoo (scenery design, concept), Cory Bloyd (interactive design, concept), Yagiz Mungan (sound), David Weinberg (astrophysicist), and Dan Feldman (neuroscientist), with contributions by Gary Boodhoo (concept), James Morgan (concept, materials), and Lyn Collier (feltwork).\n\nNatural science, or the scientific study of nature, includes ecology, biology, geology, and physics, as well as cosmology and neuroscience. This surpasses the everyday meaning of nature, going far beyond wildlife and forests to include the farthest reaches of the Universe, as well as the deepest inner workings of the mind.\n\nIt could be concluded, from this point of view, that all endeavors of natural science are in fact nature studying itself. This piece invites you to broaden your definition of nature, and what you consider natural, dramatically. Don't forget to include yourself.\n\nThe Science: What do the brain at rest and the cosmos at rest have in common? Science finds that both are surprisingly active. Brain waves cascade from sleeping brains; gravitational waves flood the quietest regions of space. 
Dan Feldman studies how our neurons process, and often create, what we experience. His ongoing research inspired the setup of our "experiment," as well as a question: could we someday step into another being's dream? What would we encounter? One answer comes from astrophysics. Every brain, and so every dream, is part of the cosmos – David H. Weinberg's specialty. His research asks: How do galaxies form and cluster? What happens when black holes merge? What was the Universe like before stars, and what will it be like when it finally rests?... Will it dream?

The Art: A wild fox has chosen a secluded mossy outcrop as a place to nap before getting back to fox business. Little does he know, researchers studying the inner workings of the mind are running an experiment in this part of the woods. Secret prototype devices tune into the dreams of any animal who sleeps here.

The fox's reverie is projected back as sound and vision, and its contents are quite a surprise. Explorers in these woods will hear massive celestial bodies merging and tiny neuron cells firing. If they approach quietly, they may even influence the fox's dream – for they, like dreams, are part of the cosmos.

## *I am a \_\_\_\_ Neuron*

An interactive installation by Yoon Chung Han (artist), Soo Jung Kwak (artist, sound), Rushi Patel (engineer, software), Jeffrey Kruk (astrophysicist), and Tamira Elul (neuroscientist).

This installation is inspired by the concept of interactions among neighboring galaxies over cosmic time and among neurons during development of the brain. In the forming and expanding Universe, galaxies interact through gravity, winds, and radiation, whereas in the developing brain, neurons are influenced by mechanical, molecular, and electrical cues from other neurons.

*I am a \_\_\_\_ Neuron* is an interactive WebVR artwork of creative exploration within a virtual cosmic world. Viewers create their own personalized 3D artificial neurons, interact with them, and observe their interactions. Live feedback from participants controls three factors (gravity, winds, and radiation) to change the environment and neurons. When neurons collide with other neurons, special sounds are emitted, and changes occur in the neurons such as scale, design, opacity, and colors that visually represent interactions of actual neurons in our brains.

Through this artwork, participants also explore self-representations that interact with other humans and grow, die, interact, collide, metabolize, reproduce and emit sounds. The participants choose one adjective that represents their personalities for a virtual 3D neuron. Participants relate to these neuron avatars by observing how audiovisual representations change through time. The morphing neurons mimic how we are changed through environmental factors, social groups, and human relationships.
Through this artwork participants can find personal meanings into what nature is.\n\nOne factor contributing to the success of this collaboration was that the scientists trusted the artist to develop an interpretation of this concept independently, without further trying to influence her process beyond the initial concept.\n\n## *One in the Universe*\n\nA virtual reality experience by Anastasia Victor (artist), Mark Neyrinck (astrophysicist), Miguel Angel Arag\u00f3n-Calvo (astrophysicist), and Michael Silver (neuroscientist).\n\nDespite vast differences in spatial scale, there is a conceptual link between networks in the brain and those in the cosmic web: they both contain branching structures that connect neurons and nodes of dark matter to one another. Emergent network properties that arise from connectivity of individual elements are also evident for human beings and their relationships.\n\n*One in the Universe* is an interactive installation that asynchronously connects people across time and space. Within the current climate of filter bubbles and social division, this piece provides moments of introspection and connection to others.\n\nParticipants in this installation navigate through a virtual reality composed of structures inspired by dark matter webs and the shapes of neurons, and their movements guide the growth of neural branches within this space. Participants also provide verbal responses to prompts about their own experiences of feeling connected, and these responses are recorded and integrated into the virtual Universe.\n\nThe virtual Universe is additive, with each new participant's movements through the space and their verbal responses contributing to the growing patterns of connectivity. Participants directly experience this connectivity by traversing the virtual Universe and hearing verbal responses of previous participants at specific locations in the network. Other visitors to the exhibit observe participants' interactions with the virtual Universe through a projected image that mirrors the visual experience in the virtual reality headset.\n\nA scientific project grew out of this piece, as well: we found preliminarily, and cannot yet explain, that the scaling of the total wiring length of the cosmic web with the number of its nodes is similar to the scaling of the wiring length of a neuron with its branch points. They scale roughly with a power law obeyed by a minimal spanning tree, found for neurons by . We came upon this relationship through using the Cuntz et al.\u00a0algorithm to grow branches of the neuron based on input from the participant's hands.\n\n## *voi!drum!emory*\n\nAn interactive installation by Miguel Novelo (artist), Bridget Falck (astrophysicist), and Sarah Banducci (neuroscientist).\n\nA void is not merely an absence, a loss, an emptiness, or a lack, though it can be these things. By calling something a void, we give it a life of its own. It becomes a negative space that defines the positive. In the Universe, voids push matter away, and where they collide with each other, they create the cosmic web, and galaxies. In brains, the web of interconnected neurons deteriorates due to aging and disease; empty space takes over as the brain atrophies. Both types of voids are not static: they reflect the growing, aging, and dying of the Universe and the people that live in it. 
Our team of a neuroscientist (Sarah Banducci), a cosmologist (Bridget Falck), and an artist (Miguel Novelo) explored this connection between brain and cosmic voids over a year of emails and video calls.\n\nThe collaboration kicked off with Falck and Banducci, two strangers on opposite sides of the country, sharing their love of science rooted in two very different fields. Pleasantries aside, both scientists dove past the superficial and into the meat of their respective interests. The first brainstorm ended with a list of four potential ideas for overlap between cosmology and neuroscience: density, connectivity, folds, and voids. By the end of the second call two weeks later, it became clear that voids would provide the greatest opportunity for exploration between the two fields.\n\nFor Falck, voids are exciting because they are the largest structures in the cosmic web \u2013 except they aren't structures at all. Voids are the emptiest regions of space, and they are growing. As the Universe ages, gravity causes matter to collapse in on itself and pushes matter away from voids, making relatively empty regions of space become even emptier. The aging of the Universe thus grows empty space.\n\nOn a much smaller scale, Banducci views voids as a byproduct of human aging. In contrast to the cosmic web, where the force of pulling matter together forms emptiness, in the brain, voids are formed by matter deteriorating. This deterioration of neurons and connections between them gives the appearance of a shrinking brain. Evidence of this atrophy looks different for each individual but translates to characteristic cognitive and motor impairments in older adults.\n\nAs an artist, Novelo's initial thoughts were to materialize the void with objects that might have a void or vacuum. From an empty cylinder, his sketch led to a percussive instrument. Once a drum is hit on the top membrane, air moves inside and sound waves reverberate in concave space. This sound illustrates a wholeness of the object, the echo, and decay symbolizing the void in a sonic way. Taking the concept a step further, Novelo explored the idea of experiencing a void by sensing the end of reverberation, paying attention to the time between echoes and the sound fading out. This became the physical representation of the void.\n\nTo add an emotional component to this experience, the team considered, \"What is left after the void?\" Discussing memory in this context, Novelo, Falck, and Banducci came up with poetic voids \u2013 remembrance of what is not there anymore. In the cosmic web, what is left could be the photons finally reaching us from an event that happened millions of years ago; in the brain, the rest of our neurons or gray matter are still intact; and in our memory, previous experiences, nostalgia, and trauma. 
The art object we conceptualized and created has the characteristic of activating sound and visual cues that symbolize time prior to the void, rapid decay, and lastly a void surrounded by memory.

The final expression of this project should leave the viewer with the assurance that even when there is a void, some piece of the experience will continue to linger.

## *The Undulating Architecture Illuminating the Individual Sciences*

A video installation by Forest Stearns (artist), Joel Primack (astrophysicist), and Olaf Sporns (neuroscientist).

As a collective, our team of three spent a good amount of time investigating the superficial similarities between the dark matter of the universe and the neural networks of our brains. After much discussion, we collectively agreed that the differences greatly outweighed the similarities.

This being the case, the artist focused on the architectures of both sciences. Using soap bubbles as a medium, the artist found that the undulating bubbles with their walls, filaments, and nodes illuminated the architecture of the universe. The relationship between cosmic webs and soap bubble foams has also been explored by . The soap running down filaments towards central nodes and into a bigger pool is similar to how impulses move across the neural net. The physical mass of bubbles became an abstracted screen onto which videos that graphically explain the specific sciences were projected. The camera collected the photons rippling across the bubble field.

# Acknowledgments

MCN is grateful for funding from Basque Government grant IT956-16.

The first three authors were primarily responsible for this article as it appears, but all others contributed ideas and/or text. Curator Esther Mallouh originated the project.","meta":{"dup_signals":{"dup_doc_count":17,"dup_dump_count":2,"dup_details":{"curated_sources":2,"unknown":15}},"filename":"out\/2008.05942_extract_aaab.tex.md"},"subset":"arxiv"} +{"text":"abstract: High-stakes assessments, such as the Graduate Record Examination, have transitioned from paper to computer administration. Low-stakes Research-Based Assessments (RBAs), such as the Force Concept Inventory, have only recently begun this transition to computer administration with online services. These online services can simplify administering, scoring, and interpreting assessments, thereby reducing barriers to instructors' use of RBAs. By supporting instructors' objective assessment of the efficacy of their courses, these services can stimulate instructors to transform their courses to improve student outcomes. We investigate the extent to which RBAs administered outside of class with the online Learning About STEM Student Outcomes (LASSO) platform provide equivalent data to tests administered on paper in class, in terms of both student participation and performance. We use an experimental design to investigate the differences between these two assessment conditions with 1,310 students in 25 sections of 3 college physics courses spanning 2 semesters. Analysis conducted using Hierarchical Linear Models indicates that student performance on low-stakes RBAs is equivalent for online (out-of-class) and paper-and-pencil (in-class) administrations. The models also show differences in participation rates across assessment conditions and student grades, but that instructors can achieve participation rates with online assessments equivalent to paper assessments by offering students credit for participating and by providing multiple reminders to complete the assessment.
We conclude that online out-of-class administration of RBAs can save class and instructor time while providing participation rates and performance results equivalent to in-class paper-and-pencil tests.\nauthor: Jayson M. Nissen; Manher Jariwala; Eleanor W. Close; Ben Van Dusen\nbibliography: RIHE.bib\ndate: Received: date \/ Accepted: date\ntitle: Participation and Performance on Paper- and Computer-Based Low-Stakes Assessments\n\n# Introduction\n\nResearch-based assessments (RBAs), such as the Force Concept Inventory (FCI) , the Conceptual Survey of Electricity and Magnetism (CSEM) , and the Colorado Learning Attitudes about Science Survey (CLASS) , measure students' knowledge of concepts or attitudes that are core to a discipline. The demonstrated efficacy of RBAs in the research literature has led many instructors to use them to assess student outcomes and to develop and disseminate research-based teaching practices, particularly in the STEM disciplines . However, found that instructors face several barriers to using RBAs, including choosing assessments, administering and scoring the assessments, and interpreting results.\n\nEducators and researchers have developed several online resources to support instructors' adoption of RBAs. A central thrust of these efforts is the development of tools to make it easy for instructors to quickly and easily collect high-quality student RBA data. For example,\n\n1. www.physport.org\/<\/a>,\n\n2. hcuboulder.qualtrics.com\/jfe\/form\/SV_086qKlJAMx8VaMl<\/a>, and\n\n3. learningassistantalliance.org\/public\/lasso.php<\/a>.\n\nAs use of online data collection systems increases, it is important to establish whether online administration of RBAs outside of class provides equivalent data to the traditional in-class, paper-and-pencil administration methods .\n\n# Literature Review\n\nWhile substantial research has compared paper-and-pencil tests (PPT) with online computer-based tests (CBT) on graded, high-stakes assessments, little of it has focused on low-stakes RBAs as pretests and posttests in college settings, for which participation may be optional. In investigations of low-stakes assessments, it is critical to look at participation rates as well as performance results. If CBTs lead to lower participation rates or skewing of participation rates towards particular types of student, then using CBTs may lead to misleading or unusable data. If CBTs impact student performance on assessments, then comparisons to PPT data may be difficult or impossible to make. In our review of the literature we will examine what research shows about the impact on student participation rates and performance of transitioning assessments from PPTs to CBTs.\n\n## Participation rates\n\nTo determine normative participation rates for RBAs and what factors are related to them, we reviewed 23 studies using RBAs in courses that were similar to those examined in our study (i.e., introductory physics courses). The studies we identified reported pretest and posttest results for either the FCI, the Force and Motion Conceptual Evaluation (FMCE) , or the Brief Electricity and Magnetism Assessment (BEMA) . Of these 23 published studies, only four provided enough information about their data for us to evaluate the participation rates . Three provided sufficient data to compare participation rates across gender and course grade. Each of the four papers reported only their *matched data* after performing listwise deletion. 
The studies reported that participation rates ranged from 49% to 80%, that female students were 5% to 19% more likely to participate, and that students who participated had higher grades than those that did not (see Table 1).

Table 1: Participation and GPA for students in previous studies. Participation rates varied across the studies, tended to be higher for female students, and higher for students with higher course grades.

| Source | Gen. | Part. grade mean | Part. N | Part. grade SD | Non-part. grade mean | Non-part. N | Non-part. grade SD | Part. rate | Δ grade | Odds F/M |
|---|---|---|---|---|---|---|---|---|---|---|
|  | M | 2.69 | 90 | 1.28 | 2.1 | 92 | 1.28 | 0.49 | 0.59 | 1.37 |
|  | F | 2.78 | 27 | 1.26 | 2.05 | 13 | 1.16 | 0.68 | 0.73 |  |
| al., 2010 | M | 2.85 | 1257 | 0.8 | 1.93 | 500 | 1.1 | 0.72 | 0.92 | 1.11 |
|  | F | 2.8 | 447 | 0.8 | 1.96 | 114 | 1.2 | 0.80 | 0.84 |  |
| al., 2009 | M | 2.82 | 1563 | 0.8 | 2.14 | 1152 | 1.2 | 0.58 | 0.68 | 1.09 |
|  | F | 2.74 | 533 | 0.8 | 1.89 | 315 | 1.1 | 0.63 | 0.85 |  |
| et al., 2014 | All | - | 366 | - | - | 314 | - | 0.54 | - | - |
|  | All | - | 773 | - | - | 448 | - | 0.63 | - | - |
|  | All | - | 360 | - | - | 219 | - | 0.62 | - | - |
|  | All | - | 738 | - | - | 384 | - | 0.66 | - | - |

Because few studies have investigated student participation on low-stakes assessments in physics learning environments, we expanded our literature review to cover a wider range of fields. Research into student participation rates on low-stakes assessments has primarily focused on end-of-course and end-of-degree evaluations. All of these studies of participation rates examine non-proctored, low-stakes CBTs because high-stakes and proctored tests (e.g. course finals or the GREs) typically require participation. The majority of these studies examine how instructor or institutional practices affect overall student participation rates. These studies found that reminders and incentives for participation increased overall participation rates. In an examination of end-of-course evaluations from over 3,000 courses, disaggregate overall participation rates to test for selection bias in students' participation. They found that there was a positive selection bias that had non-negligible effects on the average evaluation scores. While these studies did not use data from RBAs, they provide context for the instructor practices we examine and the analysis we perform in our research.

was one of the first to examine student participation rates on RBAs. He examined data from college astronomy courses where assessments were administered both online outside of class as CBTs and in class as PPTs. Students completed a locally made concept inventory and a research-based attitudinal survey. The students (N=559) were randomly assigned to two assessment conditions with either the concept inventory done in-class and the attitudinal survey done outside of class via an online system or the reverse. examined the impact of faculty practices on student participation rates by comparing student participation across classes that offered varying incentives to participate. Student participation rates on the CBTs were 8% to 27% lower than on the PPTs. Courses that offered more credit, reminders in class, and email reminders had higher student participation rates.

In preliminary work for this study, examined student participation rates on RBA pretests and posttests across several physics courses. The study included 693 students in three physics courses taught by five instructors at a large public university. Instructors used the Learning About STEM Student Outcomes (LASSO) platform to administer the CBTs. The LASSO platform is a free online system that hosts, administers, scores, and analyzes student pretest and posttest scores on science and math RBAs. The LASSO platform is described in detail in the methods section. The researchers employed an experimental design to randomly assign each student an RBA to complete in class on paper and an RBA to complete outside of class using LASSO. Average posttest participation rates for the five instructors ranged from 18% to 90% for CBTs and 55% to 95% for PPTs. While some instructors had significantly lower participation rates for CBTs than for PPTs, others had rates that were quite similar. Interviews of the faculty about their CBT administration practices found several commonalities between the courses with higher participation rates.
Instructors with higher CBT participation rates gave their students credit for participating and reminded their students to complete the assessment both over email and during class.\n\nThe general trends in findings for all the studies on participation rates were that participation rates on both PPT and CBT varied, and that there was the potential for skewing of data by student demographics and course grades. Participation rates for CBTs increased when instructors provided students with some form of credit for participating and with reminders to complete the survey. While all studies found similar results, most primarily relied on descriptive statistics to support their claims. The lack of statistical modeling in these publications means they lack precise claims, such as how much difference in participation rates is caused by giving students reminders or offering credit. The studies also largely ignored the impact of student demographics on participation rates. For example, none of the studies examined how student gender or performance in a class impacted their likelihood of participating. These factors must be taken into account to make generalizable claims.\n\n## Performance\n\nSignificant work has gone into examining the impact of CBT and PPT administration on student performance. Interest in the impact of CBTs picked up in the 1990s as testing companies (e.g., the Educational Testing Service and the College Board) transitioned services to computers and digital Learning Management Systems (e.g., Blackboard Learn and Desire2Learn) emerged as common course tools . These shifts in testing practices led to several studies into the impact of computerizing high-stakes, proctored assessments in both K-12 and university settings . Research across these settings generally found that performance on proctored computerized versions of high-stakes assessments was indistinguishable from performance on traditional PPTs. These studies make no claims whether their findings are generalizable to low-stakes RBAs.\n\nOnly a handful of studies have examined the impact of computerized administration of low-stakes RBAs on university student performance. In Bonham's research into college astronomy courses, he drew a matched sample from students who completed the in-class and outside-of-class surveys. He concluded that there was no significant difference between unproctored CBT and PPT data collection. However, examining Bonham's results reveals that there was a small but meaningful difference in the data. The results indicated that the online concept inventory scores were 6% higher than the in-class scores on the posttest. For these data 6% is an effect size of approximately 0.30. While this difference is small, lecture-based courses often have raw gains below 20%; a 6% difference would therefore skew comparisons between data collected with CBT and PPT assessment conditions. Therefore, the results of the study do not clearly show that low-stakes tests provide equivalent data when collected in class with PPTs or outside of class with CBTs.\n\nIn an examination of 136 university students' performance on a biology test and a biology motivation questionnaire, used a Solomon four-group experimental design to assess differences between tests administered as CBTs and PPTs. The participants were 136 undergraduate students in a teacher education program. 
The researchers created four groups of 34 students and assigned each to one of four assessment conditions: (1) PPT posttest, (2) PPT pretest and posttest, (3) CBT posttest, and (4) CBT pretest and posttest. The posttest was administered two weeks after the pretest. This design allowed the analysis to differentiate between differences caused by taking the pretest and differences caused by doing the test as a CBT instead of PPT. After accounting for the effects of taking the pretest, the researchers found no significant differences between the tests administered as CBTs and those administered as PPTs. While the study uses a strong experimental design, the sample size is small (N=34\/group) which brings the reliability and generalizability of the study into question.\n\nexamined the impact of assessment conditions on student performance on a low-stakes personality test. They assigned the participants (N=728) to one of three assessments conditions: (1) PPT, (2) proctored CBT, and (3) unproctored CBT. They used mean comparison and Item Response Theory to examine participant performance at both the assessment and item levels. Their investigation found no meaningful differences in performance between the three assessment conditions. The authors concluded that their analysis supports the equivalence of CBTs and PPTs for personality tests.\n\nAs described above, even among the studies that are most closely aligned with our research questions, very few of them directly examined how student responses on low-stakes, unproctored administration of CBTs compare to responses on PPTs. Those that have examined these issues tend to have small sample sizes and do not find consistent differences, making it difficult to support reliable and generalizable claims using their data.\n\n# Research Questions\n\nThe purpose of the present study is to examine whether concept inventories and attitudinal surveys administered as low-stakes assessments online outside of class as CBTs provide equivalent data to those administered in class as PPTs. We examine equivalence between CBT and PPT administrations for both participation and performance.\n\nTo examine equivalence of participation, we ask the following three research questions:\n\n1. How do instructor administration practices impact participation rates for low-stakes RBAs, if at all?\n\n2. How are student course grades related to participation rates for low-stakes RBAs, if at all?\n\n3. To what extent does participation differ across demographic groups?\n\nTo examine equivalence of performance, we ask the following research question:\n\n1. How does assessment condition (PPT vs CBT) impact student performance on low-stakes RBAs, if at all?\n\nIf an online data collection platform can provide equivalent quantity and quality of data to paper-based administration, then the platform addresses many of the instructors' needs that identified, and therefore lowers barriers for instructors to assess and transform their own courses. A second major benefit of the widespread use of an online data collection system like the LASSO platform is that they can aggregate, anonymize, and make all the data available for research (more details on the LASSO platform are provided in the Methods). 
The size and variety of this data set allow researchers to perform investigations that would be underpowered if conducted at only a few institutions or would lack generalizability if only conducted in a few courses at a single institution.

# Methods

## Setting

The data collection for the study occurred at a large regional public university in the United States that is a Hispanic-Serving Institution (HSI) with an enrollment of approximately 34,000 undergraduate students and 5,000 graduate students. The university has a growing number of engineering majors and large numbers of biology and pre-health majors, all of whom are required to take introductory physics.

We collected data from 27 sections of three different introductory physics courses (algebra-based mechanics, calculus-based mechanics, and calculus-based electricity & magnetism [E&M]) over two semesters (Table 2). Algebra-based mechanics was taught in sections of 80-100, without research-based instructional materials or required attendance. The calculus-based courses were taught in sections of 30-50, were supported by Learning Assistants (LAs), and used research-based instructional methods; incentives for attendance varied by instructor. In a typical semester, the Department of Physics offers four to six sections of each of these courses. We discarded data from 2 of the 27 sections due to instructor errors in administering the assessments. The data from the 25 sections analyzed in this study are described in Table 2.

Table 2: Course demographic data and instruments used. S1 = Semester 1 (Spring 2016); S2 = Semester 2 (Fall 2016); CI/AS = concept inventory/attitudinal survey.

| Course | Sect. (S1) | Stud. (S1) | Male (S1) | URM (S1) | Sect. (S2) | Stud. (S2) | Male (S2) | URM (S2) | CI/AS |
|---|---|---|---|---|---|---|---|---|---|
| A Mech | 2 | 194 | 58% | 46% | 6 | 490 | 50% | 52% | FCI/CLASS |
| C Mech | 5 | 188 | 74% | 45% | 4 | 175 | 67% | 60% | FCI/CLASS |
| C E&M | 4 | 117 | 70% | 52% | 4 | 146 | 74% | 47% | CSEM/CLASS |
| Total | 11 | 499 | 67% | 47% | 14 | 811 | 58% | 53% | - |

## Design of the data collection

The study used a between-groups experimental design. We used stratified random sampling to create two groups within each course section with similar gender, race/ethnicity, and honors status makeups. The institution provided student demographic data. Group 1 completed a concept inventory (CI) online outside of class using the LASSO platform, and an attitudinal survey (AS) in class using paper and pencil (Figure 1). Group 2 completed the CI in class and the AS online outside of class. Within each course, both groups completed the in-class assessment at the same time and had the same window of time to complete the online assessment. Assessments were administered at the beginning and end of the semester.

The LASSO platform (learningassistantalliance.org/public/lasso.php) hosts, administers, scores, and analyzes RBAs online. When setting up a course in LASSO, instructors answer a set of questions about their course, select their assessments, and upload a course roster with student emails. When instructors launch a pretest, their students receive an email from the LASSO platform with directions on how to participate and a unique link that takes them to their assessment page. The first question students answer is whether they are over 18 years of age and are willing to have their data anonymized and made available to researchers. Students then complete a short set of demographic questions and begin their assessment. Instructors can track which students have participated in real time and use the LASSO platform to generate reminder emails for students who have not yet completed the assessment. Near the end of the semester, faculty launch the posttest and the process of data collection repeats. After the posttest closes, instructors receive a report on their students' performance. Instructors can access all of their students' responses at any time. Data from participating courses are added to the LASSO database where they are anonymized, aggregated with similar courses, and made available to researchers with approved IRB protocols.

Paper assessments were collected by the instructors, scanned using automated equipment, and uploaded to the LASSO platform, where the research team matched them with the CBT data collected directly through the platform. The research team downloaded the full set of student data from the LASSO platform and combined it with student course grades and demographic data provided by the institution. The data analysis did not include students who joined the class late or dropped/withdrew from the course because the research team could not assign them to a treatment group. Prior to applying filters to remove these students, the sample was 1,487 students. With these filters applied, the total sample was 1,310 students in 25 course sections.

Students in both mechanics courses completed the 30-question Force Concept Inventory (FCI). Students in the E&M course completed the 32-question Conceptual Survey of Electricity and Magnetism (CSEM). We scored both CIs on a 0-100% scale. Students in all the courses completed the same AS, the Colorado Learning Attitudes about Science Survey (CLASS). The CLASS measures eight separate categories of student beliefs compiled from student responses to 42 questions. Responses are coded as favorable, neutral, or unfavorable based on agreement with expert responses.
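
To make this scoring step concrete, the following minimal R sketch shows how a CI percent score and a CLASS overall favorable score could be computed. It is illustrative only: the helper functions and inputs are hypothetical, and the LASSO platform performs this scoring internally.

```r
# Hypothetical scoring helpers; the LASSO platform does this scoring for instructors.

# Concept inventory score: percent of items answered correctly, on a 0-100% scale.
score_ci <- function(responses, key) {
  100 * mean(responses == key, na.rm = TRUE)
}

# CLASS overall favorable score: percent of statements on which the student's
# (collapsed) response agrees with the expert response. `coded` is assumed to
# already hold "favorable", "neutral", or "unfavorable" for each statement.
score_class_favorable <- function(coded) {
  100 * mean(coded == "favorable", na.rm = TRUE)
}

score_ci(c("A", "C", "B", "D"), key = c("A", "B", "B", "D"))     # 75
score_class_favorable(c("favorable", "neutral", "favorable"))    # 66.7
```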
We analyzed the overall favorable score in the present study on a 0-100% scale. We obtained course grades from the course instructors and student demographics from the institution.

During the first semester of data collection, the research team provided the instructors with little guidance on how to motivate students to complete their CBT. Participation rates varied greatly across instructors. The research team asked the instructors what practices they used to motivate students, and identified four instructor practices associated with higher student CBT participation rates. The research team adopted these four instructor practices as recommended practices:

1. multiple email reminders,

2. multiple in-class announcements,

3. participation credit for the pretest, and

4. participation credit for the posttest.

During the second semester of data collection, the research team advised all instructors to use the recommended practices to increase student participation. At the end of the second semester, we asked the instructors what they had done to motivate students to participate in their CBTs. We used instructor responses to assign each section a Recommended Practices score ranging from zero to four according to the number of recommended practices they implemented. All analyses presented in this article include both semesters of data.

# Analysis

We used the HLM 7 software package to analyze the data using Hierarchical Linear Models (HLM). HLM is a method of modeling that leverages information in the structure of nested data. In our data, measurements (student scores on assessments) are nested within students, and students are nested within course sections, as shown in Figure 2. HLM also corrects for the dependencies created in nested data. These dependencies violate the assumption of standard Ordinary Least Squares regression that each measure is independent of the others, an assumption which is not met when comparing students grouped in different classes. HLM can account for these interdependencies by allowing for classroom-level dependencies. In effect, HLM creates unique equations for each classroom and then uses those classroom-level equations to model an effect estimate across all classrooms. Within the HLM 7 software, we used the hypothesis testing function to generate means and standard errors from the models for plots and comparisons.

We investigated the performance research questions with one set of HLM models and the participation research questions with a separate set of Hierarchical Generalized Linear Models (HGLM). The two different types of HLM were necessary because the outcome variable was binary in the participation models (students did or did not participate) and continuous in the performance models (RBA score).

For both the participation and performance models, we built each model in several steps by adding variables. We compared both the variance and the coefficients for each model. Comparing the total variance in each of the models informed the strength of the relationship between the variables in the model and students' participation. For example, variables that related to participation would reduce the total variance in the models that included them. The more the variance was reduced, the stronger the relationship between the variables and participation. In HLM, the variance is also distributed across the levels of the model: our 3-level models measure variance within students, between students within a section, and between sections.
We are interested in both the change in the total variance and the change at specific levels when variables are added. For example, when we add the section-level variables, such as course type or instructor practices, to the models, we are interested in how much the variance between sections is reduced. The size of the model coefficients indicated the strength of the relationship between each variable and the outcome variable. Together, variance and coefficient size allow us to identify the extent to which the variables of interest predict student participation and performance.

## Participation

To investigate students' participation rates in the computer versus paper-and-pencil assessments, we differentiated between each assessment by assessment condition and assessment timing using four dummy variables: pre-CBT, post-CBT, pre-PPT, and post-PPT. Our preliminary HGLM analyses indicated that there was no difference in participation between the AS and CI instruments, so to keep our models concise we did not include variables for instruments in the models we present. We built an HGLM of students' participation rates for the PPT and the CBT on both the pretest and posttest. The HGLM was a population-averaged logistic regression model using Penalized Quasi-Likelihood (PQL) estimation because the outcome variable was binary (whether or not students completed the assessment). We used PQL because it was easily available in the HLM software and less computationally intensive than other estimation techniques. However, PQL overestimates the probability of highly likely events. To address this concern, we compared the 3-level HGLM models we report in this article to four 2-level HGLM models that used adaptive Gaussian quadrature estimation. There were no meaningful differences in the models or the inferences that we would make from the models. For simplicity, we only report the three-level HGLM model that used full PQL estimation.

The data are nested in three levels (Figure 2): the four measures of participation nested within students, and the students nested within course sections. The outcome variable for these models was whether students had participated in the assessment (0 or 1). In the final model (Equations 1-7), we included dummy variables for the four assessment condition and timing combinations (CBT pre, PPT pre, CBT post, and PPT post) at level 1, students' final grades in the course as four dummy variables (0 or 1 for each of the grades A, B, C, and D) at level 2, gender (male = 0 and female = 1) at level 2, and a continuous variable for recommended practices (0 to 4) at level 3. The structure of these variables is laid out in Table 3. The dummy variable for an F grade is not included in the equation because it is integrated into the intercept value. The models did not include the recommended practices for the PPTs because the practices focused on improving participation on the CBTs. The value of the recommended practices variable was the cumulative number (0 to 4) of recommended practices that faculty used to motivate their students to participate in the CBTs. The models included students' grades in the course because analysis of the raw data showed that students' course grades were positively related to participation; we included course grades as dummy variables rather than as a continuous variable because there was a non-linear relationship between course grade and participation. Our preliminary analysis also included a dummy variable for race/ethnicity, but we did not include it in the final model because it was not predictive of student participation.
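
For readers who work outside the HLM 7 software, the sketch below shows how a roughly comparable mixed-effects logistic regression could be specified in R with the lme4 package. It is a simplified illustration, not the code used in this study: the data frame `d` and its column names are assumptions, the grade, gender, and practices terms are entered as shared main effects rather than separately for each condition-timing combination, and glmer reports coefficients as natural-log odds rather than the base-10 logits discussed below.

```r
library(lme4)

# d: hypothetical long-format data frame, one row per student per assessment.
# participated: 0/1; cbt_pre, cbt_post, ppt_pre, ppt_post: condition-timing dummies;
# A, B, C, D: course-grade dummies (F is the reference); gender: 0 = male, 1 = female;
# practices: number of recommended practices used in the section (0-4);
# student and section: identifiers.
m_part <- glmer(
  participated ~ 0 + cbt_pre + cbt_post + ppt_pre + ppt_post +
    gender + A + B + C + D + practices +
    (1 | section) + (1 | section:student),
  data = d, family = binomial
)
summary(m_part)  # coefficients are on the natural-log-odds scale
```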
In a logistic model, the coefficients for the predictors are logits ($\eta$), or logarithms of the odds. We generated probabilities for different groups of students participating by using the model to create a logit for that probability and then converting the logit to a probability using Equation 8.

Table 3: Variables used in the final participation and performance models (outcome variables in bold).

| Level | Structure | Participation model | Performance model |
|---|---|---|---|
| 1 | Assessment | **Participation (0 or 1)**; assessment condition and timing | **Score (0% to 100%)**; assessment timing |
| 2 | Students | Course grade; gender | Assessment condition |
| 3 | Sections | Recommended practices (CBT only) | Course type |

Level-1 Equations
$$\begin{aligned}
Probability(Participation_{ijk} = 1|\pi_{jk}) = \phi_{ijk}\\
\log[\phi_{ijk}/(1-\phi_{ijk})]=\eta_{ijk}
\end{aligned}$$
$$\begin{gathered}
\eta_{ijk}=\pi_{1jk}*CBTPRE_{ijk}+\pi_{2jk}*CBTPOST_{ijk} \\+ \pi_{3jk}*PPTPRE_{ijk}+\pi_{4jk}*PPTPOST_{ijk}
\end{gathered}$$

Level-2 Equations. There are 4 level-2 equations, one for each $\pi$.
$$\begin{gathered}
\pi_{ijk} = \beta_{i0k} + \beta_{i1k}*Gender_{jk} + \beta_{i2k}*A_{jk} + \beta_{i3k}*B_{jk} \\+ \beta_{i4k}*C_{jk} + \beta_{i5k}*D_{jk} + r_{jk}
\end{gathered}$$

Level-3 Equations. There are 24 level-3 equations; 2 include a variable for practices, and the other 22 are illustrated by Equation 7.
$$\begin{aligned}
\beta_{10k}=\gamma_{100}+\gamma_{101}*Practices_k+ u_{1jk}\\
\beta_{20k}=\gamma_{200}+\gamma_{201}*Practices_k+ u_{2jk}\\
\beta_{ijk}=\gamma_{ij0}+ u_{ijk}
\end{aligned}$$

$$\phi=10^\eta/(1+10^\eta)$$

We built the model in three steps: (1) differentiating between the pretest and posttest for the CBT and PPT assessment conditions, (2) adding the level-3 predictor for the number of recommended practices the instructor used, and (3) adding level-2 predictors for course grade and gender. On their own, the effects that the different model coefficients have on participation rates are difficult to interpret because they are expressed in logits. Part of the difficulty is that the size of each coefficient cannot be directly compared because the effect of a coefficient on the probability of participation depends on the other coefficients to which it is being added (e.g., the intercept). For example, a logit of 0 is a 50% probability, 1 is approximately 90%, and 2 is 99%. Thus, a 1.0 shift in logits from 0 to 1 is a much larger change in probability than the 1.0 shift from 1 to 2 logits. The importance of the starting point was particularly salient for interpreting the coefficients in our HGLM models because the intercepts for the pre/post assessment conditions varied from a low of -2.7 to a high of 2.3. To simplify interpreting the results of the model, we used the hypothesis testing function in the HLM software to generate predicted logits and standard errors for each of the combinations of variables and converted the logits to probabilities with error bars of one standard error. In our analyses we focused on posttest participation rates because they are the more limiting rates for data collection, and because the posttests contain information about the effects of the course whereas the pretests only contain information about the students who enroll in the course.

Our investigation of differences in participation rate by course grade and gender used other analyses in addition to the coefficients and variance output by the HGLM model. For comparing the differences in participation rates by gender we used the odds ratio, which the HGLM produces as an output and which is easily calculated for studies in the published literature. An odds ratio of 1.0 indicates that male and female students were equally likely to participate. An odds ratio greater than 1.0 indicates that female students were more likely to participate than male students. If the confidence interval for the odds ratio includes 1.0, then it is not statistically significant.
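
To make these conversions concrete, the short R sketch below implements Equation 8 (with the base-10 logit convention used above) and an odds-ratio comparison between two participation probabilities; the probabilities passed to `odds_ratio` are purely illustrative values, not results from our models.

```r
# Convert a logit to a probability using the base-10 convention of Equation 8.
logit_to_prob <- function(eta) 10^eta / (1 + 10^eta)

logit_to_prob(c(0, 1, 2))  # 0.50, 0.91, 0.99

# Odds ratio comparing two groups' participation probabilities; values above 1
# indicate the first group is more likely to participate. Illustrative inputs only.
odds_ratio <- function(p1, p2) (p1 / (1 - p1)) / (p2 / (1 - p2))
odds_ratio(0.75, 0.60)     # 2.0
```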
Comparing the differences in participation by course grade was more difficult because the HGLM does not produce an output that is comparable to the mean grades for participants and nonparticipants, which is the statistic that prior studies report. Therefore, we also reported these raw statistics to situate our study within the existing literature.

## Performance

To investigate differences in performance between tests administered as CBTs and PPTs (Research Question 4), we built separate HLM models for the CI and AS scores. It was possible to combine these models into a single multivariate HLM. However, multivariate HLMs are more complex to both analyze and report, and the HLM software documentation recommends that researchers start with separate models for each variable. After producing our models we concluded that the two models were sufficient for our purposes. The HLM performance models for the CI and AS data had identical structures. All performance models used RBA score as the outcome variable. The models included a level-1 variable (post) to differentiate between the pretest and posttest. The variable of interest for the models that addressed Research Question 4 was assessment condition at level 2. We also included predictor variables at level 3 for each of the three courses because performance varied across the course populations, and it allowed us to make comparisons of the effect of assessment condition across the multiple courses for both the pretest and posttest. These comparisons had the advantage of indicating whether there was a consistent difference in scores (e.g., CBT was always higher), even if that difference was too small to be statistically significant. Initially, we included level-2 variables to control for course grade, gender, and race/ethnicity because these variables relate to performance on RBAs. However, these demographic variables had no effect on the impact of assessment conditions on student performance in our models. For brevity, we excluded these variables from the models we present here.

The final performance model included RBA score as the outcome variable and predictor variables for posttest (level 1), assessment condition (level 2), and course (level 3) (Equation 3). The variables used in the final model are shown in Table 3. We built the model in three steps: (1) adding a level-1 variable for posttest, (2) adding a level-2 variable for assessment condition, and (3) adding level-3 variables for each course. To determine how much variance in the data was explained by each of the variables, we compared the total variance between each of the models. The reduction in the variance between the models indicated the strength of the relationship between the variables and performance by showing how much information about performance the added variables provided. For example, if there were large differences in performance between PPTs and CBTs, then the addition of CBT to the model would decrease the total variance. One distinction between HLM and OLS regression is that in OLS additional variables always reduce the unexplained variance, whereas in HLM the variance can increase if a nonsignificant variable is added to the model.
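
As with the participation models, a roughly comparable specification could be written with the lme4 package in R. The sketch below is illustrative only; the data frame `d`, its column names, and the simplified random-effects structure (random intercepts only, without the random slopes included in the HLM 7 models) are all assumptions.

```r
library(lme4)

# d: hypothetical long-format data frame, one row per student per test administration.
# score: RBA score (0-100%); post: 0 = pretest, 1 = posttest; condition: 0 = PPT, 1 = CBT;
# course: factor with levels for the three courses; student and section: identifiers.
m_perf <- lmer(
  score ~ 0 + course + course:condition + course:post + course:post:condition +
    (1 | section) + (1 | section:student),
  data = d
)
summary(m_perf)  # twelve fixed effects, mirroring the per-course gammas above
```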
We used the hypothesis testing function in the HLM software to generate predicted values and standard errors for each of the courses' pretest and posttest scores, for both assessment conditions, to inform the size and reliability of any differences between assessment conditions.

For the performance analyses, we replaced missing data using Hierarchical Multiple Imputation (HMI) with the mice package in R. We discuss the rate of missing data in the Results Participation section (7.1) below. Multiple Imputation (MI) addresses missing data by (1) imputing the missing data *m* times to create *m* complete data sets, (2) analyzing each data set independently, and (3) combining the *m* results using standardized methods . MI is preferable to listwise deletion because it maximizes the statistical power of the study and has the same basic assumptions. HMI is MI that takes into account that students are nested in different courses and that their performance may be related to the course they were in. Our HMI produced *m*=10 complete data sets. In addition to pretest and posttest scores, the HMI included variables for course, course grade, gender, and race/ethnicity. We used the HLM software to automatically run analyses on the HMI datasets.

Level-1 Equations $$SCORE_{ijk}=\pi_{0jk} + \pi_{1jk}*POST_{ijk}+ e_{ijk}$$

Level-2 Equations $$\begin{aligned}
\pi_{0jk}=\beta_{00k} + \beta_{01k}*Condition_{jk}+r_{0jk}\\
\pi_{1jk}=\beta_{10k} + \beta_{11k}*Condition_{jk}
\end{aligned}$$

Level-3 Equations $$\begin{aligned}
\beta_{00k}= \gamma_{001}*AlgMech_k+\gamma_{002}*CalcMech_k+\gamma_{003}*CalcE\&M_k +u_{00k}\\
\beta_{01k}=\gamma_{011}*AlgMech_k+\gamma_{012}*CalcMech_k+\gamma_{013}*CalcE\&M_k+u_{01k}\\
\beta_{10k}=\gamma_{101}*AlgMech_k+\gamma_{102}*CalcMech_k+\gamma_{103}*CalcE\&M_k+u_{10k}\\
\beta_{11k}=\gamma_{111}*AlgMech_k+\gamma_{112}*CalcMech_k+\gamma_{113}*CalcE\&M_k+u_{11k}
\end{aligned}$$
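
The imputation step described above can be sketched in a few lines. This is a minimal illustration only: the mice package is the one named in the text, but the data frame `students` and its columns are hypothetical, and the default settings shown here are not necessarily those used in the study.

```r
library(mice)

# One row per student: course, course grade, gender, race/ethnicity, pre/post scores.
# Multilevel imputation methods (e.g. method = "2l.pan") can be requested so that
# the imputation respects the nesting of students within courses.
imp <- mice(students, m = 10, seed = 2017)   # m = 10 completed data sets, as in the study
first_complete <- complete(imp, action = 1)  # inspect the first completed data set
# Each completed data set is then analyzed (here, exported to the HLM software) and
# the m sets of results are pooled with standard multiple-imputation rules.
```
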
# Results

First, we present the results for the participation analysis. These results include descriptive statistics and the HGLM models. Then we present the results for the performance analysis.

## Participation

We first compare the raw participation rates for the CBTs and PPTs – overall, by gender, and by grade – to participation rates reported in prior studies. This comparison identifies the extent to which participation in this study was similar to participation in prior studies and informs the generalizability of our findings. Prior studies report grade and gender differences in participation in aggregate, so we cannot compare their findings to our HGLM outputs, which differentiate between each course grade. Therefore, we compare the raw differences in mean course grades for participating and nonparticipating students in our data to the differences reported in prior studies.

Following our comparison of the raw data we present three HGLM models. Model 1 differentiates between the pretests and posttests for the two assessment conditions (CBT and PPT). Model 2 addresses Research Question 1 by accounting for how instructor use of the recommended practices related to student participation. Model 3 addresses Research Questions 2 and 3 by including variables for student gender and course grade.

### Descriptive statistics

The descriptive statistics show that the overall PPT participation rate is higher than the overall CBT participation rate for pre and post administration of both the CI and AS, as shown in Table 4. These raw participation rates do not account for differences in participation across course sections. These rates all fall within the range found in prior studies shown in Table 1. Gender differences in participation in the raw data for this study are small and are smaller than those reported in prior studies. Differences in course grades between those who did and did not participate are large and are similar in size to those reported in prior studies. However, these comparisons between the present study and prior studies are only approximations. The prior studies reported matched data, and in some of these studies it is unclear whether they included all students who enrolled in the course, only students who received grades, or only students who took the pretest. The present study includes only students who enrolled in the course prior to the first day of instruction and who received a grade in the course. While these differences between the present study and prior studies make it difficult to compare participation rates, the approximate comparison indicates that the present study is not outside the boundaries of what researchers have reported in prior studies.

Table 4: Participation rates for pre and post CBT- and PPT-administered exams, comparing participation by type of instrument, participation by gender, and the mean course grades of participants and nonparticipants. Odds (F/M) is the female-to-male odds ratio of participation; Δ is the difference in mean course grade between participants (Part.) and nonparticipants (Non-part.).

| Cond. | Time | AS  | CI  | Male (N=803) | Female (N=507) | Odds (F/M) | Part. grade | Non-part. grade | Δ    |
|-------|------|-----|-----|--------------|----------------|------------|-------------|-----------------|------|
| CBT   | Pre  | 71% | 67% | 66%          | 76%            | 0.99       | 2.86        | 2.13            | 0.73 |
| CBT   | Post | 59% | 54% | 54%          | 61%            | 1.13       | 2.97        | 2.20            | 0.77 |
| PPT   | Pre  | 94% | 94% | 94%          | 95%            | 1          | 2.68        | 1.95            | 0.73 |
| PPT   | Post | 75% | 74% | 74%          | 75%            | 1          | 2.87        | 1.95            | 0.92 |

### The relationship between participation and instructor practices

After converting the logits given in Table 5 to probabilities, Model 1 shows participation rates of 83% for the CBT pretest, 66% for the CBT posttest, 100% for the PPT pretest, and 95% for the PPT posttest. These participation rates all exceed those calculated with raw data, a known issue with HGLM models as discussed in the Methods Section. Model 2 includes a variable for the number of recommended practices the instructors used in each section for the CBT pretest and posttests. Including recommended practices did not reduce the variance within assessments or between assessments within students (level 1 and level 2) for any of the assessment conditions, but it did explain a large part of the variance between sections for the CBT pretest and posttest, as shown in the bottom of Table 5. The variance in Model 2 is 15% lower (from 0.820 to 0.700) for CBT pretests and 45% lower (from 1.220 to 0.670) for the CBT posttests than in Model 1. This large decrease in variance indicates that the number of recommended practices instructors used to motivate their students to participate accounted for a large proportion of the difference in participation rates between sections on the assessments administered as CBTs.
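
To make the practical meaning of these coefficients concrete, the sketch below converts the Model 2 CBT posttest coefficients reported in Table 5 into predicted participation rates as a function of the number of recommended practices, i.e. the calculation that is plotted in Figure 3. This is a minimal R illustration; the function and variable names are ours.

```r
logit_to_prob <- function(eta) 10^eta / (1 + 10^eta)   # Equation 8

practices <- 0:4
eta <- -0.767 + 0.534 * practices    # gamma_200 + gamma_201 * (number of practices)
round(logit_to_prob(eta), 2)
#> 0.15 0.37 0.67 0.87 0.96   -- e.g. 3 practices gives eta ~ 0.83 and ~87% participation
```
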

Table 5: HGLM outputs for models comparing student participation on the CBT and PPT pretests and posttests by recommended practices (level 3), gender (level 2), and course grade (level 2).

|                                  | Model 1 Coef. | Model 1 *p* | Model 2 Coef. | Model 2 *p* | Model 3 Coef. | Model 3 *p* |
|----------------------------------|---------------|-------------|---------------|-------------|---------------|-------------|
| **CBT Pre $\pi_1$**              |               |             |               |             |               |             |
| For Int. 3 $\gamma_{100}$        | 0.679         | \<0.001     | 0.256         | 0.434       | -0.671        | 0.081       |
| Practices $\gamma_{101}$         | -             | -           | 0.214         | 0.116       | 0.182         | 0.097       |
| Gender $\gamma_{110}$            | -             | -           | -             | -           | 0.269         | 0.086       |
| D Grade $\gamma_{120}$           | -             | -           | -             | -           | 0.142         | 0.699       |
| C Grade $\gamma_{130}$           | -             | -           | -             | -           | 0.558         | 0.077       |
| B Grade $\gamma_{140}$           | -             | -           | -             | -           | 1.102         | 0.002       |
| A Grade $\gamma_{150}$           | -             | -           | -             | -           | 1.526         | \<0.001     |
| **CBT Post $\pi_2$**             |               |             |               |             |               |             |
| For Int. 3 $\gamma_{200}$        | 0.296         | 0.118       | -0.767        | 0.008       | -2.678        | \<0.001     |
| Practices $\gamma_{201}$         | -             | -           | 0.534         | \<0.001     | 0.573         | \<0.001     |
| Gender $\gamma_{210}$            | -             | -           | -             | -           | 0.207         | 0.281       |
| D Grade $\gamma_{220}$           | -             | -           | -             | -           | 0.946         | 0.013       |
| C Grade $\gamma_{230}$           | -             | -           | -             | -           | 1.395         | \<0.001     |
| B Grade $\gamma_{240}$           | -             | -           | -             | -           | 2.057         | \<0.001     |
| A Grade $\gamma_{250}$           | -             | -           | -             | -           | 2.390         | \<0.001     |
| **PPT Pre $\pi_3$**              |               |             |               |             |               |             |
| For Int. 3 $\gamma_{300}$        | 2.290         | \<0.001     | 2.266         | \<0.001     | 1.361         | \<0.001     |
| Gender $\gamma_{310}$            | -             | -           | -             | -           | 0.130         | 0.619       |
| D Grade $\gamma_{320}$           | -             | -           | -             | -           | 0.496         | 0.281       |
| C Grade $\gamma_{330}$           | -             | -           | -             | -           | 0.675         | 0.062       |
| B Grade $\gamma_{340}$           | -             | -           | -             | -           | 0.835         | 0.042       |
| A Grade $\gamma_{350}$           | -             | -           | -             | -           | 0.909         | 0.049       |
| **PPT Post $\pi_4$**             |               |             |               |             |               |             |
| For Int. 3 $\gamma_{400}$        | 1.235         | \<0.001     | 1.235         | \<0.001     | -0.706        | 0.047       |
| Gender $\gamma_{410}$            | -             | -           | -             | -           | 0.224         | 0.180       |
| D Grade $\gamma_{420}$           | -             | -           | -             | -           | 1.522         | 0.001       |
| C Grade $\gamma_{430}$           | -             | -           | -             | -           | 1.514         | \<0.001     |
| B Grade $\gamma_{440}$           | -             | -           | -             | -           | 2.166         | \<0.001     |
| A Grade $\gamma_{450}$           | -             | -           | -             | -           | 2.493         | \<0.001     |
| **Level-1 and Level-2 Variance** |               |             |               |             |               |             |
| CBT Pre $r_1$                    | 1.080         |             | 1.080         |             | 0.805         |             |
| CBT Post $r_2$                   | 1.170         |             | 1.390         |             | 1.077         |             |
| PPT Pre $r_3$                    | 1.130         |             | 1.440         |             | 1.156         |             |
| PPT Post $r_4$                   | 1.100         |             | 1.200         |             | 0.889         |             |
| **Level-3 Variance**             |               |             |               |             |               |             |
| CBT Pre $u_{10}$                 | 0.820         |             | 0.700         |             | 0.740         |             |
| CBT Post $u_{20}$                | 1.220         |             | 0.670         |             | 0.830         |             |
| PPT Pre $u_{30}$                 | 0.560         |             | 0.485         |             | 0.690         |             |
| PPT Post $u_{40}$                | 1.340         |             | 1.370         |             | 1.180         |             |

Using Model 2 we calculated the predicted participation rates for students on PPTs and CBTs in courses that used different numbers of recommended practices. We calculated the probabilities shown in the graph from the logits and standard errors calculated with the hypothesis testing function in the HLM software. The logit itself is easily calculated from the model. For example, the logit for CBT posttest participation in a course using 3 recommended practices is $\eta = -0.767 + 3(0.534) = 0.834$. Using Equation 8 this logit gives a probability of 87%. We then plotted these values and their error bars (1 standard error) in Figure 3.

Figure 3 shows that when instructors used none of the recommended practices, CBT participation rates were much lower than the PPT rates. When faculty used all four of the recommended practices, however, CBT participation rates matched PPT rates. All the predicted participation rates in these cases exceed 90%. This participation rate is likely an overestimate caused by high-probability predictions in HGLM using PQL. The model, however, is likely overestimating all the participation rates by a similar amount. For example, the predicted participation rates for a CBT posttest in a course using 4 recommended practices (96%) and the PPT posttest (95%) are effectively the same, so any overestimation in them should be the same.

### Participation by course grade and gender

Model 3 includes variables for student gender and course grade. The addition of these variables decreased the variance between assessments, as well as the variance within assessments between students, for all CBT and PPT pretests and posttests by 20% to 26% (for example, from 1.080 to 0.805) relative to Model 2. These variables tended to increase the variance between sections in Model 3 compared to Model 2 (+42% to -14%, for example from 0.485 to 0.690), indicating that there was unaccounted-for variation in how course grade and gender differentially related to participation in the different course sections.

Gender differences in participation in Model 3 were not statistically significant. However, all of the coefficients in the model indicated that female students were more likely to participate than male students. This higher participation rate for female students was also reported in all three of the prior studies. Therefore, it is possible that this is a real effect that is simply too small for our complex statistical model to identify as statistically significant. The odds ratios with 95% confidence intervals comparing female to male participation rates calculated by the HLM software were 1.31 [0.96, 1.79] for the CBT pretest, 1.23 [0.84, 1.81] for the CBT posttest, 1.14 [0.67, 1.95] for the PPT pretest, and 1.25 [0.90, 1.75] for the PPT posttest. These odds ratios all predict higher female participation but have confidence intervals that include the value 1, indicating the difference in participation rates by gender was not statistically significant. These odds ratios, however, all fall within the range of odds ratios found in the three prior studies, which indicates the differences in participation rate by gender may be a consistent but small effect.

We included student course grades as dummy variables (rather than as a single continuous variable) in Model 3 because our preliminary models indicated that the difference in participation between each grade was not linear.
This nonlinear relationship is observable in the values in Model 3. For example, on the PPT posttest, the difference between students who earned Fs and Ds was 1.52 logits, whereas for students who earned As and Bs the difference was only 0.33 logits. In a linear relationship, the difference in logits between each adjacent pair of course grades would have been approximately the same. Entering the grades as four separate variables has the downside of complicating the model; however, these models more accurately portray the differences in participation between each of the course grades.

Using the hypothesis testing function in the HLM software and Model 3, we generated the logits and standard errors for participation for each course grade under each assessment condition using the population mean for gender (0.39) and plotted these values in Figure 4. We used the mean value for gender so that we could focus on the differences in predicted average participation rates across assessment conditions and course grades. The figure does not include the PPT pretest because the model predicted that the participation rates across all course grades ranged from 96% to 100%, which is too small of a difference to be visible in Figure 4. Model 3 indicates that for both the CBT and PPT posttests, students with all four grades (A-D) were statistically significantly more likely to participate than students who received an F in the course. Receiving a grade of F is not shown in the model because it is incorporated in the intercept. Figure 4 illustrates that students who received an A, B, or C had more similar participation rates than students who received a D or F. This is particularly evident when the participation rates are higher, such as on the PPT posttest or on both CBT assessments when 3 or 4 recommended practices were used. These results indicate that the data collection in these courses disproportionately represented higher-achieving students in both assessment conditions. Given that the raw participation rates and the differences in grades between those who did and did not participate were both similar to those reported in prior studies, these results strongly suggest that data collection with low-stakes RBAs systematically overrepresents high-achieving students, regardless of assessment administration method.

### Performance differences between RBAs administered as CBTs and PPTs

As discussed in the Analysis Section, we built separate sets of models for performance on the concept inventories (CI) and on the attitudinal surveys (AS). We built these models in the same three steps to investigate performance differences between CBTs and PPTs. The first model differentiated between pretest and posttest scores with a variable at level 1. The second model differentiated between assessments administered as CBTs or PPTs with a variable at level 2. The third model added variables to differentiate between the three courses at level 3. In our analysis of these models, we first present the change in the variance between the models to identify how much of the variability in scores was explained by whether students took the assessments as CBT or PPT. Following the analysis of the variance, we present the size and consistency of the differences in scores between the two assessment conditions.

For both the CI models and the AS models the total variance did not meaningfully decrease between Models 1, 2, and 3 (see bottom of Table 6). Model 2 differentiates between students who took the CI as PPT or CBT.
For both the AS and CI models this differentiation caused the total variance in the models to increase. The increase in the total variance was very small for the CI models (\<+1%, from 270.8 to 272.7) and small for the AS models (+2.8%, from 195.52 to 200.92). The increase in the variance for each set of models is a strong indication that there were no differences in scores between assessments administered as CBTs and those administered as PPTs. The increases in the variance for both sets of models emphasize that the tests provided equivalent data. However, it was possible that there were differences in some of the courses but not in others. To address this possibility, we developed Model 3 to compare CBT and PPT while differentiating between the three courses in the study. The total variance in Model 3 slightly decreased compared to Model 1 for the CI models (-1.7%) and slightly increased for the AS models (+1.4%). Given the small sizes of these shifts in variance and their disagreement in direction, the change in variance between the three models indicates that student performance on each assessment was equivalent whether administered as CBT or PPT.

CI Model 1 indicates that the average CI pretest score for all students was 31% and that on average students gained 13%. In Model 2, we differentiated between assessments administered as PPT or CBT. Model 2 for both the CI and the AS indicated that the differences in scores between PPTs and CBTs were very small and that these differences were not statistically significant. Specifically, on the pretest CBT scores were slightly higher than PPT scores (\<+1%) and on the posttest CBT scores were slightly lower than PPT scores (\<-1%). In Model 3, we disaggregated the data between the three course types, which also allowed us to differentiate between CI instruments. For the CI Model 3 there were substantial differences between the three courses. For the AS Model 3 there were small differences between the three courses. For both the AS Model 3 and the CI Model 3, the CBT condition was not a statistically significant predictor of score in any course. None of the assessment condition coefficients were statistically significantly different from zero on either the pretest or posttest. The hypothesis testing function in the HLM software generated means and standard errors based on the CI and AS Model 3s, presented in Figure 5. Figure 5 and both Model 3s all show that the differences between CBT and PPT scores were small (ranging from -2.1% to 2.2%) and that scores were not consistently higher in either assessment condition than in the other. In seven cases, the PPT was higher. In five cases, the CBT was higher. These results indicate that there was not a consistent, meaningful, or reliable difference in scores between assessments administered as CBTs and those administered as PPTs.
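
These percentage changes can be verified directly from the total variances reported at the bottom of Table 6 (a small R check; the vector and element names are ours):

```r
total_ci <- c(M1 = 270.79, M2 = 272.70, M3 = 266.24)   # CI models, total variance
total_as <- c(M1 = 195.52, M2 = 200.92, M3 = 198.17)   # AS models, total variance

round(100 * (total_ci / total_ci["M1"] - 1), 1)   #  0.0  0.7 -1.7  (percent change vs Model 1)
round(100 * (total_as / total_as["M1"] - 1), 1)   #  0.0  2.8  1.4
```
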

Table 6: HLM outputs for models comparing performance between assessments administered as PPT and CBT for both the CI and AS. CI 1, CI 2, and CI 3 (and likewise AS 1 to AS 3) are the three models for each outcome; the upper portion reports coefficients and the lower portion reports variance components. The models indicated that performance on the two modes of assessment was similar.

|                           | CI 1     | CI 2     | CI 3     | AS 1     | AS 2     | AS 3     |
|---------------------------|----------|----------|----------|----------|----------|----------|
| **For Intercept 2**       |          |          |          |          |          |          |
| Intercept 3               | 30.99*** | 31.20*** | -        | 43.97*** | 44.11*** | -        |
| Alg. Mech.                | -        | -        | 26.93*** | -        | -        | 42.87*** |
| Calc. Mech.               | -        | -        | 36.36*** | -        | -        | 43.77*** |
| Calc. E & M               | -        | -        | 29.68*** | -        | -        | 47.19*** |
| **For Condition (CBT)**   |          |          |          |          |          |          |
| Intercept 3               | -        | -0.42    | -        | -        | -0.25    | -        |
| Alg. Mech.                | -        | -        | 0.12     | -        | -        | -0.61    |
| Calc. Mech.               | -        | -        | -0.36    | -        | -        | 1.63     |
| Calc. E & M               | -        | -        | -1.18    | -        | -        | -2.21    |
| **For Post**              |          |          |          |          |          |          |
| *For Intercept 2*         |          |          |          |          |          |          |
| Intercept 3               | 13.04*** | 12.84*** | -        | 1.76**   | 1.33*    | -        |
| Alg. Mech.                | -        | -        | 7.45***  | -        | -        | 1.66     |
| Calc. Mech.               | -        | -        | 18.61*** | -        | -        | 2.98*    |
| Calc. E & M               | -        | -        | 11.59*** | -        | -        | -1.70    |
| *For Condition (CBT)*     |          |          |          |          |          |          |
| Intercept 3               | -        | 0.42     | -        | -        | 0.84     | -        |
| Alg. Mech.                | -        | -        | -0.32    | -        | -        | 0.56     |
| Calc. Mech.               | -        | -        | -0.80    | -        | -        | -0.27    |
| Calc. E & M               | -        | -        | 3.27     | -        | -        | 2.40     |
| **Variance components**   |          |          |          |          |          |          |
| Intercept 1               | 135.46   | 135.22   | 135.61   | 95.12    | 93.56    | 93.53    |
| Level-1                   | 125.66   | 125.05   | 124.65   | 98.58    | 98.08    | 97.24    |
| Int.1/Int.2               | 4.16     | 4.35     | 0.74     | 1.01     | 1.4      | 0.6      |
| Int.1/Cond. (CBT)         | -        | 0.89     | 0.89     | -        | 4.81     | 3.45     |
| Post/Int.2                | 5.51     | 5.25     | 2.43     | 0.81     | 0.44     | 0.23     |
| Post/Cond. (CBT)          | -        | 1.94     | 1.92     | -        | 2.63     | 3.12     |
| Total Level 3             | 9.67     | 12.43    | 5.98     | 1.82     | 9.28     | 7.4      |
| Total Variance            | 270.79   | 272.7    | 266.24   | 195.52   | 200.92   | 198.17   |

# Discussion

Our HLM models indicate that there is no meaningful difference in scores on low-stakes RBAs between students who completed the RBA in class as a PPT and those who completed the RBA online outside of class as a CBT. Differentiating between CBT and PPT in the models increased the variance in the models, indicating that assessment condition (CBT vs PPT) is not a useful predictor of student scores. The differences between the models' predicted scores for students on both the CI and AS for the PPT and CBT conditions were very small, did not consistently favor one assessment condition over the other, and were not statistically significant. These similarities indicate that instructors and researchers can use online platforms to collect valuable and normalizable information about the impacts of their courses without concerns about the legitimacy of comparing that data to prior research that was collected with paper-and-pencil tests.

In terms of participation, we found that our participation data were comparable to prior research using physics RBAs across several dimensions, including gender and grades. We found that when faculty do little to motivate students to complete online low-stakes assessments, students are much less likely to participate than they are on in-class assessments. Our models show that if faculty follow all of our recommended practices, reminding students in class and online to participate and offering credit for participation, student participation rates for CBT posttests match those for PPT posttests. We focus on the posttests rather than the pretests because the participation rates are lowest on the posttest and they contain important information about the effects of the course. Our findings align with prior research into student participation on other online surveys, such as end-of-course evaluations. These findings indicate that, with intention, faculty can save class time by transferring their low-stakes RBA administrations from in-class PPTs to out-of-class CBTs without lowering their student participation rates.

The meaningful differences in participation rates across both student course grades and gender in this study are consistent with those reported in prior studies. These differences in participation rates indicate that the missing data in this study, and likely in any study using low-stakes assessments, are not missing at random. We expect that our use of HMI minimized the bias that this introduced into our performance analysis. However, we are not aware of any studies that have explicitly looked at how missing data affect results in studies using low-stakes assessments. Given the frequency with which RBAs are used to assess the effectiveness of college STEM courses, the skew that missing data introduce warrants further investigation.

# Conclusion

Online out-of-class administration of RBAs can provide participation rates and performance results equivalent to in-class paper-and-pencil tests. Instructors should reduce the logistical demands of administering RBAs by using online platforms, such as the LASSO platform, to administer and analyze their low-stakes assessments. Paper-and-pencil tests take up already-limited class time and require instructors to use their own time to collect, score, and analyze the assessments.
All of these tasks can be easily completed by online platforms, leaving instructors with more time to focus on using the results of the assessments to inform their instruction. Simplifying the process of collecting and analyzing RBA data may lead more instructors to gather this information. By facilitating instructors' examination of their students' outcomes, online platforms may also lead more instructors to start using research-based teaching practices that have been shown to improve student outcomes.

Large-scale data collection with online platforms can also provide instructors with several additional benefits. The platforms can integrate recommended statistical practices, such as Multiple Imputation to address missing data, that most individual instructors do not have the time or expertise to implement. The large scale of the data collection can also be used to put instructors' student outcomes in the context of outcomes in courses similar to their own. Furthermore, analysis could identify teaching practices that the instructor is using that are making their course above average, or practices that they could adopt to improve their outcomes. For example, one such website assists faculty in analyzing their existing physics RBA data. The website has a Data Explorer tool that provides instructors with an evaluation of their assessment results and has a series of articles describing highly effective research-based teaching practices that instructors can use to improve student outcomes.

In addition to supporting instructors, large-scale data collection using online platforms has significant advantages for researchers. It allows investigations into how the implementation and effectiveness of pedagogical practices vary across institutions and populations of students. Large sample sizes provide the statistical power required to investigate differences between populations of students (e.g. gender or ethnicity/race) that would not be possible in most individual courses due to small sample sizes. Online platforms also allow researchers to disseminate new assessments that they are developing so that those instruments can be evaluated across a broad sample of students. Many existing instruments were developed in courses for STEM majors at research-intensive universities with STEM PhD programs, and it is unclear how effective these instruments are for assessing student outcomes at other types of institutions and in courses for non-STEM majors. Online platforms can facilitate analysis of the validity of existing RBAs across broad samples of students from all institution types.

Online data collection and analysis platforms, such as LASSO, are relatively new and have the potential to alter instructor and researcher practices. While it is not known how the transition from PPT to CBT will impact all RBAs, our findings provide strong evidence that two of the most common concerns with digitizing low-stakes RBAs – shifts in student participation and performance – are not borne out by the data. Based on the results of our analyses, we recommend that instructors consider using free online RBA administration platforms in conjunction with our four recommended practices for CBTs.

# Limitations

This study only examines courses in which students completed a single low-stakes RBA online at the beginning and end of the course. Excessive measurement would likely decrease student participation, performance, and data quality.
Higher-stakes assessments would likely incentivize the use of additional materials (e.g. the internet, textbooks, or peers) not available for tests administered in class. It is also possible that the institution where the study was conducted and the populations involved in the study are not representative of physics students or courses broadly. However, the study included three different courses encompassing both calculus-based and algebra-based physics sequences, which supports the generalizability of the results to many populations of students.

Comparisons of CBT and PPT administered assessments may also be impacted by missing data. Our use of Hierarchical Multiple Imputation (HMI) mitigates the impacts of missing data, but studies that use listwise deletion to address missing data may have different results. The skewing of participation rates by student course grade demonstrates that the data are not missing completely at random and that the missing data are therefore non-ignorable.

# Directions for Future Research

The presence and impact of missing data has received little attention in the RBA literature. Most of the studies we reviewed did not provide sufficient descriptive statistics to determine how much data was missing. The majority of studies we reviewed also used listwise deletion to remove missing data and create a matched dataset. Statisticians have long pointed out that listwise deletion is a poor approach to addressing missing data. Our results, and the prior studies we examined that provided sufficient information to assess student participation, all indicate that male students and students with lower course grades are less likely to participate in research-based assessments. This skewing of data is likely being amplified through the use of listwise deletion and could have significant impacts on research findings. If only the highest performing students reliably participate in an assessment, then the analysis of course data will only indicate the impact on high-performing students and will not be representative of the entire class. We expect that our use of HMI with assessment scores and course grades mitigates the impact of this skew in the data on our analysis. However, almost all studies in Discipline-Based Education Research use matched data and do not use appropriate statistical methods for addressing missing data. Future work to measure the impact of missing data and of the associated data analysis techniques is needed to bring attention to these issues and to provide guidance on methods for limiting their effects.

Many institutions are moving to online data collection for their end-of-course evaluations because this streamlines the collection and analysis of student responses. However, instructors are finding that students are much less likely to participate in these surveys than in traditional in-class paper-and-pencil surveys . These surveys often act as the primary method for institutions to evaluate the effectiveness of instructors and therefore play an important role in retention and promotion decisions.
Our results indicate that providing multiple reminders to complete the surveys and participation credit for completing the surveys can dramatically increase participation rates on course evaluations administered online outside of class.","meta":{"dup_signals":{"dup_doc_count":13,"dup_dump_count":3,"dup_details":{"curated_sources":2,"2024-18":1,"unknown":10}},"filename":"out\/1711.06595_extract_main.tex.md"},"subset":"arxiv"} +{"text":"address: , , , ; , , , ; , , , ; , , , ; , , , ; , , ,\nauthor: ; ; ; ; ; ; ; ; ; ; ; ; \ntitle: BioPreDyn-bench: benchmark problems for kinetic modelling in systems biology\n\n# Background\n\nSystems biology aims at understanding the organization of complex biological systems with a combination of mathematical modelling, experiments, and advanced computational tools. To describe the behaviour of complex systems, models with sufficient level of detail to provide mechanistic explanations are needed. This leads to the use of large-scale dynamic models of cellular processes . By incorporating kinetic information, the range of applications of biological models can be widened. The importance of kinetic models is being increasingly acknowledged in fields such as bioprocess optimization , metabolic engineering , physiology, as well as cell and developmental biology .\n\nSystems identification, or reverse engineering, plays an important part in the model building process. The difficult nature of reverse engineering was stressed in , where the different perspectives that coexist in the area of systems biology were reviewed. Specifically, large-scale dynamic biological models generally have many unknown, non-measurable parameters. For the models to encapsulate as accurately as possible our understanding of the system (i.e. reproducing the available data and, ideally, being capable of making predictions), these parameters have to be estimated. This task, known as parameter estimation, model calibration, or data fitting , consists of finding the parameter values that give the best fit between the model output and a set of experimental data. This is carried out by optimizing a cost function that measures the goodness of this fit. In systems biology models this problem is often multimodal (nonconvex), due to the nonlinear and constrained nature of the system dynamics. Hence, standard local methods usually fail to obtain the global optimum. As an alternative, one may choose a multistart strategy, where a local method is used repeatedly, starting from a number of different initial guesses for the parameters. However, this approach is usually not efficient for realistic applications, and global optimization techniques need to be used instead .\n\nMany methods have been presented for this task, but less effort has been devoted to their critical evaluation. It is clear, however, that to make progress in this research area it is essential to assess performance of the different algorithms quantitatively, in order to understand their weaknesses and strengths. Furthermore, if a new algorithm is to be accepted as a valuable addition to the state of the art, it must be first rigorously compared with the existing plethora of methods. This systematic comparison requires adequate benchmark problems, that is, reference calibration case studies of realistic size and nature that can be easily used by the community. Several collections of benchmarks \u2013 and of methods for generating them \u2013 have already been published . 
An artificial gene network generator, which allows to choose from different topologies, was presented in . The system, known as A-BIOCHEM, generates pseudo-experimental noisy data in silico, simulating microarray experiments. An artificial gene network with ten genes generated in this way was later used to compare four reverse-engineering methods . More recently, a toolkit called GRENDEL was presented with the same purpose , including several refinements in order to increase the biological realism of the benchmark. A reverse-engineering benchmark of a small biochemical network was presented in . The model describes organism growth in a bioreactor and the focus was placed on model discrimination using measurements of some intracellular components. A proposal for minimum requirements of problem specifications, along with a collection of 44 small benchmarks for ODE model identification of cellular systems, was presented in . The collection includes parameter estimation problems as well as combined parameter and structure inference problems. Another method for generation of dynamical models of gene regulatory networks to be used as benchmarks is GeneNetWeaver , which was used to provide the international Dialogue for Reverse Engineering Assessments and Methods (DREAM) competition with three network inference challenges (DREAM3, DREAM4 and DREAM5) . Subsequent competitions (DREAM6, DREAM7) included also parameter estimation challenges of medium-scale models . Similar efforts have been carried out in related areas, such as in optimization, where BBOB workshops (Black-Box Optimization Benchmarking, ) have been organised since 2009. In this context it is also worth mentioning the collection of large-scale, nonlinearly constrained optimization problems from the physical sciences and engineering (COPS) .\n\nDespite these contributions, there is still a lack of suitable benchmark problems in systems biology that are at the same time (i) dynamic, (ii) large-scale, (iii) ready-to-run, and (iv) available in several common formats. None of the above mentioned collections possesses all these features, although each one has a subset of them. Here we present a collection of medium and large-scale dynamic systems, with sizes of tens to hundreds of variables and hundreds to thousands of estimated parameters, which can be used as benchmarks for reverse-engineering techniques. The collection includes two *Escherichia coli* models , a genome-wide kinetic model of *Saccharomyces cerevisiae* , a metabolic model of Chinese Hamster Ovary (CHO) cells , a signal transduction model of human cells , and a developmental gene regulatory network of *Drosophila melanogaster* .\n\nEnsuring standardisation allows systems biology models to be reused outside of their original context: in different simulators, under different conditions, or as parts of more complex models . Minimum requirements for published systems biology models are set out by the MIRIAM initiative : completeness of documentation, availability in standard formats, and semantic annotations connecting the model to web resources . To this end, we have made five of the six models (the exception is the spatial model of *D. melanogaster*) available in Systems Biology Markup Language (SBML ) format, allowing for their simulation in multiple software tools, including AMIGO and COPASI . Even when defined in a standard format such as SBML, large models such as the genome-wide kinetic model of *S. 
cerevisiae* may give different results when simulated in different software environments. The inherent size and stiffness of genome-scale systems biology models create new challenges to be addressed for their robust simulation by systems biology tools . To address this problem, all the models have been consistently formatted, with their dynamics provided both in C and in Matlab. Additionally, a benchmark consisting of a parameter estimation problem has been defined for every model, for which ready-to-run implementations are provided in Matlab (optionally with the use of the AMIGO toolbox ) and, in some cases, also in COPASI . The availability of ready-to-run implementations is a highly desirable practice in computer science, since it ensures reproducibility of the results. Calibration results with state-of-the-art optimization methods are reported, which can serve as a reference for comparison with new methodologies. Additionally, suggestions on how to compare the performance of several methods are also given in the Results section.

# Problem statement

Given a model of a nonlinear dynamic system and a set of experimental data, the parameter estimation problem consists of finding the optimal vector of decision variables $p$ (unknown model parameters). This vector consists of the set of parameter values that minimize a cost function that measures the goodness of the fit of the model predictions with respect to the data, subject to a number of constraints. The output state variables that are measured experimentally are called observables. The following elements need to be clearly stated in order to properly define the calibration problem:

- cost function to optimize (i.e. metric which reflects the mismatch between experimental and predicted values)

- dynamics of the system (in our benchmark models they are given by systems of ordinary differential equations)

- model parameters to be estimated

- initial conditions for the dynamics (possibly unknown, in which case they are included among the parameters to be estimated)

- upper and lower bounds for the parameters

- state variables that can be measured (observed)

- values of external stimuli, also known as control variables

- measurements (over time and/or space) available for the calibration: number of experiments, stimuli (if any) for each experiment, data points per experiment, etc.

- (optional) type and magnitude of errors considered for the experimental data

- (optional) additional time series for cross-validation of the calibrated model

- solver used to numerically simulate the systems, and the relative and absolute error tolerances used

Mathematically, it is formulated as a nonlinear programming problem (NLP) with differential-algebraic constraints (DAEs), where the goal is to find $p$ to minimize an objective function. The objective function, or cost function, is a scalar measure of the distance between data and model predictions. There are several common choices for the objective function. The generalized least squares cost function is given by: $$\label{eq:statement}
J_{lsq} = \sum^{n_{\epsilon}}_{\epsilon=1} \sum^{n^{\epsilon}_o}_{o=1} \sum^{n^{\epsilon,o}_s}_{s=1} w^{\epsilon,o}_s \left(ym^{\epsilon,o}_s - y^{\epsilon,o}_s(p) \right)^2$$ where $n_{\epsilon}$ is the number of experiments, $n^{\epsilon}_o$ is the number of observables per experiment, and $n^{\epsilon,o}_s$ is the number of samples per observable per experiment. The measured data will be denoted as $ym^{\epsilon,o}_s$ and the corresponding model predictions will be denoted as $y^{\epsilon,o}_s(p)$. Finally, $w^{\epsilon,o}_s$ are scaling factors used to balance the contributions of the observables, according to their magnitudes and/or the confidence in the measurements. When information about the experimental error is available, one may use the maximum (log-)likelihood function to look for the parameters with the highest probability of generating the measured data. Assuming independently identically distributed measurements with normally distributed noise, the likelihood is defined as:

$$\label{eq:llk}
J_{llk} = \sum^{n_{\epsilon}}_{\epsilon=1} \sum^{n^{\epsilon}_o}_{o=1} \sum^{n^{\epsilon,o}_s}_{s=1} \frac{ \left(ym^{\epsilon,o}_s - y^{\epsilon,o}_s(p) \right)^2 } {(\sigma^{\epsilon,o}_s)^2 }$$

For known constant variances the log-likelihood cost function is similar to the generalized least squares, with weights chosen as the inverse of the variance, $w^{\epsilon,o}_s = 1/(\sigma^{\epsilon,o}_s)^2$. This is the case for most of the benchmark problems presented here (B1, B2, B4, B5). The exceptions are: problem B3, in which no noise has been added to the data, and thus the scaling factors $w^{\epsilon,o}_s$ are taken as the squared inverse of the maximum experimental value for each observable; and problem B6, where the weights are inversely related to the level of expression. More details are given in the next section. Note that the cost functions used in Matlab/AMIGO are not exactly the same as the ones used in COPASI, since in COPASI the weights are scaled so that for each experiment the maximal occurring weight is 1.
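
As a minimal illustration of these cost functions (not the exact implementations shipped with the benchmarks), the contribution of a single experiment can be evaluated as follows; `ym`, `y`, `w`, and `sigma` are illustrative names for matrices of measurements, model predictions, weights, and measurement standard deviations:

```r
# Generalized least-squares cost for one experiment (observables x samples matrices);
# contributions from several experiments are simply summed.
lsq_cost <- function(ym, y, w) sum(w * (ym - y)^2)

# With weights equal to the inverse measurement variances this reduces to the
# log-likelihood-type cost J_llk used for benchmarks B1, B2, B4 and B5.
llk_cost <- function(ym, y, sigma) sum(((ym - y) / sigma)^2)
```
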
The minimization of the objective function is subject to the following constraints: $$\label{eq:constraints1}
\dot{x} = f\left(x,p,t \right)$$ $$\label{eq:constraints2}
x(t_0) = x_0$$ $$\label{eq:constraints3}
y = g(x,p, t)$$ $$\label{eq:constraints4}
h_{eq}(x,y,p) = 0$$ $$\label{eq:constraints5}
h_{in}(x,y,p) \leq 0$$ $$\label{eq:constraints6}
p^L \leq p \leq p^U$$ where $g$ is the observation function, $x$ is the vector of state variables with initial conditions $x_0$, $f$ is the set of differential and algebraic equality constraints describing the system dynamics (that is, the nonlinear process model), $h_{eq}$ and $h_{in}$ are equality and inequality constraints that express additional requirements for the system performance, and $p^L$ and $p^U$ are lower and upper bounds for the parameter vector $p$. The problem defined above is the general formulation of a nonlinear least squares optimization subject to dynamic constraints and bounds on the parameters. The problems included in this collection of benchmarks do not make use of the additional constraints $h_{eq}$ and $h_{in}$.

## Remarks on parameter estimation methods

Fitting a large, nonlinear model to experimental (noisy) data is generally a multimodal problem. In these circumstances, the use of local optimization methods, which are usually gradient-based, entails the risk of converging to local minima. Hence it is necessary to use global optimization methods that provide more guarantees of converging to the globally optimal solution . Global optimization strategies can be roughly classified as deterministic, stochastic and hybrid. Deterministic methods can guarantee the location of the global optimum solution; however, their computational cost makes them unfeasible for large-scale problems.
Stochastic methods, which are based on probabilistic algorithms, do not provide those guarantees, but are frequently capable of finding optimal or near-optimal solutions in affordable computation times.\n\nSome of the most efficient stochastic global optimization methods are the metaheuristic approaches. A heuristic is an algorithm originated not from formal analysis, but from an expert knowledge of the task to be solved. A metaheuristic can be seen as a general-purpose heuristic method designed to guide an underlying problem-specific heuristic. It is therefore a method that can be applied to different optimization problems with few modifications. Hybrid methods which combine metaheuristics for global optimization and local methods for accelerating convergence in the vicinity of local minima can be particularly efficient. One such method is the enhanced Scatter Search algorithm, eSS , and its parallel cooperative version, CeSS . Matlab and R implementations are publicly available as part of the MEIGO toolbox . The eSS method is available as a Matlab toolbox and is also included in AMIGO; this latter version is the one used in this work. It should be noted that AMIGO offers more than a dozen optimization solvers, including local and global methods, and the possibility of combining them to form user-defined sequential hybrid methods. In COPASI it is possible to choose among thirteen different optimization methods for parameter estimation, including deterministic and stochastic: Evolutionary Programming, Evolutionary Strategy (SRES), Genetic Algorithm, Hooke and Jeeves, Levenberg\u2013Marquardt, Nelder\u2013Mead, Particle Swarm, Praxis, Random Search, Simulated Annealing, Scatter Search, Steepest Descent, and Truncated Newton.\n\n## Remarks on comparing optimization methods\n\nAlthough the objective of this paper is to present a set of ready-to-run benchmarks, we list below several guidelines on how to compare different optimizers with these problems.\n\nMany optimization methods require an initial point and\/or bounds on the decision variables. For ensuring a fair comparison between different methods, the same bounds and initial points should be set. Obviously, the nominal solution can not be used as an initial point. Special emphasis should be laid on ensuring full reproducibility. This entails providing all source codes and binary files used in computations, as well as specifying all implementation details, such as software and hardware environment (including compiler versions and options, if any). If some aspects of a method can be tuned, these settings must be clearly indicated.\n\nMany different criteria may be used for comparing the performance of optimization methods. It can be expressed as a function of CPU time, number of function evaluations, or iteration counts. When considering several problems, a solver's average or cumulative performance metric can be chosen. If an algorithm fails to converge a penalty can be used, in which case an additional decision is required to fix its value. An alternative is to use ranks instead of numerical values, although this option hides the magnitudes of the performance metric. All approaches have advantages and drawbacks, and their use requires making choices that are subjective to some extent. In an attempt to combine the best features of different criteria, Dolan and Mor\u00e9 proposed to compare algorithms based on their performance profiles, which are cumulative distribution functions for a performance metric. 
Performance profiles basically rely on calculations of the ratio of the solver resource time versus the best time of all the solvers. It should be noted that, for complex large-scale problems where identifiability is an issue, different methods often arrive at different solutions. In that case the use of performance profiles requires choosing a tolerance to define acceptable solutions. Performance profiles are a convenient way of summarizing results when there are many methods to be compared and many problems on which to test them. When this is not the case, however, more information can be provided by using convergence curves. Convergence curves plot the evolution of the objective function, as defined in the problem statement above, as a function of the number of evaluations or the computation time (since the overhead is different for each method). They provide information not only about the final value reached by an algorithm, but also about the speed of progression towards that value.

When comparing different optimization methods, the best result (cost) and the mean (or median) for N runs should be reported in a table. Similar statistics for computation time and number of evaluations should apply to all the methods. However, since the final values alone can be greatly misleading, convergence curves should be provided in addition to this table.

Note that, in order to make a fair comparison of convergence curves obtained with different software tools and/or hardware environments, it is a good practice to report any speedup due to parallelism. Such speedups can arise in non-obvious situations. For example, COPASI can make use of several threads in multi-core PCs due to its use of the Intel MKL library. In summary, fair comparisons should be made taking into account the real overall computational effort used by each method/implementation.

## Remarks on identifiability

Parameter estimation is just one aspect of what is known as the inverse problem. This larger problem also includes identifiability analysis, which determines whether the unknown parameter values can be uniquely estimated . Lack of identifiability means that there are several possible parameter vectors that give the same agreement between experimental data and model predictions. We may distinguish between a priori structural identifiability and a posteriori or practical identifiability . The parameters are structurally identifiable if they can be uniquely estimated from the designed experiment under ideal conditions of noise-free observations and error-free model structure. Structural identifiability is a theoretical property of the model structure, which can be very difficult to determine for large and complex models . Even if a model is structurally identifiable, it may exhibit practical identifiability issues. Practical identifiability depends on the output sensitivity functions (partial derivatives of the measured states with respect to the parameters). If the sensitivity functions are linearly dependent the model is not identifiable, and sensitivity functions that are nearly linearly dependent result in parameter estimates that are highly correlated. Furthermore, even if they are linearly independent, low sensitivities may lead to an undesirable situation. Practical identifiability can be studied from sensitivity-based criteria like the Fisher information matrix (FIM). The practical identifiability of the models can be analyzed in this way with the AMIGO toolbox .
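
A minimal sketch of such a sensitivity-based check is given below. It is independent of any particular toolbox, and the objects `S` (an observations-by-parameters output sensitivity matrix) and `sigma` (measurement standard deviations) are assumed to be available, for example from finite-difference perturbations of the model.

```r
# Fisher information matrix from output sensitivities and measurement variances
fim <- t(S) %*% diag(1 / sigma^2) %*% S

# Near-zero eigenvalues indicate directions in parameter space that the data do not constrain
eigen(fim, symmetric = TRUE)$values

# When the FIM is invertible, strong off-diagonal entries of the parameter correlation
# matrix flag pairs of parameters that are difficult to estimate independently
cov2cor(solve(fim))
```
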
The AMIGO_LRank method ranks the model parameters according to their influence on the model outputs, using several sensitivity measures. In large biological models identifiability issues are the norm rather than the exception . This may be partly due to inconsistent modelling practices, but even when a model has been carefully built and is structurally identifiable, the amount of data required for a perfect calibration (practical identifiability) is usually large. As an illustration, consider the general case of a model described by differential equations, and assume the ideal situation where the structure of the equations is perfectly known. Then, a well-known result states that identification of $r$ parameter values requires $2r+1$ measurements . However, it is frequently the case that a model with more than a thousand parameters has to be calibrated with only dozens or maybe hundreds of measurements. Finally, it should be noted that lack of identifiability does not preclude the use of model-based methods. Unique model predictions can in fact be obtained despite unidentifiability, as discussed by Cedersund .\n\n# Benchmark problems\n\nHere we present a collection of parameter estimation problems and their descriptions. The characteristics of the six dynamic models are summarized in Table . Four of the benchmark problems have been defined using in silico experiments, where pseudoexperimental data have been generated from simulations of the models and addition of artificial noise. The use of simulated data is usually considered the best way of assessing performance of parameter estimation methods, because the true solution is known. Additionally, we provide two benchmark problems that use real data. For each problem we provide the following information (see Supplementary Information online, ):\n\n- Dynamic model.\n\n- Experimental information: initial conditions, input functions, what is measured, measurement times, noise level and type.\n\n- Cost function to be used: its type (least squares, weighted least squares, maximum likelihood, etc), and why it should be chosen.\n\n- Parameters to estimate: lower and upper bounds, initial guesses, nominal values (the latter must not be used during estimations).\n\n- Implementations (Matlab with and without the AMIGO toolbox, C, COPASI): installation, requirements, and usage. Ready-to-run scripts are provided, with examples of how to execute them and their expected output.\n\n```latex\n\\begin{table*}[t]\\begin{flushleft}\n \\caption{\n \\textbf{Models}\n }\n \\begin{tabular}{|l|l|l|l|l|l|l|}\n \\hline\nModel ID & \\textbf{B1} & \\textbf{B2} & \\textbf{B3} & \\textbf{B4} & \\textbf{B5} & \\textbf{B6} \\\\\n\\hline\nModel Ref &\\cite{smallbone2013large} &\\cite{chassagnole2002dynamic} &\\cite{kotte2010bacterial}&\\cite{villaverde2013highconfidence}&\\cite{macnamara2012state}&\\cite{crombach2012efficient}\\\\\nCell & \\textit{S. cerevisiae} & \\textit{E. coli} & \\textit{E. 
coli} & CHO & Generic & \\textit{Drosophila}\\\\ \n & & & & & & \\textit{melanogaster}\\\\ \nDescription & Metabolic: & Metabolic: & Metabolic: CCM & Metabolic & Signal & Developmental \\\\\nlevel & genome scale & CCM & \\& transcription & & transduction & GRN (spatial) \\\\\nParameters & 1759 & 116 & 178 & 117 & 86 & 37 \\\\\nDynamic states & 276 & 18 & 47 & 34 & 26 & 108 --212 \\\\\nObserved states & 44 & 9 & 47 & 13 & 6 & 108 --212 \\\\\nExperiments & 1 & 1 & 1 & 1 & 10 & 1 \\\\\nData points & 5280 & 110 & 7614 & 169 & 96 & 1804 \\\\ \nData type & simulated & measured & simulated & simulated & simulated & measured \\\\\nNoise level & $20\\%$ & real & no noise & variable & $5\\%$ & real \\\\\n\\hline\n\\end{tabular}\n\\\\\nMain features of the benchmark models.\n\\end{flushleft}\n\\label{tab:models}\n\\end{table*}\n```\n\n## Problem B1: genome-wide kinetic model of *S. cerevisiae*\n\nThe biochemical structure of this model is taken from yeast.sf.net (version 6, ). In decompartmentalised form, this network has 1156 reactions and 762 variables. We fix some experimentally determined exchange fluxes, and use geometric FBA to choose a unique reference flux distribution consistent with the experimental data. We fix some initial concentrations to their experimentally determined levels and assign the remainder typical values. We define reaction kinetics using the common modular rate law, a generalised form of the reversible Michaelis-Menten kinetics that can be applied to any reaction stoichiometry . The final model contains 261 reactions with 262 variables and 1759 parameters. This model has been created according to the pipeline presented in , which ensures consistency with our sparse data set; whilst no data is required to produce the model, it can incorporate any known flux or concentration data or any kinetic constants. As an addition to the model developed in , this version has been aligned with previously unpublished experimental data. The new data consist of 44 steady-state measurements (38 concentrations and 6 fluxes), which are included in Tables 14 and 15 of the Supplementary Information online. The steady state is found to be stable. The number of measurements available at the present stage is not enough for carrying out a proper model calibration. Envisioning that dynamic (time-series) measurements of the 44 observed variables may be available in the near future, we show in this paper how they will be employed for re-estimating the parameter values. With this aim, we have generated pseudo-experimental noisy data corresponding to a pulse in the concentration of extracellular glucose, and have used this simulated data to re-calibrate the model. We generated 120 samples per observable and added artificial measurement noise (20%) to resemble realistic conditions.\n\n## Problem B2: dynamic model of the Central Carbon Metabolism of *E. coli*\n\nThis model, originally published in and available at the BioModels database , reproduces the response to a pulse in extracellular glucose concentration. It includes 18 metabolites in two different compartments: the cytosol (17 internal metabolites), and the extracellular compartment (1 extracellular metabolite: glucose). These metabolites are involved in 48 reactions: 30 kinetic rate reactions, 9 degradation equations, 8 dilution equations, and 1 equation for extracellular glucose kinetics. Additionally, there are 7 analytical functions; thus, the model is defined by a total of 55 mathematical expressions. 
We have reformulated the model to use it as a parameter estimation problem; the 116 parameters to be estimated consist of kinetic parameters and maximum reaction rates. As an addition to the model version available in the BioModels database, we provide the experimental data that were used in the original publication but had not been published (Klaus Mauch, personal communication). The dataset is given in Table 16 of the Supplementary Information online, and consists of time-course concentration measurements of nine metabolites. The aim of the model calibration in this case is to find a better fit to the experimental data than the one obtained with the nominal parameter vector used in the original publication . Note that this is different from benchmarks 1 and 3\u20135, which use simulated data and where the aim is to recover a fit as good as the one obtained with the nominal parameter vector, with which the data were generated.\n\n## Problem B3: enzymatic and transcriptional regulation of the Central Carbon Metabolism of *E. coli*\n\nThis model simulates the adaptation of *E. coli* to changing carbon sources. Complete information about this model is available as the supplementary information of . It is also included in the BioModels Database . It should be noted that there are some differences in parameter values between the original model and the BioModels version; however, these changes do not alter the simulation results, a fact that indicates unidentifiability. The model contains 47 ODEs and 193 parameters, of which 178 are considered unknown and need to be estimated. The other 15 parameters are constants known to the modeler (number of subunits of the multimers (enzymes), scaling factors, universal protein degradation rate, and gene expression rate constant). The outputs of the system are the 47 state variables, which represent concentrations. Pseudo-experimental data were generated by simulation of the sixth scenario defined in the simulation files included as supplementary material in . This scenario simulates an extended diauxic shift, which consists of three consecutive environments, where the carbon sources are first glucose, then acetate, and finally a mixture of both. Under these conditions, the 47 concentration profiles are sampled every 1000 seconds, for a total of 162 time points (45 hours). This model exhibits large differences in value among concentrations, which span five orders of magnitude. To equalize their contribution to the objective function, we scale each time series by dividing it by its maximum experimental value (scaled least squares).\n\n## Problem B4: metabolic model of Chinese Hamster Ovary (CHO) cells\n\nChinese Hamster Ovary (CHO) cells are used for protein production in fermentation processes . This model simulates a batch process with resting cells: no metabolites are fed over the final time horizon of 300 hours. The fermenter medium contains glucose as the main carbon source, and leucine and methionine are the main amino acids taken up. Lactate is modelled as a by-product of the fermentation process, and a generated protein serves as its main product. The model comprises 35 metabolites in three compartments (fermenter, cytosol, and mitochondria) and 32 reactions, including protein product formation, Embden-Meyerhof-Parnas pathway (EMP), TCA cycle, a reduced amino acid metabolism, lactate production, and the electron transport chain. 
The kinetics are modelled as in , and the resulting ODE model comprises 117 parameters in total. Some aspects of this model were partially discussed in . For optimization purposes, pseudo-experimental data were generated, mimicking typical cell behavior. The following 13 metabolites are assumed to be measured: in the fermenter, glucose, lactate, product protein, leucine, and methionine; in the cytosol, aspartate, malate, pyruvate, oxaloacetate, ATP, and ADP; and in the mitochondria, ATP and ADP. Samples were assumed to be taken daily over the whole fermentation time.\n\n## Problem B5: signal transduction logic model\n\nTo illustrate the advantages and disadvantages of different formalisms related to logic models, MacNamara and colleagues constructed a plausible network of interactions consisting of signaling factors known to be activated downstream of $EGF$ and $TNF\\text{-}\\alpha$ . The model consists of 26 ODEs that use a logic-based formalism, which is explained in detail in . In this formalism, state values can vary between 0 and 1 and represent the normalized activity of a given protein, which is typically measured as the level of phosphorylation. In total the model includes 86 continuous parameters, corresponding to the half maximal activations ($k$), the Hill coefficients ($n$), and a set of parameters controlling the rate of activation\/deactivation of a given protein ($\\tau$). The model incorporates $EGF$ and $TNF\\text{-}\\alpha$, which are treated as stimuli that trigger the pathway response. In addition to these two stimuli, the model includes two kinase inhibitors for $RAF1$ and $PI3K$, which can block the activity of both species. In total the model can be perturbed by these 4 cues, allowing a rich variation in the dynamic profiles of the model signaling components, an essential requirement for parameter estimation. To generate a data set for reverse engineering the model structure, the authors generated data corresponding to 10 in-silico experiments, in which the different cues (stimuli and inhibitors) are added in different combinations. For each experiment, 6 strategically located states are observed. Each observable was measured at 16 equidistant time points per experiment. In addition, Gaussian noise was added to the data in order to mimic a reasonable amount of experimental error. Note that the SBML implementation of this model uses the SBML qual format , an extension of SBML developed for qualitative models of biological networks.\n\n## Problem B6: the gap gene network of the vinegar fly, *Drosophila melanogaster*\n\nOur last benchmark model is slightly different from those previously described, in that it represents a spatial model of pattern formation in multi-cellular animal development, and the data for fitting are based on microscopy, rather than metabolomics or transcriptomics. The gap genes form part of the segmentation gene network, which patterns the anterior\u2013posterior (AP) axis of the *Drosophila melanogaster* embryo. They are the primary regulatory targets of maternal morphogen gradients, and are active during the blastoderm stage in early development. In the model, the embryo is a single row of dividing nuclei along the AP axis, with each nucleus containing the four gap genes and receiving input from four external factors. The gap genes included in the model are *hunchback* (*hb*), *Kr\u00fcppel* (*Kr*), *giant* (*gt*), and *knirps* (*kni*), and the external inputs are Bicoid (Bcd), Caudal (Cad), Tailless (Tll), and Huckebein (Hkb). 
Three processes occur within and between nuclei: (1) regulated gene product synthesis, (2) Fickian gene product diffusion, and (3) linear gene product decay. These processes are formalised with ODEs, and result in the model having 37 unknown parameters. This model implements the gene circuit approach used to reverse-engineer the regulatory interactions of the gap genes by fitting to quantitative spatio-temporal gene expression data, which can be mRNA or protein . The data consist of 9 time points spanning 71 minutes of *Drosophila* development, and at each time point maximally 53 nuclei with data points for the four gap genes, and the four external inputs. The fit is measured with a weighted least squares scheme (WLS) with variable weights, which, in the case of the mRNA data used here , are inversely related to the level of expression. The weights were created from normalized, integrated mRNA expression data according to the formula: $w = 1.0 - 0.9y$, with $y\\in[0,1]$ being the normalized staining intensity. This proportionality of variation with expression level reflects the fact that gap domains (showing high levels of expression) show more variation than those regions of the embryo in which a gene is not expressed .\n\n# Results\n\nWe show how our collection of benchmark problems can be used by reporting selected results using several parameter estimation methods. We emphasize that the purpose of this work is not to provide a comprehensive comparison of all existing approaches, but to provide a useful, versatile, and practical test set and illustrate its use. For simplicity, and to enable direct comparisons among benchmarks, all the computations reported in this section have been carried out in Matlab, using the algorithms available in the AMIGO toolbox . This includes both global and local optimization methods; the latter have been used in a multistart procedure, where multiple instances are launched from different initial points selected randomly within the parameter bounds.\n\nBefore estimating the parameter values we assessed the identifiability of the models. Model parameters were ranked according to their influence on the system output (sensitivity), using the local rank routine (AMIGO_LRank) from the AMIGO toolbox as described in the previous section. As is typical of models of this size, it was found that all benchmarks have identifiability issues, with a portion of their parameters exerting very little influence on the model outputs. Therefore, the goal of these benchmarks is not to obtain accurate estimates of all the parameters, but rather to obtain a good fit to the data: when tested on this collection of benchmarks, optimization methods should be evaluated by their ability to minimize the objective function. As an illustration of the typical outcome that can be obtained from the local rank method, we show in Figure the results of the practical identifiability analysis for problem B2. Figure ranks the parameters in decreasing order of their influence on the system's behaviour, which is quantified by means of the importance factors $\\delta_p^{msqr}$: $$\\delta_p^{msqr} = \\frac{1}{n_{lhs}n_d}\\sqrt{\\sum_{mc=1}^{n_{lhs}}\\sum_{d=1}^{n_{d}}([s_d]_{mc})^2}$$ where $n_{lhs}$ are the different values for each of the parameters selected by Latin Hypercube Sampling, $n_d$ is the number of experiments, and $[s]$ are the relative sensitivities. Figure shows the sensitivity of the state variables with respect to the parameters. 
From this figure it becomes clear that many parameters, such as 8-10, 32-38, and 56-64, have little influence on the observables. Therefore, those parameters are expected to be poorly identifiable.\n\nIn the remainder of this section we show selected results of the best performing optimization methods in every parameter estimation problem. Complete results for every benchmark are reported in the Supplementary Information online.\n\nTo evaluate the performance of local methods, we launched repeated local searches in a multistart procedure, starting from initial parameter vectors with values chosen randomly from within the parameter bounds. It should be noted that, while multistarts of local searches are a popular option for parameter estimation, they are usually not the most efficient solution when dealing with large-scale nonlinear models. Due to the multimodal nature of these problems, local methods tend to get stuck in local minima, which can sometimes be very far from the global optimum. Launching local methods from random points leads to spending a large fraction of the computational time in unsuccessful searches. Hence, global optimization methods usually perform better in these cases, especially if, as happens with eSS, they are used in combination with local searches. As an example, Figure shows histograms of the results (i.e., objective function values reached and the frequency with which they were found) obtained with the DHC local method for benchmark B3. Similar outcomes were obtained with the other benchmarks and methods. Complete results for all the benchmarks and with different methods are included in the Supplementary Information online. In all cases, the number of local searches was fixed so that their overall CPU time was comparable to that consumed in optimizations where the global method eSS was used. While there was great variability in the results obtained for the different benchmarks, a conclusion was common to all of them: in all cases, the local methods were outperformed by the global optimization method eSS.\n\nThe convergence curves of the six benchmarks are shown in Figure . Results were obtained with the eSS method on a computer with Intel Xeon Quadcore processor, 2.50 GHz. It can be clearly seen that, due to the differences in size and complexity, the computational cost of estimating the parameters varies among benchmarks. Results show that they can be naturally classified into three different levels:\n\n- B1 and B3 are the most expensive: on our computers, obtaining a reasonably good fit took at least one week.\n\n- B5 and B6 are intermediate in terms of cost; a good fit could be obtained in one day.\n\n- B2 and B4 are the least expensive, with good fits obtained in one or a few hours.\n\nThese computation times can be used as a reference to select the appropriate benchmarks to test a particular optimization method, depending on its focus and the available time. Due to the stochastic nature of the eSS algorithm, results may vary among optimization runs. Figure shows the dispersion of 20 different optimization results for benchmark B4.\n\nTable summarizes the settings and outcomes of the parameter estimations with eSS, including the local method used for each problem. Note that, while DN2FB is generally recommended , we have found that for large-scale problems it may not be the most efficient local method, due to the large number of evaluations needed to calculate the derivatives. 
Hence, for the problems considered here it is outperformed by other methods like DHC, SOLNP, or FMINCON.\n\nOne of the outcomes reported in Table is the cumulative normalized root-mean-square error, $\\sum{}$NRMSE. The root-mean-square error (RMSE) is a standard measure of the goodness of fit obtained for an observable, and is defined as $$\\mathrm{RMSE} = \\sqrt{ \\frac{ \\sum^{n_{\\epsilon}}_{\\epsilon=1} \\sum^{n^{\\epsilon,o}_s}_{s=1} \\left(ym^{\\epsilon,o}_s - y^{\\epsilon,o}_s(p) \\right)^2 } {n_{\\epsilon} \\cdot n^{\\epsilon,o}_s } }$$ with the same notation as in equation (). To account for the different magnitudes of the observables, it is useful to report the normalized root-mean-square error, NRMSE, which scales the RMSE by dividing it by the range of values of the observable: $$\\label{eq:NRMSE}\n\\mathrm{NRMSE} = \\frac{\\mathrm{RMSE}}{max( ym^{\\epsilon,o} )-min( ym^{\\epsilon,o} )}$$ The cumulative normalized root-mean-square error, $\\sum{}$NRMSE, is simply the sum of the NRMSE for all observables.\n\n```latex\n\\begin{table*}[t]\\begin{flushleft}\n \\caption{\n \\textbf{Parameter estimation with eSS (AMIGO implementation): settings and results}\n }\n \\begin{tabular}{|l|l|l|l|l|l|l|}\n \\hline \nModel ID & \\textbf{B1} & \\textbf{B2} & \\textbf{B3} & \\textbf{B4} & \\textbf{B5} & \\textbf{B6} \\\\\n\\hline\n$p^U$ & $5\\cdot p_{nom}$ & $10\\cdot p_{nom}^{(ex)}$ & $10\\cdot p_{nom}^{(ex)}$ & $5\\cdot p_{nom}$ & varying & varying \\\\\n$p^L$ & $0.2\\cdot p_{nom}$ & $0.1\\cdot p_{nom}^{(ex)}$ & $0.1\\cdot p_{nom}^{(ex)}$ & $0.2\\cdot p_{nom}$ & varying & varying \\\\ \nLocal method & DHC & FMINCON & none & FMINCON & DHC & FMINCON \\\\ \nCPU time & $\\approx$170 hours & $\\approx$3 hours & $\\approx$336 hours & $\\approx$1 hour & $\\approx$16 hours & $\\approx$24 hours \\\\\nEvaluations & $6.9678\\cdot10^5$ & $9.0728\\cdot10^4$ & $7.2193\\cdot10^6$ & $1.6193\\cdot10^5$ & $8.8393\\cdot10^4$ & $2.0751\\cdot10^6$ \\\\\n$J_0$ & $5.8819\\cdot10^9$ & $3.1136\\cdot10^4$ & $4.6930\\cdot10^{16}$ & $6.6034\\cdot10^8$ & $3.1485\\cdot10^4$ & $8.5769\\cdot10^5$ \\\\\n$J_f$ & $1.3753\\cdot10^4$ & $2.3390\\cdot10^2$ & $3.7029\\cdot10^{-1}$ & $4.5718\\cdot10^1$ & $3.0725\\cdot10^3$ & $1.0833\\cdot10^5$ \\\\\n$J_{nom}$ & $1.0846\\cdot10^6$ & $-$ & $0$ & $3.9068\\cdot10^1$ & $4.2737\\cdot10^3$ & $ -$ \\\\\n$\\sum$NRMSE$_0$ & $3.5834\\cdot10$ & $8.5995\\cdot10^{-2}$ & $3.5457\\cdot10$ & $4.8005\\cdot10$ & $4.0434\\cdot10^1$ & $2.3808\\cdot10^2$ \\\\ \n$\\sum$NRMSE$_f$ & $5.7558$ & $2.4921$ & $2.9298\\cdot10^{-1}$ & $2.8010$ & $2.7430\\cdot10^1$ & $1.6212\\cdot10^2$ \\\\\n$\\sum$NRMSE$_{nom}$ & $3.8203$ & $-$ & $0$ & $2.8273$ & $3.0114\\cdot10^1$ & $ -$ \\\\ \n\\hline\n\\end{tabular}\n\\\\\nOptimization settings and results obtained for each of the benchmarks with the eSS method, using the implementation provided in the AMIGO toolbox. In some cases the lower ($p^L$) and upper ($p^U$) bounds on the parameters are specified as a function of the nominal parameter vector, $p_{nom}$. There may be exceptions to these bounds, in cases where it makes sense biologically to have a different range of values (e.g. Hill coefficients in the range of 1--12). Cases with exceptions are marked by $^{(ex)}$. In other cases all the parameters have specific bounds; this is marked as ``varying''. The initial objective function value, $J_0$, corresponds to the parameter vector $p_0$ used as initial guess in the optimizations, which is randomly selected between the bounds $p^L$ and $p^U$. 
The only exception is benchmark B2, where $p_0$ is the parameter vector reported in the original publication. \nThe final value achieved in the optimizations is $J_f$. $\\sum$NRMSE is the cumulative normalized root-mean-square error as defined in eq. (\\ref{eq:NRMSE}).\nResults obtained on a computer with Intel Xeon Quadcore processor, 2.50 GHz, using Matlab 7.9 (R2009b) 32-bit.\n\\end{flushleft}\n\\label{tab:pe}\n\\end{table*}\n```\n\nNote that, due to the realistic nature of most of these problems, there may be a lack of identifiability and optimization may result in overfitting: that is, an optimal solution may be found that gives a better fit to the pseudoexperimental data than the one obtained with the nominal parameter vector used to generate the data. This is explained by the fact that, in the presence of measurement noise, the optimal solution manages to partially fit not only the system dynamics but also the noise itself, which of course cannot be achieved by the nominal solution. Hence, in the results reported in Table the optimal objective function value ($J_f$) is sometimes smaller (i.e. better) than the nominal one ($J_{nom}$). This may also happen with the NRMSE values. Note however that, since the objective functions used in the calibration ($J$) and the NRMSE are different metrics, their behavior may be different. For example, for B1 $J_f<J_{nom}$ but NRMSE$_f>$NRMSE$_{nom}$, while for B4 the opposite is true: $J_f>J_{nom}$ and NRMSE$_f<$NRMSE$_{nom}$.\n\nAs an example of the fit between data and model output that is obtained after calibration, let us consider benchmark B5, which uses pseudoexperimental data corresponding to ten different experiments. Figure reports a good match between data and model output; notably, the algorithm manages to reproduce the oscillations in NF$\\kappa$B.\n\nThe fit can also be represented with histograms of the residuals, which show the distribution of the errors in the state variables. This kind of plot can also be used for showing the errors in the recovered parameters when compared to the nominal (true) values. An alternative way of visualizing this relation is by plotting the predicted state (or parameter) values as a function of the true values. This results in a diagonal-like plot; the larger the deviations from the diagonal, the larger the prediction errors. When there are identifiability issues, the fit is typically better for the states than for the parameters, because a good fit to the data does not necessarily ensure that the correct parameters have been recovered. Examples of these plots are shown in Figure , which shows the fits obtained for benchmark B4.\n\n# Conclusions\n\nTo address the current lack of ready-to-run benchmarks for large-scale dynamic models in systems biology, we have presented here a collection of six parameter estimation problems. They cover the most common types of cellular processes, including metabolism, transcription, signal transduction, and development. The benchmarks are made available in a number of formats. As a common denominator, all of the models have been implemented in Matlab and C. When possible (i.e., for benchmarks B1\u2013B5), model descriptions are also given in SBML. Ready-to-run implementations of all the benchmarks are provided in Matlab format (both with and without the AMIGO toolbox) and in COPASI (for benchmarks B1\u2013B4). 
With these files it is straightforward to reproduce the results reported here.\n\nMore importantly, the benchmark files can be easily adapted to test new parameter estimation methods for which a Matlab, C, or COPASI implementation is available. The performance of an existing or newly developed method can be evaluated by comparing its results with those reported here, as well as with those obtained by other methods. To this end, we have provided guidelines for comparing the performance of different optimizers. The problems defined here may also be used for educational purposes, running them as examples in classes or using them as assignments.\n\nFinally, it should be noted that the utility of this collection goes beyond parameter estimation: the models provided here can also be used for benchmarking methods for optimal experimental design, identifiability analysis, sensitivity analysis, model reduction, and in the case of metabolic models also for metabolic engineering purposes.\n\n# Competing interests\n\nThe authors declare that they have no competing interests.\n\n# Author's contributions\n\nJRB, PM, and JJ conceived of the study. JRB and AFV coordinated the study. KS, SB, JS, DCS, AC, JSR, KM, PM, and JJ contributed with models. AFV, DH, KS, and DCS formatted the models and tested the implementations. AFV and DH carried out the computations. AFV, EBC, and JRB analyzed the results. AFV and JRB drafted the manuscript. KS, SB, AC, JSR, KM, EBC, and JJ helped to draft the manuscript. All authors read and approved the final manuscript.\n\n# Supplementary Information\n\n\n\n# Acknowledgments\n\nThis work was supported by the EU project \"BioPreDyn\" (EC FP7-KBBE-2011-5, grant number 289434). We would like to thank Attila G\u00e1bor for reading the manuscript, providing critical comments, and finding bugs in the codes; David Rodr\u00edguez Penas for helping in debugging the codes; and Thomas Cokelaer for providing the SBML qual file of model B5.","meta":{"dup_signals":{"dup_doc_count":39,"dup_dump_count":30,"dup_details":{"curated_sources":2,"2023-14":1,"2022-49":1,"2022-27":1,"2022-05":1,"2021-39":1,"2021-21":1,"2021-10":1,"2020-45":1,"2020-34":1,"2020-24":1,"2019-47":1,"2019-43":1,"2019-30":1,"2019-22":3,"2019-13":2,"2019-04":2,"2018-47":1,"2018-43":1,"2018-39":1,"2018-30":2,"2018-22":1,"2018-13":2,"2018-05":1,"2017-47":1,"2017-39":1,"2017-34":1,"2023-40":1,"2024-26":3,"2024-18":1}},"filename":"out\/1407.5856_extract_benchmarks_arxiv_ARTICLE.tex.md"},"subset":"arxiv"} +{"text":"abstract: Quantum entanglement is one of the core features of quantum theory. While it is typically revealed by measurements along carefully chosen directions, here we review different methods based on so-called *random* or *randomized measurements*. Although this approach might seem inefficient at first, sampling correlations in various random directions is a powerful tool to study properties which are invariant under local-unitary transformations. Based on random measurements, entanglement can be detected and characterized without a shared reference frame between the observers or even if local reference frames cannot be defined.\n .\n This overview article discusses different methods using random measurements to detect genuine multipartite entanglement and to distinguish SLOCC classes. 
Furthermore, it reviews how measurement directions can efficiently be obtained based on spherical designs.\nauthor: Lukas Knips\ntitle: A Moment for Random Measurements\n\n# Introduction\n\n*Entangled quantum systems are more strongly correlated than classical systems can be.* This simplified statement proves to be remarkably rich by raising several questions, from practical ones - \"How do we measure those correlations?\" - to more subtle ones - \"What does *stronger* mean here and *how* much stronger are the correlations? Are there exceptions to this rule? Can we use this relation to characterize entanglement?\"\n\nWhile entanglement of bipartite or multipartite systems is typically detected using a set of carefully chosen measurements on all subsystems, here, we discuss a simpler and, as it turns out, yet powerful method to analyze correlations and entanglement. There, one samples from the entire set of correlations by measuring in random directions, instead of considering a specific set of correlations. This type of measurement is often called a *random* or *randomized measurement*. Depending on the context, these can be either controlled measurements without a shared reference frame, measurements without active control over (but knowledge of) the measurement direction, or even measurements with neither control nor knowledge about the measurement direction.\n\nPrevious works showed the violation of Bell-type inequalities without the need for a shared reference frame\u00a0, but with the ability to repeat previously conducted measurements. Random measurements in the sense of measurements without control have been used in the context of many-body systems\u00a0, for the verification of quantum devices\u00a0, for the detection of entanglement\u00a0, for the prediction of fidelities, entanglement entropies and various other properties\u00a0 as well as for the characterization and classification of genuine multipartite entanglement\u00a0. Recently, it was shown that even bound entangled states, i.e., states so weakly entangled that their entanglement is not recognized by the PPT (positive partial transpose) criterion\u00a0, can be characterized in a reference-frame independent manner\u00a0.\n\nIn this perspective article, we will discuss a work of Ketterer et al., which recently appeared in *Quantum*\u00a0. Before we will review their means of entanglement detection and classification, we will introduce the general concept of random measurements, provide an intuitive understanding for them and give context by discussing other methods for detection and classification of entanglement in this scenario. Finally, we will discuss their approach for selecting local measurement directions based on spherical $t$-designs.\n\n# Scenario\n\nTo illustrate the scenario, we can first think of a two-qubit state whose subsystems are sent to two observers via unitary but unknown quantum channels as shown in Fig.\u00a0. Hence, the goal is to characterize the quantum state as well as possible despite the lack of a shared reference frame. 
Interestingly, although the correlation value of the outcomes of both observers (which quantifies the probability of both results being equal or opposite) is random as it depends on the respective measurement directions, the *distribution* of correlation values turns out to be a useful resource for describing the state.\n\nIf the unknown unitary transformations of the quantum channels are furthermore time-dependent, i.e., the channels are *fluctuating* or *noisy*, one could naively perform measurements in any fixed direction. However, the type of fluctuations strongly influences the measurement results: are the fluctuations distributed uniformly (according to the Haar measure) or are we oversampling some and undersampling other directions? To mitigate this problem and avoid any dependence on the type of noise of the quantum channels, the measurement directions can themselves be chosen Haar-randomly. In this way, concatenating the possibly biased noise channel of the environment with the intentionally applied channel of Haar-randomly distributed unitary rotations removes any bias.\n\n# Intuitive Picture\n\nTo establish an intuitive understanding of how entangled and separable quantum states behave in this scenario and how we can make use of the stronger correlations of an entangled state, let us do a short numerical experiment. Take the pure product state $|\\psi^{\\mathrm{prod}}\\rangle=|00\\rangle$ and a Bell state, e.g., $|\\psi^-\\rangle=(|01\\rangle-|10\\rangle)\/\\sqrt{2}$. Now repeatedly draw two $2\\times2$ unitary matrices according to the Haar measure (see, e.g., for a practical recipe). Apply the first (second) of those unitaries to the first (second) qubit of both states and evaluate, e.g., $\\langle\\sigma_z\\otimes\\sigma_z\\rangle$, i.e., the correlation value in the $zz$-direction. A histogram of the resulting correlation values, shown in figure , reveals the very distinct behavior of the product and the maximally entangled state. The maximally entangled state results in a uniform distribution of the expectation values of the correlations, whereas small absolute values are much more probable than large ones for the product state.\n\nFor those two states, it is still easy to understand the origin of the corresponding distributions. Consider the schematic arrow diagrams in Fig. , where in a) the two red arrows represent the spins of a product state after applying arbitrary local unitary (LU) transformations, each parametrized here by a single angle. A measurement of $\\sigma_z\\otimes\\sigma_z$ is just given by the product of the results, which are obtained by the projection onto the $\\sigma_z$ directions. Both angles, $\\alpha$ and $\\beta$, have to be close to $0$ or to $\\pi$ to give a large absolute correlation value. In the example, we show the case of $\\alpha=75^\\circ$ and $\\beta=60^\\circ$ leading to a $zz$-correlation of about $0.13$.\n\nFor the maximally entangled state in Fig.\u00a0 b), the situation is very different. We illustrate the state as a superposition of both red arrows with both yellow arrows. By applying an LU transformation of the form $U\\otimes U$ on both qubits such that, say, the first qubit is aligned with its measurement direction (i.e., such that $U$ is a rotation by an angle $-\\alpha$), the state does not change (up to a global phase factor). 
The expectation value of $\\sigma_z\\otimes\\sigma_z$ therefore now obviously only depends on the single relative angle $\\beta-\\alpha$ instead of the two angles $\\alpha$ and $\\beta$. For the same angles as in the product-state case, we now obtain a $zz$-correlation of about $-0.71$.\n\nFormally, we can obtain the distributions of correlations $E$ by integration over the Bloch spheres $S^{2}\\times S^{2}$ as $$\\begin{aligned}\n&p_{\\mathrm{prod}}(E) = \\frac{1}{(4\\pi)^2} \\int_{S^2} {\\mathrm{d}U_1}\\int_{S^2} {\\mathrm{d}U_2} \\delta(E-E_{\\mathrm{prod}}(U_1,U_2)) \\nonumber \\\\\n&= \\frac{1}{4} \\int_0^{\\pi} \\sin(\\theta_1) {\\mathrm{d}\\theta_1}\\int_0^{\\pi} \\sin(\\theta_2){\\mathrm{d}\\theta_2} \\delta(E - \\cos(\\theta_1) \\cos(\\theta_2) ) \\nonumber \\\\\n&= -\\frac{1}{2} \\log(|E|),\\\\\n&p_{\\mathrm{Bell}}(E) = \\frac{1}{(4\\pi)^2} \\int_{S^2} {\\mathrm{d}U_1}\\int_{S^2} {\\mathrm{d}U_2} \\delta(E-E_{\\mathrm{Bell}}(U_1,U_2)) \\nonumber \\\\\n&= \\frac{1}{4} \\int_0^{\\pi} \\sin(\\theta_1) {\\mathrm{d}\\theta_1}\\int_0^{\\pi} \\sin(\\theta_2){\\mathrm{d}\\theta_2} \\delta(E - \\cos(\\theta_1-\\theta_2) ) \\nonumber \\\\\n&= \\frac{1}{2}.\n\\end{aligned}$$ In the histograms of Fig.\u00a0, we have also shown the distributions of two other states. The Werner state, as the mixture of the Bell state (corresponding to a uniform distribution) and the maximally mixed state (corresponding to a Dirac delta peak at $0$ as the maximally mixed state $\\mathbbm{1}\/4$ always results in $\\langle \\sigma_z\\otimes\\sigma_z\\rangle=0$, independently of the measurement directions) with mixing parameter $p$, results in a uniform distribution in the range $[-p,p]$ (shown above for $p=1\/\\sqrt{3}$), i.e., mixing white noise to a Bell state bounds the possible correlations in this sense. The two-qubit marginal of a tripartite $W$ state, however, gives yet another distinct distribution.\n\nDue to the nature of the measurements, two quantum states which are equivalent up to local unitary transformations will show the same distribution of correlations. Therefore, all pure two-qubit product states result in a logarithmic distribution, while, for example, all maximally entangled two-qubit states result in a uniform distribution.\n\n# Statistical Moments\n\nTo characterize such distributions of correlations, statistical moments have proven to be powerful. The $t$-th moment of a probability distribution $p(x)$ is given by $$m^{(t)} = \\int x^t p(x) {\\mathrm{d}x}.$$ The first moment ($t=1$) is the mean value. The next lower *centralized* moments (i.e., the moments after shifting the distribution around its mean) are the variance, the skewness and the kurtosis. As we are dealing with a symmetric distribution here, the mean value vanishes and, hence, the centralized moments are identical to the moments themselves. In our scenario, where all moments are finite and the moment-generating function has a positive radius of convergence, knowing all moments allows one to uniquely determine the distribution\u00a0. Random measurements can be used to obtain the statistical moments of the distribution of expectation values. The scheme naturally generalizes to the case of more than two parties\u00a0 and is not limited to qubits .\n\nBy considering the measurement results on a subset of parties, also distributions of correlations of marginal states can be retrieved. 
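\n\nAs a concrete illustration of how such moments can be estimated, the following minimal Python sketch samples Haar-random local unitaries (via a standard QR-based recipe; the sample size is an arbitrary choice) and estimates the second moment of the two-qubit correlation distributions discussed above. The expected values, $1\/9$ for the product state and $1\/3$ for the Bell state, follow directly from the distributions $p_{\\mathrm{prod}}$ and $p_{\\mathrm{Bell}}$ given above.\n\n```python\nimport numpy as np\n\nrng = np.random.default_rng(1)\n\ndef haar_unitary_2x2(rng):\n    # Haar-random U(2) via QR decomposition of a complex Gaussian matrix.\n    z = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))) \/ np.sqrt(2)\n    q, r = np.linalg.qr(z)\n    return q * (np.diag(r) \/ np.abs(np.diag(r)))  # fix the column phases\n\nsz = np.diag([1.0, -1.0])\nket00 = np.array([1, 0, 0, 0], dtype=complex)                # product state |00>\nbell = np.array([0, 1, -1, 0], dtype=complex) \/ np.sqrt(2)   # Bell state |psi->\n\ndef correlation(psi, U1, U2):\n    # E = <psi| (U1 sz U1^dag) (x) (U2 sz U2^dag) |psi>\n    A = U1 @ sz @ U1.conj().T\n    B = U2 @ sz @ U2.conj().T\n    return np.real(psi.conj() @ np.kron(A, B) @ psi)\n\nsamples_prod, samples_bell = [], []\nfor _ in range(20000):\n    U1, U2 = haar_unitary_2x2(rng), haar_unitary_2x2(rng)\n    samples_prod.append(correlation(ket00, U1, U2))\n    samples_bell.append(correlation(bell, U1, U2))\n\n# Second moments: roughly 1\/9 for the product state and 1\/3 for the Bell state.\nprint('m2 product:', np.mean(np.square(samples_prod)))\nprint('m2 Bell   :', np.mean(np.square(samples_bell)))\n```\n\n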
As the second moment is the first non-trivial one here, we will from now on denote it just as $m:= m^{(2)}$ and specify the respective set of parties it pertains to using a subscript. The combined information of the second moments of the *full* distribution ($m_{1,2,\\dots,n}$) involving all $n$ observers together with those of all marginal states ($m_{1}$, $m_{1,2}$, $m_{1,2,3}$, $\\dots$, $m_{(n-1),n}$, $\\dots$, $m_{n}$) allows us to determine the purity of an $n$-qubit state\u00a0 as $$\\begin{aligned}\n\\operatorname{tr}{\\varrho^2}=\\frac{1}{2^n} \\sum_{\\mathcal{A} \\in \\mathbbm{P}(\\mathcal{S})}{ 3^{|\\mathcal{A}|} \\, m_\\mathcal{A}},\n\\end{aligned}$$ where $\\mathbbm{P}(\\mathcal{S})$ is the set of all subsets of $\\mathcal{S}=\\{1,\\ldots,n\\}$ and $|\\mathcal{A}|$ denotes the cardinality (number of elements) of the set $\\mathcal{A}$. Here, $m_\\mathcal{A}$ is the second moment of the distribution involving the observers given by $\\mathcal{A}$. In other words, the purity is given by the weighted sum of second moments of the full distribution and all marginal distributions. In the remainder of this perspective, we will discuss how to detect and to characterize entanglement based on statistical moments.\n\n# Detecting Entanglement\n\nA remarkably simple method to detect entanglement was presented in\u00a0. For any pure, fully separable $n$-qubit state $|\\psi\\rangle=|\\psi_1\\rangle\\otimes|\\psi_2\\rangle\\otimes\\dots\\otimes|\\psi_n\\rangle$, the length of correlations (the sum of the squared correlations over all basis directions) is $1$. If *each* observer $j$ aligns their measurement apparatus along $|\\psi_j\\rangle$, they jointly observe a correlation of $1$, whereas any orthogonal measurement direction will result in the loss of correlated outcomes. This relation holds independently of the number of parties and can be adapted for arbitrary dimensions. Choosing $M$ measurement directions completely randomly with $K$ repetitions each allows one to estimate the length of correlations. If the estimate significantly exceeds the product-state threshold, one can, after proper statistical analysis, conclude that the state must carry *some* entanglement.\n\nSubsequent work\u00a0 studied the length of correlations of various genuinely multipartite entangled states and extended the results of\u00a0 to mixed states. There, entanglement detection based on a single measurement setting is derived explicitly. However, it was found that the length of correlations (alone) is not an entanglement measure as it can increase under local operations and classical communication (LOCC). Nevertheless, random measurements can be used to witness genuine multipartite entanglement, i.e., entanglement truly involving all parties, as we will see below. Moreover, classes of states inequivalent under stochastic local operations and classical communication (SLOCC)\u00a0 can be distinguished in this way, as we will discuss.\n\nYet another approach for entanglement detection is used in . There, a measurement direction is randomly drawn from a specific set. In addition, a framework for the probabilistic use of entanglement witnesses is provided. With this, entanglement can in some cases be detected with a single copy with high confidence\u00a0 or different classes of entanglement can be discriminated using a few copies of the state in a more general approach\u00a0.\n\nAlso note that it is common that entanglement criteria are sufficient, but not necessary. 
For example, any method, whether based on random measurements or on fully controlled ones, that is only using *full* correlations, i.e., correlations between all $n$ parties without inspecting correlations between subsets of parties, will miss some entangled states: There are genuinely $n$-partite entangled states without any correlations between all $n$ parties\u00a0, which therefore will result in a Dirac-delta peak for the full distribution and are on that level indistinguishable from white noise.\n\n# Detecting Genuine Multipartite Entanglement\n\n## Using Second Moments of Marginals\n\nIn , the second moments of the distributions are used for the detection of genuine multipartite entanglement (entanglement which truly involves all parties and cannot be broken down into pure biseparable states or even mixtures of states biseparable with respect to different bipartitions). The key ingredient of that strategy is to relate the second moment of the distribution of correlations to the second moments of marginal distributions and test that quantifier with a purity-dependent threshold. For a pure product state $|\\psi_{\\mathcal S}\\rangle=|\\psi_{{\\mathcal A}_1}\\rangle\\otimes|\\psi_{{\\mathcal A}_2}\\rangle$, the second moment of the distribution of correlations when considering the set of parties $\\mathcal S$ factorizes into the second moments of the corresponding marginal distributions, i.e., $m_{{\\mathcal S}}=m_{{\\mathcal A}_1}m_{{\\mathcal A}_2}$. For a pure state, genuine multipartite entanglement can therefore be detected by verifying $m_{{\\mathcal S}}>m_{{\\mathcal A}_1}m_{{\\mathcal A}_2}$ for every possible bipartition ${\\mathcal A}_1|{\\mathcal A}_2$ such that ${\\mathcal S}={\\mathcal A}_1\\cup{\\mathcal A}_2$.\n\nUnfortunately, for non-pure biseparable states, the second moment of the full distribution might be larger than the product of the marginals' second moments. To mitigate this problem, purity-dependent bounds for detecting genuine multipartite entanglement can be used. For an $n$-qubit state we can consider $${\\mathcal M}_{n} := m_{{\\mathcal S}} - \\frac{1}{2} \\sum_{{\\mathcal A} \\in \\{\\mathbbm{P}({\\mathcal S}) \\setminus ({\\mathcal S} \\cup \\varnothing)\\}}{ m_{\\mathcal A} m_{{\\mathcal S} \\setminus {\\mathcal A}}}$$ to capture by how much the full distribution's second moment can be expressed in terms of the marginals' ones. For example, it has been found\u00a0 that in the case of $n=4$ qubits, all biseparable states fulfill the relation $$\\begin{aligned}\n{\\mathcal M}_{4}\n\\le \\tfrac{8}{81}(1-\\operatorname{tr}\\varrho^2).\n\\label{eq:M4}\n\\end{aligned}$$ A violation of the latter inequality therefore indicates genuine fourpartite entanglement.\n\nIn Fig.\u00a0, the values of ${\\mathcal M}_{4}$ based on the distribution of fourpartite correlations for experimental photonic states (four qubits encoded in polarization and path degrees of freedom of two photons from type-I spontaneous parametric down-conversion, see\u00a0 for details on the setup) are compared with the purity-dependent threshold given by Eq.\u00a0()\u00a0. As the red (four-qubit GHZ state) and blue (linear four-qubit Cluster state) data points are above the threshold, those states are shown to be genuinely fourpartite entangled. 
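\n\nTo make the ingredients of this criterion concrete, the following self-contained Python sketch evaluates the second moments, the purity, and ${\\mathcal M}_{4}$ for an ideal four-qubit GHZ state. It relies on the standard identity that, for Haar-random local directions, $m_{\\mathcal A}$ equals the sum of the squared correlation-tensor elements supported on ${\\mathcal A}$ divided by $3^{|{\\mathcal A}|}$ (an assumption of this sketch, not spelled out above); the choice of state and all names are illustrative.\n\n```python\nimport numpy as np\nfrom itertools import combinations, product\n\n# Pauli matrices and an ideal 4-qubit GHZ state (example input only).\nI2 = np.eye(2)\nX = np.array([[0, 1], [1, 0]], dtype=complex)\nY = np.array([[0, -1j], [1j, 0]])\nZ = np.diag([1.0, -1.0]).astype(complex)\npaulis = [X, Y, Z]\n\nn = 4\nghz = np.zeros(2 ** n, dtype=complex)\nghz[0] = ghz[-1] = 1 \/ np.sqrt(2)\nrho = np.outer(ghz, ghz.conj())\n\ndef kron_all(ops):\n    out = np.array([[1.0 + 0j]])\n    for op in ops:\n        out = np.kron(out, op)\n    return out\n\ndef second_moment(rho, subset):\n    # m_A: sum of squared correlations with non-identity Paulis on exactly\n    # the qubits in A, divided by 3^|A| (Haar-random local directions).\n    total = 0.0\n    for choice in product(range(3), repeat=len(subset)):\n        ops = [I2] * n\n        for q, c in zip(subset, choice):\n            ops[q] = paulis[c]\n        total += np.real(np.trace(rho @ kron_all(ops))) ** 2\n    return total \/ 3 ** len(subset)\n\nsubsets = [s for r in range(1, n + 1) for s in combinations(range(n), r)]\nm = {s: second_moment(rho, s) for s in subsets}\n\npurity = (1 + sum(3 ** len(s) * m[s] for s in subsets)) \/ 2 ** n\nfull = tuple(range(n))\nM4 = m[full] - 0.5 * sum(m[s] * m[tuple(sorted(set(full) - set(s)))]\n                         for s in subsets if 0 < len(s) < n)\nprint('purity:', purity)\nprint('M4:', M4, ' biseparable bound:', 8 \/ 81 * (1 - purity))\n```\n\nFor this pure GHZ state the purity is $1$, so the biseparable bound vanishes, and the positive value of ${\\mathcal M}_{4}$ signals genuine fourpartite entanglement.\n\n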
A prepared triseparable state \\[$\\propto \\left(|00\\rangle+|11\\rangle\\right)\\otimes|0\\rangle\\otimes|0\\rangle$\\] and a biseparable state \\[$\\propto \\left(|00\\rangle+|11\\rangle\\right)\\otimes\\left(\\sin\\varphi\\,|00\\rangle+\\cos\\varphi\\,|11\\rangle\\right)$ with $\\varphi\\approx0.2$\\], however, contain only one and two bipartite entangled marginals, respectively, as one finds by considering ${\\mathcal M}_{3}$ for all tripartite and ${\\mathcal M}_{2}$ for all bipartite marginals of those states. Thus, this approach not only allows one to detect genuine multipartite entanglement for mixed states, but generally provides insights into the entanglement structure.\n\n## Using Higher Order Moments\n\nThe combination of moments of various orders may in general capture more information about a distribution of correlations and, hence, about the underlying quantum state than restricting the analysis to moments of second order only. In two recent works\u00a0, the latest of which was published in *Quantum*\u00a0, Ketterer et al. use a combination of the second and the fourth moment, denoted there by $\\mathcal{R}^{(2)}$ and $\\mathcal{R}^{(4)}$ (with $\\mathcal{R}^{(t)}\\equiv m^{(t)}$), respectively. Obviously, those moments are not entirely independent of each other. For example, a vanishing second moment $\\mathcal{R}^{(2)}$ indicates a Dirac delta distribution, which in turn requires $\\mathcal{R}^{(4)}$ to vanish. They discuss possible combinations of those two moments for two, three and four qubits. In a combined analytical and numerical study, the authors identify regions in the $\\mathcal{R}^{(2)}$-$\\mathcal{R}^{(4)}$ plane which allow them to directly conclude, e.g., that a state cannot be biseparable or that it cannot belong to a specific SLOCC class.\n\nIn Fig.\u00a0, three-qubit states are sampled and represented in the $\\mathcal{R}^{(2)}$-$\\mathcal{R}^{(4)}$ plane. The blue-shaded area is outlined by different LU-inequivalent types of biseparable states. Ketterer et al. propose the inequality $$\\mathcal{R}^{(4)} \\geq \\frac{1}{425}\\left[972 \\left(\\mathcal{R}^{(2)}\\right)^2+90\\mathcal{R}^{(2)}-5\\right],\\label{eq:threequbit_bisep}$$ which is shown as the green dashed line and serves as a demarcation between biseparable and genuinely multipartite entangled states. Therefore, states for which the fourth moment $\\mathcal{R}^{(4)}$ is below this threshold are not biseparable and are hence shown to be genuinely multipartite entangled.\n\n# Witnessing SLOCC classes\n\nFurthermore, in the authors discuss moments of distributions of correlations for witnessing SLOCC classes, which allows one to decide whether a state is reversibly convertible into, say, a $W$ state. Figure shows sampled four-qubit states in the $\\mathcal{R}^{(2)}$-$\\mathcal{R}^{(4)}$ plane. The region with the solid border contains states of the $\\mathcal{W}^{(4)}$ class, i.e., the SLOCC class of $W$ states, whereas the region surrounded by the dashed line encompasses its convex hull $\\operatorname{Conv}(\\mathcal{W}^{(4)})$. 
States whose moments lie outside of the regions enclosed by the solid and dashed lines are shown not to belong to the SLOCC classes $\\mathcal{W}^{(4)}$ and $\\operatorname{Conv}(\\mathcal{W}^{(4)})$, respectively.\n\nWe already see from Fig.\u00a0 that $\\mathcal{R}^{(2)}$ might be very helpful for witnessing that a state is not a member of the mixed $W$ class $\\operatorname{Conv}(\\mathcal{W}^{(4)})$. In contrast, additional consideration of $\\mathcal{R}^{(4)}$ does not improve the ability for detection significantly. More generally, the authors of\u00a0 derive a witness for $\\operatorname{Conv}(\\mathcal{W}^{(n)})$ for an arbitrary number of $n$ qubits. If the second moment $\\mathcal{R}^{(2)}$ of a distribution of correlations is larger than $$\\chi^{(n)}:=\\frac{5-\\frac{4}{n}}{3^n},$$ the $n$-qubit state is not a member of the mixed $W$ class $\\operatorname{Conv}(\\mathcal{W}^{(n)})$\u00a0. For $4$ qubits, the threshold is $\\chi^{(4)}=4\/81\\approx0.049$. Hence, all states with $\\mathcal{R}^{(2)}>\\chi^{(4)}=4\/81$, i.e., on the right-hand side of $|\\mathrm{W}_4\\rangle$ and $|\\phi\\rangle\\otimes|\\mathrm{GHZ}_3\\rangle$ in Fig.\u00a0, are shown not to belong to the mixed $W$ class.\n\nPlease note that this is not a statement about genuine multipartite entanglement. For example, both the biseparable state $|\\mathrm{Bell}\\rangle\\otimes|\\mathrm{Bell}\\rangle$ and the genuinely fourpartite entangled state $|\\mathrm{GHZ}_4\\rangle$ are outside of this region as both are not members of $\\operatorname{Conv}(\\mathcal{W}^{(4)})$.\n\n# Spherical Designs\n\nKetterer et al.\u00a0 not only use higher-order moments to detect and characterize entanglement, but they also follow a different approach for selecting measurement directions. Up to this point of the discussion it has been assumed that the distributions of correlations are obtained by sampling over a large set of random directions, where the distribution of directions should follow a Haar random distribution. Whereas in, e.g., the sampling was done over a large set of Haar randomly distributed measurement directions to describe the distributions of correlations, the approach of Ketterer et al. allows one to fix the number of measurement directions if a specific moment is to be calculated. This significantly reduces the measurement effort at the cost of requiring active control over the local measurement directions.\n\nUsing the notation of Ref.\u00a0, the $t$-th moment $\\mathcal{R}^{(t)}$ of the distribution of correlations of an $n$-qubit state can be obtained from $$\\begin{aligned}\n\\mathcal{R}^{(t)} &= \\!\\!\\int\\limits_{\\mathcal{U}(2)} \\!\\!\\!\\mathrm{d}\\eta (U_1) \\cdots\\!\\! \\int\\limits_{\\mathcal{U}(2)}\\!\\!\\! 
\\mathrm{d}\\eta(U_n)\\langle U_1\\sigma_z U_1^\\dagger\\otimes \\dots \\otimes U_n\\sigma_z U_n^\\dagger \\rangle^t \\nonumber \\\\\n&= \\frac{1}{\\left(4\\pi\\right)^n} \\int_{S^2} \\mathrm{d}\\vec{u}_1 \\cdots \\int_{S^2} \\mathrm{d}\\vec{u}_n E(\\vec{u}_1,\\dots,\\vec{u}_n)^{t}, \\label{eq:tth_moment_integration}\n\\end{aligned}$$ where $E(\\vec{u}_1,\\dots,\\vec{u}_n):=\\langle \\sigma_{\\vec{u}_1}\\otimes\\dots\\otimes\\sigma_{\\vec{u}_n} \\rangle$ denotes the correlation along specific local measurement directions $\\vec{u}_i$ with $\\sigma_{\\vec{u}_i}=\\vec{u}_i\\cdot\\vec{\\sigma}$, where $\\vec{\\sigma}=\\left(\\sigma_x,\\sigma_y,\\sigma_z\\right)^{T}$ is a vector of the Pauli matrices $\\sigma_x$, $\\sigma_y$ and $\\sigma_z$, while $\\eta$ and $\\mathrm{d}\\vec{u}_i=\\sin\\theta_i\\mathrm{d}\\theta_i\\mathrm{d}\\varphi_i$ are the Haar measure on the unitary group $\\mathcal{U}(2)$ and the uniform measure on the Bloch sphere $S^2$, respectively.\n\nTo determine the average of a homogeneous polynomial $P_{t^\\prime}:S^2\\rightarrow\\mathbb{R}$ of order $t^\\prime$ over the Bloch sphere $S^2$, it is sufficient to sample a finite set of points as shown in\u00a0. For that, they use a so-called spherical $t$-design in dimension three which is defined by the finite set of points $\\{\\vec{u}_i|i=1,\\dots,L^{(t)}\\}\\subset S^2$ such that $$\\int_{S^2} \\mathrm{d}\\vec{u} P_{t^\\prime}(\\vec{u}) = \\frac{1}{L^{(t)}}\\sum_{k=1}^{L^{(t)}}P_{t^\\prime}(\\vec{e}_k)$$ holds for all homogeneous polynomials of order $t^\\prime$ with $t^\\prime\\leq t$. Hence, for the respective spherical $t$-design, $L^{(t)}$ determines the number of measurement directions to consider. Using this framework, Ketterer et al. evaluate the $t$-th moment of the correlations of an $n$-qubit state as $$\\mathcal{R}^{(t)} = \\frac{1}{\\left(L^{(t)}\\right)^n}\\sum_{k_1,\\dots,k_n=1}^{L^{(t)}} \\langle \\sigma_{\\vec{u}_1}\\otimes\\dots\\otimes\\sigma_{\\vec{u}_n} \\rangle^{t},$$ instead of using the integration as in Eq.\u00a0(). Although they also show a similar derivation for qudit states employing *unitary* $t$-designs, we here restrict our discussion to the qubit case using *spherical* $t$-designs.\n\nIn Fig.\u00a0, the $L^{(3)}=6$ directions $\\{\\pm \\vec{e}_i|i=x,y,z\\}$ of a spherical $3$-design as well as the $L^{(5)}=12$ directions of a $5$-design are shown. If one is only interested in the second moment (polynomial of order $2$), the $6$ local measurement directions of $L^{(3)}$ are sufficient. Moreover, as even-order moments are invariant under a parity transformation of the measurement direction, skipping $\\{-\\vec{e}_i|i=x,y,z\\}$ does no harm. For obtaining the fourth moment, $L^{(5)}\/2=6$ local measurement directions are suitable. With this method, the selection of measurement directions from the pseudo-random process of a spherical design allows one to mimic uniform averages over the sphere\u00a0.\n\n# Conclusion and Future Research\n\nIn this perspective, we have discussed what we can learn from random measurements despite the lack of shared or even local reference frames. Distributions of correlations can reveal entanglement and exclude any type of separability. Although considering only the second moment of these distributions is not sufficient for constructing an entanglement measure, it can be used for witnessing SLOCC classes as recently shown in . 
As random measurements are inherently not sensitive to local unitary transformations, they keep the focus on LU-invariant properties.\n\nRandom measurements turn out to be a powerful tool for entanglement detection and classification. Here, we did not elaborate on statistical errors involved in those measurements, which require some further research. Also, we focused on quantum *states*. Of course, random measurements are also of interest for characterizing quantum *processes*. Another open question is the tomographic reconstruction using random measurements: Which states can be discriminated and what information will stay hidden? Also, it is worth discussing how and to what degree random measurements can be employed for applications such as quantum metrology.\n\n# Acknowledgments\n\nI am grateful to Tomasz Paterek, Jasmin D. A. Meinecke, Karen Wintersperger, and Nicolai Friis for their helpful comments and suggestions for improvements of the manuscript.","meta":{"dup_signals":{"dup_doc_count":14,"dup_dump_count":12,"dup_details":{"curated_sources":2,"2023-23":1,"2023-06":1,"2022-40":1,"2022-21":1,"2021-49":1,"2021-31":1,"2021-17":1,"2021-04":1,"2020-50":2,"2023-40":1,"2024-18":1}},"filename":"out\/2011.10591_extract_Lukas_Knips_A_Moment_for_Random_Measurements_arxiv.tex.md"},"subset":"arxiv"}
Performance<\/span> tests show that significant memory savings can be achieved using the new approach and a monetary cost analysis provides a practical measure of its utility.\nauthor: Justin S.\u00a0Hogg$^\\textrm{\\Yingyang}$; Leonard A.\u00a0Harris$^\\textrm{\\Yingyang}$; Lori J.\u00a0Stover; Niketh S.\u00a0Nair; James R.\u00a0Faeder\ntitle: Exact hybrid particle\/population simulation of rule-based models of biochemical systems\n\n# Author Summary\n\nRule-based modeling is a modeling paradigm that addresses the problem of combinatorial complexity in biochemical systems. The key idea is to specify only those components of a biological macromolecule that are directly involved in a biochemical transformation. Until recently, this \"pattern-based\" approach greatly simplified the process of model *building* but did nothing to improve the performance of model *simulation*. This changed with the introduction of \"network-free\" simulation methods, which operate directly on the compressed rule set of a rule-based model rather than on a fully-enumerated set of reactions and species. However, these methods represent every molecule in a system as a particle, limiting their use to systems containing less than a few million molecules. Here, we describe an extension to the network-free approach that treats rare, complex species as particles and plentiful, simple species as population variables, while retaining the exact dynamics of the model system. By making more efficient use of computational resources for species that do not require the level of detail of a particle representation, this hybrid particle\/population approach can simulate systems much larger than is possible using network-free methods and is an important step towards realizing the practical simulation of detailed, mechanistic models of whole cells.\n\n# Introduction\n\n## Rule-based modeling\n\nCell signaling encompasses the collection of cellular processes that sample the extracellular environment, process and transmit that information to the interior of the cell, and regulate cellular responses. In a typical scenario, molecules outside of the cell bind to cognate receptors on the cell membrane, resulting in conformational changes or clustering of receptors. A complex series of protein binding and biochemical events then occurs, ultimately leading to the activation or deactivation of proteins that regulate gene expression or other cellular processes . A typical signaling protein possesses multiple interaction sites with activities that can be modified by direct chemical modification or by the effects of modification or interaction at other sites. This complexity at the protein level leads to a combinatorial explosion in the number of possible species and reactions<\/span> at the level of signaling networks .\n\nCombinatorial complexity poses a major barrier to the development of detailed, mechanistic models of biochemical systems. Traditional modeling approaches that require manual enumeration of all potential species and reactions in a network are infeasible or impractical .<\/span> This has motivated the development of rule-based modeling languages, such as the language (BNGL) , Kappa , and others <\/span>, that provide a rich yet concise description of signaling proteins and their interactions. The combinatorial explosion problem is avoided by representing interacting molecules as structured objects and using pattern-based rules to encode their interactions. 
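To make the scale of the problem concrete, consider a toy molecule with $n$ independent binary modification sites (an assumption made purely for illustration, not a model discussed in this paper): the number of distinct species grows as $2^n$ and the number of directed reactions as $n \cdot 2^n$, while a rule-based description needs only $2n$ rules, one addition and one removal rule per site. A minimal sketch of this count:

```python
# Toy illustration (for exposition only): a single molecule with n independent
# binary modification sites. Species and reactions explode exponentially,
# while the number of rules stays linear in n.
def counts(n_sites: int):
    species = 2 ** n_sites               # every on/off combination is a distinct species
    reactions = n_sites * 2 ** n_sites   # each species can toggle any one of its sites
    rules = 2 * n_sites                  # one "add" and one "remove" rule per site
    return species, reactions, rules

for n in (2, 4, 8, 12):
    s, r, ru = counts(n)
    print(f"{n:2d} sites: {s:6d} species, {r:8d} reactions, {ru:3d} rules")
```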
In the graph-based formalisms of BNGL and Kappa, molecules are represented as graphs and biochemical interactions by graph-rewriting rules. Rules are *local* in the sense that only the properties of the reactants that are transformed, or are required for the transformation to take place, affect their ability to react. As such, each rule defines a class of reactions that share a common set of transformations (e.g., the formation of a bond between molecules) and requirements for those transformations to take place (e.g., that one or more components have a particular covalent modification). The number of reactions encoded by a rule varies depending on the specifics of the model; a rule-based encoding is considered compact if it contains rules that encode large numbers of reactions. Overviews of rule-based modeling with BNGL can be found in Sec.\u00a0S3.1 of Text\u00a0S1 and Refs.\u00a0. A description of the graph-theoretic formalism underlying BNGL is provided in Sec.\u00a0S4.1 of Text\u00a0S1, building on a previous graph-theoretical treatment .<\/span>\n\n## Network-based and network-free simulation of rule-based models<\/span>\n\n An important characteristic of rule-based models is that they can encode both finite and *infinite* reaction networks. If the network is finite and \"not too large\" ($\\lesssim$``{=html}10\u2006000 reactions ) it can be generated from the rule-based model algorithmically by a process known as \"network generation\" . Network generation begins by applying the rules of a rule-based model to a set of initial \"seed\" species, which define the initial state of the model system, to generate new species and reactions. The new species are then matched against the existing species to determine whether or not they are already present in the network . Any species that are not already present are added to the network and an additional round of rule application is performed. This iterative process continues until an iteration is encountered in which no new species are generated. The resulting system of reactions can then be simulated using a variety of network-based deterministic and stochastic simulation methods. For example, network-based simulation methods currently implemented within include SUNDIALS CVODE for ordinary differential equation (ODE)-based simulations, Gillespie's stochastic simulation algorithm (SSA; direct method with dynamic propensity sorting) , and the accelerated-stochastic \"partitioned-leaping algorithm\" . <\/span>\n\n The rule-based methodology also provides a way<\/span> to simulate models with prohibitively large or infinite numbers of species and reactions<\/span>. This \"network-free\" approach involves representing molecular complexes as particles and applying rule transformations to those particles at runtime using a kinetic Monte Carlo update scheme . At each simulation step, reactant patterns are matched to the molecular complexes within the system to calculate rule propensities. The rule to next fire is then selected probabilistically as in the SSA and the particle(s) to participate in the transformation is (are) selected randomly from the set of matches. When the rule fires, transformations are applied to the reactant complexes to create the products. Since the reactants and products are determined at runtime there is no need to enumerate all species and reactions *a\u00a0priori* as in network-based methods. 
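A heavily simplified sketch of one such simulation step is given below. Particles are plain dictionaries, reactant patterns are predicate functions, and a single toy binding rule is used; real network-free simulators perform graph-pattern matching and bond bookkeeping, so the sketch illustrates the update scheme rather than any particular implementation.

```python
# Highly simplified sketch of a single network-free (particle-based) event.
# Real simulators match graph patterns; here patterns are plain predicates
# and molecules are dicts -- illustrative only.
import math
import random

# Particle list: every molecule/complex is an explicit object
particles = [{"name": "L", "bound": False} for _ in range(1000)] + \
            [{"name": "R", "bound": False} for _ in range(300)]

def free_L(p): return p["name"] == "L" and not p["bound"]
def free_R(p): return p["name"] == "R" and not p["bound"]

def bind(reactants):
    """Toy transformation: mark a ligand and a receptor as bound."""
    for p in reactants:
        p["bound"] = True

# A "rule": reactant patterns, a rate constant, and a transformation
rules = [{"patterns": [free_L, free_R], "k": 1e-3, "action": bind}]

def step(particles, rules, t):
    """One kinetic Monte Carlo event (direct-method style)."""
    matches, propensities = [], []
    for rule in rules:
        m = [[p for p in particles if pat(p)] for pat in rule["patterns"]]
        matches.append(m)
        propensities.append(rule["k"] * math.prod(len(x) for x in m))
    a_total = sum(propensities)
    if a_total == 0:
        return t, False
    # advance time and pick the next rule to fire, as in Gillespie's SSA
    t += -math.log(random.random()) / a_total
    mu = random.choices(range(len(rules)), weights=propensities)[0]
    # pick the participating particles uniformly from the matches, then fire
    chosen = [random.choice(m) for m in matches[mu]]
    rules[mu]["action"](chosen)
    return t, True

t = 0.0
for _ in range(100):
    t, fired = step(particles, rules, t)
    if not fired:
        break
print(f"time = {t:.3f}, bound ligands = {sum(p['bound'] for p in particles if p['name'] == 'L')}")
```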
This procedure is a particle-based variant of Gillespie's algorithm and a generalization of the \"n-fold way\" of Bortz\u00a0et\u00a0al.\u00a0, which was originally developed to accelerate the simulation of Ising spin systems. An efficient, open-source implementation that is compatible with BNGL models is , the \"network-free simulator\" . Other network-free simulation tools for rule-based models include , , , and . <\/span>\n\n Since only the current set of molecular complexes and the transformations that can be applied to them are tracked, network-free methods can efficiently simulate systems that are intractable to network-based methods . However, the explicit representation of every molecule in the system is a major shortcoming of the approach. As such, network-free methods can require large amounts of computational memory for systems that contain large numbers of particles, a potential barrier to simulating systems such as the regulatory networks of a whole cell . A typical eukaryotic cell, for example, contains on the order of $10^3$\u2013$10^4$ protein-coding genes, $10^4$\u2013$10^5$ mRNA molecules, and $10^9$\u2013$10^{10}$ protein molecules , along with much larger numbers of metabolites, lipids, and other small molecules. Simulating a cell at this level of detail using a network-free approach would be impractical. There is a need, therefore, for new approaches that can reduce the memory requirements of network-free simulation methods. <\/span>\n\n## Computational complexity\n\nA common measure of the computational cost of an algorithm is its *computational complexity*. In basic terms, computational complexity measures how the computational cost increases as an algorithm is applied to increasingly larger data sets . For the simulation methods considered in this paper, two types of computational complexity are important: (i) *space complexity*, the number of memory units consumed during the execution of an algorithm; (ii) *time complexity*, the number of computational steps required to complete an algorithm.\n\nNetwork-based exact-stochastic simulation methods, like Gillespie's SSA , treat species as lumped variables with a population counter. Therefore, their space complexity is constant in the number of particles in the system. However, representing the reaction network has a space complexity that is linear (or worse if a reaction dependency graph is used ) in the number of reactions. Network-based SSA methods are thus space efficient for systems with large numbers of particles, but less so for systems with large numbers of reactions. The time complexity of SSA methods is more difficult to quantify. It depends on model-specific factors such as the number of reactions in the network and the values of rate constants and species concentrations, as well as methodological factors such as how the next reaction to fire in the system is selected and how reaction propensities are updated after each reaction firing . However, for our purposes, what matters is that the time cost *per event* (reaction firing) for these methods is constant in the number of particles in the system and increases with the number of reactions in the network. <\/span>\n\n Network-free methods, in contrast, represent each particle individually. Thus, their space complexity is *linear* in the number of particles. This is the primary shortcoming of these methods, as it limits the size of system that can be feasibly simulated. 
However, since reactions are not enumerated, their space complexity is linear in the number of *rules*, rather than the number of reactions. This is a key advantage for models where very large reaction networks are encoded by a small number of rules. Network-free methods also have an advantage over network-based methods in that their time complexity per event also scales with the number of rules, rather than the number of reactions. Since the number of rules in a rule-based model is typically far less than the number of reactions, this can be a substantial improvement. For example, NFsim has been demonstrated to significantly outperform network-based SSA methods for a family of Fc$\\epsilon$ receptor signaling models with large reaction networks . We also note that for many models network-free methods have a time cost per event that is constant in the number of particles. However, for systems in which large aggregates form (e.g., models with polymerization dynamics ) the cost can be significantly higher, scaling with the number of particles . Nevertheless, network-free methods are still usually the best option in these cases because these types of models tend to encode very large reaction networks . <\/span>\n\n In Table\u00a0, we summarize the space and time complexities for different network-based SSA variants and for the network-free algorithm. Of most relevance to the current work are the entries that show: (i) the space complexity of network-based methods is constant in the number of particles and linear (or worse) in the reaction network size; (ii) the space complexity of network-free methods is linear in the number of particles and independent of the reaction network size, depending instead on the number of rules; (iii) the time complexity of network-based methods depends on the number of reactions in the network while for network-free methods it depends on the number of rules. Network-based methods are thus the best choice for systems with large numbers of particles and a small to moderate reaction network, and network-free methods are the best choice for systems with a large reaction network and small to moderate numbers of particles. However, neither method is optimal for systems that contain *both* a large number of particles and a large reaction network. <\/span>\n\n```latex\n\\begin{table*}\n\\centering\n\\caption{\\textbf{Space and time complexities for network-based (SSA) and network-free (NF) stochastic simulation algorithms.} Scalings are shown with respect to particle number, $P$\\\/, and number of reactions, $R$\\\/, or rules, $\\widetilde{R}$\\\/. For combinatorially-complex models, $\\widetilde{R} \\ll R$\\\/. Note that time complexity is given on a ``per event\" (reaction\/rule firing) basis. If a reaction dependency graph \\cite{Gibson2000} is used, the space and time complexities of SSA methods with respect to $R$\\\/ depend on $d$\\\/, the maximum number of reactions updated after each reaction firing \\cite{Gibson2000, Cao2004}. In combinatorially-complex models, $d$\\\/ often increases with $R$\\\/ (see Figure~S3 of the supporting information). The time complexity of SSA methods with respect to $R$\\\/ also depends on the method used for selecting the next reaction to fire in the system. Scalings are shown for three different SSA variants that use different selection methods \\cite{Gillespie1976, Gibson2000, Fricke1995, Schulze2008, Slepoy2008}. 
Also note that optimized variants of the direct method \\cite{Cao2004, McCollum2006, Fricke1995} have been shown to outperform methods with lower asymptotic complexity in some cases \\cite{Cao2004}. Space and time complexities of the NF algorithm with respect to $\\widetilde{R}$\\\/ assume no dependency graph and that the next rule to fire is selected as in Gillespie's direct method \\cite{Gillespie1976}, although in principle other variants are possible.}\n\\begin{minipage}{\\textwidth}\n\\centering\n%\\framebox[\\textwidth]{}\n\\begin{tabular*}{\\textwidth}{@{\\hspace{20pt}}c@{\\hspace{20pt}}|@{\\hspace{20pt}}cc@{\\hspace{20pt}}|@{\\hspace{20pt}}cc}\n \\multicolumn{1}{c}{} & \\multicolumn{2}{l}{\\makebox[2.7in]{\\textbf{SSA}}} &\\multicolumn{2}{l}{\\makebox[1.7in]{\\textbf{NF}}} \\\\\n & \\textit{Particles} ($P$\\\/) & \\textit{Reactions} ($R$\\\/) & \\textit{Particles} ($P$\\\/) & \\textit{Rules} ($\\widetilde{R}$\\\/) \\\\[2pt] \\hline\\hline\n \\textit{Space} \\rule[-6pt]{0pt}{21pt} & $\\mathit{O}(1)$ & $\\mathit{O}(R)$\\footnote{No dependency graph}, $\\mathit{O}(d(R){\\cdot}R)$\\footnote{Dependency graph \\cite{Gibson2000, Cao2004}} & $\\mathit{O}(P)$ & $\\mathit{O}(\\widetilde{R})^\\mathit{a}$ \\\\ \n \\textit{Time (per event)} \\rule[-6pt]{0pt}{21pt} & $\\mathit{O}(1)$ & $\\mathit{O}(d(R))$\\footnote{Logarithmic classes \\textcolor{black}{(with dependency graph)} \\cite{Fricke1995, Schulze2008, Slepoy2008}}, $\\mathit{O}(d(R) \\log_2 R)$\\footnote{Next-reaction method \\textcolor{black}{(with dependency graph)} \\cite{Gibson2000}}, $\\mathit{O}(R)$\\footnote{Direct method \\textcolor{black}{(with or without dependency graph)} \\cite{Gillespie1976}} & $\\mathit{O}(1)$, $\\mathit{O}(P)$\\footnote{Polymerizing systems in gel phase \\cite{Yang2008, Monine2010} (see Fig.~\\ref{fig:tlbr}B)} & $\\mathit{O}(\\widetilde{R})$\\footnote{Direct method-like implementation} \\\\ \\hline\\hline\n\\end{tabular*}\n\\end{minipage}\n\\label{table:complexity}\n\\end{table*}\n```\n\n## Combining network-based and network-free methodologies\n\nThe key idea pursued in this work is that memory consumption can be reduced in network-free simulators by representing large numbers of identical molecular complexes as single variables with population counters rather than as particles. However, retaining the ability to address combinatorial complexity requires retaining the particle representation for complexes that are comprised of many molecules and\/or have a large number of internal states<\/span>. Here, we present an approach, termed the hybrid particle\/population (HPP) simulation method, that accomplishes this. Given a user-defined set of species to treat as population variables, the HPP method partially expands the network around the population species and then simulates the partially-expanded model using a population-adapted particle-based method. By treating complex species as structured particles, HPP capitalizes on the reduced<\/span> time complexity with respect to network size characteristic of the network-free approach. However, for the subset of species treated as population variables, we take advantage of the constant memory requirements of the network-based methodology.\n\n It is important to emphasize that it is the *system* that is represented in a hybrid manner in the HPP approach, as a collection of particles and population variables. 
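The hybrid state representation itself is easy to picture: complexes whose (canonical) label belongs to the chosen lumping set are stored once with a population counter, while everything else remains an explicit particle. A toy sketch, with invented species labels and counts:

```python
# Toy sketch of a hybrid system state: species chosen for lumping are stored
# once with a population counter, everything else remains an explicit particle.
# Species labels and numbers are invented for illustration.
from collections import Counter

# Canonical labels of every complex currently in the (toy) system
complexes = ["L(r)"] * 500_000 + ["R(l,y~0)"] * 30_000 + \
            ["L(r!1).R(l!1,y~P)"] * 800 + ["L(r!1).R(l!1,y~0)"] * 1_200

lumped_species = {"L(r)", "R(l,y~0)"}   # the user-selected population species

populations = Counter(c for c in complexes if c in lumped_species)
particles = [c for c in complexes if c not in lumped_species]

print("population variables:", dict(populations))
print("explicit particles  :", len(particles))
print("objects stored      :", len(populations) + len(particles),
      "instead of", len(complexes))
```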
The underlying simulator remains the same particle-based variant of Gillespie's algorithm that is used in existing network-free simulators , but with small modifications to support population variables. This distinguishes HPP from other types of hybrid methods that combine different simulation methodologies, e.g., ODE\/SSA integrators . <\/span>\n\n## Related work<\/span>\n\n While numerous rule-based modeling frameworks have been developed , little has been done with regard to hybrid particle\/population simulation. Kappa has the concept of \"tokens,\" which are structureless population-type species. Modelers can write their models using tokens and simulate them using KaSim\u00a03.0, the most recent version of the Kappa-compatible network-free simulator (). However, there is no facility for transforming a Kappa model written without tokens into a hybrid form with tokens, as our HPP method does. HPP, therefore, may be of particular interest to the Kappa community since it is generally applicable to any rule-based modeling language for which there exists a simulator capable of handling a mixed particle\/population system representation. <\/span>\n\n Another related method is the population-based network-free algorithm (PNFA) of Liu\u00a0et\u00a0al.\u00a0. The PNFA is similar in spirit to the HPP, however it is based on a simplified rule-based modeling formalism that lacks a general representation of intermolecular bonding. As such, it cannot handle combinatorial complexity arising from oligomerization. Moreover, there is no method for transforming a model into an equivalent hybrid form as in HPP. Rather, all single-state (structureless) species are automatically treated as population variables, which may not be optimal in all cases. Nevertheless, by incorporating a population component into the system representation, the PNFA can simulate systems much larger than is possible using purely particle-based methods. <\/span>\n\n Finally, an alternative approach to reducing the computational cost of rule-based simulation is exact model reduction (EMR) . EMR reduces the state space of a rule-based model while preserving the exact system dynamics with respect to observable quantities. These methods can achieve dramatic reductions in model complexity when applied within the context of ordinary differential equations . However, results for stochastic simulations have so far been less encouraging (see stochastic_fragments.pdf<\/a>). In general, EMR fails to achieve substantial reductions for models containing cooperative or allosteric interactions that introduce coupling between sites . <\/span>\n\n# Methods\n\n## Example models\n\nWe have tested the performance of the HPP method by applying it to four example models, summarized in Table\u00a0 and discussed in further detail below. All of the models are biologically relevant and are either taken directly from the literature or are based on models taken from the literature. Complete BNGL encodings, HPP configuration files (containing actions for loading models, defining population maps, and executing simulations), and partially-expanded versions of all example models are provided as Texts\u00a0S5\u2013S17 of the supporting information.<\/span>\n\n```latex\n\\begin{table*}\n\\centering\n\\caption{\\textbf{Summary of example models used to test the performance of the HPP method.} Number of particles is for an \\mbox{NFsim} simulation of a full cell volume ($f\\!=\\!1$). 
Fractional cell volumes as low as 0.001 and as high as 1 are used in the performance analyses (see ``Example models\" for details). \\textcolor{black}{Number of rules after PNE includes the population-mapping rules (one per population species).}}\n%\n\\begin{tabular*}{\\textwidth}{l@{\\hspace{20pt}}c@{\\hspace{20pt}}c@{\\hspace{20pt}}c@{\\hspace{20pt}}c@{\\hspace{20pt}}c@{\\hspace{20pt}}c@{\\hspace{20pt}}c}\n & & & & \\textbf{Particles} & \\textbf{Population} & \\textbf{Rules after} & \\\\[-3pt]\n \\multicolumn{1}{c}{\\textbf{Model}} & \\textbf{Rules} & \\textbf{Reactions} & \\textbf{Species} & \\textbf{({\\boldmath$f\\!=\\!1$})} & \\textbf{species} & \\textbf{PNE} & \\textbf{\\texttt{t\\_end} (s)} \\\\ \\hline\\hline\n \\textbf{TLBR} \\cite{Monine2010, Sneddon2011, Goldstein1984} \n & 4 & $\\infty$ & $\\infty$ & $5.3\\!\\times\\!10^6$ & 2 & 9 & 500 \\\\\n \\textbf{Actin} \\cite{Sneddon2011, Roland2008} %polymerization \n & 21 & $\\infty$ & $\\infty$ & $1.2\\!\\times\\!10^6$ & 2 & 25 & 1000 \\\\\n \\textbf{Fc{\\boldmath$\\epsilon$}\\\/RI} \\cite{Sneddon2011, Faeder2003, Goldstein2004} \n & 24 & 58\\,276 & 3744 & $6.9\\!\\times\\!10^6$ & 1 \/ 6 & 25 \/ 38 & 2400 \\\\\n \\textbf{EGFR} \\cite{Blinov2006a, Stites2007, Fujioka2006} %signaling \n & 113 & 415\\,858 & 18\\,950 & $2.2\\!\\times\\!10^6$ & 29 & 159 & 1200\\\\\\hline\\hline\n\\end{tabular*}\n\\label{table:models}\n\\end{table*}\n```\n\n### Trivalent-ligand bivalent-receptor\n\nThe trivalent-ligand bivalent-receptor (TLBR) model is a simplified representation of receptor aggregation following multivalent ligand binding. TLBR has biological relevance to antigen-antibody interaction at the cell surface, where bivalent IgE-Fc$\\epsilon$RI receptor complexes aggregate in the presence of multivalent antigen . A theoretical study of the TLBR system was presented by Goldstein and Perelson , who derived analytical conditions for a solution-gel phase transition in terms of binding equilibrium constants, free ligand concentration, and receptors per cell. A more recent study considered the effects of steric constraints and ring closure on the solution-gel phase transition .\n\nDespite its simplicity, the TLBR system experiences a state-space explosion near the solution-gel phase boundary. A computational study by Sneddon\u00a0et\u00a0al.\u00a0using reproduced the analytical results of Goldstein and Perelson. Due to large excesses of ligand and receptor under certain conditions, TLBR is a natural test case for HPP. We simulated the TLBR system using HPP with free ligand and receptor treated as population species. All simulations were performed with parameters as defined in Monine\u00a0et\u00a0al.\u00a0, which lie within the solution-gel phase coexistence region. A cell-scale simulation assumed 1\u00a0nl extracellular volume per cell ($10^6$ cells\/ml) with 8.3\u00a0nM ligand and $3\\!\\times\\!10^5$ receptors per cell. Simulations were performed at fractional cell volumes, $f$, ranging from 0.001 to 0.1 with a lumping rate constant $\\mathtt{k\\_lump}\\!=\\!10\\,000$\/s (see below).\n\n### Actin polymerization\n\nActin polymerization plays a key role in cell morphology and motility . Roland\u00a0et\u00a0al.\u00a0 presented a dynamic model of actin polymerization featuring filament elongation by monomer addition, stabilization by ATP hydrolysis, and severing mediated by actin depolymerizing factor (ADF)\/cofilin. 
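The cell-scale particle counts quoted in the model summary table follow directly from concentration times volume times Avogadro's number; for the TLBR numbers just given, a quick check using only the standard library:

```python
# Quick check of the TLBR particle count at full cell volume (f = 1):
# molecules = concentration [mol/L] * volume [L] * Avogadro's number.
N_A = 6.022e23          # 1/mol
ligand_conc = 8.3e-9    # 8.3 nM ligand
volume = 1e-9           # 1 nl extracellular volume per cell
receptors = 3e5         # receptors per cell

ligands = ligand_conc * volume * N_A
print(f"ligands ~ {ligands:.2e}")                # ~5.0e6
print(f"total   ~ {ligands + receptors:.2e}")    # ~5.3e6, as in the summary table

# The same scaling applies at a fractional cell volume f (concentrations fixed):
f = 0.01
print(f"particles at f={f}: ~ {(ligands + receptors) * f:.2e}")
```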
Sneddon\u00a0et\u00a0al.\u00a0 presented a rule-based formulation of the Roland\u00a0et\u00a0al.\u00a0model and replicated their results using . The model features an excess of actin monomer and ADF molecules. Therefore, we speculated that substantial memory reduction would be possible using the hybrid approach. We applied HPP to the Sneddon\u00a0et\u00a0al.\u00a0rule-based model of actin dynamics (hereafter referred to as the Actin model) with actin monomer and ADF treated as population species. A cell-scale simulation assumed 1\u00a0pl intracellular volume with 1\u00a0$\\mu$M actin monomer and 1\u00a0$\\mu$M ADF\/cofilin. Simulations were performed at fractional cell volumes, $f$, ranging from 0.01 to 1 with a lumping rate constant $\\mathtt{k\\_lump}\\!=\\!10\\,000$\/s.\n\n### Fc$\\epsilon$RI signaling\n\nFc$\\epsilon$RI is a membrane receptor that binds IgE antibodies. Signaling through Fc$\\epsilon$RI regulates basophilic histamine release in response to IgE antibody-antigen interaction . Faeder\u00a0et\u00a0al.\u00a0 developed a rule-based model of Fc$\\epsilon$RI receptor assembly and activation in which receptor dimerization\/clustering is mediated by chemically cross-linked IgE, which serve as multivalent ligands. Dimerized receptors are transphosphorylated, leading to Syk and Lyn recruitment and phosphorylation. Sneddon\u00a0et\u00a0al.\u00a0 presented several extensions of the Faeder\u00a0et\u00a0al.\u00a0model, including the *gamma2* variant with two $\\gamma$ phosphorylation sites. Particle-based simulations of the *gamma2* model were found to be substantially faster than network-based SSA simulations.\n\nDue to the excess of free ligand, the HPP method was applied to the *gamma2* model to reduce memory consumption. The method was applied with two different sets of population species. In the first case, only free ligand was treated as a population species (Fc$\\epsilon$RI:1). In the second, cytosolic Lyn and all four phosphorylation states of cytosolic Syk were also treated as populations (Fc$\\epsilon$RI:6). A cell-scale simulation assumed 1\u00a0pl intracellular volume with 1\u00a0nl extracellular space per cell ($10^6$ cells\/ml), 10\u00a0nM ligand, and $4\\!\\times\\!10^5$ receptors per cell. Simulations were performed at fractional cell volumes, $f$, ranging from 0.001 to 0.1 with a lumping rate constant $\\mathtt{k\\_lump}\\!=\\!10\\,000$\/s.\n\n### EGFR signaling\n\nA model of signaling through the epidermal growth factor receptor (EGFR), beginning with ligand binding and concluding with nuclear phospho-ERK activity, was constructed by combining three existing models: (i) a rule-based model of EGFR complex assembly ; (ii) a Ras activation model ; (iii) a pathway model of Raf, MEK and ERK activation . Ras activation was coupled to the EGFR complex assembly by treating receptor-recruited Sos as the Ras GEF. Activated Ras was coupled to the Raf\/MEK\/ERK cascade through RasGTP-Raf binding and subsequent phosphorylation of Raf. Parameters for the combined model were obtained from the respective models. However, parameters governing Ras-GEF (i.e., Sos) activity had to be changed from their original values in order to account for the known GEF-mediated activation of Ras . Specifically, we used $K_{M,\\mathrm{GDP}}\\!=\\!K_{M,\\mathrm{GTP}}\\!=\\!1.56\\!\\times\\!10^{-7}$\u00a0M and $D\\!=\\!1000$ (unitless).\n\nFree EGF and Raf-, MEK-, and ERK-based species were treated as population species in the hybrid variant. 
Ras-based species were also treated as populations except for those that include a Sos molecule. A cell-scale simulation assumed 0.94\u00a0pl cytosolic and 0.22\u00a0pl nuclear volume, with 0.94\u00a0pl extracellular space, 10\u00a0nM ligand, and $4\\!\\times\\!10^5$ receptors per cell. Simulations were performed at fractional cell volumes, $f$, ranging from 0.01 to 1 with a lumping rate constant $\\mathtt{k\\_lump}\\!=\\!100\\,000$\/s.\n\n## Performance metrics\n\nHPP was evaluated for peak memory use, CPU run time, and accuracy as compared to particle-based simulations. For models where network generation is possible (Fc$\\epsilon$RI and EGFR), comparisons were also made to SSA simulations (as implemented within ). All simulations<\/span> were run<\/span> on a 2 $\\times$ Intel Xeon E5520 @ 2.27\u00a0GHz (8 cores, 16 threads, x86_64 instruction set) with 74\u00a0GB of RAM running the GNU\/Linux operating system. To ensure that each process had access to 100% of the compute cycles of a thread, no more than 12 simulations were run<\/span> simultaneously.\n\n### Peak memory\n\nAverage peak memory usage for each simulation method was calculated based on seven independent simulation runs.<\/span> Peak memory for each run<\/span> was evaluated by peak virtual memory allocation reported by the operating system with the command \"`cat \/proc\/\/status`\". For all tested<\/span> models, peak memory was achieved early in the simulation and remained steady throughout (data not shown).\n\n### CPU run time\n\nAverage CPU run time for each simulation method was calculated based on seven independent simulation runs<\/span> using clock time as a metric. Clock time for each run<\/span> was recorded using the `Time::HiRes` Perl module. Run time included initialization as well as the simulation phase. Partial network expansion for HPP simulations was a one time cost, typically a few seconds, and was not included in the calculation<\/span>.\n\n### Accuracy\n\nSimulation accuracy was quantified using several approaches. First, since HPP, NFsim, and SSA are all exact-stochastic methods, they should all produce statistically the same number of reaction firings. To verify this,<\/span> for all tested<\/span> models the total number of reaction firings was recorded for each of 40 independent simulation runs of each method<\/span> (firings of population-mapping rules were subtracted from the total in HPP simulations). The Mann-Whitney\u00a0U test was then used to test the null hypothesis that none of the methods produces a larger number of reaction firings.\n\nFor the TLBR and Actin models, we further compared<\/span> equilibrium distributions for key observables. These include the number of receptor clusters in the TLBR model and the length of actin polymers in the Actin model. 10\u2006000 samples were collected over 100\u2006000 seconds of simulated time and distributions were compared by binning samples (20 bins) and performing a two-sample chi-squared test . For the Fc$\\epsilon$RI and EGFR models, we compared dynamic<\/span> trajectories for key observables. These include $\\gamma$-phosphorylated receptor and receptor-recruited, $\\alpha$-phosphorylated Syk in the Fc$\\epsilon$RI model, and activated Sos and nuclear phosphorylated ERK in the EGFR model<\/span>. Due to complications of autocorrelation, a statistical test was not applied to the dynamic trajectory comparison. 
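For the comparisons to which tests were applied, the procedure is straightforward to reproduce; the sketch below uses scipy.stats on synthetic stand-in data (the statistics software actually used for the analysis is not specified here, so this is meant only to make the procedure concrete):

```python
# Sketch of the two hypothesis tests described above, run on synthetic data.
# scipy.stats is an assumption; the software actually used is not stated here.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# (1) Total reaction firings from 40 runs of each method (synthetic stand-ins)
firings_nf  = rng.normal(1.0e6, 5e3, size=40)   # stand-in for "NFsim"
firings_hpp = rng.normal(1.0e6, 5e3, size=40)   # stand-in for "HPP"
u_stat, p_mwu = stats.mannwhitneyu(firings_nf, firings_hpp, alternative="two-sided")
print(f"Mann-Whitney U p-value: {p_mwu:.3f}")

# (2) Equilibrium samples of an observable, binned (20 bins) and compared with
# a two-sample chi-squared test via a 2 x 20 contingency table
samples_a = rng.poisson(50, size=10_000)
samples_b = rng.poisson(50, size=10_000)
edges = np.histogram_bin_edges(np.concatenate([samples_a, samples_b]), bins=20)
obs_a, _ = np.histogram(samples_a, bins=edges)
obs_b, _ = np.histogram(samples_b, bins=edges)
table = np.vstack([obs_a, obs_b])
table = table[:, table.sum(axis=0) > 0]          # drop empty bins to keep the test valid
chi2, p_chi2, dof, _ = stats.chi2_contingency(table)
print(f"two-sample chi-squared p-value: {p_chi2:.3f} (dof = {dof})")
```

As noted above, no analogous test was applied to the dynamic trajectories.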
Instead, moving averages and 5\u201395% frequency envelopes, based on 40 simulation runs of each method using a sampling window of 10\u00a0s,<\/span> were plotted for inspection by eye.\n\n## Software\n\nAll HPP and NFsim simulations reported in this work were run using version 1.11, which is available for download at . All simulations (SSA included) were invoked through version 2.2.4, which implements the hybrid model generator and is distributed with \u00a01.11. Instructions for running simulations with (ODE, SSA, and HPP) can be found in Secs.\u00a0S3.2 and S3.3 of Text S1 and Refs.\u00a0. and source code are available at and , respectively. Additional documentation for can be found at .\n\n# Results\n\n## A hybrid particle\/population simulation approach\n\nIn this section, we first present an approach, termed \"partial network expansion,\"<\/span> for transforming a rule-based model into a dynamically-equivalent, partially-expanded form.<\/span> We then describe a a simple modification to the<\/span> network-free simulation protocol that permits simulation of the transformed model as a collection of both particles and population variables<\/span>. We refer to the combination of these methods as the hybrid particle\/population (HPP) simulation method. The basic workflow is shown in Fig.\u00a0.\n\nThe HPP approach is analogous to the coupled procedure of network generation and simulation described above, where a rule-based model is first transformed into a *fully-expanded* reaction network and then simulated as a collection of population variables (i.e., species) using a network-based simulator. The obvious differences are that in HPP the network is only partially expanded and the system can only be simulated stochastically using a population-adapted network-free simulator.<\/span> The partial network expansion algorithm has been implemented within the open-source rule-based modeling package and resulting hybrid models can be simulated using version 1.11 (or later) of the network-free simulator , which has been modified to handle population-type species<\/span>. For convenience, we adhere in this paper to the BNGL syntax, which is summarized in Sec.\u00a0S3.1 of Text\u00a0S1 of the supporting material. However, the HPP method is generally applicable to any rule-based modeling language for which there exists a network-free simulator capable of handling a mixed particle\/population system representation, e.g., KaSim\u00a03.0 for Kappa language models (see ).<\/span>\n\n### Population species and population-mapping rules\n\nGiven a rule-based model, the first step in the HPP approach is to select a subset of species to treat as \"lumped\" population variables. There are no hard-and-fast rules for doing this but, generally speaking, species that are good candidates for a population treatment (i) have a small number of components and internal states, (ii) participate in a small number of rules, and (iii) maintain a large population throughout the course of a simulation. An example is a simple ligand species that exists in great excess in the extracellular environment and interacts with cell surface receptors. It is our experience that these simple rules of thumb, combined with the experience and intuition of the modeler, are usually sufficient for selecting an adequate set of population species. However, in some cases a more systematic approach may be desirable. We will return to this topic below. 
<\/span>\n\nFor now, however, let us assume that we have selected a suitable set of population species.<\/span> The next step in the HPP approach is to map each of these to an associated *unstructured* species. The mapping is accomplished by defining a *population-mapping rule*, which follows the same syntactic conventions as a standard BNGL rule. For example, the rule\n\n`Egf(r) -> pop_Egf() k_lump`\n\nmaps the unbound EGF ligand, `Egf(r)`, to the unstructured species `pop_Egf()`. To avoid confusion, we will henceforth refer to species on the reactant side of a population-mapping rule, such as `Egf(r)`, as *structured population species* and to those on the product side as *unstructured population species*. Importantly, unstructured population species differ from conventional unstructured molecules in BNGL in that they possess a property, called a *count*, which records their current population (see Sec.\u00a0S3.3 of Text\u00a0S1 and Texts\u00a0S4, S7, S10, S13, S14, and S17 to see how the `population` keyword is used to make this distinction). The action of the population-mapping rule above is thus to delete the `Egf(r)` molecule and to *increment by one* the count of `pop_Egf()`. The role of the rate parameter `k_lump`, termed the *lumping rate constant*, will be explained in detail below. <\/span>\n\n### Partial network expansion\n\nUltimately, our goal in the HPP method is to replace in the simulation environment large numbers of indistinguishable particles with small numbers of lumped objects containing population counters (the unstructured population species)<\/span>, thus significantly reducing memory usage. In order to accomplish this without losing any information regarding the dynamics of the system, we must partially expand the rule set of the original model until all interactions and transformations<\/span> in which the structured population species participate *as reactants* (see below)<\/span> are enumerated. We can then swap the structured species with their unstructured counterparts, which have been specified via the population-mapping rules.<\/span> We refer to this procedure as partial network expansion (PNE).\n\nThe PNE algorithm is comprised of three basic steps, which are applied to each rule of a rule-based model:\n\n1. For each reactant pattern in the rule<\/span>, identify all matches of that pattern into the set of structured population species. Also collect a self-match of the reactant pattern *unless it equals* one of the population species (this can only happen if the reactant pattern is a fully-specified species;<\/span> see below for further discussion).\n\n2. Derive an expanded set of rules by applying the rule to all possible combinations (the cartesian product) of the pattern matches collected in Step\u00a01.\n\n3. 
For each derived rule from Step\u00a02<\/span>, replace each instance of a structured population species<\/span> with its unstructured population counterpart.\n\nThe result is an expanded rule set consisting of three general types of rules: (i) particle rules, in which all reactants are conventional reactant patterns<\/span>; (ii) mixed particle\/population rules, where at least one reactant is a conventional reactant pattern and one is an unstructured population species<\/span>; (iii) pure population *reactions*, where all reactants are unstructured population species.<\/span> This expanded rule set has the property that every possible action of the original rule set on the population species is enumerated while actions on particle objects remain pattern-based (i.e., non-enumerated). For a more formal presentation of the PNE algorithm, complete with pseudocode, we direct the reader to Sec.\u00a0S4.2 of Text\u00a0S1.\n\n### Role of the population-mapping rules<\/span>\n\n After completion of PNE, the final step in transforming a rule-based model into a form that can be simulated as a hybrid particle\/population system is to append the population-mapping rules to the expanded rule set. The reason for doing this is not immediately obvious. We have seen above that the population-mapping rules specify which structured species are to be replaced in the transformed model with population variables. However, an obvious question to ask is why we have chosen to specify this information via a set of reaction rules, rather than simply as a list of species to be lumped. The answer is combinatorial complexity. <\/span>\n\n As explained above, systems that are combinatorially complex are comprised of a relatively small number of constituent parts but exhibit an explosion in the number of potential species and reactions due to the myriad number of ways in which these parts can be connected and arranged. Rule-based modeling is effective in representing these systems because it focuses only on the portions of molecular complexes that affect biochemical reactivity, not on entire species. However, a consequence of this approach is that there is often ambiguity regarding the products of a reaction rule. A rule may describe the breaking of a bond between two molecules, for example, but the exact composition of the resulting complexes is left necessarily ambiguous (see Fig.\u00a0). <\/span>\n\n With regard to the HPP approach, this ambiguity in the products of a reaction rule complicates the process of PNE. Application of a reaction rule to one complex may produce a population species, whereas application of the same rule to a different complex may not. Distinguishing between cases where population species are produced and where they are not is difficult, and may even be impossible if the system is combinatorially complex. Thus, the strategy that we have adopted here is to expand the network out only to the point where all population species *on the reactant side* are enumerated and to handle the ambiguity in products by adding the population-mapping rules to the rule set. The role of the population-mapping rules is thus to detect any instances of structured population species that appear in the simulation environment as products of a rule application and to gather them up into the unstructured population pool. <\/span>\n\nThis returns us to the issue of the lumping rate constant, `k_lump`. 
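Before doing so, the three steps above can be made concrete with a toy, string-based sketch in which graph matching is replaced by a hand-written match table; the rule, species, and matches below are all invented, and in the real algorithm the matches are computed by subgraph matching:

```python
# Toy sketch of partial network expansion (Steps 1-3) with strings in place of
# graphs. The match table is supplied by hand; everything here is illustrative.
from itertools import product

# Original rule: two reactant patterns, written as plain strings
rule = {"reactants": ["A(b,y~P)", "B(a)"], "rate": "kf",
        "products": "A(b!0,y~P).B(a!0)"}

# Structured population species selected for lumping, and their unstructured names
pop_species = {"A(b,y~P,z~0)": "pop_A", "B(a,c~0)": "pop_B0", "B(a,c~P)": "pop_BP"}

# Step 1: matches of each reactant pattern into the population species (by hand),
# plus the self-match unless the pattern *is* one of the population species
matches = {
    "A(b,y~P)": ["A(b,y~P,z~0)"],             # one match
    "B(a)":     ["B(a,c~0)", "B(a,c~P)"],     # two matches
}
options = []
for pat in rule["reactants"]:
    opts = list(matches[pat])
    if pat not in pop_species:                # keep the self-match only if needed
        opts.append(pat)
    options.append(opts)

# Steps 2-3: cartesian product of the match options, then swap in unstructured names
for combo in product(*options):
    reactants = [pop_species.get(r, r) for r in combo]
    print(f"{' + '.join(reactants)} -> {rule['products']} {rule['rate']}")
# prints 2 x 3 = 6 derived rules in this toy case
```

In this toy case neither reactant pattern is itself a population species, so both self-matches are retained; the situation in which a pattern equals a population species is taken up next.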
In Step\u00a01 of the PNE algorithm, if a reactant pattern equals a population species then we discard the self-match (the structured version of the population species). To see why we do this, consider the binding rule depicted in Fig.\u00a0A. However, different from Figs.\u00a0B\u2013D, assume that molecules `A` and `B` have only *one* binding site each. If we choose to lump the unbound molecules then we must define the following population-mapping rules:\n\n`A(b) -> pop_A() k_lump`, \n`B(a) -> pop_B() k_lump`. \n\nObviously, these structured population species are equivalent to the reactant patterns in Fig.\u00a0A. However, let us choose *not* to discard the self-matches in this case. PNE would then generate the following four derived rules:\n\n| | | |\n|--------------------:|:----:|:-------------------:|\n| `A(b) + B(a)` | `->` | `A(b!0).B(a!0) kf`, |\n| `pop_A() + B(a)` | `->` | `A(b!0).B(a!0) kf`, |\n| `A(b) + pop_B()` | `->` | `A(b!0).B(a!0) kf`, |\n| `pop_A() + pop_B()` | `->` | `A(b!0).B(a!0) kf`. |\n\nWe see that the first three of these rules have conventional (structured) reactant patterns. However, if `k_lump` is sufficiently large then particle instances of `A(b)` and `B(a)` will never exist in the system long enough to be matched to these patterns. Thus, these rules can be safely discarded, which is equivalent to discarding the self-match in Step\u00a01 of the PNE algorithm. Retaining only the fourth derived rule (the pure population version) simplifies the process and keeps the size of the derived rule set to a minimum.\n\n The consequence of this is obviously that the HPP method is formally exact *only* for an infinite lumping rate constant. From a practical point of view, this could be a problem if the network-free simulator being used does not support infinite rates (e.g., NFsim currently does not). However, our performance tests indicate that as long as `k_lump` is \"large\" with respect to the model dynamics then essentially exact results can be obtained (see Figs.\u00a0\u2013, panels C and D). Nevertheless, we have implemented in a \"safe\" mode for PNE that retains all of the self-matches and, hence, produces exact results for *any* value of `k_lump` (see Sec.\u00a0S3.3 of Text\u00a0S1 for instructions on how to call this method). For a select number of examples, we have confirmed that both approaches give essentially identical results for sufficiently large `k_lump` and that the \"safe\" mode is less efficient (data not shown). <\/span>\n\n### Simple example of PNE\n\nPNE is best illustrated through an example. In Fig.\u00a0, we present a simple rule-based model of receptor activation (for brevity, parameters, initial populations, and output observables are omitted; see Text\u00a0S2<\/span> of the supporting material for the complete model in BNGL format). The model includes a ligand, `L`, its cognate receptor, `R`, and three cytosolic proteins, `A`, `B`, and `C`, that are recruited to the phosphorylated receptor. The 16 rules (six unidirectional and five reversible), describing ligand-receptor binding, receptor phosphorylation\/dephosphorylation, and protein recruitment, encode a reaction network comprised of 56 species and 287 reactions. In applying the HPP method, eight species are selected for lumping: free ligand, free `A`, `B` and `C`, and complexes of `A`, `B` and `C` that exclude the receptor. 
Receptor complexes are treated as particles because there are many possible receptor configurations (48 total).\n\nPNE example\n\nIn Fig.\u00a0, a step-by-step application of PNE to rule 11f (forward) of Fig.\u00a0 is presented. First, both reactant patterns are matched to the structured population species. Reactant pattern\u00a01 has one match, while reactant pattern\u00a02 has two. Note that since neither reactant pattern exactly equals a species (i.e., is isomorphic to one) the self match (identity automorphism) is added to the reactant match list in both cases. Next, the rule is applied to each possible reactant set (the cartesian product of the reactant match lists). This results in a set of six derived rules. The structured population species are then replaced in these rules by their associated unstructured species, resulting in one pure particle rule (the original rule), three mixed particle\/population rules, and two pure population reactions. Including the population-mapping rules, the hybrid model contains a total of 42 rules, more than the original 16 but significantly less than the 287 reactions of the fully-expanded network. The complete partially-expanded HPP model in BNGL format can be found in Text\u00a0S4 of the supporting material.\n\n### Population-adapted network-free simulation\n\n Although modified relative to the original, the hybrid model generated from PNE remains properly a rule-based model. As such, it can, in principle, be simulated with any of the network-based (after network generation) and network-free simulation methods described above. However, the advantage of recasting the original model into the hybrid form is that it can be represented as a collection of particles and population objects and simulated using a modified network-free method that has the following attributes: <\/span> (i) a population count property for each molecule object; (ii) a transformation that performs population increments and decrements; (iii) a method for calculating population-weighted propensities (rates). Examples of population-adapted network-free simulators are NFsim\u00a01.11 and KaSim\u00a03.0.<\/span>\n\nThe population-weighted propensity of a rule $R_\\mu$ can be calculated as $$a_\\mu = \\frac{k_\\mu}{s_\\mu} \\prod_{r=1}^{M_\\mu} \\left( \\sum_{x=1}^X \\rho(x) \\eta_{\\mu,r}(x) \\right).\n\\label{eq:hybrid_aMu}$$ Here, $k_\\mu$ is the rate constant (more generally, the \"single-site rate law\" )<\/span>, $s_\\mu$ is the symmetry factor (see Note\u00a04.21 of Ref.\u00a0), $M_\\mu$ is the number of reactant patterns in the rule (i.e., the *molecularity*), $X$ is the total number of complexes in the system, $\\rho(x)$ is the population of complex $x$ (unity in the case of particles), and $\\eta_{\\mu,r}(x)$ is the number of matches of reactant pattern $r$ into complex $x$ (unity or zero for unstructured population species, i.e., the species either is the reactant or it is not)<\/span>. The difference between Eq.\u00a0 and the formula used for calculating propensities in standard network-free simulators is the term $\\rho(x)$; a fully particle-based network-free calculation is recovered if all $\\rho(x)=1$. Conversely, the difference between Eq.\u00a0 and the formula used in network-based SSA simulators is the term $\\eta_{\\mu,r}(x)$; a fully population-based calculation is recovered if all $\\eta_{\\mu,r}(x)=0$ or $1$, in which case $X$ is the total number of species in the network. 
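The propensity expression above translates directly into a few lines of code; in the sketch below the complexes, populations, and match counts are made-up inputs used only to exercise the formula:

```python
# Direct transcription of the population-weighted propensity a_mu (sketch; the
# complexes and match counts are made-up inputs, not from a real model).
from math import prod

def propensity(k_mu, s_mu, eta, rho):
    """a_mu = (k_mu / s_mu) * prod_r sum_x rho(x) * eta_{mu,r}(x).

    eta: list over reactant patterns r of lists over complexes x of match counts
    rho: list over complexes x of populations (1 for particles)
    """
    return (k_mu / s_mu) * prod(sum(p_x * n_x for p_x, n_x in zip(rho, eta_r))
                                for eta_r in eta)

# Toy system: two particles (rho = 1) and one population species with count 5000
rho = [1, 1, 5000]
# Rule with two reactant patterns: pattern 1 matches the first particle twice
# (say, at two symmetric sites); pattern 2 matches only the population species
eta = [[2, 0, 0],
       [0, 0, 1]]
print(propensity(k_mu=1e-3, s_mu=1, eta=eta, rho=rho))   # 1e-3 * 2 * 5000 = 10.0
```

Setting every population to one recovers the standard particle-based calculation, as noted above.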
Equation\u00a0 thus generalizes the concept of propensity for hybrid systems comprised of both particles and population variables. <\/span>\n\nAlso note that for symmetric population reactions, e.g., `pop_A() + pop_A() -> A(a!0).A(a!0)`, the possibility of a null event must be calculated in order to prevent reactions involving the same molecule<\/span>. This is accomplished by rejecting the event with probability $1\/\\rho(x)$. Furthermore, since population species have zero components, if complex $x$ is a population species and $\\eta_{\\mu,r}(x)=1$, then $\\eta_{\\mu,r}(y)=0$ for all $y\\!\\ne\\!x$. This property is useful because it guarantees that a reactant pattern matches either particles or population species exclusively, never a mixture of both. Thus, once a rule has been selected to fire, the particles to participate in that rule can be selected from a uniform distribution rather than from a population-weighted distribution.\n\n## Performance analyses\n\n### Peak memory use and CPU run time\n\nIn Figs.\u00a0\u2013, panels A, we show absolute and relative (with respect to ) peak memory use as a function of cell fraction, $f$, for all models considered. We see that in all tested cases HPP requires less memory than . For , we also see the expected linear relationship (Table\u00a0) between peak memory use and particle number (i.e., cell fraction; the slight deviation from linearity is an artifact of how memory is allocated in NFsim<\/span>). For HPP, peak memory use also scales linearly with particle number,<\/span> but with a smaller slope. This is the expected behavior since as the cell fraction is increased (keeping concentrations constant)<\/span> a portion of the added particles, and hence memory cost, is always absorbed by the population portion of the system<\/span>. Furthermore, in cases where network generation is possible (Fc$\\epsilon$RI, Fig.\u00a0A; EGFR, Fig.\u00a0A), we see the expected constant relationship between memory usage and particle number for the SSA (Table\u00a0). We also see that the SSA requires more memory than both and HPP for all cell fractions considered. This is due to the high memory cost of the dependency update graph used in the SSA implementation within <\/span>, which scales with the product of the number of reactions in the network and the number of reactions updated after each reaction firing (see Table\u00a0)<\/span>.\n\nIn Figs.\u00a0\u2013, panels B, we show absolute and relative (with respect to ) CPU run times as a function of cell fraction. Generally speaking, HPP and run times are comparable in all cases, indicating that the reductions in memory use seen in Figs.\u00a0\u2013, panels A, are not achieved at the cost of increased run times. In fact, HPP is slightly faster than in most cases. This is because operations on population species (e.g., increment\/decrement) are less costly than the graph operations applied to particles (e.g., subgraph matching). Also note in Fig.\u00a0B the expected quadratic relationship between run time and particle number for the TLBR model (Table\u00a0), which is due to the formation of a super aggregate near the<\/span> solution-gel phase boundary<\/span> . In Figs.\u00a0B and B, we see that the SSA is slower than both and HPP for all cell fractions considered. The difference is most pronounced at small cell fractions and is much more significant for EGFR than for Fc$\\epsilon$RI. 
This is expected since previous work has shown that network-free methods perform particularly well for systems with small numbers of particles and large networks<\/span> (the EGFR network is significantly larger than the Fc$\\epsilon$RI network; Table\u00a0). Finally, we see in Fig.\u00a0B that the CPU run time increases as we increase the number of species treated as populations in the Fc$\\epsilon$RI model, even though the memory usage remains constant (Fig.\u00a0A). This is interesting because it suggests that the Fc$\\epsilon$RI:1 variant, with free ligand as the only population species, is near-optimally lumped for the cell fractions considered. We revisit the issue of optimal lumping sets below.<\/span>\n\n### Accuracy\n\nIn Figs.\u00a0\u2013, panels C, we show distributions of the number of reaction firings per simulation run for each of the simulation methods considered. It is evident that for all models the distributions, as illustrated by box plots, are similar for , HPP, and SSA (the latter for Fc$\\epsilon$RI only; Fig.\u00a0C). Statistically speaking<\/span>, the two-sided Mann-Whitney\u00a0U test was unable to reject the null hypothesis in all cases at the 5% significance level (TLBR:\u00a0$p\\!=\\!0.25$; Actin:\u00a0$p\\!=\\!0.90$; Fc$\\epsilon$RI:\u00a0$p\\!=\\!0.27$; EGFR:\u00a0$p\\!=\\!0.07$). There is no evidence, therefore, that HPP does not generate statistically identical numbers of reaction firings to both and SSA. This is as expected since all methods are exact-stochastic approaches.<\/span>\n\nIn Figs.\u00a0\u2013, panels D, we compare distributions obtained from and HPP simulations of all models. In Fig.\u00a0D, we show equilibrium distributions of the number of receptor clusters in the TLBR model ($f\\!=\\!0.01$). In Fig.\u00a0D, equilibrium distributions of polymer lengths in the Actin model are shown ($f\\!=\\!0.01$). In both cases, the and HPP distributions are statistically indistinguishable<\/span> (TLBR:\u00a0$p\\!=\\!0.50$; Actin:\u00a0$p\\!=\\!0.66$). In Fig.\u00a0D, time courses for $\\gamma$-phosphorylated receptor and receptor-recruited, $\\alpha$-phosphorylated Syk are shown ($f\\!=\\!0.01$). In Fig.\u00a0D, time courses for membrane-recruited (active) SOS and nuclear phospho-ERK are shown ($f\\!=\\!0.05$). Although we did not perform any statistical tests, visual inspection of the trajectories clearly shows that in all cases the and HPP results are virtually identical.\n\n### Systematic approach to selecting population species<\/span>\n\n All of the HPP results presented in Figs.\u00a0\u2013 were obtained with \"hand-picked\" sets of population species chosen based on modeler experience and intuition. The significant memory savings seen in these plots imply that this approach will often be sufficient in practice. However, it is fair to ask whether a more systematic approach to selecting population species can achieve additional memory savings. In order to address this question, we considered a variety of different lumping sets for each example model and compared their performance in terms of memory usage and CPU run time. The lumping sets were chosen based on average species populations calculated over the course of a single NFsim pre-simulation at cell fraction $f=0.01$. Specifically, at periodic intervals, the full set of complexes in the system was collected, each complex canonically labeled, and the number of instances of each label (i.e., species) counted. Average values over the entire simulation were then calculated for each species. 
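A minimal stand-in for this averaging step, combined with the thresholding described next, might look as follows (the sampled species counts and labels are invented for illustration; the implementation actually distributed with BioNetGen, mentioned below, is a Perl script):

```python
# Minimal stand-in for the averaging/threshold step: per-sample counts of
# canonically-labeled species from a pre-simulation are averaged, and species
# whose average population exceeds a threshold are proposed for lumping.
# The sampled counts and labels below are invented.
from collections import defaultdict

samples = [
    {"L(r)": 59800, "A(b,y~0)": 3900, "L(r!1).R(l!1)": 120},
    {"L(r)": 59650, "A(b,y~0)": 3850, "L(r!1).R(l!1)": 410},
    {"L(r)": 59510, "A(b,y~0)": 3810, "L(r!1).R(l!1)": 655},
]

totals = defaultdict(float)
for sample in samples:
    for species, count in sample.items():
        totals[species] += count
averages = {sp: tot / len(samples) for sp, tot in totals.items()}

threshold = 2 ** 10   # one value from the range of thresholds considered below
lumped = sorted(sp for sp, avg in averages.items() if avg > threshold)
print("candidate population species:", lumped)
```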
Sets of population species were constructed by lumping all species with an average population greater than a range of pre-defined thresholds. For convenience, we chose thresholds of $2^n$, $n\\in[0,10]$. Average species populations obtained from each NFsim pre-simulation are provided in supplementary Dataset\u00a0S1. The script that implements this method (for a single threshold) has been included in the recent \u00a02.2.5 release (`auto_hpp.pl` in the `Perl2` subdirectory). <\/span>\n\n In Fig.\u00a0, we show peak memory use and CPU run times for HPP simulations of each model at each lumping set considered. In general, these results illustrate the success of the hand-picked lumping sets, which produced memory savings close to the optimal in most cases. There was, however, some room for improvement in the Fc$\\epsilon$RI model (Fig.\u00a0C). This is because the fourth and fifth most populated species for this model were complexes comprised of five molecular subunits (see Dataset\u00a0S1). Since we did not anticipate this result, these high-population species were not included in the hand-picked lumping set. The majority of the memory savings seen in Fig.\u00a0C for thresholds $>32$ are due to lumping of these species. Thus, our results also illustrate the value of using a more systematic approach to selecting population species in some cases. <\/span>\n\n It is also interesting to note in Figs.\u00a0C and D the presence of an optimal lumping threshold between the maximum and minimum values considered. At high thresholds, most species are treated as particles and higher memory use is expected. At low thresholds, however, the higher memory use is due to the larger size of the partially-expanded network. Also interesting is that the run time results in Fig.\u00a0 show a weak (if any) dependence on the chosen threshold, despite the fact that the time complexity of network-free methods scales linearly with rule set size (Table\u00a0). Presumably, this is because the lower cost operations (increment\/decrement) associated with the population species offset the increased cost of larger rule sets. This robustness of the time cost with respect to the size of the lumping set is a positive attribute of the HPP method. <\/span>\n\n# Discussion\n\nWe have presented a hybrid particle\/population simulation approach for rule-based models of biological systems. The HPP approach is applied in two stages (Fig.\u00a0): (i) transformation of a rule-based model into a dynamically-equivalent hybrid form by partially expanding the network around a selected set of population species; (ii) simulation of the transformed model using a population-adapted network-free simulator. The method is formally exact for an infinite population lumping rate constant, but can produce statistically exact results in practice provided that a sufficiently large value is used (Figs.\u00a0\u2013, panels C and D). As currently implemented, the primary advantage of the HPP method is in reducing memory usage during simulation (Figs.\u00a0\u2013, panels A). Importantly, this is accomplished with little to no impact on simulation run time (Figs.\u00a0\u2013, panels B).\n\nWe have shown that peak memory use for HPP scales linearly with particle number (with a slope that is smaller than for ; Figs.\u00a0\u2013, panels A) and confirmed that when network generation is possible SSA memory use is approximately independent of particle number (Figs.\u00a0A and A). 
At the system volumes that we have considered here, HPP memory use is significantly less than for SSA.<\/span> However, the linear scaling of HPP and the constant scaling of SSA indicate that with further increases in the system volume there will invariably come a point where HPP memory use exceeds that of SSA. This is because species that are rare at small volumes, and hence chosen to be treated as particles, become plentiful at large volumes. Intuitively, a partially-expanded network should never require more memory than a fully-enumerated network. However, as currently implemented, there is no way to strictly enforce this restriction because HPP requires that population species be chosen prior to PNE. <\/span>\n\n In Fig.\u00a0, we have shown how a systematic approach to choosing population species can optimize memory usage for a given system volume. However, this approach requires running an NFsim pre-simulation, which may not be feasible for systems with extremely large numbers of particles (e.g., whole cells). Thus, we propose to develop a more general version of HPP that dynamically tracks the populations of species during the course of a simulation and automatically selects those to treat as population variables based on some criteria, e.g., that their population exceeds a certain threshold. In this automated version of HPP (aHPP), PNE would be performed every time a new species is lumped. If all species in the system become lumped then the network will naturally become fully enumerated. Hence, the memory load will never exceed that of the fully-expanded network. <\/span> In Fig.\u00a0, we provide a qualitative sketch of how we expect<\/span> the memory usage of this hypothetical aHPP method to scale with system volume (particle number). Included for comparison are scalings for HPP, , and SSA. For models with finite networks (such as Fc$\\epsilon$RI and EGFR), aHPP memory use should plateau once the entire reaction network has been generated.<\/span> For models with infinite networks (such as TLBR and Actin), we expect aHPP memory use at large volumes to scale somewhere between constant and linear (no worse than HPP) depending on the model. A detailed analysis of the space complexity of a hypothetical, \"optimal\" aHPP method is provided in Sec.\u00a0S2 of supplementary Text\u00a0S1.<\/span>\n\nIn order to frame our results within a real-world context, we have estimated the cost of simulation based on hourly rates of on-demand instances on the Amazon Elastic Compute Cloud (EC2). In Fig.\u00a0, we show the hourly cost (per \"effective compute unit\") of simulation as a function of required memory per simulation (details of the calculation can be found in Sec.\u00a0S1 of Text\u00a0S1). Also included in the plot are values for HPP (0.3\u00a0GB), (2.1\u00a0GB), and SSA (22.0\u00a0GB) simulations of the EGFR model at cell fraction $f\\!=\\!1$ (Fig.\u00a0A). Our calculations show that below 1.82\u00a0GB of required memory *High-CPU* instances are the most cost effective. Above this threshold *High-Memory* instances are the better option. The HPP simulation falls below this cutoff while both and SSA lie above. There is a quantifiable benefit, therefore, to reducing memory usage in this case; HPP simulations on the EC2 would be $\\sim$``{=html}2.5 and $\\sim$``{=html}33 times less expensive, respectively, than and SSA (HPP is slightly faster than and significantly faster than SSA; Fig.\u00a0B). 
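The instance-selection logic behind this comparison can be summarised in a few lines. In the sketch below, the 1.82 GB crossover and the per-method memory footprints are taken from the text, while the hourly rates are placeholders standing in for the values derived in Text S1, so the printed numbers are illustrative only.

```python
def instance_class(required_gb, crossover_gb=1.82):
    """Cheaper instance family for a given per-simulation memory requirement."""
    return "High-CPU" if required_gb < crossover_gb else "High-Memory"

# Hourly cost per effective compute unit (placeholder values, not Text S1).
hourly_rate = {"High-CPU": 0.02, "High-Memory": 0.06}

# Peak memory per simulation of the EGFR model at cell fraction f = 1.
memory_gb = {"HPP": 0.3, "network-free": 2.1, "SSA": 22.0}

for method, gb in memory_gb.items():
    family = instance_class(gb)
    print(f"{method:>12s}: {gb:5.1f} GB -> {family:>11s} "
          f"at {hourly_rate[family]:.2f} $/h per compute unit")
```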
Thus, the reduction in memory usage offered by HPP is not simply of academic interest but can impact, in a tangible way, the cost of doing computational research.<\/span>\n\n Finally, even greater benefits are possible if, in addition to reducing memory usage, the speed of HPP simulations can be increased. $\\tau$\u00a0leaping is an approach for accelerating stochastic simulations of chemically reactive systems. With a few exceptions (e.g., Ref.\u00a0), $\\tau$\u00a0leaping has been applied primarily to fully-enumerated reaction networks. We believe that the HPP method provides a unique setting for the application of $\\tau$-leaping because, <\/span> unlike in pure particle-based methods, there exists a partial network of reactions that act on population species. Thus, a network-based $\\tau$-leaping method can be applied exclusively to the population component of a system while retaining the network-free approach in the particle component.<\/span> We have recently implemented a $\\tau$-leaping variant in , known as the partitioned-leaping algorithm , and are actively working on integrating it with the HPP.\n\n# Supporting Information\n\n**Dataset\u00a0S1.** Average species populations from NFsim pre-simulations ($f=0.01$) of all example models considered in Fig.\u00a0.<\/span>\n\n**Figure\u00a0S3.** Average number of reactions that must be updated after each reaction firing (i.e, dependencies) for a collection of Fc$\\epsilon$RI signaling models of varying network size (all models are included in BioNetGen\u00a02.2.5 release available at ).<\/span>\n\n**Text\u00a0S1.** Sec.\u00a0S1<\/u>: Details of the monetary cost analysis shown in Fig.\u00a0; Sec.\u00a0S2<\/u>: Space complexity analyses for the network-based SSA, network-free, HPP, and hypothetical aHPP methods<\/span> (Fig.\u00a0); Sec.\u00a0S3<\/u>: Overview of BNGL, model files, and running HPP simulations with \/<\/span>; Sec.\u00a0S4<\/u>: BNGL formalism and the formal foundation of the PNE algorithm (with pseudocode).<\/span>\n\n**Text\u00a0S2.** Complete BNGL file for the simple receptor activation model of Fig.\u00a0 (`receptor_activation.bngl`).\n\n**Text\u00a0S3.** HPP configuration file for the simple receptor activation model, including population mapping rules and instructions for executing NFsim and HPP simulations (`run_receptor_activation.bngl`). <\/span>\n\n**Text\u00a0S4.** Partially-expanded (HPP) version of the simple receptor activation model of Fig.\u00a0 generated using the method outlined in Fig.\u00a0 (`receptor_activation_hpp.bngl`).\n\n**Text\u00a0S5.** BNGL file for the TLBR model (`tlbr.bngl`).\n\n**Text\u00a0S6.** HPP configuration file for the TLBR model (`run_tlbr.bngl`). <\/span>\n\n**Text\u00a0S7.** HPP version of the TLBR model (`tlbr_hpp.bngl`).\n\n**Text\u00a0S8.** BNGL file for the Actin model (`actin_simple.bngl`).\n\n**Text\u00a0S9.** HPP configuration file for the Actin model (`run_actin_simple.bngl`). <\/span>\n\n**Text\u00a0S10.** HPP version of the Actin model (`actin_simple_hpp.bngl`).\n\n**Text\u00a0S11.** BNGL file for the Fc$\\epsilon$RI model (`fceri_gamma2.bngl`).\n\n**Text\u00a0S12.** HPP configuration file for the Fc$\\epsilon$RI model (`run_fceri_gamma2.bngl`). 
<\/span>\n\n**Text\u00a0S13.** HPP version of the Fc$\\epsilon$RI model with free ligand treated as the only population species (`fceri_gamma2_hpp1.bngl`).\n\n**Text\u00a0S14.** HPP version of the Fc$\\epsilon$RI model with free ligand, cytosolic Lyn and all four phosphorylation states of cytosolic Syk treated as population species (`fceri_gamma2_hpp6.bngl`).\n\n**Text\u00a0S15.** BNGL file for the EGFR model (`egfr_extended.bngl`).\n\n**Text\u00a0S16.** HPP configuration file for the EGFR model (`run_egfr_extended.bngl`). <\/span>\n\n**Text\u00a0S17.** HPP version of the EGFR model (`egfr_extended_hpp.bngl`).","meta":{"dup_signals":{"dup_doc_count":12,"dup_dump_count":4,"dup_details":{"curated_sources":1,"2024-26":1,"2024-30":1,"unknown":9}},"filename":"out\/1301.6854_extract_hpp_plos_v2.tex.md"},"subset":"arxiv"} +{"text":"bibliography: MMPRefs.bib\n\n# Abstract\n\nWe present schema redescription as a methodology to characterize canalization in automata networks used to model biochemical regulation and signalling. In our formulation, canalization becomes synonymous with redundancy present in the logic of automata. This results in straightforward measures to quantify canalization in an automaton (micro-level), which is in turn integrated into a highly scalable framework to characterize the collective dynamics of large-scale automata networks (macro-level). This way, our approach provides a method to link micro- to macro-level dynamics \u2013 a crux of complexity. Several new results ensue from this methodology: uncovering of dynamical modularity (modules in the dynamics rather than in the structure of networks), identification of minimal conditions and critical nodes to control the convergence to attractors, simulation of dynamical behaviour from incomplete information about initial conditions, and measures of macro-level canalization and robustness to perturbations. We exemplify our methodology with a well-known model of the intra- and inter cellular genetic regulation of body segmentation in *Drosophila melanogaster*. We use this model to show that our analysis does not contradict any previous findings. But we also obtain new knowledge about its behaviour: a better understanding of the size of its wild-type attractor basin (larger than previously thought), the identification of novel minimal conditions and critical nodes that control wild-type behaviour, and the resilience of these to stochastic interventions. Our methodology is applicable to any complex network that can be modelled using automata, but we focus on biochemical regulation and signalling, towards a better understanding of the (decentralized) control that orchestrates cellular activity \u2013 with the ultimate goal of explaining how do cells and tissues 'compute'.\n\n# Introduction and background\n\nThe notion of *canalization* was proposed by Conrad Waddington to explain why, under genetic and environmental perturbations, a wild-type phenotype is less variable in appearance than most mutant phenotypes during development. Waddington's fundamental hypothesis was that the robustness of wild-type phenotypes is the result of a *buffering of the developmental process*. This led Waddington to develop the well-known concept of *epigenetic landscape* , where cellular phenotypes are seen, metaphorically, as marbles rolling down a sloped and ridged landscape as the result of interactions amongst genes and epigenetic factors. 
The marbles ultimately settle in one of the valleys, each corresponding to a stable configuration that can be reached via the dynamics of the interaction network. In this view, genetic and epigenetic perturbations can only have a significant developmental effect if they force the natural path of the marbles over the ridges of the epigenetic landscape, thus making them settle in a different valley or stable configuration.\n\nCanalization, understood as the buffering of genetic and epigenetic perturbations leading to the stability of phenotypic traits, has re-emerged recently as a topic of interest in computational and systems biology . However, canalization is an emergent phenomenon because we can consider the stability of a phenotypic trait both at the micro-level of biochemical interactions, or at the macro-level of phenotypic behaviour. The complex interaction between micro- and macro-level thus makes canalization difficult to study in biological organisms \u2013 but the field of complex systems has led to progress in our understanding of this concept. For instance, Conrad provided a still-relevant treatment of evolvability by analysing the tradeoff between genetic (micro-level) instability and phenotypic (macro-level) stability. This led to the concept of *extra-dimensional bypass*, whereby most genetic perturbations are buffered to allow the phenotype to be robust to most physiological perturbations, but a few genetic perturbations (e.g. the addition of novel genetic information) provide occasional instability necessary for evolution. Conrad highlighted three (micro-level) features of the organization of living systems that allows them to satisfy this tradeoff: *modularity* (or compartmentalization), *component redundancy*, and *multiple weak interactions*. The latter two features are both a form of redundancy, the first considering the redundancy of components and the second considering the redundancy of interactions or linkages. Perhaps because micro-level redundancy has been posited as one of the main mechanisms to obtain macro-level robustness, the term canalization has also been used \u2013 especially in discrete mathematics \u2013 to characterize redundant properties of automata functions, particularly when used to model micro-level dynamical interactions in models of genetic regulation and biochemical signalling.\n\nAn automaton is typically defined as *canalizing* if there is at least one state of at least one of its inputs that is sufficient to control the automaton's next state (henceforth *transition*), regardless of the states of any other inputs . Clearly, this widely used definition refers to micro-level characteristics of the components of multivariate discrete dynamical systems such as automata networks, and not to canalization as the emergent phenomenon outlined above. Nonetheless, using this definition, it has been shown that (1) canalizing functions are widespread in eukaryotic gene-regulation dynamics ; (2) genetic regulatory networks modelled with canalizing automata are always stable ; and (3) realistic biological dynamics are naturally observed in networks with scale-free connectivity that contain canalizing functions . 
These observations suggest that the redundancy captured by this micro-level definition of canalization is a mechanism used to obtain stability and robustness at the macro-level of phenotypic traits.\n\nSince the proportion of such 'strictly' canalizing functions drops abruptly with the number of inputs ($k$) , it was at first assumed that (micro-level) canalization does not play a prominent role in stabilizing complex dynamics of gene regulatory networks. However, when the concept of canalization is extended to include *partially canalizing* functions, where subsets of inputs can control the automaton's transition, the proportion of available canalizing automata increases dramatically even for automata with many inputs . Furthermore, partial canalization has been shown to contribute to network stability, without a detrimental effect on 'evolvability' . Reichhardt and Bassler, point out that, even though strictly canalizing functions clearly contribute to network stability, they can also have a detrimental effect on the ability of networks to adapt to changing conditions \u2013 echoing Conrad's tradeoff outlined above. This led them to consider the wider class of partially canalizing functions that confer stable network dynamics, while improving adaptability. A function of this class may ignore one or more of its inputs given the states of others, but is not required to have a single canalizing input. For example, if a particular input is *on*, the states of the remaining inputs are irrelevant, but if that same input is *off*, then the state of a subset of its other inputs is required to determine the function's transition. In scenarios where two or more inputs are needed to determine the transition, the needed inputs are said to be *collectively canalizing*.\n\nReichhardt and Bassler have shown that the more general class of partially canalizing functions dominates the space of Boolean functions for any number of inputs $k$. Indeed, for any value of $k$, there are only two *non-canalizing functions* that always depend on the states of all inputs. Other classes of canalizing functions have been considered, such as *nested canalizing* functions , *Post classes* and *chain functions* . All these classes of functions characterize situations of input redundancy in automata. In other words, micro-level canalization is understood as a form of redundancy, whereby a subset of input states is sufficient to guarantee transition, and therefore its complement subset of input states is redundant. This does not mean that redundancy is necessarily the sole \u2013 or even most basic \u2013 mechanism to explain canalization at the macro-level. But the evidence we reviewed above, at the very least, strongly suggests that micro-level redundancy is a key mechanism to achieve macro-level canalization. Other mechanisms are surely at play, such as the topological properties of the networks of micro-level interactions. Certainly, modularity, as suggested by Conrad, plays a role in the robustness of complex systems and has rightly received much attention recently . While we show below that some types of modularity can derive from micro-level redundancy, other mechanisms to achieve modularity are well-known .\n\nHere, we explore partial canalization, as proposed by Reichhardt and Bassler, to uncover the loci of control in complex automata networks, particularly those used as models of genetic regulation and signalling. 
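For readers who prefer a concrete test, the strict definition above can be checked directly on a look-up table; partially canalizing functions generalize this test by allowing subsets of input states, rather than a single input state, to fix the output. A minimal sketch, in which the dictionary encoding and function name are our own:

```python
from itertools import product

def is_strictly_canalizing(lut, k):
    """True if some input, in some state, fixes the output regardless of the rest.

    `lut` maps every k-tuple of 0/1 input states to the automaton's next state.
    """
    for i in range(k):                      # candidate canalizing input
        for s in (0, 1):                    # candidate canalizing state
            outputs = {lut[cond] for cond in product((0, 1), repeat=k)
                       if cond[i] == s}
            if len(outputs) == 1:           # output is fixed by input i = s
                return True
    return False

# x AND (y OR z) is strictly canalizing (x = 0 forces output 0); XOR is not.
f_and_or = {c: c[0] & (c[1] | c[2]) for c in product((0, 1), repeat=3)}
f_xor    = {c: c[0] ^ c[1] for c in product((0, 1), repeat=2)}
print(is_strictly_canalizing(f_and_or, 3))   # True
print(is_strictly_canalizing(f_xor, 2))      # False
```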
Moreover, we extend this notion to consider not only (micro-level) canalization in terms of input redundancy, but also in terms of input-permutation redundancy to account for the situations in which swapping the states of (a subset) of inputs has no effect on an automaton's transition. From this point forward, when we use the term *canalization* we mean it in the micro-level sense used in the (discrete dynamical systems) literature to characterize redundancy in automata functions. Nonetheless, we show that the quantification of such micro-level redundancy uncovers important details of macro-level dynamics in automata networks used to model biochemical regulation. This allows us to better study how robustness and control of phenotypic traits arises in such systems, thus moving us towards understanding canalization in the wider sense proposed by Waddington. Before describing our methodology, we introduce necessary concepts and notations pertaining to Boolean automata and networks, as well as the segment polarity gene-regulation network in *Drosophila melanogaster*, an automata model we use to exemplify our approach.\n\n## Boolean networks\n\nThis type of discrete dynamical system was introduced to build qualitative models of genetic regulation, very amenable to large-scale statistical analysis \u2013 see for comprehensive review. A *Boolean automaton* is a binary variable, $x \\in\n\\{0,1\\}$, where state 0 is interpreted as *false* (*off* or *unexpressed*), and state 1 as *true* (*on* or *expressed*). The states of $x$ are updated in discrete time-steps, $t$, according to a *Boolean state-transition function* of $k$ inputs: $x^{t+1} = f\\left(i_1^t, ..., i_k^t\\right)$. Therefore $f: \\{0,1\\}^k \\rightarrow \\{0,1\\}$. Such a function can be defined by a *Boolean logic formula* or by a *look-up (truth) table* (LUT) with $2^{k}$ entries. An example of the former is $x^{t+1} = f(x,y,z) = x^t \\wedge (y^t \\vee\nz^t)$, or its more convenient shorthand representation $f = x \\wedge\n(y \\vee z)$, which is a Boolean function of $k=3$ input binary variables $x,y,z$, possibly the states of other automata; $\\wedge$, $\\vee$ and $\\neg$ denote logical conjunction, disjunction, and negation respectively. The LUT for this function is shown in Figure . Each LUT entry of an automaton $x$, $f_{\\alpha}$, is defined by (1) a specific *condition*, which is a conjunction of $k$ inputs represented as a unique $k$-tuple of input-variable (Boolean) states, and (2) the automaton's *next state* (transition) $x^{t+1}$, given the condition (see Figure ). We denote the entire state transition function of an automaton $x$ in its LUT representation as $F \\equiv \\{f_\\alpha: \\alpha = 1,...,2^k\\}$.\n\nA *Boolean Network* (BN) is a graph $\\mathcal{B} \\equiv (X,E)$, where $X$ is a set of $n$ Boolean automata *nodes* $x_i \\in X, i=1,...,n$, and $E$ is a set of directed edges $e_{ji} \\in\nE: x_i,x_j \\in X$. If $e_{ji} \\in E$, it means that automaton $x_j$ is an input to automaton $x_i$, as computed by $F_i$. $X_i = \\{ x_j \\in X : e_{ji} \\in E\\}$ denotes the set of input automata of $x_i$. Its cardinality, $k_i = |X_i|$, is the *in-degree* of node $x_i$, which determines the size of its LUT, $|F_i| = 2^{k{_i}}$. We refer to each entry of $F_i$ as $f_{i:\\alpha}, \\alpha = 1...2^{k{_i}}$. The *input nodes* of $\\mathcal{B}$ are nodes whose state does not depend on the states of other nodes in $\\mathcal{B}$. 
The state of *output nodes* is determined by the states of other nodes in the network, but they are not an input to any other node. Finally, the state of *inner nodes* depends on the state of other nodes and affect the state of other nodes in $\\mathcal{B}$. At any given time $t$, $\\mathcal{B}$ is in a specific *configuration* of node states, $\\bm{x}^t = \\langle x_1, x_2,\n..., x_n\\rangle$. We use the terms *state* for individual automata $(x)$ and *configuration* $(\\bm{x})$ for the collection of states of the set of automata of $\\mathcal{B}$, i.e. the collective network state.\n\nStarting from an initial configuration, $\\bm{x}^0$, a BN updates its nodes with a *synchronous* or *asynchronous* policy. The *dynamics* of $\\mathcal{B}$ is thus defined by the temporal sequence of configurations that ensue, and there are $2^n$ possible configurations. The transitions between configurations can be represented as a *state-transition graph*, $\\textrm{STG}$, where each vertex is a configuration, and each directed edge denotes a transition from $\\bm{x}^t$ to $\\bm{x}^{t+1}$. The STG of $\\mathcal{B}$ thus encodes the network's entire *dynamical landscape*. Under synchronous updating, configurations that repeat, such that $\\bm{x}^{t+\\mu} = \\bm{x}^t$, are known as *attractors*; *fixed point* when $\\mu=1$, and *limit cycle* \u2013 with period $\\mu$ \u2013 when $\\mu > 1$, respectively. The disconnected subgraphs of a STG leading to an attractor are known as *basins of attraction*. In contrast, under asynchronous updating, there are alternative configuration transitions that depend on the order in which nodes update their state. Therefore, the same initial configuration can converge to distinct attractors with some probability . A BN $\\mathcal{B}$ has a finite number $b$ of attractors; each denoted by $\\mathcal{A}_i : i=1,...,b$. When the updating scheme is known, every configuration $\\bm{x}$ is in the basin of attraction of some specific attractor $\\mathcal{A}_i$. That is, the dynamic trajectory of $\\bm{x}$ converges to $\\mathcal{A}_i$. We denote such a dynamical trajectory by $\\sigma(\\bm{x}) \\leadsto\n\\mathcal{A}_i$. If the updating scheme is stochastic, the relationship between configurations and attractors can be specified as the conditional probability $P(\\mathcal{A}_i | \\bm{x})$.\n\n## The segment polarity network\n\nThe methodology introduced in this paper will be exemplified using the well-studied Boolean model of the segment polarity network in *Drosophila melanogaster* . During the early ontogenesis of the fruit fly, like in every arthropod's development, there is body segmentation . The specification of adult cell types in each of these segments is controlled by a hierarchy of around forty genes. While the effect of most of the genes in the hierarchy is only transient, a subset of *segment polarity genes* remains expressed during the life of the fruit fly . The dynamics of the segment polarity network was originally modelled using a system of non-linear ordinary differential equations (ODEs) . This model suggested that the regulatory network of segment polarity genes is a module largely controlled by external inputs that is robust to changes to its internal kinetic parameters. On that basis, Albert and Othmer later proposed a simpler discrete BN model of the dynamics of the *segment polarity network* (henceforth SPN) . 
This was the first Boolean model of gene regulation capable of predicting the steady state patterns experimentally observed in wild-type and mutant embryonic development with significant accuracy, and has thus become the quintessential example of the power of the logical approach to modelling of biochemical regulation from qualitative data in the literature. Modelling with ODEs, in contrast, is hindered by the need of substantial quantitative data for parameter estimation .\n\nThe SPN model comprises fifteen nodes that represent intra-cellular chemical species and the genes *engrailed (en)*; *wingless (wg)*; *hedgehog (hh)*; *patched (ptc)* and *cubitus interruptus (ci)* . These genes encode a number of proteins such as the transcription factors Engrailed (EN), Cubitus Interruptus (CI), CI Activator (CIA), and CI repressor (CIR); the secreted proteins Wingless (WG) and Hedgehog (HH); and the transmembrane protein Patched (PTC). Other proteins included in the SPN model are Sloppy-Paired (SLP) \u2013 the state of which is previously determined by the *pair-rule* gene family that stabilizes its expression before the segment polarity genes \u2013 as well as Smoothened (SMO) and the PH complex that forms when HH from neighbouring cells binds to PTC. Figure shows the topology and Table lists the logical rules of the nodes in every cell of the SPN. This model consists of a spatial arrangement of four interconnected cells, a *parasegment*. While the regulatory interactions within each cell are governed by the same network, inter-cellular signalling affects neighbouring cells. That is, regulatory interactions in a given cell depend on the states of WG, $hh$ and HH in adjacent cells. Therefore, six additional (inter-cellular) 'spatial signals' are included: $hh_{i \\pm 1}$, $\\text{HH}_{i \\pm 1}$ and $\\text{WG}_{i\n \\pm 1}$, where $i = 1, ... ,4$ is the cell index in the four-cell parasegment. In a parasegment, the cell with index $i=1$ corresponds to its anterior cell and the cell with index $i=4$ to its posterior cell (see Figure ). In simulations, the four-cell parasegments assume periodic boundary conditions (i.e. anterior and posterior cells are adjacent to each other). Since each parasegment has $4 \\times 15 = 60$ nodes, four of which are in a fixed state (SLP), there are $2^{56}$ possible configurations \u2013 a dynamical landscape too large for exhaustive analysis. Even though the original model was not fully synchronous because $\\text{PH}$ and $\\text{SMO}$ were updated instantaneously at time $t$, rather than at $t+1$, here we use the fully equivalent, synchronous version. Specifically, since $\\text{PH}$ is an output node, synchronizing its transition with the remaining nodes at $t+1$ does not impact the model's dynamics. The state of $\\text{SMO}$ influences the updating of $\\text{CIA}$ and $\\text{CIR}$; but since the update of $\\text{SMO}$ is instantaneous, we can include its state-transition function in the state-transition functions of $\\text{CIA}$ and $\\text{CIR}$ (which update at $t+1$) with no change in the dynamics of the model as described in .\n\nThe initial configuration (IC) of the SPN, depicted in Figure , and which leads to the wild-type expression pattern is known : $wg_4 = en_1 = hh_1 =\nptc_{2,3,4} = ci_{2,3,4} = 1$ (*on* or expressed). The remaining nodes in every cell of the parasegment are set to 0 (*off*, or not expressed). 
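As a concrete encoding of this initial configuration, the sketch below builds it as a dictionary over (cell, node) pairs. The node list follows the fifteen species named above, but the ordering and data structure are our own choices, not part of the original model files.

```python
# The fifteen intra-cellular nodes of the SPN model, per cell.
NODES = ["SLP", "wg", "WG", "en", "EN", "hh", "HH", "ptc", "PTC",
         "PH", "SMO", "ci", "CI", "CIA", "CIR"]
CELLS = [1, 2, 3, 4]   # anterior (1) to posterior (4) cell of a parasegment

def wild_type_initial_condition():
    """Wild-type IC: wg_4 = en_1 = hh_1 = ptc_{2,3,4} = ci_{2,3,4} = 1, rest 0."""
    on = {("wg", 4), ("en", 1), ("hh", 1),
          ("ptc", 2), ("ptc", 3), ("ptc", 4),
          ("ci", 2), ("ci", 3), ("ci", 4)}
    return {(cell, node): int((node, cell) in on)
            for cell in CELLS for node in NODES}

ic = wild_type_initial_condition()
print(sum(ic.values()), "of", len(ic), "nodes are ON")   # 9 of 60
```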
Overall, the dynamics of the SPN settles to one of ten attractors \u2013 usually divided into four qualitatively distinct groups, see Figure : (1) wild-type with three extra variations (PTC mutant, double $wg$ bands, double $wg$ bands PTC mutant); (2) Broad-stripe mutant; (3) No segmentation; and (4) Ectopic (with the same variations as wild-type). Albert and Othmer estimated that the number of configurations that converge to the wild-type attractor is approximately $6\\times10^{11}$ \u2013 a very small portion of the total number of possible configurations ($\\approx 7 \\times 10^{16}$) \u2013 and that the broad-stripe mutant attractor basin contains about $90\\%$ of all possible configurations .\n\nThe inner and output nodes of each cell in a parasegment \u2013 that is, every node except the input node SLP \u2013 that has reached a stable configuration (attractor) are always in one of the following five patterns.\n\n- $I1$: all nodes are *off* except PTC, $ci$, CI and CIR.\n\n- $I2$: same as $I1$ but states of $ptc$, PH, SMO, CIA and CIR are negated.\n\n- $I3$: all nodes are *off* except $en,$ EN, $hh$, HH and SMO.\n\n- $I4$: same as $I3$ but PTC and SMO are negated.\n\n- $I5$: negation of $I4$, except PTC and CIR remain as in $I4$.\n\nFor example, the wild-type configuration corresponds \u2013 from anterior to posterior cell \u2013 to the patterns $I3$, $I2$, $I1$ and $I5$. Thus the pattern $I4$ is only seen in mutant expression patterns. The patterns $I1$ to $I5$ can be seen as attractors of the inner- and output-node dynamics of each cell in a parasegment.\n\nBesides the fact that the SPN is probably the most well-known discrete dynamical system model of biochemical regulation, we chose it to exemplify our methodology because (1) it has been well-validated experimentally, despite the assumption that genes and proteins operate like *on\/off* switches with synchronous transitions and (2) the model includes both intra-cellular regulation and inter-cellular signalling in a spatial array of cells. The intra and inter-cellular interactions in the SPN model result in a dynamical landscape that is too large to characterize via an STG, while adding also an extra level of inter-cellular (spatial) regulation. The ability to deal with such multi-level complexity makes our methodology particularly useful. As we show below, we can uncover the signals that control collective information processing in such (spatial and non-spatial) complex dynamics.\n\n# Methodology and Results\n\n## Micro-level canalization via schemata\n\nIn previous work, we used *schema redescription* to demonstrate that we can understand more about the dynamical behaviour of automata networks by analysing the patterns of *redundancy* present in their (automata) components (micro-level), rather than looking solely at their macro-level or collective behaviour . Here we relate the redundancy removed via schema redescription with the concept of *canalization*, and demonstrate that a characterization of the full canalization present in biochemical networks leads to a better understanding of how cells and collections of cells 'compute'. Moreover, we show that this leads to a comprehensive characterization of *control* in automata models of biochemical regulation. Let us start by describing the schema redescription methodology. 
Since a significant number of new concepts and notations are introduced in this, and subsequent sections, a succinct glossary of terms as well as a table with the mathematical notations used is available in *Supporting text S1*.\n\nFrom the extended view of canalization introduced earlier, it follows that the inputs of a given Boolean automaton do not control its transitions equally. Indeed, substantial redundancy in state-transition functions is expected. Therefore, filtering redundancy out is equivalent to identifying the loci of control in automata. In this section we focus on identifying the loci of control in individual automata by characterizing the canalization present in their transition functions. First, we consider how subsets of inputs in specific state combinations make other inputs *redundant*. Then we propose an additional form of canalization that accounts for *input permutations* that leave a transition unchanged. Later, we use this characterization of canalization and control in individual automata to study networks of automata; this also allows us to analyse robustness and collective computation in these networks.\n\n### Wildcard schemata and *enputs*\n\nConsider the example automaton $x$ in Figure A, where the entire subset of LUT entries in $F$ with transitions to *on* is depicted. This portion of entries in $F$ can be *redescribed* as a set of *wildcard schemata*, $F' \\equiv \\{f'_{\\upsilon}\\}$. A wildcard schema $f'_{\\upsilon}$ is exactly like a LUT entry, but allows an additional *wildcard* symbol, $\\#$ (also represented graphically in grey), to appear in its condition (see Figure B). A wildcard input means that it *accepts any state, leaving the transition unchanged*. In other words, wildcard inputs are *redundant* given the non-wildcard input states specified in a schema $f'_{\\upsilon}$. More formally, when the truth value of an input Boolean variable $i$ in a schema $f'_{\\upsilon}$ is defined by the third (wildcard) symbol, it is equivalent to stating that the truth value of automaton $x$ is unaffected by the truth value of $i$ given the conditions defined by $f'_{\\upsilon}$: $(x | f'_{\\upsilon}, i) = (x\n| f'_{\\upsilon}, \\neg i)$. Each schema redescribes a subset of entries in the original LUT, denoted by $\\Upsilon_{\\upsilon} \\equiv \\{f_{\\alpha}: f_{\\alpha}\n\\rightarrowtail f'_{\\upsilon}\\}$ ($\\rightarrowtail$ means 'is redescribed by').\n\nWildcard schemata are *minimal* in the sense that none of the (non-wildcard) inputs in the condition of a schema can be 'raised' to the wildcard status and still ensure the automaton's transition to the same state. Because wildcard schemata are minimal, $\\Upsilon_{\\upsilon}\n\\nsubseteq \\Upsilon_{\\phi} \\wedge \\Upsilon_{\\phi} \\nsubseteq\n\\Upsilon_{\\upsilon}, \\forall f'_\\upsilon, f'_\\phi \\in F'$. In other words, a wildcard schema is *unique* in the sense that the subset of LUT entries it redescribes is not fully redescribed by any other schema. However, in general $\\Upsilon_{\\upsilon} \\cap \\Upsilon_{\\phi} \\neq\n\\emptyset$. This means that schemata can overlap in terms of the LUT entries they describe. In Figure , $\\Upsilon_1\n\\equiv \\{f_1,f_5,f_9,f_{13}\\}$ and $\\Upsilon_9 \\equiv\n\\{f_4,f_5,f_6,f_7\\}$, therefore $\\Upsilon_1 \\cap \\Upsilon_9 \\equiv\n\\{f_5\\}$. The set of wildcard schemata $F'$ is also *complete*. This means that for a given LUT $F$ there is one and only one set $F'$ that contains all possible minimal and unique wildcard schemata. 
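A minimal sketch of how such a set can be computed for a small LUT is given below: conditions with the same transition are repeatedly merged whenever they differ in exactly one input state, and any condition or schema that cannot be merged further is kept. The string encoding over '0', '1' and '#' and the function names are our own illustration.

```python
from itertools import product

def merge(a, b):
    """Merge two conditions differing in exactly one position, else None."""
    diff = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
    if len(diff) == 1:
        i = diff[0]
        return a[:i] + "#" + a[i + 1:]
    return None

def wildcard_schemata(conditions):
    """All maximal schemata redescribing the given set of LUT conditions."""
    current, schemata = set(conditions), set()
    while current:
        merged, used = set(), set()
        for a in current:
            for b in current:
                m = merge(a, b)
                if m is not None:
                    merged.add(m)
                    used.update({a, b})
        schemata |= current - used       # conditions that could not be merged
        current = merged
    return schemata

# LUT entries (strings over inputs x, y, z) with transition to ON
# for the example function f = x AND (y OR z).
on_conditions = ["".join(map(str, c)) for c in product((0, 1), repeat=3)
                 if c[0] & (c[1] | c[2])]
print(sorted(wildcard_schemata(on_conditions)))   # ['1#1', '11#']
```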
Since wildcard schemata are *minimal,* *unique* and they form a *complete* set $F'$, they are equivalent to the set of *all prime implicants* obtained during the first step of the Quine & McCluskey Boolean minimization algorithm . Typically, prime implicants are computed for the fraction of the LUT that specifies transitions to *on*. Then a subset of the so-called *essential* prime implicants is identified. The set of essential prime implicants is the subset of prime implicants sufficient to describe (cover) every entry in the input set of LUT entries. However, to study how to control the transitions of automata we use the set of all prime implicants, since it encodes every possible way a transition can take place. The set $F'$ may also contain any original entry in $F$ that could not be subsumed by a wildcard schema. Although the upper bound on the size of $F'$ is known to be $O(3^k\/\\sqrt{k})$ , the more input redundancy there is, the smaller the cardinality of $F'$.\n\nThe condition of a wildcard schema can always be expressed as a logical conjunction of literals (logical variables or their negation), which correspond to its non-wildcard inputs. Since a wildcard schema is a *prime implicant*, it follows that every literal is *essential* to determine the automaton's transition. Therefore, we refer to the literals in a schema as its *essential input states*, or *enputs* for short. To summarize, each enput in a schema is essential, and the conjunction of its enputs is a sufficient condition to *control* the automaton's transition. It also follows that the set $F'$ of wildcard schemata can be expressed as a *disjunctive normal form* (DNF) \u2013 that is, a disjunction of conjunctions that specifies the list of sufficient conditions to control automaton $x$, where each disjunction clause is a schema. The DNF comprising all the prime implicants of a Boolean function $f$ is known as its *Blake's canonical form* . The canonical form of $f$ always preserves the input-output relationships specified by its LUT $F$. Therefore, the basic laws of Boolean logic \u2013 contradiction, excluded middle and de Morgan's laws \u2013 are preserved by the schema redescription.\n\nSchema redescription is related to the work of John Holland on condition\/action rules to model inductive reasoning in cognitive systems and to the general *RR framework* proposed by Annette Karmiloff-Smith to explain the emergence of internal representations and external notations in human cognition . Our methodology to remove redundancy from automata LUTs also bears similarities with the more general *mask analysis* developed by George Klir in his 'reconstructability' analysis, which is applicable to any type of variable . In addition, prime implicants have been known and used for the minimization of circuits in electrical engineering since the notion was introduced by Quine & McCluskey ; similar ideas were also used by Valiant when introducing *Probably Approximately Correct* (PAC) learning.\n\n### Two-symbol schemata\n\nWe now introduce a different and complementary form of redundancy in automata, which we consider another form of canalization. Wildcard schemata identify input states that are sufficient for controlling an automaton's transition (enputs). Now we identify subsets of inputs that can be permuted in a schema without effect on the transition it defines . 
For this, a further redescription process takes as input the set of wildcard schemata ($F'$) of $x$ to compute a set of two-symbol schemata $F'' \\equiv \\{f''_{\\theta}\\}$ (see Figure C). The additional *position-free symbol* ($\\circ_m$) above inputs in the condition of a schema $f''$ means that *any subset of inputs thus marked can 'switch places' without affecting the automaton's transition.* The index of the position-free symbol, when necessary, is used to differentiate among distinct subsets of 'permutable' inputs. A two-symbol schema $f''_{\\theta}$ redescribes a set $\\Theta_{\\theta} \\equiv\n\\{f_{\\alpha}: f_\\alpha \\rightarrowtail f''_{\\theta}\\}$ of LUT entries of $x$; it also redescribes a subset $\\Theta'_\\theta \\subseteq F'$ of wildcard schemata.\n\nA two-symbol schema $f''_{\\theta}$ captures *permutation redundancy* in a set of wildcard schemata $\\Theta'_\\theta$. More specifically, it identifies subsets of input states whose permutations do not affect the truth value of the condition, leaving the automaton's transition unchanged. In group theory, a permutation is defined as a bijective mapping of a non-empty set onto itself; a permutation group is any set of permutations of a set. Permutation groups that consist of *all* possible permutations of a set are known as *symmetric groups* under permutation . For Boolean functions in general, the study of permutation\/symmetric groups dates back to Shannon and McCluskey (see also ).\n\nTwo-symbol schemata identify subsets of wildcard schemata that form symmetric groups. We refer to each such subset of input states that can permute in a two-symbol schema \u2013 those marked with the same position-free symbol \u2013 as a *group-invariant enput*. Note that a group-invariant enput may include wildcard symbols marked with a position-free symbol. More formally, a two-symbol schema $f''$ can be expressed as a logical conjunction of enputs \u2013 literal or group-invariant. Let us denote the set of literal enputs on the condition of $f''$ by $X_\\ell \\subseteq X$ \u2013 the non-wildcard inputs not marked with the position-free symbol. For simplicity, $n_\\ell = \\left|X_\\ell\\right|$.\n\nA group-invariant enput $g$ is defined by (1) a subset of input variables $X_g \\subseteq X$ that are marked with an identical position-free symbol, and (2) a *permutation constraint* (a bijective mapping) on $X_g$ defined by the expression $n_g = n^0_g +\nn^1_g + n^{\\#}_g$, where $n_g = \\left|X_g\\right|$, $n^0_g$ is the number of inputs in $X_g$ in state $0$ (off), and $n^1_g$ is the number of inputs in $X_g$ in state $1$ (on). We further require that at least two of the quantities $n_g^0, n_g^1$ and $n_g^\\#$ are positive for any group-invariant enput $g$. We can think of these two required positive quantities as *subconstraints*; in particular, we define a group-invariant enput by the two subconstraints $n_g^0, n_g^1$, since $n_g^\\#$ is always derivable from those two given the expression for the overall permutation constraint. This precludes the trivial case of subsets of inputs in the same state from being considered a valid group-invariant enput \u2013 even though they can permute leaving the transition unchanged. A two-symbol schema $f''$ has $n_\\ell$ literal enputs and $\\eta$ group-invariant enputs; each of the latter type of enputs is defined by a distinct permutation constraint for $g = 1,..., \\eta$. 
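These constraints can be checked mechanically. The sketch below tests whether a given input configuration satisfies a two-symbol schema with literal enputs and a single group-invariant enput; the encoding is an assumption made for illustration, and the usage values anticipate the worked example discussed just below.

```python
def satisfies(config, literals, group, n0, n1):
    """Does `config` (dict input -> 0/1) satisfy a two-symbol schema?

    `literals` maps literal-enput inputs to their required states;
    `group` is the set X_g of symmetric inputs, of which at least `n0`
    must be off and at least `n1` must be on (the permutation subconstraints).
    """
    if any(config[i] != state for i, state in literals.items()):
        return False
    zeros = sum(1 for i in group if config[i] == 0)
    ones = sum(1 for i in group if config[i] == 1)
    return zeros >= n0 and ones >= n1

# Schema of the running example: i2 = 0, i3 = 1, and among {i1, i4, i5, i6}
# at least one input off and at least one on (n_g^0 = n_g^1 = 1).
literals = {"i2": 0, "i3": 1}
group = {"i1", "i4", "i5", "i6"}

print(satisfies({"i1": 0, "i2": 0, "i3": 1, "i4": 1, "i5": 1, "i6": 1},
                literals, group, 1, 1))   # True: one group input off, three on
print(satisfies({"i1": 1, "i2": 0, "i3": 1, "i4": 1, "i5": 1, "i6": 1},
                literals, group, 1, 1))   # False: no group input is off
```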
An input variable whose truth value is the wildcard symbol in a given schema is never a literal enput (it is not essential by definition). However, it can be part of a group-invariant enput if it is marked with a position-free symbol. Further details concerning the computation of wildcard and two-symbol schemata are available in *Supporting text S2*.

In our working example, the resulting two-symbol schema (see Figure C) contains $n_\ell=2$ literal inputs: $X_\ell = \{i_2=0, i_3=1\}$. It also contains one ($\eta=1$) group-invariant enput $X_g = \{i_1,i_4,i_5,i_6\}$ with size $n_g=4$ and subconstraints $n^0_g = 1 \wedge n^1_g = 1$. This redescription reveals that the automaton's transition to *on* is determined only by a subset of its six inputs: as long as inputs 2 and 3 are *off* and *on*, respectively, and among the others at least one is *on* and another is *off*, the automaton will transition to *on*. These minimal control constraints are not obvious in the original LUT and are visible only after redescription.

We equate *canalization* with redundancy. The more redundancy exists in the LUT of automaton $x$, as input-irrelevance or input-symmetry (group-invariance), the more canalizing it is, and the more compact its two-symbol redescription is, thus $|F''| < |F|$. In other words, after redescription, input and input-symmetry redundancy in $F$ is removed in the form of the two symbols. The input states that remain are essential to determine the automaton's transition. Below we quantify these two types of redundancy, leading to two new measures of canalization. To that end, we must first clearly separate the two forms of redundancy that exist in two-symbol schemata. The *condition* of a two-symbol schema $f''$ with a single group-invariant enput $g$ – such as the one in Figure C – can be expressed as:

$$\displaystyle{
\bigwedge_{i_j \in X_\ell^0} \neg i_j \bigwedge_{i_j \in X_\ell^1} i_j \wedge
\left(\sum_{i_j \in X_g} \neg i_j \geq n^0_g \right) \wedge \left(\sum_{i_j \in X_g} i_j \geq n^1_g \right)}
\label{si:canalogic}$$

where $X_\ell^0$ is the set of literal enputs that must be *off*, and $X_\ell^1$ is the set of literal enputs that must be *on* (thus $X_\ell = X_\ell^1 \cup X_\ell^0$). This expression separates the contributions (as conjunctions) of the literal enputs and each subconstraint of a group-invariant enput. Since we found no automaton in the target model (see below) with schemata containing more than one group-invariant enput, for simplicity and without loss of generality, we present here only this case ($\eta=1$). See *Supporting text S3* for the general expression that accounts for multiple group-invariant enputs ($\eta > 1$).

All possible transitions of $x$ to *on* are described by a set $F_1''$ of two-symbol schemata. This set can also be expressed in a DNF, where each disjunction clause is given by Expression for all schemata $f'' \in F_1''$. Transitions to *off* are defined by the negation of this DNF expression: $F_0'' \equiv \left\{\neg f'', \forall f'' \in F_1'' \right\}$. Canalization of an automaton $x$ is now characterized in terms of two-symbol schemata that capture two forms of redundancy: (1) *input-irrelevance* and (2) *input-symmetry* (group-invariance). We next describe the procedure to compute two-symbol schemata for an automaton $x$.
Readers not interested in the algorithmic details of this computation can safely move to the next subsection.

The procedure starts with the set of wildcard schemata $F'$ obtained via the first step of the Quine & McCluskey algorithm (see above). The set $F'$ is then partitioned into subsets $H'_i$ such that

$$F' = \bigcup_i H'_i$$

where each $H'_i$ contains every schema $x' \in F'$ with an equal number of zeroes ($n^0$), ones ($n^1$), and wildcards ($n^\#$), with $n^0 + n^1 + n^\# = k$. In other words, the $H'_i$ are equivalence classes induced on $F'$ by $n^0$, $n^1$, and $n^\#$. This is a necessary condition for a set of wildcard schemata to form a symmetric group. The algorithm then iterates on each $H'_i$, checking whether it contains a symmetric group, i.e. whether it contains wildcard schemata with all the permutations of the largest possible set of input variables. If it does, it marks those input variables as a group-invariant enput in $H'_i$ and moves to another subset $H'_j$. If it does not, it checks for symmetric groups in smaller sets of input variables within each set $H'_i$. It does this by iteratively expanding the search space to include all subsets of $H'_i$ with cardinality $|H'_i|-1$. If no symmetric groups are found, the procedure is repeated until the subsets contain only one wildcard schema.

Although several heuristics are implemented to prune the search space, the algorithm is often not suitable for exhaustively searching symmetric groups in large sets of schemata. However, the individual automata found in models of biochemical regulation and signalling networks typically have a fairly low number of inputs. Therefore, schema redescription of their LUTs leads to manageable sets of wildcard schemata, which can be exhaustively searched for symmetric groups. Indeed, as shown below, all automata in the SPN model have been exhaustively redescribed into two-symbol schemata. For additional details on the computation of schemata see *Supporting text S2*.

## Quantifying canalization: effective connectivity and input symmetry

Schemata uncover the 'control logic' of automata by making explicit the smallest input combinations that are necessary and sufficient to determine transitions. We equate canalization with the redundancy present in this control logic: the smaller the set of inputs needed to control an automaton, the more redundancy exists in its LUT and the more canalizing it is. This first type of canalization is quantified by computing the mean number of unnecessary inputs of automaton $x$, which we refer to as *input redundancy*. An upper bound is given by

$$\overline{k}_{\textrm{r}}(x) = \frac{ \displaystyle \sum_{f_{\alpha} \in F}
 \max_{ {\theta : f_\alpha \in \Theta_\theta}}\left(n_\theta^\# \right)}{|F|}
\label{upper_k_red}$$

and a lower bound is given by

$$\underline{k}_{\textrm{r}}(x) = \frac{ \displaystyle \sum_{f_{\alpha} \in F}
 \min_{ {\theta : f_\alpha \in \Theta_\theta}} \left(n_\theta^\# \right)}{|F|}
\label{lower_k_red}$$

These expressions compute a mean number of irrelevant inputs associated with the entries of the LUT $F$. The number of irrelevant inputs in a schema $f''_{\theta}$ is the number of its wildcards, $n_{\theta}^{\#}$.
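Both bounds are easy to compute once each LUT entry is paired with the schemata that redescribe it. The sketch below evaluates the upper bound for the small example function $f = x \wedge (y \vee z)$, using its wildcard schemata, which for this function carry the same wildcard counts as its two-symbol schemata; the string encoding and helper names are our own.

```python
from itertools import product

def redescribes(schema, entry):
    """True if a wildcard schema covers a fully specified LUT condition."""
    return all(s in ("#", e) for s, e in zip(schema, entry))

def input_redundancy_upper(lut, schemata):
    """Upper bound on k_r(x): mean, over all LUT entries, of the maximum
    number of wildcards among the schemata that redescribe each entry.

    `lut` maps condition strings to transitions (0/1); `schemata` maps each
    transition to the schema strings describing it.
    """
    total = 0
    for entry, out in lut.items():
        total += max(s.count("#") for s in schemata[out] if redescribes(s, entry))
    return total / len(lut)

# Example: f = x AND (y OR z), inputs ordered (x, y, z).
lut = {"".join(map(str, c)): c[0] & (c[1] | c[2])
       for c in product((0, 1), repeat=3)}
schemata = {1: ["11#", "1#1"],          # transitions to ON
            0: ["0##", "#00"]}          # transitions to OFF
print(input_redundancy_upper(lut, schemata))   # 1.5
```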
Because each entry $f_{\alpha}$ of $F$ is redescribed by one or more schemata $f''_{\theta}$, there are various ways to compute a characteristic number of irrelevant inputs associated with the entry, which is nonetheless bounded by the maximum and minimum number of wildcards in the set of schemata that redescribe $f_{\alpha}$. Therefore, the expressions above identify all schemata $f''_{\theta}$ whose set of redescribed entries $\Theta_{\theta}$ includes $f_{\alpha}$. The upper (lower) bound of input redundancy, Equation (Equation ), corresponds to considering the maximum (minimum) number of irrelevant inputs found over all schemata $f''_{\theta}$ that redescribe entry $f_{\alpha}$ of the LUT – an optimistic (pessimistic) quantification of this type of canalization. Notice that input redundancy is not an estimated value. Also, it weights each entry of the LUT equally, which is the same as assuming that every automaton input is equally likely.

Here we use solely the upper bound, which we refer to henceforth simply as *input redundancy*, with the notation $k_{\textrm{r}}(x)$. This means that we assume that the most redundant schemata are always accessible for control of the automaton. We will explore elsewhere the range between the bounds, especially in regard to predicting the dynamical behaviour of BNs. The range for input redundancy is $0 \le k_{\textrm{r}}(x) \le k$, where $k$ is the number of inputs of $x$. When $k_{\textrm{r}}(x) = k$ we have full input irrelevance, or maximum canalization, which occurs only in the case of frozen-state automata. If $k_{\textrm{r}}(x) = 0$, the state of every input is always needed to determine the transition and we have no canalization in terms of input redundancy.

In the context of a BN, if some inputs of a node $x$ are irrelevant from a control-logic perspective, then its *effective* set of inputs is smaller than its in-degree $k$. We can thus infer more about effective control in a BN than what is apparent from looking at structure alone (see the analysis of macro-level control below). A very intuitive way to quantify such effective control is by computing the mean number of inputs needed to determine the transitions of $x$, which we refer to as its *effective connectivity*:

$$k_{\textrm{e}}(x) = k(x) - k_{\textrm{r}}(x)
\label{lower_eff_k}$$

whose range is $0 \le k_{\textrm{e}}(x) \le k$. In this case, $k_{\textrm{e}}(x) = 0$ means full input irrelevance, or maximum canalization, and $k_{\textrm{e}}(x) = k$ means no canalization.

The type of canalization quantified by the input redundancy and effective connectivity measures does not include the form of permutation redundancy entailed by group-invariant enputs. Yet this is a genuine form of redundancy involved in canalization, as in the case of nested canalization , since it corresponds to the case in which different inputs can be *alternatively* canalizing. The two-symbol schema redescription allows us to measure this form of redundancy by computing the mean number of inputs that participate in group-invariant enputs, easily tallied by the occurrence of the position-free symbol ($\circ$) in schemata.
Thus we define a measure of *input symmetry* for an automaton $x$, whose upper bound is given by

$$\overline{k}_{\textrm{s}}(x) = \frac{ \displaystyle \sum_{f_{\alpha} \in F}
 \max_{ {\theta : f_\alpha \in \Theta_\theta}}\left( n_\theta^\circ \right) }{|F|}
\label{eff_sym_upper}$$

and a lower bound by

$$\underline{k}_{\textrm{s}}(x) = \frac{ \displaystyle \sum_{f_{\alpha} \in F}
 \min_{ {\theta : f_\alpha \in \Theta_\theta}}\left( n_\theta^\circ \right) }{|F|}
\label{eff_sym_lower}$$

where $n_\theta^\circ$ is the number of position-free symbols in schema $f''_{\theta}$.

The upper bound of input symmetry (Equation ) corresponds to an optimistic quantification of this type of canalization. Here we use solely the upper bound, which we refer to henceforth simply as input symmetry and denote by ${k}_{\textrm{s}}(x)$. Again, the assumption is that the most redundant schemata are always accessible for control of the automaton. The range for input symmetry is $0 \le {k}_{\textrm{s}}(x) \le k$. High (low) values mean that permutations of input states are likely (unlikely) to leave the transition unchanged.

Canalization in automata LUTs – the micro-level of networks of automata – is then quantified by two types of redundancy: *input redundancy*, using $k_{\textrm{r}}(x)$, and *input symmetry*, with $k_{\textrm{s}}(x)$. To compare the canalization in automata with distinct numbers of inputs, we can compute *relative* measures of canalization:

$$k_{\textrm{r}}^{*}(x) = \frac{k_{\textrm{r}}(x)}{k(x)}; \quad k_{\textrm{s}}^{*}(x) = \frac{k_{\textrm{s}}(x)}{k(x)}
\label{relative_canalization_measures}$$

the range of which is $[0, 1]$. Automata transition functions can have different amounts of each form of canalization, which allows us to consider four broad canalization classes for automata: *class A* with high $k_{\textrm{r}}(x)$ and high $k_{\textrm{s}}(x)$, *class B* with high $k_{\textrm{r}}(x)$ and low $k_{\textrm{s}}(x)$, *class C* with low $k_{\textrm{r}}(x)$ and high $k_{\textrm{s}}(x)$, and *class D* with low $k_{\textrm{r}}(x)$ and low $k_{\textrm{s}}(x)$. We will explore these classes in more detail elsewhere. Below, these measures are used to analyse micro-level canalization in the SPN model and to discuss the types of functions encountered. Before that, let us introduce an alternative representation of the canalized control logic of automata, which allows us to compute network dynamics directly from the parsimonious information provided by schemata.

## Network representation of a schema

Canalization in an automaton, captured by a set of schemata, can also be conveniently represented as a McCulloch & Pitts threshold network – introduced in the 1940s to study computation in interconnected simple logical units . These networks consist of binary units that can transition from quiescent to firing upon reaching an activity threshold ($\tau$) of the firing of input units. To use this type of network to represent two-symbol schemata we resort to two types of units. One is the *state unit* (s-unit), which represents an input variable in a specific Boolean state; the other is the *threshold unit* (t-unit), which implements the condition that causes the automaton to transition.
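The firing rule of a t-unit can be stated in a single line: it fires when at least $\tau$ of its incoming units are firing. A minimal sketch, with pulses encoded as booleans (our own convention):

```python
def t_unit_fires(incoming, tau):
    """A threshold unit fires iff at least tau of its incoming units fire."""
    return sum(1 for pulse in incoming if pulse) >= tau

# A unit with threshold 2 receiving pulses from three s-units.
print(t_unit_fires([True, False, True], tau=2))   # True
print(t_unit_fires([True, False, False], tau=2))  # False
```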
Two s-units are always used to represent the (Boolean) states of any input variable that participates as enput in the condition of an automaton $x$: one fires when the variable is *on* and the other when it is *off*. To avoid contradiction, the two s-units for a given variable cannot fire simultaneously. Directed fibres link (source) units to (end) units, propagating a pulse \u2013 when the source unit is firing \u2013 that contributes to the firing of the end unit. The simultaneous firing of at least $\\tau$ (threshold) incoming s-units into a t-unit, causes the latter to fire.\n\nIn the example automaton in Figure , the set of schemata $F''$ contains only one schema. This schema can be directly converted to a (2-layer) McCulloch & Pitts network. This conversion, which is possible due to the separation of subconstraints given by Expression , is shown in Figure and explained in its caption. Note that in the McCulloch & Pitts representation, the transition of the automaton is determined in two steps. First, a *layer* of threshold units is used to check that the literal and group-invariant constraints are satisfied; then, a second layer \u2013 containing just one threshold unit \u2013 fires when every subconstraint in Expression has been simultaneously satisfied, determining the transition. This means that in this network representation each schema with literal enputs and at least a group-invariant enput requires two layers and three t-units. Since in McCulloch & Pitts networks each threshold unit has a standard delay of one time step, this network representation of a schema takes two time steps to compute its transition. We introduce an alternative threshold network representation of a two-symbol schema $f''$ that only requires a single t-unit and takes a single time delay to compute a transition. We refer to this variant as the *Canalizing Map* of a schema or CM for short. A CM is essentially the same as a McCulloch and Pitts network, with the following provisos concerning the ways in which s-units and t-units can be connected:\n\n1. Only one fibre originates from each s-unit that can participate as enput in $f''$ and it must always end in the t-unit used to encode $f''$.\n\n2. The fibre that goes from a s-unit to the t-unit can *branch out* into several fibre endings. This means that if the s-unit is firing, a pulse propagates through its outgoing fibre and through its branches. Branching fibres are used to capture group-invariant enputs, as we explain later.\n\n3. Branches from distinct s-units can *fuse* together into a single fibre ending \u2013 the fused fibre increases the end t-unit's firing activity by one if at least one of the fused fibres has a pulse.\n\n4. A fibre that originates in a t-unit encoding a schema $f''$ must end in a s-unit that corresponds to the automaton transition defined by $f''$.\n\nFigure depicts the elements of a single schema's CM. Table summarizes the rules that apply to the interconnections between units. Transitions in CMs occur in the same way as in standard McCulloch & Pitts networks. Once sufficient conditions for a transition are observed at some time $t$, the transition occurs at $t+1$. The firing (or not) of t-units is thus assumed to have a standard delay (one time-step), identical for all t-units. Note that in CMs, s-units can be regarded as a special type of t-unit with threshold $\\tau=1$ that send a pulse through their outgoing fibres instantaneously. Next we describe the algorithm to obtain the CM representation of a schema. 
Readers not interested in the algorithmic details of this computation can safely bypass the next subsection.\n\n### Algorithm to obtain the canalizing map of a schema\n\nGiven a 2-symbol schema $f''$, there are two steps involved in producing its CM representation. The first is connecting s-units to the t-unit for $f''$ in such a way that it fires, if and only if, the constraints of $f''$ \u2013 defined by Expression \u2013 are satisfied. The second step is determining the appropriate firing threshold $\\tau$ for the t-unit. If the schema does not have group-invariant enputs, the conversion is direct as for the standard McCulloch & Pitts network \u2013 see Figure : The s-units corresponding to literal enputs $i_j \\in X_\\ell$ are linked to the t-unit using a single fibre from each s-unit to the t-unit, which has a threshold $\\tau = n_\\ell$. If the schema has a group-invariant enput, its subconstraints are implemented by branching and fusing fibres connecting the s-units and the t-unit. In cases such as our example automaton $x$ (Figures and ) where the subconstraints $n_g^0 = n_g^1 = 1$, the solution is still simple. To account for subconstraint $n_g^0$, it is sufficient to take an outgoing fibre from each of the s-units $i_j \\in X_g : i_j=0$ and fuse them into a single fibre ending. Therefore, if at least one of these s-units is firing, the fused fibre ending transmits a single pulse to the t-unit, signalling that the subconstraint has been satisfied. Increasing the t-unit's threshold by one makes the t-unit respond to this signal appropriately. The same applies for subconstraint $n_g^1$, using a similar wiring for s-units $i_j \\in X_g : i_j=1$. The final threshold for the t-unit that captures the example schema of Figure is thus $\\tau = n_\\ell + n_g^0 + n_g^1 = 2 + 1 + 1 = 4$, as shown in Figure C.\n\nThe case of general group-invariant constraints is more intricate. Every literal enput $i_j \\in X_\\ell$ is linked to the t-unit via a single fibre exactly as above. Afterwards, the subconstraints $n_g^0$ and $n_g^1$ of a group-invariant enput $g$ are treated separately and consecutively. Note that for every input variable $i_j$ in the set $X_g$ of symmetric input variables, there are two s-units: one representing $i_j$ in state $0$ and another in state $1$. To account for subconstraint $n_g^0$ on the variables of set $X_g$, let $S \\subseteq X_g$ be the set of s-units that represent the variables of the group-invariant enput that can be in state $0$, where $|X_g| = n_g$. Next, we identify all possible subsets of $S$, whose cardinality is $n_g -(n_g^0-1)$. That is, $\\bm{S} = \\left\\{ S_i : S_i\n\\subset S \\wedge |S_i| = n_g -(n_g^0-1)\\right\\}$. For each subset $S_i \\in \\bm{S}$, we take an outgoing fibre from every s-unit in it and fuse them into a single fibre ending as input to the schema t-unit. After subconstraint $n_g^0$ is integrated this way, the threshold of the t-unit is increased by,\n\n$$|\\bm{S}| = {n_g \\choose n_g - (n_g^0 -1)} = {n_g \\choose n_g^0 -1}$$\n\nThis procedure is repeated for the subconstraint $n_g^1$ on $X_g$. The final threshold of the t-unit is,\n\n$$\\tau = n_\\ell + {n_g \\choose n_g^0 -1} + {n_g \\choose n_g^1 -1}\n\\label{eq:tau}$$\n\nThis algorithm is illustrated for the integration of two example subconstraints in Figure ; in Figure , the case of the only schema describing the transitions to *on* of running example automaton $x$ is shown. 
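As a small illustration of the threshold computation just described, the following Python sketch (our own; the paper does not prescribe an implementation) enumerates the fused fibre bundles for one subconstraint and evaluates the t-unit threshold of Equation ; the function names and the concrete group size are purely illustrative.

```python
from itertools import combinations
from math import comb

def fibre_bundles(s_units, n_g_x):
    """Fused fibre endings for one subconstraint n_g^x over a symmetry group:
    every subset of s-units of size n_g - (n_g^x - 1) is fused into a single
    fibre ending into the schema's t-unit."""
    size = len(s_units) - (n_g_x - 1)
    return [set(c) for c in combinations(s_units, size)]

def cm_threshold(n_literal, n_g, n_g0, n_g1):
    """Final firing threshold of the t-unit encoding a two-symbol schema with
    n_literal literal enputs and one group-invariant enput over n_g inputs."""
    return n_literal + comb(n_g, n_g0 - 1) + comb(n_g, n_g1 - 1)

# Running-example values: two literal enputs and a group-invariant enput with
# n_g^0 = n_g^1 = 1 (here over an assumed group of two inputs) give tau = 4.
print(cm_threshold(n_literal=2, n_g=2, n_g0=1, n_g1=1))   # -> 4
print(fibre_bundles(["i1=0", "i2=0"], n_g_x=1))            # one fused bundle
```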
Further details concerning this procedure are provided in *Supporting text S3*.

### The canalizing map of an automaton

The algorithm to convert a single schema $f''$ to a CM is subsequently used to produce the CM of an entire Boolean automaton $x$ as follows: Each schema $f'' \in F''$ is converted to its CM representation. Each state of an input variable is represented by a single s-unit in the resulting threshold network. In other words, there is a maximum of two s-units (one for state $0$ and one for state $1$) for each input variable that is either a literal enput or participates in a group-invariant enput of $x$. The resulting threshold network is the canalizing map of $x$. The connectivity rules of automata CMs include the following provisos:

1. Every s-unit can be connected to a given t-unit with a single outgoing fibre, which can be simple or have branches.

2. Therefore, the number of outgoing fibres coming out of a s-unit (before any branching) corresponds to the number of schemata $f'' \in F''$ in which the respective variable-state participates as an enput. If such a variable is included in a group-invariant enput, then the fibre may have branches.

3. Any subset of t-units with threshold $\tau=1$ for the same automaton transition ($x=0$ or $x=1$) is merged into a single t-unit (also with $\tau=1$), which receives all incoming fibres of the original t-units. In such a scenario, any fused branches can also be de-fused into single fibres. Note that this situation corresponds to schemata that exhibit nested canalization, where one of several inputs settles the transition, but which do not form a symmetric group.

The CM of $x$ can be constructed from the subset of schemata $F_1''$ (the conditions to *on*), or $F_0''$ (the conditions to *off*). When the conditions are not met for convergence to *on*, one is guaranteed convergence to *off* (and vice-versa). However, since we are interested in exploring scenarios with incomplete information about the states of variables in networks of automata rather than a single automaton (see below), we construct the CM of a Boolean automaton $x$ including all conditions, that is, using $F'' \equiv F_1'' \cup F_0''$. This facilitates the analysis of transition dynamics where automata in a network can transition to either state. Figure depicts the complete CM of the example automaton $x$ described in Figure – now also including its transitions to *off*.

By uncovering the enputs of an automaton, we gain the ability to compute its transition with *incomplete information* about the state of every one of its inputs. For instance, the possible transitions of the automaton in Figure are fully described by the CM (and schemata) in Figure ; as shown, transitions can be determined from a small subset of the input variables in specific state combinations. For instance, it is sufficient to observe $i_3=0$ to know that automaton $x$ transitions to *off*. If $x$ were used to model the interactions that lead a gene to be expressed or not, it is easy to see that to down-regulate its expression, it is sufficient to ensure that the regulator $i_3$ is not expressed. This is the essence of canalization: the transition of an automaton is controlled by a small subset of input states.
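To make the computation of a transition from incomplete information concrete, here is a minimal sketch under an encoding of our own devising: a two-symbol schema is a dict of literal enputs plus an optional group-invariant enput with its two subconstraints, and the observed inputs map names to 0, 1 or None (unknown). The names (`schema_satisfied`, `i3`, and so on) are illustrative, not taken from the paper.

```python
def schema_satisfied(schema, inputs):
    """True if the currently known inputs already satisfy every subconstraint
    of the schema, so the transition it encodes is determined even though some
    inputs may still be unknown (None)."""
    # Literal enputs: each listed input must be observed in the required state.
    for name, state in schema["literal"].items():
        if inputs.get(name) != state:
            return False
    # Group-invariant enput: at least n0 of the group observed off and at
    # least n1 observed on, in any positions.
    group = schema.get("group")
    if group is not None:
        observed = [inputs.get(name) for name in group["inputs"]]
        if observed.count(0) < group["n0"] or observed.count(1) < group["n1"]:
            return False
    return True

# Hypothetical schema: observing i3 = 0 alone is enough for the transition
# to off, regardless of the remaining (unknown) inputs.
to_off = {"literal": {"i3": 0}, "group": None}
print(schema_satisfied(to_off, {"i3": 0, "i1": None, "i2": None}))  # True
print(schema_satisfied(to_off, {"i3": None, "i1": 1}))              # False
```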
In the macro-level canalization section below, we use the CM's ability to compute automata transitions with incomplete information to construct an alternative portrait of network dynamics, which we use in lieu of the original BN to study collective dynamics. Let us first apply our micro-level methodology to the SPN model.

## Micro-level canalization in the SPN model

The automata in the SPN fall into two categories: those that have a single input ($k=1$), the analysis of which is trivial, namely SLP, WG, EN, HH, $ci$ and CI, and those with $k > 1$. The two-symbol schemata and canalization measures for each automaton in the SPN model are depicted in Figure ; Figure maps the automata to their canalization classes. Schemata readily display all the sufficient combinations of input states (enputs) to control the transitions of the automata in this model, which represent the inhibition or expression of genes and proteins. Indeed, the resulting list of schemata allows analysts to quickly infer how control operates in each node of the network. Wildcard symbols (depicted in Figure as grey boxes) denote redundant inputs. Position-free symbols (depicted in Figure as circles) denote 'functionally equivalent' inputs; that is, sets of inputs that can be alternatively used to ensure the same transition. For example, for $wg$ to be expressed, SLP, the previous state of $wg$ (reinforcing feedback loop) and CIA can be said to be 'functionally equivalent', since any two of these three need to be expressed for $wg$ to be expressed. The several schemata listed for the expression or inhibition of a specific node (genes and gene products) give experts alternative 'recipes' available to control the node according to the model – recipes which can be experimentally tested and validated. Let us now present some relevant observations concerning micro-level canalization in the SPN model:

1. The inhibition of $wg$ can be attained in one of two ways: either two of the first three inputs (SLP, $wg$, CIA) are *off* (unexpressed), or CIR is *on* (expressed). The expression of $wg$ – essential in the posterior cell of a parasegment to attain the wild-type expression pattern (Figure ) – is attained in just one way: CIR must be *off* (unexpressed), and two of the other three inputs (SLP, $wg$, CIA) must be *on* (expressed). Note the simplicity of this control logic vis-à-vis the $2^4 = 16$ possible distinct ways to control $wg$ specified by its LUT, given that it is a function of 4 inputs. This control logic is also not obvious from the Boolean logic expression of node $wg$, as shown in Table ; at the very least, the schemata obtained for $wg$ provide a more intuitive representation of control than the logical expression. Moreover, schema redescription, unlike the logical expression, allows us to directly quantify canalization. The control logic of this gene shows a fairly high degree of both types of canalization: even though there are $k=4$ inputs, on average, only $k_e = 1.75$ inputs are needed to control the transition, and $k_s = 2.25$ inputs can permute without effect on the transition (see Figures and ); $wg$ is thus modelled by an automaton of class A.

2. The inhibition of CIR can be attained in one of two simple, highly canalized ways: either one of its first two inputs (PTC, CI) is *off* (unexpressed), or one of its four remaining inputs ($hh$ and $HH$ in neighbouring cells) is *on* (expressed); all other inputs can be in any other state.
The expression of CIR can be attained in only one specific, non-canalized, way: the first two inputs must be *on* (expressed), and the remaining four inputs must be *off* (unexpressed) – a similar expression behaviour is found for $hh$ and $ptc$. Note the simplicity of this control logic vis-à-vis the $2^6 = 64$ possible distinct ways to control CIR specified by its LUT, given that it is a function of $6$ inputs. While, in this case, the control logic is also fairly clear from the original Boolean logic expression of node CIR (in Table ), the schemata obtained for CIR provide a more intuitive representation of control and allow us to directly quantify canalization. CIR is a protein with a very high degree of both types of canalization: even though there are $k=6$ inputs, on average, only $k_e = 1.08$ inputs are needed to control the transition, and $k_s = 5.25$ inputs can permute without effect on the transition (see Figures and ). This high degree of both types of canalization, which is not quantifiable directly from the logical expression or the LUT, is notable in Figure , where CIR emerges very clearly as an automaton of class A.

3. The control logic of CIA entails high canalization of the input redundancy kind. For instance, its inhibition can be achieved by a single one of its six inputs (CI *off*) and its expression by two inputs only (PTC *off* and CI *on*). On the other hand, there is low canalization of the input symmetry kind; therefore, CIA is modelled by an automaton of class B.

4. The expression of $en$ – essential in the anterior cell of a parasegment to achieve the wild-type phenotype – depends on the inhibition of (input node) SLP in the same cell, and on the expression of the wingless protein in at least one neighbouring cell.

5. Most automata in the model fall into canalization class B described above. CIR and $wg$, discussed above, display the greatest input symmetry and fall into class A (see Figure ).

6. Looking at all the schemata obtained in Figure , we notice a consistent pattern for all spatial signals, $hh_{i \pm 1} , \textrm{HH}_{i \pm 1}$ and $\textrm{WG}_{i \pm 1}$. Whenever they are needed to control a transition (when they are enputs in the schemata of other nodes), either they are *off* in both neighbouring cells, or they are *on* in at least one of the neighbouring cells. For instance, for a given cell $i$, HH in neighbouring cells is only relevant if it is unexpressed in both cells ($\textrm{HH}_{i \pm 1} = 0$), or if it is expressed in at least one of them ($\textrm{HH}_{i - 1} = 1 \vee \textrm{HH}_{i + 1} = 1$). This means that the six nodes corresponding to spatial signals affecting a cell in a parasegment can be consolidated into just three *neighbour nodes*; a similar consolidation of spatial signals was used previously by Willadsen & Wiles to simplify the spatial model into a single-cell non-spatial model. In what follows, we refer to these spatial signals simply as $nhh$, $n$HH and $n$WG. If such a node is *off*, it means that the corresponding original nodes are *off* in both adjacent cells; if it is *on*, it means that at least one of the corresponding original nodes in an adjacent cell is *on*.

7. 
Only PTC and $wg$ have feedback loops that are active after schema redescription, for both their inhibition and expression; these are self-reinforcing, but also depend on other enputs (see also Figures and ).

Because this is a relatively simple model, some of the observations about control, especially for nodes with fewer inputs, could be made simply by looking at the original transition functions in Table , since they are available as very simple logical expressions – this is the case for CIR, but certainly not for $wg$ above. However, the *quantification* of canalization requires the additional symbols used in schema redescription to identify redundancy, which are not available in the original automata logical expressions or their LUTs. Moreover, the transition functions of automata in larger Boolean models of genetic regulation and signalling are rarely available as simple logical expressions, and nodes can be regulated by a large number of other nodes, thus making such direct comprehension of control logic difficult. In contrast, since redescription uncovers canalization in the form of input redundancy and symmetry, the more canalization exists, the more redundancy is removed and the simpler the schema representation of an automaton's logic becomes. This makes canalizing maps (CMs) particularly useful, since they can be used to visualize and compute the minimal control logic of automata. The CMs that result from converting the schemata of each node in the SPN to a threshold-network representation are shown in Figure and Figure . For a biochemical network of interest, such as the SPN or much larger networks, domain experts (e.g. biomedical scientists and systems and computational biologists) can easily ascertain the control logic of each component of their model from the schemata or the corresponding CMs.

In summary, there are several important benefits of schema redescription of Boolean automata vis-à-vis the original Boolean logic expression or the LUT of an automaton: (1) a parsimonious and intuitive representation of the control logic of automata, since *redundancy is clearly identified* in the form of the two additional symbols, which gives us (2) the ability to *quantify* all forms of canalization in the straightforward manner described above; finally, as we elaborate next, the integration of the schema redescription (or CMs) of individual automata in a network (micro-level) allows us to (3) *characterize macro-level dynamics* parsimoniously, uncovering minimal control patterns, robustness and the modules responsible for collective computation in these networks.

## Macro-level canalization and control in automata networks

After removing redundancy from individual automata LUTs in networks (micro-level), it becomes possible to integrate their canalizing logic to understand control and collective dynamics of automata networks (macro-level). In other words, it becomes feasible to understand how biochemical networks process information collectively – their emergent or collective computation .

### Dynamics canalization map and dynamical modularity

The CMs obtained for each automaton of a BN, such as the SPN model (see Figures and ), can be integrated into a single threshold network that represents the control logic of the entire BN.
This simple integration requires that (1) each automaton be represented by two unique s-units, one for the transition to *on* and another for the transition to *off*, and (2) s-units be linked via t-units with appropriate fibres, as specified by each individual CM. Therefore, a unique t-unit represents each schema obtained in the redescription process. This results in the *Dynamics Canalization Map* (DCM) for the entire BN. Since the DCM integrates the CMs of its constituent automata, it can be used to identify the *minimal control conditions* that are sufficient to produce transitions in the dynamics of the entire network. Notice that when a node in the original BN undergoes a state-transition, it means that at least one t-unit fires in the DCM. When a t-unit fires, according to the control logic of the DCM, it can cause subsequent firing of other t-units. This allows the identification of the *causal chains of transitions* that are the *building blocks* of macro-level dynamics and information processing, as explained in detail below.

Another important feature of the DCM is its compact size. While the dynamical landscape of an automata network, defined by its state-transition graph (STG), grows exponentially with the number of nodes – $2^n$ in Boolean networks – its DCM grows only linearly, with $2n$ s-units plus one t-unit per schema obtained from redescribing every automaton in the network: $2n + \sum_{i=1}^n |F''_i|$ units in total. Furthermore, the computation of a DCM is tractable even for very large networks with thousands of nodes, provided the in-degree of these nodes is not very large. In our current implementation, we can exhaustively perform schema redescription of automata with $k \leq k_{\textrm{max}} \approx 20$; that is, LUTs containing up to $2^{20}$ entries. It is very rare that dynamical models of biochemical regulation have molecular species that depend on more than twenty other variables (see e.g. ). Therefore, this method can be used to study canalization and control in all discrete models of biochemical regulation we have encountered in the literature, which we will analyse elsewhere.

It is important to emphasize that the integration of the CMs of individual automata into the DCM does not change the control logic encoded by each constituent CM, which is equivalent to the logic encoded in the original LUT (after removal of redundancy). Therefore, there is no danger of violating the logic encoded in the original LUT of any automaton in a given BN. However, it is necessary to ensure that any initial conditions specified in the DCM do not violate the laws of contradiction and excluded middle. This means, for instance, that no initial condition of the DCM can have the two (*on* and *off*) s-units for the same automaton firing simultaneously.

The DCM for a single cell in the SPN model is shown in Figure . The spatial signals from adjacent cells are highlighted using units with a double border ($nhh$, $n$HH and $n$WG). For the simulations of the spatial SPN model described in subsequent sections, we use four coupled single-cell DCMs (each as in Figure ) to represent the dynamics of the four-cell parasegment, where nodes that enable inter-cellular regulatory interactions are appropriately linked, as defined in the original model. Also, as in the original model, we assume periodic boundary conditions for the four-cell parasegment: the posterior cell is adjacent to the anterior cell.
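As a small worked comparison of the sizes discussed above, the following sketch contrasts the exponential STG with the linear DCM; the per-node schema counts are hypothetical placeholders rather than the SPN's actual values.

```python
def stg_size(n):
    """Number of configurations in the state-transition graph of a Boolean network."""
    return 2 ** n

def dcm_size(schemata_per_node):
    """Number of units in the DCM: two s-units per node plus one t-unit per
    schema, i.e. 2n + sum_i |F''_i|."""
    n = len(schemata_per_node)
    return 2 * n + sum(schemata_per_node)

# Hypothetical 17-node network with three schemata per node:
print(stg_size(17))        # 131072 configurations in the STG
print(dcm_size([3] * 17))  # 85 units in the DCM
```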
When making inferences using the DCM, we use *signal* to refer to the firing of a s-unit and the transmission of this information through its output fibres. When a s-unit fires in the DCM, it means that its corresponding automaton node in the original BN transitioned to the state represented by the s-unit. We also use *pathway* to refer to a logical sequence of signals in the DCM.

We highlight two *pathway modules* in the DCM of the SPN in Figure : $\mathcal{M}_1$ and $\mathcal{M}_2$. The first is a pathway initiated by either the inhibition of WG in neighbouring cells, or the expression of SLP upstream in the same cell. That is, the initial pattern for this module is $\mathcal{M}_1^0 = \neg n\textnormal{WG} \vee \textnormal{SLP}$. The initiating signal for $\mathcal{M}_2$ is defined by the negation of those that trigger the first: $\mathcal{M}_2^0 = \neg \mathcal{M}_1^0 = n\textnormal{WG} \wedge \neg \textnormal{SLP}$. Both modules follow from (external or upstream) input signals to a single cell in the SPN; they do not depend at all on the initial states of nodes (molecular species) of the SPN inside a given cell. Yet, both of these very small sets of initial signals necessarily cause a cascade of other signals in the network over time. $\mathcal{M}_1$ is the only pathway that leads to the inhibition of $en$ (and EN) as well as the expression of $ci$ (and CI). It also causes the inhibition of $hh$ and HH, both of which function as inter-cellular signals for adjacent cells – this inhibition can be alternatively controlled by the expression of CIR, which is part of neither $\mathcal{M}_1$ nor $\mathcal{M}_2$. Since $\mathcal{M}_1^0$ is a disjunction, its terms are equivalent: either the inhibition of $n$WG or the upstream expression of SLP controls the same pathway, regardless of any other signals in the network. $\mathcal{M}_2$ is the only pathway that leads to the expression of $en$ (and EN) as well as the inhibition of $ci$ (and CI); it also causes the inhibition of CIA, $ptc$ and CIR – these inhibitions can be alternatively controlled by other pathways. If the initial conditions $\mathcal{M}_2^0$ are sustained for long enough (steady-state inputs), the downstream inhibition of CIA and sustained inhibition of SLP lead to the inhibition of $wg$ (and WG); likewise, from sustaining $\mathcal{M}_2^0$, the downstream expression of EN and inhibition of CIR lead to the expression of $hh$ (and HH). Since $\mathcal{M}_2^0$ is a conjunction, both terms are required: both the expression of $n$WG and the upstream inhibition of SLP are jointly necessary and sufficient to control this pathway module, regardless of any other signals in the network.

$\mathcal{M}_1$ and $\mathcal{M}_2$ capture a cascade of state transitions that are inexorable once their initiating signals ($\mathcal{M}_1^0$ and $\mathcal{M}_2^0$) are observed: $\mathcal{M}_1 = \{\neg en,$ $\neg \textnormal{EN},$ $\neg hh,$ $\neg \textnormal{HH},$ $ci,$ $\textnormal{CI} \}$ and $\mathcal{M}_2 = \{ \neg ci,$ $\neg \textnormal{CI},$ $\neg \textnormal{CIA},$ $\neg wg,$ $\neg \textnormal{WG},$ $\neg \textnormal{CIR},$ $\neg ptc,$ $en,$ $\textnormal{EN},$ $hh,$ $\textnormal{HH} \}$. Furthermore, these cascades are *independent* of the states of other nodes in the network. As a consequence, the transitions within a module are insensitive to delays once its initial conditions are set (and maintained in the case of $\mathcal{M}_2$ as shown).
The *dynamics* within these portions of the DCM can thus be seen as *modular*; these pathway modules can be *decoupled* from the remaining regulatory dynamics, in the sense that they are not affected by the states of any nodes other than their initial conditions. Modularity in complex networks has typically been defined in terms of sub-graphs with high intra-connectivity. But such a structural notion of community structure does not capture the dynamically decoupled behaviour of pathway modules such as $\mathcal{M}_1$ and $\mathcal{M}_2$ in the SPN. Indeed, it has been recently emphasized that understanding modularity in complex molecular networks requires accounting for dynamics , and new measures of modularity in multivariate dynamical systems have been proposed by our group . We will describe methods for automatic detection of dynamical modularity in DCMs elsewhere.

Collective computation in the macro-level dynamics of automata networks ultimately relies on the interaction of these pathway modules. Information gets integrated as modules interact with one another, in such a way that the timing of module activity can have an effect on downstream transitions. For instance, the expression of CI via $\mathcal{M}_1$ can subsequently lead to the expression of CIA, provided that $nhh$ is expressed – and this is controlled by $\mathcal{M}_2$ in the adjacent cells. The expression of CI can also be seen as a necessary initial condition for the only pathway that results in the expression of CIR, which also depends on the inhibition of $nhh$ and $n$HH and the expression of PTC, which in turn depends on the interaction of other modules, and so on. As these examples show, pathway modules allow us to uncover the building blocks of macro-level control – the collective computation of automata network models of biochemical regulation. We can use them, for instance, to infer which components exert most control on a target collective behaviour of interest, such as the wild-type expression pattern in the SPN. Indeed, modules $\mathcal{M}_1$ and $\mathcal{M}_2$ in the SPN model, which include a large proportion of nodes in the DCM, highlight how much SLP and the spatial signals from neighbouring cells control the dynamical behaviour of segment polarity gene regulation in each individual cell. In particular, they almost entirely control the expression and inhibition of EN and WG, as discussed further below. The behaviour of these proteins across a four-cell parasegment mostly defines the attractors of the model (including wild-type). The transitions of intra-cellular nodes are thus more controlled by the states of 'external' nodes than by the initial pattern of expression of genes and proteins in the cell itself. This emphasizes the well-known spatial constraints imposed on each cell of the fruit fly's developmental system . We next study and quantify this control in greater detail.

### Dynamical unfolding

A key advantage of the DCM is that it allows us to study the behaviour of the underlying automata network without the need to specify the state of all of its nodes. Modules $\mathcal{M}_1$ and $\mathcal{M}_2$ are an example of how the control that a very small subset of nodes exerts on the dynamics of the SPN can be studied. This can be done because, given the schema redescription that defines the DCM, subsets of nodes can be assumed to be in an *unknown* state.
Since the schema redescription of every automaton in the DCM is *minimal* and *complete* (see micro-level canalization section), every possible transition that can occur is accounted for in the DCM. By implementing the DCM as a threshold network, we gain the ability to study the dynamics of the original BN by setting the states of subsets of nodes. This allows us to study convergence to attractors, or other patterns of interest, from knowing just a few nodes.

More formally, we refer to an initial pattern of interest of a BN $\mathcal{B}$ as a *partial configuration*, and denote it by $\bm{\hat{x}}$. For example, $\mathcal{M}_1^0$ is a partial configuration $\bm{\hat{x}_1} = \mathcal{M}_1^0 = \textnormal{SLP} \vee \neg n\textnormal{WG}$, where the states of all other nodes are $\#$, or unknown. We refer to *dynamical unfolding* as the sequence of transitions that necessarily occur after an initial partial configuration $\bm{\hat{x}}$, and denote it by $\sigma(\bm{\hat{x}}) \leadsto \bm{\mathcal{P}}$, where $\bm{\mathcal{P}}$ is an *outcome pattern* or configuration. From the DCM of the single-cell SPN model (Figure ), we have $\sigma(\mathcal{M}_1^0) \leadsto \mathcal{M}_1$ and $\sigma(\mathcal{M}_2^0) \leadsto \mathcal{M}_2$. An outcome pattern can be a fully specified attractor $\mathcal{A}$, but it can also be a partial configuration of an attractor where some nodes remain unknown – for instance, to study what determines the states of a specific subset of nodes of interest in the network. In the first case, it can be said that $\bm{\hat{x}}$ *fully controls* the network dynamics towards attractor $\mathcal{A}$. In the second, control is exerted only on the subset of nodes with determined logical states.

The ability to compute the dynamical unfolding of a BN from partial configurations is a key benefit of the methodology introduced here: it allows us to determine how much partial configurations of interest *control* the collective dynamics of the network. For instance, in the SPN model it is possible to investigate how much the input nodes to the regulatory network of each cell control its dynamics. Or, conversely, how much the initial configuration of the intra-cellular regulatory network is irrelevant to determining its attractor. The nodes within each cell in a parasegment of the SPN are sensitive to three inter-cellular (external) input signals: $n$WG, $nhh$ and $n$HH, and one intra-cellular (upstream) input, SLP. Given that the formation of parasegment boundaries in *D. melanogaster* is known to be tightly spatially constrained , it is relevant to investigate how spatio-temporal control occurs in the SPN model. We have already studied the control power of SLP and $n$WG, which lead to modules $\mathcal{M}_1$ and $\mathcal{M}_2$. We now exhaustively study the dynamical unfolding of all possible states of the intra- and inter-cellular input signals.

We assume that SLP (upstream) and the (external) spatial signals are in steady-state to study what happens in a single cell. Since the state of $n$HH is the same as $nhh$ after one time step, we consolidate those input signals into a single one: $nhh$. We are left with three input signals to the intra-cellular regulatory network: nodes SLP, $n$WG and $nhh$. Each of these three nodes can be in one of two states (*on*, *off*), and thus there are eight possible combinations of states for these nodes. This simplification results in a non-spatial model, as was done previously by Willadsen & Wiles .
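The sketch below shows one way the dynamical unfolding $\sigma(\bm{\hat{x}})$ could be computed on a schema-based representation, under simplifying assumptions of our own: updates are synchronous, only literal conditions are used, and held (steady-state) nodes are re-imposed at every step. The toy network, the `schemata` encoding (the schemata for each node's transitions to *on* and to *off*) and the helper names are illustrative only.

```python
def matches(schema, config):
    """A schema here is a dict of required literal node-states; it matches when
    every required node is already known to be in that state."""
    return all(config.get(node) == state for node, state in schema.items())

def unfold(schemata, partial, held=frozenset(), max_steps=100):
    """Synchronously update a partial configuration (values 0, 1 or None) until
    it stops changing; nodes listed in 'held' are kept fixed (steady-state)."""
    config = dict(partial)
    for _ in range(max_steps):
        nxt = {}
        for node, (to_on, to_off) in schemata.items():
            if node in held:
                nxt[node] = config[node]
            elif any(matches(s, config) for s in to_on):
                nxt[node] = 1
            elif any(matches(s, config) for s in to_off):
                nxt[node] = 0
            else:
                nxt[node] = None  # not determined by the known node-states
        if nxt == config:
            return config
        config = nxt
    return config

# Toy example (not the SPN): b copies a; c turns on only when a and b are on.
toy = {
    "a": ([{"a": 1}], [{"a": 0}]),
    "b": ([{"a": 1}], [{"a": 0}]),
    "c": ([{"a": 1, "b": 1}], [{"a": 0}, {"b": 0}]),
}
print(unfold(toy, {"a": 1, "b": None, "c": None}, held={"a"}))
# -> {'a': 1, 'b': 1, 'c': 1}
```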
Setting each such combination as the initial partial configuration $\\bm{\\hat{x}}$, and allowing the DCM to compute transitions, yields the results shown in Figure . We can see that only two of the outcome patterns reached by the eight input partial configurations are ambiguous about which of the final five possible attractors is reached. Each individual cell in a parasegment can only be in one of five attractor patterns $I1-I5$ (see \u00a7background). This is the case of groups $G2$ and $G4$ in Figure . For all the other input partial configurations, the resulting outcome pattern determines the final attractor. We also found that for almost every input partial configuration, the states of most of the remaining nodes are also resolved; in particular the nodes that define the signature of the parasegment attractor \u2013 Engrailed (EN) and Wingless (WG) \u2013 settle into a defined steady-state. Notice also that for two of the input partial configurations (groups $G3$ and $G5$ in Figure ), the states of every node in the network settle into a fully defined steady-state. The picture of dynamical unfolding from the intra- and inter-cellular inputs of the single-cell SPN network also allows us to see the roles played by modules $\\mathcal{M}_1$ and $\\mathcal{M}_2$ in the dynamics. The six input configurations in groups G1, G2, and G3 depict the dynamics where $\\mathcal{M}_1$ is involved, while the two input configurations in G4 and G5 refer to $\\mathcal{M}_2$ (node-states of each module in these groups appear shaded in Figure ). By comparing the resulting dynamics, we can see clearly the effect of the additional information provided by knowing if $nhh$ is expressed or inhibited; we also see that the dynamics of the modules is unaffected by other nodes, as expected.\n\nIt is clear from these results that (single-cell) cellular dynamics in the SPN is almost entirely controlled from the inputs alone. We can say that extensive micro-level canalization leads the macro-level network dynamics to be highly canalized by external inputs \u2013 a point we explore in more detail below. For the dynamical unfolding depicted in Figure we assumed that the three input signals to the intra-cellular regulatory network are in steady-state, focusing on a single cell. This is not entirely reasonable since inter-cellular signals are regulated by spatio-temporal regulatory dynamics in the full spatial SPN model. We thus now pursue the identification of *minimal* partial configurations that guarantee convergence to outcome patterns of interest in the spatial SPN model, such as specific (parasegment) attractors.\n\n### Minimal configurations\n\nTo automate the search of minimal configurations that converge to patterns of interest, we rely again on the notion of schema redescription, but this time for network-wide configurations rather than for individual automata LUTs. Notice that the eight input partial configurations used in the dynamical unfolding scenarios described in Figure are wildcard schemata of network configurations: the state of the 14 inner nodes is *unknown* (wildcard), and only three (input) nodes (SLP, nWG,$nhh$) are set to a combination of Boolean states. Each of these eight schemata redescribes $2^{14}$ possible configurations of the single-cell SPN. Six of the eight input schemata converge to one of the five possible attractors for inner nodes in a single cell of the SPN model (Figure ). 
We can thus think of those six schemata as *minimal configurations* (MCs) that guarantee convergence to patterns (e.g. attractors) of interest.

More specifically, an MC is a 2-symbol schema $\bm{x}''$ that redescribes a set of network configurations that converge to target pattern $\bm{\mathcal{P}}$; when the MC is a wildcard schema, it is denoted by $\bm{x}'$. Therefore, $\sigma(\bm{x}'') \leadsto \bm{\mathcal{P}}$. MC schemata, $\bm{x}''$ or $\bm{x}'$, are network configurations where the truth value of each constituent automaton can be 0, 1, or $\#$ (unknown); symmetry groups are allowed for $\bm{x}''$ and identified with position-free symbols $\circ_m$ (see Micro-level canalization section). An MC schema redescribes a subset $\Theta$ of the set of configurations $\bm{X}$: $\Theta \equiv \{\bm{x} \in \bm{X}: \bm{x} \rightarrowtail \bm{x}''\}$. A partial configuration is an MC if no Boolean state in it can be raised to the unknown state ($\#$) and still guarantee that the resulting partial configuration converges to $\bm{\mathcal{P}}$. In the case of a two-symbol schema, no group-invariant enput can be enlarged (to include additional node-states) and still guarantee convergence to $\bm{\mathcal{P}}$. Finally, the target pattern $\bm{\mathcal{P}}$ can be a specific network configuration (e.g. an attractor), or it can be a set of configurations of interest (e.g. when only some genes or proteins are expressed). After redescription of a set of configurations $\bm{X}$ of a BN – a subset of its dynamical landscape or the full landscape – we obtain a set of two-symbol MCs $\bm{X}''$; a set of wildcard MCs is denoted by $\bm{X}'$. Similarly to micro-level schemata, we can speak of the enputs of MCs. In this context, they refer to individual node-states, or sets of node-states, in the network that are essential to guarantee convergence to a target pattern.

The dynamical unfolding example of the single-cell SPN model shows that to converge to the attractor $I1$ (Figure , G1), only the states of the three input nodes need to be specified, in one of three possible Boolean combinations: $000, 100$ or $110$ for the nodes SLP, $n$WG and $nhh$; all other (inner) nodes may be unknown ($\#$). Moreover, these three initial patterns can be further redescribed into two schemata: $\bm{X}' = \{ \{\#, 0,0\}, \{1, \#,0\} \}$. This shows that to guarantee convergence to $I1$, we only need to know the state of two (input) nodes: either $n$WG $= nhh = 0$, or SLP = 1 and $nhh = 0$. All other nodes in the single-cell model can remain unknown. Therefore, the MCs for attractor pattern $I1$ are:

$$\begin{aligned}
\bm{X}' = \{ &\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#00, \nonumber \\
 &\#\#\#\#\#\#\#\#\#\#\#\#\#\#1\#0 \}
\label{eq:MCs_SPN_I1}
\end{aligned}$$

where the order of the inner nodes is the same as in Figure , and the last three nodes are SLP, $n$WG and $nhh$ in that order. Notice that in this case there is no group-invariance, so $\bm{X}'' = \bm{X}'$. Any initial configuration not redescribed by $\bm{X}'$ does not converge to pattern $I1$. Therefore, these MCs reveal the enputs (minimal set of node-states) that *control* network dynamics towards attractor $I1$: $nhh$ must remain unexpressed, and we must have either SLP expressed, or $n$WG unexpressed. However, as mentioned above, this example refers to the case when the three input nodes are in steady-state. For the single-cell SPN, the steady-state assumption is reasonable.
But for the spatial SPN, with parasegments of four cells, we cannot be certain that the spatial signals ($n$WG and $nhh$) have reached a steady-state at the start of the dynamics. Therefore, we now introduce a procedure for obtaining MCs, without the steady-state assumption, which we apply to the spatial SPN network model.

It was discussed previously that individual automata in BN models of biochemical regulation and signalling very rarely have large numbers of input variables. This allows tractable computation of two-symbol schema redescription of their LUTs (see micro-level section). In contrast, computing MCs for network configurations quickly becomes computationally challenging. Even for fairly small networks with $n \approx 20$, the size of their dynamical landscape becomes too large to allow full enumeration of the possible configurations and the transitions between them. As shown above, it is possible to identify pathway modules, and to compute dynamical unfolding on the DCM, without knowing the STG of very large BNs, but it remains infeasible to fully redescribe their entire dynamical landscape.

One way to deal with high-dimensional spaces is to resort to *stochastic search* (see e.g. ). We use stochastic search to obtain MCs that are guaranteed to converge to a pattern of interest $\bm{\mathcal{P}}$. We start with a *seed* configuration known to converge to $\bm{\mathcal{P}}$. Next, a random node that is in a Boolean state is picked and changed to the unknown state. The resulting partial configuration is then allowed to unfold to determine if it still converges to $\bm{\mathcal{P}}$. If it does, the modified configuration becomes the new seed; otherwise, the search continues by picking other nodes. The process is repeated until no more nodes can be 'raised' to the unknown state while still ensuring convergence to $\bm{\mathcal{P}}$. The output of this algorithm (detailed in *Supporting text S4*) is thus a single wildcard MC. Afterwards, the goal is to search for *sets* of MCs that converge to $\bm{\mathcal{P}}$. We do this in two steps: first we search for a set of MCs derived from a single seed, followed by a search of the space of possible different seeds that still converge to $\bm{\mathcal{P}}$. We use two 'tolerance' parameters to determine when to stop searching. The first, $\delta$, specifies the number of times a single seed must be 'reused' in the first step. When the algorithm has reused the seed $\delta$ consecutive times without finding any new MCs, the first step of the MC search stops. The second tolerance parameter, $\rho$, is used to specify when to stop searching for new seeds from which to derive MCs. When $\rho$ consecutively generated random (and different) seeds are found to be already redescribed by the current set of MCs, the algorithm stops. Both counters are reset to zero every time a new MC is identified. These two steps are explained in greater detail in *Supporting text S4*; a simplified sketch of the single-seed step is given below.

The two-step stochastic search process results in a set of wildcard schemata $\bm{X}'$ that redescribe a given set of configurations $\bm{X}$ guaranteed to converge to pattern $\bm{\mathcal{P}}$. We next obtain a set of two-symbol MCs $\bm{X}''$ from $\bm{X}'$, by identifying group-invariant subsets of nodes using the same method described in the micro-level canalization section. Since $\bm{X}'$ can be quite large (see below), this computation can become challenging.
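A minimal sketch of the single-seed step referred to above, assuming a caller-supplied predicate `unfolds_to_target(partial)` (for instance built on an unfolding routine like the one sketched earlier) that reports whether a partial configuration still necessarily converges to $\bm{\mathcal{P}}$; the tolerance loosely mirrors $\delta$, and all names are illustrative rather than taken from *Supporting text S4*.

```python
import random

UNKNOWN = None  # stands for the '#' symbol

def find_wildcard_mc(seed, unfolds_to_target, delta=1000, rng=random):
    """Stochastic search for one wildcard MC: repeatedly try to raise a random
    Boolean node-state of the current candidate to unknown, keeping the change
    only if convergence to the target pattern is preserved; stop after delta
    consecutive failed attempts."""
    mc = dict(seed)
    failures = 0
    while failures < delta:
        fixed = [n for n, v in mc.items() if v is not UNKNOWN]
        if not fixed:
            break  # every node-state is already a wildcard
        candidate = dict(mc)
        candidate[rng.choice(fixed)] = UNKNOWN
        if unfolds_to_target(candidate):
            mc, failures = candidate, 0  # success: reset the counter
        else:
            failures += 1
    return mc

# Toy usage: the target only requires nodes 'a' and 'b' to stay on, so the
# other node-states can all be raised to unknown.
seed = {"a": 1, "b": 1, "c": 0, "d": 1}
print(find_wildcard_mc(seed, lambda p: p["a"] == 1 and p["b"] == 1, delta=50))
# -> {'a': 1, 'b': 1, 'c': None, 'd': None}
```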
When the set $\bm{X}'$ is too large for this, we restrict the search to symmetric groups that redescribe at least a minimum number $\beta$ of wildcard MCs $\bm{x}'$.

Notice that it is the DCM, implemented as a threshold network, that allows us to pursue this stochastic search for MCs. With the original BN, we cannot study dynamics without setting every automaton to a specific Boolean truth value. With the DCM, obtained from micro-level canalization, we are able to set nodes to the unknown state and study the dynamical unfolding of a partial configuration (see previous subsection) to establish convergence to a pattern of interest. Therefore, the DCM helps us link micro-level canalization to macro-level behaviour. Let us exemplify the approach with the SPN model.

We started our study of MCs in the spatial SPN model with a set of *seed* configurations $\bm{X}_{\textrm{bio}}$ that contains the known initial configuration of the SPN (shown in Figure ), the wild-type attractor (Figure a), and the five configurations in the dynamic trajectory between them. When searching for MCs using these seed configurations, we set $\delta=10^5$. This resulted in a set containing 90 wildcard MCs, $\bm{X}'_{\textrm{bio}}$ (available in *Supporting data S7*). Using the set $\bm{X}'_{\textrm{bio}}$, we performed the two-step stochastic search with $\rho=10^6$ and $\delta=10^5$. This resulted in a much larger set of 1745 wildcard MCs (available in *Supporting data S8*) which guarantee convergence to wild-type: $\bm{X}'_{\textrm{wt}} \supset \bm{X}'_{\textrm{bio}}$. The number of literal enputs in each MC contained in this set varies from 23 to 33 – out of the total 60 nodes in a parasegment. In other words, from all configurations in $\bm{X}_{\textrm{wt}}$ we can ascertain that to guarantee convergence to the wild-type attractor, we only need to control the state of a minimum of 23 and a maximum of 33 of the 60 nodes in the network. Equivalently, 27 to 37 nodes are irrelevant in steering the dynamics of the model to the wild-type attractor – a high degree of canalization we quantify below.

We chose to study two further subsets of $\bm{X}'_{\textrm{wt}}$ separately: $\bm{X}'_{\textrm{noP}}$ and $\bm{X}'_{\textrm{min}}$. The first (available in *Supporting data S9*) is the subset of MCs that do not have enputs representing expressed (*on*) proteins, except SLP$_{3,4}$ – since SLP in cells 3 and 4 is assumed to be present from the start, as determined by the pair-rule gene family (see and the introductory section). This is a subset of interest because it corresponds to the expected control of the SPN at the start of the segment-polarity dynamics, including its known initial configuration (Figure ); thus $\bm{X}'_{\textrm{noP}} \subset \bm{X}'_{\textrm{wt}}$. The second, $\bm{X}'_{\textrm{min}} \subset \bm{X}'_{\textrm{wt}}$, is the subset of MCs with the smallest number of enputs (available in *Supporting data S10*). This corresponds to the set of 32 MCs in $\bm{X}'_{\textrm{wt}}$ that have only 23 enputs each. This is a subset of interest because it allows us to study how the unfolding to wild-type can be guaranteed with the smallest possible number of enputs. Notice that $\bm{X}'_{\textrm{min}}$ redescribes a large subset of configurations in $\bm{X}_{\textrm{wt}}$ because it contains the MCs with the largest number of redundant nodes.
These sets of wildcard MCs are available in *Supporting data S7, S8, S9* and *S10*; Table contains their sizes.

There are severe computational limitations to counting exactly the number of configurations redescribed by each set of MCs, since it depends on using the inclusion/exclusion principle to count the elements of intersecting sets (MCs redescribe overlapping sets of configurations). See *Supporting text S6* for further details. We can report the exact value for $|\bm{X}_{\textrm{noP}}| = 8.35 \times 10^{10}$, which is about $14\%$ of the number of configurations – or pre-patterns – estimated by Albert & Othmer to converge to the wild-type attractor $(6 \times 10^{11})$. Using the inclusion/exclusion principle, it was also computationally feasible to count the configurations redescribed by a sample of 20 of the 32 MCs in $\bm{X}'_{\textrm{min}}: 9.6 \times 10^{11}$. Since this sample of 20 MCs is a subset of $\bm{X}'_{\textrm{min}}$, which is a subset of $\bm{X}'_{\textrm{wt}}$, we thus demonstrate that $|\bm{X}_{\textrm{wt}}| \geq |\bm{X}_{\textrm{min}}| \geq 9.6 \times 10^{11}$, which is $1.6$ times the previously estimated number of pre-patterns converging to the wild-type attractor . This means that the wild-type attraction basin is considerably larger than previously estimated (at least 1.6 times as large), with a lower bound of at least $9.6 \times 10^{11}$ network configurations. Although it was not computationally feasible to provide exact counts for the remaining MC sets, it is reasonable to conclude that the set $\bm{X}'_{\textrm{wt}}$ redescribes a significant proportion of the wild-type attractor basin, given the number of configurations redescribed by 20 of its most canalized MCs in comparison to the previous estimate of its size. Indeed, we pursued a very wide stochastic search with large tolerance parameters, arriving at a large number (1745) of MCs, each of which redescribes a very large set of configurations. For instance, each MC with the smallest number of enputs (23) alone redescribes $1.37 \times 10^{11}$ configurations, which is about $23\%$ of the original estimated size of the wild-type attractor basin, and $14\%$ of the lower bound for the size of the attractor basin we computed above. Given the large number of MCs in the $\bm{X}'_{\textrm{wt}}$ set, even with likely large overlaps of configurations, much of the attractor basin ought to be redescribed by this set.

From $\bm{X}'_{\textrm{wt}}$, we derived two-symbol MC sets using $\beta=8$. That is, due to the computational limitations discussed previously, we restricted the search to only those two-symbol MCs $\bm{x}''$ that redescribe at least $\beta=8$ wildcard MCs $\bm{x}'$. Given that configurations of the spatial SPN are defined by $60$ automata states, the group-invariant enputs we may have missed with this constraint are rather trivial. For instance, we may have missed MCs with a single group-invariant enput of 3 variables (any group-invariant enput with 4 variables would be found), or MCs with 2 distinct group-invariant enputs of 2 variables each (any MC with 3 such group-invariant enputs would be found). With this constraint on the search for two-symbol MCs, we identified only the pair of two-symbol MCs depicted in Figure : $\{\bm{x}''_1, \bm{x}''_2 \}$ – each redescribing 16 wildcard MCs; the redescribed wildcard MCs are available in *Supporting data S13*.
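For wildcard MCs, the inclusion/exclusion count mentioned above can be sketched as follows (an illustrative implementation of our own): the configurations shared by a subset of schemata number $2$ to the power of the positions that are wildcards in all of them, and zero if any fixed position disagrees; the cost is exponential in the number of MCs, which is why only small samples, such as 20 of the 32 MCs in $\bm{X}'_{\textrm{min}}$, are practical.

```python
from itertools import combinations

def intersection_size(schemata):
    """Number of configurations redescribed by ALL wildcard schemata given,
    or 0 if two of them disagree on a fixed position ('#' is the wildcard)."""
    wildcards = 0
    for position in zip(*schemata):
        fixed = {s for s in position if s != "#"}
        if len(fixed) > 1:
            return 0          # contradictory literal states
        if not fixed:
            wildcards += 1
    return 2 ** wildcards

def union_size(schemata):
    """Inclusion/exclusion over all non-empty subsets of the MC set."""
    total = 0
    for r in range(1, len(schemata) + 1):
        sign = 1 if r % 2 == 1 else -1
        for subset in combinations(schemata, r):
            total += sign * intersection_size(subset)
    return total

# The two wildcard MCs for attractor I1 of the single-cell model (17 nodes):
mc1 = "#" * 15 + "00"
mc2 = "#" * 14 + "1#0"
print(union_size([mc1, mc2]))  # 2**15 + 2**15 - 2**14 = 49152
```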
The two MCs $\{\bm{x}''_1, \bm{x}''_2 \}$ redescribe $1.95 \times 10^{11}$ configurations; that is, about $32\%$ of the wild-type attraction basin as estimated by , or $20\%$ of the lower bound for the size of the attractor basin we computed above – a very substantial subset of the wild-type attractor basin.

No other two-symbol MCs redescribing at least eight wildcard MCs were found in the set $\bm{X}'_{\textrm{wt}}$. Therefore, $\bm{X}''_{\textrm{wt}}$ comprises the wildcard MCs in $\bm{X}'_{\textrm{wt}}$, with the addition of $\{\bm{x}''_1, \bm{x}''_2 \}$ and the removal of the wildcard MCs these two schemata redescribe. Table contains the sizes of all MC sets. Moreover, $\{\bm{x}''_1, \bm{x}''_2 \}$ have no schemata in common with the three additional subsets of $\bm{X}''_{\textrm{wt}}$ we studied. This means that the two-symbol redescription (with $\beta=8$) is equal to the wildcard redescription of the sets of configurations $\bm{X}_{\textrm{bio}}$, $\bm{X}_{\textrm{noP}}$ and $\bm{X}_{\textrm{min}}$. The pair of two-symbol MCs identified denotes two very similar minimal patterns that guarantee convergence to the wild-type attractor. In both MCs, the pairs of nodes $wg_{2,4}$ and HH$_{2,4}$, as well as $ci_4$ and CI$_4$, are marked with distinct position-free symbols. In other words, they have three identical group-invariant enputs. For $\bm{x}''_1$ a fourth group-invariant enput comprises the nodes $hh_{1,3}$, while for $\bm{x}''_2$ the fourth group-invariant enput contains the nodes HH$_{1,3}$. For $\bm{x}''_2$ there is an extra literal enput: $ptc_4 = 0$ (the $ptc$ gene in the fourth cell is unexpressed). The remaining literal enputs are identical to those of $\bm{x}''_1$. The group-invariance in these MCs is not very surprising, considering the equivalent roles of neighbouring hedgehog and Wingless for intra-cellular dynamics – as discussed previously when the SPN's DCM was analysed. Notice that most group-invariance occurs for the same genes or proteins in alternative cells of the parasegment; for instance, $wg$ expressed in either cell 2 or cell 4. Nonetheless, the two two-symbol MCs offer two minimal conditions that guarantee convergence to the wild-type attractor and together cover a very large proportion of the wild-type attractor basin. Therefore, they serve as a parsimonious prescription for analysts who wish to control the macro-level behaviour (i.e. attractor behaviour) of this system. Finally, the MCs obtained exhibit substantial macro-level canalization, which we quantify below.

## Quantifying macro-level canalization

In the micro-level canalization section, we defined measures of *input redundancy*, *effective connectivity* and *input symmetry* to quantify micro-level canalization from the schema redescription of individual automata. Since we can also redescribe the configurations that produce network dynamics, leading to the minimal configurations (MCs) of the previous section, we can use very similar measures to quantify macro-level canalization and control. At the macro-level, high canalization means that network dynamics are more easily controllable: MCs contain fewer necessary and sufficient node-states (enputs) to guarantee convergence to an attractor or target pattern $\bm{\mathcal{P}}$.
Similarly to the micro-level case, we first define upper and lower bounds of *node redundancy*, computed from the set of MCs $\bm{X}''$ for a target pattern:

$$\bar{n}_{\textrm{r}}(\bm{X},\bm{\mathcal{P}}) = \frac{\displaystyle \sum_{\bm{x} \in \bm{X}} \max_{ {\theta : \bm{x} \in \Theta_\theta}}\left(n_\theta^\# \right)}{|\bm{X}|}
\label{upper_node_red}$$

$$\underline{n}_{\textrm{r}}(\bm{X}, \bm{\mathcal{P}}) = \frac{\displaystyle \sum_{\bm{x} \in \bm{X}} \min_{ {\theta : \bm{x} \in \Theta_\theta}}\left(n_\theta^\# \right)}{|\bm{X}|}
\label{lower_node_red}$$

These expressions tally the mean number of irrelevant nodes in controlling network dynamics towards $\bm{\mathcal{P}}$ for all configurations $\bm{x}$ of a set of configurations of interest $\bm{X}$ (e.g. a basin of attraction). The number of irrelevant nodes in a given MC $\bm{x}_{\theta}''$ is the number of its wildcards $n_{\theta}^{\#}$. Because each configuration $\bm{x}$ is redescribed by one or more MCs, there are various ways to compute a characteristic number of irrelevant nodes associated with the configurations, which is nonetheless bounded by the maximum and minimum number of wildcards in the set of MCs that redescribe $\bm{x}$. Therefore, the expressions above identify all MCs whose set of redescribed configurations $\Theta_{\theta}$ includes $\bm{x}$. The upper (lower) bound of node redundancy, Equation (Equation ), corresponds to considering the maximum (minimum) number of irrelevant nodes found for all MCs that redescribe configuration $\bm{x}$ of the set of interest – an optimistic (pessimistic) quantification of this type of macro-level canalization. Here we use solely the upper bound, which we refer to henceforth simply as *node redundancy* with the notation $n_{\textrm{r}}(\bm{X}, \bm{\mathcal{P}})$. Similarly to the micro-level case, the assumption is that the most redundant MCs are always accessible for control of the network towards pattern $\bm{\mathcal{P}}$. The range for node redundancy is $0 \le n_{\textrm{r}} \le n$, where $n$ is the number of nodes in the network. When $n_{\textrm{r}}(\bm{X}, \bm{\mathcal{P}}) = n$, we have full node irrelevance, or maximum canalization, which occurs only in the case of networks where the state of every node does not depend on any input (that is, when $k_{\textrm{r}} = k$ for every node). If $n_{\textrm{r}}(\bm{X}, \bm{\mathcal{P}}) = 0$, the state of every node is always needed to determine convergence to $\bm{\mathcal{P}}$ and we have no macro-level canalization.

If some nodes of a network are irrelevant for steering dynamics to $\bm{\mathcal{P}}$, then, from a control-logic perspective, we can say that $\bm{\mathcal{P}}$ is effectively controlled by a subset of the network with fewer than $n$ nodes. In other words, by integrating the micro-level control logic of automata in a network into the DCM, we are able to compute MCs and infer from those the macro-level *effective control*, which is not apparent from looking at connectivity structure alone:

$$n_{\textrm{e}}(\bm{X}, \bm{\mathcal{P}}) = n - n_{\textrm{r}}(\bm{X}, \bm{\mathcal{P}})
\label{lower_eff_n}$$

whose range is $0 \le n_{\textrm{e}} \le n$. If $n_{\textrm{e}}(\bm{X}, \bm{\mathcal{P}}) = 0$, there is full node irrelevance, or maximum canalization. When $n_{\textrm{e}}(\bm{X}, \bm{\mathcal{P}}) = n$, there is no canalization, i.e.
one needs to control all $n$ nodes to guarantee convergence to $\bm{\mathcal{P}}$.

Macro-level canalization can also manifest as *alternative* control mechanisms. The two-symbol schema redescription allows us to measure this form of control by computing the mean number of nodes that participate in group-invariant enputs, easily tallied by the number of position-free symbols ($n_\theta^\circ$) in MC schemata $\bm{x}_{\theta}''$ that characterize convergence to target pattern $\bm{\mathcal{P}}$. Thus, we quantify the upper and lower bounds of *node symmetry* in a set of configurations of interest $\bm{X}$ related to target pattern $\bm{\mathcal{P}}$ (e.g. a basin of attraction):

$$\bar{n}_{\textrm{s}}(\bm{X}, \bm{\mathcal{P}}) = \frac{ \displaystyle \sum_{\bm{x} \in \bm{X}}
 \max_{ {\theta : \bm{x} \in \Theta_\theta}}\left( n_\theta^\circ \right) }{|\bm{X}|}
\label{n_sym_upper}$$

$$\underline{n}_{\textrm{s}}(\bm{X}, \bm{\mathcal{P}}) = \frac{ \displaystyle \sum_{\bm{x} \in \bm{X}}
 \min_{ {\theta : \bm{x} \in \Theta_\theta}}\left( n_\theta^\circ \right) }{|\bm{X}|}
\label{n_sym_lower}$$

Here we use solely the upper bound, which we refer to henceforth simply as node symmetry and denote by $n_{\textrm{s}}(\bm{X}, \bm{\mathcal{P}})$; its range is $[0, n]$. Again, the assumption is that the most canalized MCs are always accessible for control of the network towards pattern $\bm{\mathcal{P}}$. High (low) values mean that permutations of node-states are likely (unlikely) to leave convergence to $\bm{\mathcal{P}}$ unchanged.

Macro-level canalization in network dynamics is then quantified by two types of redundancy: node redundancy (or its counterpart, effective control) and node symmetry. To be able to compare macro-level control in automata networks of different sizes, we can compute *relative* measures of canalization:

$$n_{\textrm{r}}^{*}(\bm{X}, \bm{\mathcal{P}}) = \frac{n_{\textrm{r}}(\bm{X}, \bm{\mathcal{P}})}{n}; \quad n_{\textrm{e}}^{*}(\bm{X}, \bm{\mathcal{P}}) = \frac{n_{\textrm{e}}(\bm{X}, \bm{\mathcal{P}})}{n}; \quad n_{\textrm{s}}^{*}(\bm{X}, \bm{\mathcal{P}}) = \frac{n_{\textrm{s}}(\bm{X}, \bm{\mathcal{P}})}{n}
\label{relative__macro_canalization_measures}$$

whose range is $[0, 1].$ Network dynamics towards a pattern of interest $\bm{\mathcal{P}}$ can have different amounts of each form of canalization, which allows us to consider four broad classes of control in network dynamics – just like the micro-level canalization case (see above).

The two MCs identified above for the single-cell SPN model (Eq. ) redescribe the full set of configurations that converge to $I1$. Since these MC schemata do not have group-invariant enputs, there is no node symmetry: ${n}_{\textrm{s}}(\bm{X}, I1) = 0$. Node redundancy and effective control are ${n}_{\textrm{r}}(\bm{X}, I1) = 15$ and ${n}_{\textrm{e}}(\bm{X}, I1) = 2$, respectively. In other words, even though the network of the single-cell SPN model comprises $n=17$ nodes, to control its dynamics towards attractor $I1$, it is sufficient to ensure that the states of only two nodes remain fixed; the initial state of the other 15 nodes is irrelevant. More concretely, $nhh$ must remain *off* and either SLP remains *on* or $n$WG remains *off*.
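A sketch of how the upper-bound measures could be estimated from a set of wildcard MCs and a sample of configurations, using the same illustrative string encoding as above ('#' for wildcards); the function names are ours, and the group-invariant (position-free) case is omitted, so node symmetry is not computed here.

```python
def redescribes(schema, config):
    """True if the configuration matches the wildcard schema position by position."""
    return all(s == "#" or s == c for s, c in zip(schema, config))

def node_redundancy(configs, mcs):
    """Upper-bound node redundancy: for each configuration, take the maximum
    number of wildcards over the MCs that redescribe it, then average.
    Assumes every configuration is redescribed by at least one MC."""
    total = 0
    for config in configs:
        total += max(mc.count("#") for mc in mcs if redescribes(mc, config))
    return total / len(configs)

def effective_control(configs, mcs):
    """n_e = n - n_r, with n taken from the schema length."""
    return len(mcs[0]) - node_redundancy(configs, mcs)

# Single-cell I1 example: both MCs have 15 wildcards, so n_r = 15 and n_e = 2.
mcs = ["#" * 15 + "00", "#" * 14 + "1#0"]
configs = ["0" * 15 + "00", "1" * 15 + "00"]   # two sample configurations
print(node_redundancy(configs, mcs), effective_control(configs, mcs))  # 15.0 2.0
```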
The relative measures become: $n_{\\textrm{r}}^{*}(\\bm{X}, I1) = 15\/17$ ($\\approx 88\\%$ of nodes are redundant to guarantee convergence to attractor $I1$), $n_{\\textrm{e}}^{*}(\\bm{X}, I1) = 2\/17$ (one only needs to control $\\approx 12\\%$ of nodes to guarantee convergence to attractor $I1$), and $n_{\\textrm{s}}^{*}(\\bm{X}, I1) = 0$ (there is no node symmetry in these MCs). This means that there is a large amount of macro-level canalization of the node redundancy type \u2013 and thus higher controllability \u2013 in the basins of attraction of the SPN model where pattern $I1$ is present.\n\nThe macro-level canalization measures above assume that the interest set of configurations $\\bm{X}$ can be enumerated. Moreover, schema redescription of network configurations itself assumes that $\\bm{X}$ can be sufficiently sampled with our stochastic search method (see previous sub-section). The node symmetry measure additionally assumes that the set of wildcard MCs obtained by stochastic search is not too large to compute symmetric groups. While these assumptions are easily met for micro-level analysis, because LUT entries of individual automata in models of biochemical regulation do not have a very large number of inputs, they are more challenging at the macro-level. Certainly, canalization in the single-cell SPN model can be fully studied at both the micro- and macro-levels \u2013 see Figures and for the former, as well as the example above for the latter. But quantification of macro-level canalization of larger networks, such as the spatial SPN model, needs to be estimated. Therefore, in formulae , , , and , the set of configurations $\\bm{X}$ is sampled: $\\bm{\\hat{X}}$. Configurations for $\\bm{\\hat{X}}$ are sampled from each MC in the set $\\bm{X}''$, proportionally to the number of configurations redescribed by each MC \u2013 i.e. roulette wheel sampling. Configurations from a selected MC are sampled by ascribing Boolean truth values to every wildcard in the MC schema; the proportion of each of the truth values is sampled from a uniform distribution. If a selected MC is a 2-symbol schema, the truth-values of group-invariant enputs are also sampled from a uniform distribution over all possibilities. Naturally, the same configuration $\\bm{x}$ can be redescribed by more than one MC $\\theta$. In summary, macro-level canalization for larger networks is quantified with the estimated measures: $\\hat{n}_{\\textrm{r}}$, $\\hat{n}_{\\textrm{e}}$, and $\\hat{n}_{\\textrm{s}}$, as well as their relative versions.\n\nTables and summarize the quantification of macro-level canalization estimated for the four MC sets obtained above: $\\bm{X}''_{\\textrm{wt}}$, $\\bm{X}''_{\\textrm{min}}$, $\\bm{X}''_{\\textrm{bio}}$, and $\\bm{X}''_{\\textrm{noP}}$. Effective control ($n_{\\textrm{e}}$) ranges between $23$ and $26.2$ nodes (out of $60$) for the four sets of MCs; this means (see $n_{\\textrm{e}}^{*}$) that only $38$ to $44\\%$ of nodes need to be controlled to guarantee convergence to wild-type. This shows that there is substantial macro-level canalization in the wild-type attractor basin; from $n_{\\textrm{r}}^{*}$, we can see that $56$ to $62\\%$ of nodes are, on average, redundant to guarantee convergence to wild-type.
On the other hand, macro-level canalization in the form of alternative (or symmetric) control mechanisms is not very relevant in this attractor basin, as indicated by the low values of $n_{\\textrm{s}}$ and $n_{\\textrm{s}}^{*}$: in the wild-type attractor basin, on average, only approximately 1 out of 60 nodes, or $1.6\\%$, can permute.\n\n## Enput power and critical nodes\n\nEvery MC is a schema, and hence comprises a unique set of enputs, not entirely redescribed by any other MC. As defined in the micro-level canalization section, an enput $e$ can be literal \u2013 a single node in a specific Boolean state \u2013 or a group-invariant enput: a set of nodes with a symmetry constraint. Every enput $e$ in a given MC is essential to ensure convergence to a pattern $\\bm{\\mathcal{P}}$, e.g. an attractor $\\mathcal{A}$. Consequently, if the state or constraint of $e$ is disrupted in the MC, without gaining additional knowledge about the configuration of the network, we cannot guarantee convergence to $\\bm{\\mathcal{P}}$. How *critical* is $e$ in a set of configurations $\\bm{X}$ redescribed by an MC set $\\bm{X}''$ \u2013 such as the set of MCs that redescribe a basin of attraction? Since there are usually alternative MCs that redescribe the possible dynamic trajectories to $\\bm{\\mathcal{P}}$, the more $e$ appears in $\\bm{X}''$, the more critical it is in guaranteeing convergence to $\\bm{\\mathcal{P}}$.\n\nFor instance, in the two MCs shown in Equation , the enput $e \\equiv (nhh = 0)$ is common to both. Therefore, disrupting it, without gaining additional knowledge about the state of other nodes, would no longer guarantee convergence to the attractor pattern $I1$ in the single-cell SPN dynamics. Similarly, for the two-symbol MC set of the spatial SPN model, shown in Figure , enputs $e \\equiv (hh_{2,4} = 0)$ and group-invariant enput $e \\equiv (wg_2 = 1 \\vee wg_4 = 1)$ appear in both MCs. Disrupting them would no longer guarantee convergence to the wild-type attractor in the spatial SPN dynamics.\n\nLet us quantify the potential disruption of target dynamics by perturbation of enputs in an MC set. The *power* of an enput $e$ in a set of configurations $\\bm{X} \\rightarrowtail \\bm{X}'' : \\sigma(\\bm{x}) \\leadsto \\bm{\\mathcal{P}}, \\forall \\bm{x} \\in \\bm{X}$, is given by:\n\n$$\\epsilon(e,\\bm{X}'',\\bm{\\mathcal{P}}) =\n \\frac{|\\bm{X}_e|}{|\\bm{X}|}\n\\label{eq:dominance}$$\n\nwhere $\\bm{X}_e \\subseteq \\bm{X}$ is the subset of configurations redescribed by $\\bm{X}''$ that contain enput $e$: $\\bm{X}_e \\equiv \\{\\bm{x} \\in \\bm{X}: \\bm{x} \\rightarrowtail \\bm{x}'' \\wedge e \\in \\bm{x}'' \\}$. Thus, this measure yields the proportion of configurations in $\\bm{X}$ redescribed by the MCs in which $e$ is an enput; its range is $[0,1]$. If an enput appears in every MC, as in the examples above, then $\\epsilon = 1$ \u2013 in which case $e$ is said to have *full power* over $\\bm{X}''$. For the analysis of the SPN model below, when $0.5 \\le \\epsilon < 1$, $e$ is a *high power* enput; when $0 < \\epsilon < 0.5$, it is a *low power* enput; and when $\\epsilon = 0$, it is a *null power* enput. The larger the power of $e$, the more its perturbation is likely to disrupt convergence to the target pattern $\\bm{\\mathcal{P}}$.
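As an illustration of this measure, and again a sketch under assumed data structures rather than the code used for the analyses below, enput power can be computed by counting the configurations of interest that are redescribed by at least one MC containing the literal enput:

```python
# Sketch of the enput-power measure: the fraction of configurations in X that
# are redescribed by at least one MC in which e appears as a literal enput.
# An MC is a dict node -> '0'/'1'/'#'; a literal enput is a (node, state) pair;
# X is a list of configurations (dicts node -> '0'/'1').

def redescribes(mc, x):
    return all(v == '#' or x[node] == v for node, v in mc.items())

def enput_power(e, MCs, X):
    node, state = e
    mcs_with_e = [mc for mc in MCs if mc.get(node) == state]   # MCs where e is an enput
    covered = sum(1 for x in X if any(redescribes(mc, x) for mc in mcs_with_e))
    return covered / len(X)

# Toy usage with three nodes and two MCs.
MCs = [{'a': '0', 'b': '#', 'c': '1'}, {'a': '0', 'b': '1', 'c': '#'}]
X = [{'a': '0', 'b': '1', 'c': '1'}, {'a': '0', 'b': '0', 'c': '1'}]
print(enput_power(('a', '0'), MCs, X))  # 1.0 -> full power
print(enput_power(('b', '1'), MCs, X))  # 0.5 -> high power under the thresholds above
```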
When $\\bm{X}$ is too large, we estimate $\\hat{\\epsilon}$ \u2013 similarly to the canalization measures discussed in the previous subsection.\n\nWe studied the wild-type attractor basin of the spatial SPN model using the four MC sets of interest: $\\bm{X}''_{\\textrm{wt}}$, $\\bm{X}''_{\\textrm{min}}$, $\\bm{X}''_{\\textrm{bio}}$, and $\\bm{X}''_{\\textrm{noP}}$ (see Minimal configurations subsection above) focusing on the power of literal enputs only. It is also possible to compute the enput power of group-invariant enputs. For example, the two-symbol MC $\\bm{x}''_1$ in Figure , has one of its four group-invariant enputs defined by $ci = 1 \\lor CI = 1$. The power of this enput would tally those MCs in which this condition holds. Nonetheless, here we only measure the power of literal enputs and present the study of the power of group-invariant enputs elsewhere. The enput power computed for these four sets is depicted in Figure , where the output nodes PH and SMO are omitted because they are never input variables to any node in the SPN model, and therefore have null power. For the discussion of these results, it is useful to compare them to the known initial condition, $\\bm{x}_{\\textrm{ini}}$ depicted in Figure , and the wild-type attractor, $\\mathcal{A}_{\\textrm{wt}}$ depicted in Figure (a).\n\n**Enput power in** $\\bm{X}''_{\\textrm{wt}}$ (see Figure A). The enputs with *full power* ($\\epsilon=1$) are: SLP$_{1,2} = 0$, SLP$_{3,4} = 1, hh_{2,4} = 0$ and $ptc_1 = 0$. This is not entirely surprising since all of these genes and proteins are specified as such in both $\\bm{x}_{\\textrm{ini}}$ and $\\mathcal{A}_{\\textrm{wt}}$. However, these values show that these enputs must remain in these states in the entire (sampled) wild-type basin of attraction. In other words, these enputs are *critical controllers* of the dynamics to the wild-type attractor. Indeed, the wild-type is not *robust* to changes in these enputs, which are likely to steer the dynamics to other attractors, as discussed further in the next section. Therefore, the spatial SPN model appears to be unable to recover the dynamic trajectory to the wild-type attractor when either the hedgehog gene is expressed in cells two and four; or the patched gene is expressed in the anterior cell, as well when the initial expression pattern of SLP determined upstream by the pair-rule gene family is disrupted in any way. There are also enputs with *high power* to control wild-type behaviour: $wg_{1,3} = \\textrm{WG}_{1,3} = 0$, $en_{1} = 1$, PTC$_1 = 0$, $en_{2,4}=0$, $ptc_3 = 1$, CI$_3 = 0$ and CIR$_3 = 1$. Again, these are the states of these genes and proteins in the known initial configuration of the SPN $\\bm{x}_{\\textrm{ini}}$, and most of them, except for $ptc_3 = 1$, CI$_3 = 0$ and CIR$_3 = 1$ correspond to their final states in $\\mathcal{A}_{\\textrm{wt}}$.\n\nIn Figure A every node in the SPN \u2013 except the omitted nodes PH and SMO \u2013 appear as an enput, in at least one Boolean state, in many cases with very low values of $\\epsilon$. Thus, while macro-level dynamics is significantly canalized (see above), especially by SLP and the spatial signals for each cell, control of wild-type can derive from alternative strategies, whereby every node can act as an enput in some context. 
Nonetheless, most nodes ultimately do not observe much power to control wild-type behaviour, thus interventions to disturb wild-type behaviour are most effective via the few more powerful controllers (see also next section).\n\nWe can also compare the enput power computed for $\\bm{X}''_{\\textrm{wt}}$ (Figure A), with the two-symbol MCs $\\bm{x}''_1$ and $\\bm{x}''_2$ in Figure . These two MCs redescribe a significant portion of the wild-type attractor basin \u2013 $20\\%$ of our lower bound count of this basin. Because they only appear in $\\bm{X}''_{\\textrm{wt}}$ and not in any of the other MC sets we studied, the portion of the wild-type attractor basin they redescribe is unique to $\\bm{X}_{\\textrm{wt}}$, and can be analysed via $\\bm{x}''_1$ and $\\bm{x}''_2$. Most of the literal enputs specified in $\\bm{x}''_1$ and $\\bm{x}''_2$ have high power in $\\bm{X}''_{\\textrm{wt}}$, except for WG$_2 = wg_4 =\n\\textrm{CIR}_{1,2,4} = 1$, which are enputs in these two-symbol MCs that have low power. Conversely, there are literal enputs with high-power in $\\bm{X}''_{\\textrm{wt}}$ that are not enputs in these two-symbol MCs: EN$_{2,4} = 0$ and PTC$_1=0$. A key distinguishing feature of $\\bm{x}''_1$ and $\\bm{x}''_2$ is the expression of CIR across the entire parasegment as well as of the wingless protein in the second cell, both of which are different from the trajectory between the known initial condition of the SPN and the wild-type attractor. Therefore, $\\bm{x}''_1$ and $\\bm{x}''_2$ redescribe a (large) portion of the attractor basin outside of the more commonly studied dynamical trajectories.\n\n**Enput power in** $\\bm{X}''_{\\textrm{min}}$ (see Figure B). We found an unexpected expression of CIR$_2 = 1$ (now with full power) as well as $wg_2 = \\textrm{WG}_2 = 1$ (high power). Other enputs whose expression is in opposition to both $\\bm{x}_{\\textrm{ini}}$ and $\\mathcal{A}_{\\textrm{wt}}$ appear with low power: HH$_{2,4} = 1$ and CIR$_{1} = 1$. This again suggests that there is a substantial subset of the wild-type attractor basin, controlled by these and other enputs, distinct from the trajectory that results from the known (biologically plausible) initial configuration. We can also see that there is a significant number of nodes that do not play the role of enput in any MC \u2013 nodes with *null power*, depicted as small grey circles \u2013 as well as many more enputs with full power. $\\bm{X}''_{\\textrm{min}}$ redescribes wild-type dynamics with the smallest number (23) of enputs; this set contains only 32 MCs out of the 1731 in $\\bm{X}''_{\\textrm{wt}}$. However, these are the most macro-canalizing MCs that guarantee convergence to wild-type. Indeed, because of their parsimony, they redescribe a very large subset of the wild-type attractor basin with at least 1.6 times more configurations than what was previously estimated for this basin (see above). Therefore, $\\bm{X}''_{\\textrm{min}}$ provides a solid baseline for the understanding of control in the wild-type attractor basin. This means that the genes and proteins with full power in this set are critical controllers of wild-type behaviour.\n\n**Enput power in** $\\bm{X}''_{\\textrm{bio}}$ (see Figure C). Because this MC set only redescribes configurations in the dynamic trajectory from $\\bm{x}_{\\textrm{ini}}$ to $\\mathcal{A}_{\\textrm{wt}}$, the transient dynamics observed in $\\bm{X}''_{\\textrm{wt}}$ and $\\bm{X}''_{\\textrm{min}}$, e.g. $wg_2 = 1$ and CIR$_2 = 1$, disappear. 
There are, however, other enputs with full power: $wg_{1,3} = \\textrm{WG}_{1,3} = 0$, $en_{2,4} = \\textrm{EN}_{2,4} = 0$, $ptc_1 = \\textrm{PTC}_1 = 0$. These critical enputs are particularly important for restricting analysis to a better-known portion of the wild-type attractor basin, for which the model was especially built.\n\n**Enput power in** $\\bm{X}''_{\\textrm{noP}}$ (see Figure D). This set of MCs is useful to understand the beginning of the segment polarity regulatory dynamics, with no proteins expressed. The critical genes that must be expressed (*on*) are $ptc_3$ and $wg_4$, which appear with full power; moreover, $en_1 = hh_1 = ptc_2 = ci_2 = 1$ appear with high power. As shown in the figure, most other enputs with full or high power correspond to genes and proteins that must be inhibited (*off*), except, of course, SLP$_{3,4}$, which are assumed to be always *on* in the SPN model.\n\nWe compared these results with previous work on identifying critical nodes in the SPN model. Chaves et al. deduced, from the model's logic, minimal 'pre-patterns' for the initial configuration of the SPN that guarantee convergence to the wild-type attractor. More specifically, two necessary conditions and one sufficient condition were deduced, which we now contrast with the enput power analysis.\n\nThe **first necessary condition** for convergence to the wild-type attractor is: $ptc_3 = 1$, assuming that all proteins are unexpressed (*off*) initially, and the sloppy pair gene rule is maintained constant (i.e. SLP$_{1,2} = 0 \\; \\land$ SLP$_{3,4} = 1$). Of the MC sets we analysed, only $\\bm{X}''_{\\textrm{noP}}$ obeys the (biologically plausible) assumptions for this necessary condition. As we can see in Figure D, the enput $ptc_3 = 1$ has full power in this MC set, which confirms this previous theoretical result. However, since every enput with full power is a necessary condition for the set of configurations described by its MC set, we can derive other necessary conditions for this set of configurations (with the same assumptions), such as $ptc_1 = 0$, $wg_3 = 0$, or $wg_4 = 1$ (see below). We can also see that not all assumptions for the first necessary condition are necessary; while the sloppy pair rule appears as four enputs with full power, not all proteins are required to be unexpressed: the expression of HH is irrelevant in every cell of the parasegment, as is the expression of PTC$_{2,3}$, WG$_{2,4}$, CIA$_{4}$, and CIR$_{1,2,3}$. Moreover, the enput power analysis allows us to identify 'degrees of necessity'; some enputs may not be strictly necessary, but are almost always needed. This is the case of the expression of $en_{1}$, which has high power in $\\bm{X}''_{\\textrm{noP}}$, but is not a necessary condition, as a few MCs can guarantee convergence to wild-type with $en_{1} = 0$ (which also appears as an enput with low power). Naturally, if we relax the assumptions for condition $ptc_3 = 1$, it may no longer be a necessary condition. This can be seen when we look at the enput power analysis of the entire (sampled) wild-type basin $\\bm{X}''_{\\textrm{wt}}$ (Figure A) or the smaller $\\bm{X}''_{\\textrm{bio}}$ (Figure C). In these cases, which still preserve the sloppy pair rule assumption, $ptc_3 = 1$ is no longer an enput with full power. This means that, according to this model, if some proteins are expressed initially, $ptc_3 = 1$ is no longer a necessary condition.
Interestingly, we found that in the most macro-canalizing subset of the attractor basin, $\\bm{X}''_{\\textrm{min}}$ (Figure B) \u2013 which assumes the sloppy pair rule constraint but is not constrained to initially unexpressed proteins \u2013 $ptc_3 = 1$ does appear as an enput with full power again. This means that in the most parsimonious way to control convergence to the wild-type attractor, $ptc_3 = 1$ is also a necessary condition. It is noteworthy that in this case, not only can some proteins be expressed, but the expression of CIR$_{2}$ is also a necessary condition (an enput with full power).\n\nThe **second necessary condition** for convergence to the wild-type attractor is: $wg_4 =1 \\vee en_1 =1 \\vee ci_4 =1$, assuming that all proteins are unexpressed (*off*) initially, and the sloppy pair gene rule is maintained constant (i.e. SLP$_{1,2} = 0 \\; \\land$ SLP$_{3,4} = 1$). Again, only $\\bm{X}''_{\\textrm{noP}}$ obeys the (biologically likely) assumptions for this necessary condition. As we can see in Figure D, the enput $wg_4 =1$ has full power; therefore, it is a necessary condition. However, the enput $en_1 =1$ has high power, and the enput $ci_4 =1$ has no power. This means that they are not necessary, though $en_1 = 1$ is most often needed. These results suggest that this necessary condition could be shortened to $wg_4 =1$, because in our sampling of the wild-type attractor basin, in the subset meeting the assumptions of the condition, we did not find a single configuration where $wg_4 =0$. Even though our stochastic search was very large, there may be configurations, with no proteins expressed, where $wg_4 = 0 \\; \\land (en_1 = 1 \\vee ci_4 = 1)$, thus maintaining the original necessary condition. However, our enput power analysis gives a more realistic and nuanced picture of control in the SPN model under the same assumptions. While the necessary condition may be $wg_4 =1 \\vee en_1 =1 \\vee ci_4 =1$, the individual enputs have strikingly different power in controlling wild-type behaviour: $ci_4 =1$ was never needed (no power), $en_1 = 1$ has high power, and $wg_4 =1$ has full power. Naturally, if we relax the assumptions for this condition, it may no longer be a necessary condition. For instance, if we allow proteins to be expressed initially (still preserving the sloppy pair constraint), we can find MCs that redescribe configurations where $wg_4 = en_1 = ci_4 = 0$. We found 171 MCs in $\\bm{X}''_{\\textrm{wt}}$ (available in *Supporting data S14*) where this condition is not necessary, one of which is depicted in Figure .\n\nThe **sufficient condition** for convergence to the wild-type attractor is: $wg_4 = 1 \\; \\land \\; ptc_3 = 1$, assuming that the sloppy pair gene rule is maintained constant (i.e. SLP$_{1,2} = 0 \\; \\land$ SLP$_{3,4} = 1$). A variation of this sufficient condition assumes instead (maintaining the sloppy pair gene rule): $wg_4 = 1 \\; \\land$ PTC$_{3}=1$. In their analysis, Chaves et al. assume that all proteins are unexpressed and that many other genes are initially inhibited (*off*). Even though in Chaves et al. the initial condition itself only requires $ptc_{1}=ci_{1,3}=0$, the argument hinges on propositions and facts that require knowing the state of additional genes such as $en_{2} = wg_{3} = hh_{2,4}=0$. While Chaves et al.
concluded rightly from this minimal pre-pattern that convergence to the wild-type pattern has a remarkable ability to correct for expression *delays* in all other genes, the condition does not really describe robustness to *premature expression* of genes and proteins. It is therefore interesting to investigate sufficient conditions that do specify the states of most variables, giving us the ability to study robustness to both delays and premature expression of chemical species. The MC schemata we obtained with our macro-level analysis allow us to investigate such sufficient conditions directly.\n\nWe searched the entire MC set $\\bm{X}''_{\\textrm{wt}}$ to retrieve the MCs with the *fewest* enputs specified as *on*. The 10 MCs (available in *S11*) we retrieved contain only 26 literal enputs; in six of these MCs, the two nodes in the sufficient condition above ($wg_4, ptc_3$) plus the nodes from the sloppy pair rule (SLP$_{3,4}$) are *on*, 24 nodes are *off*, and the remaining 32 are wildcards, and thus irrelevant. In the remaining MCs, instead of $ptc_3 = 1$, we found PTC$_3 = 1$ to be an enput. In those MCs, $ptc_3 = \\#$. Converting all wildcards to *off* in one of these MCs confirms the sufficient condition, as can be seen from Figure A, where SLP$_{3,4} = wg_4 = ptc_3 = 1$, and everything else is *off*. This can be seen as an 'extreme' condition for convergence to the wild-type attractor, with a minimal set of genes expressed. We also searched for the opposite extreme scenario, retrieving all MCs with the largest number of *on* nodes that still converge to the wild-type pattern (available in *Supporting data S12*). By replacing all wildcards in such MCs with *on*, we obtained the configuration in which only 16 nodes must be inhibited (*off*), while the remaining 44 are expressed (*on*), depicted in Figure B. Interestingly, in this extreme configuration, $hh$ must remain *off* across the whole parasegment.\n\n## Robustness to enput disruption\n\nThe power measure introduced in the previous subsection allows us to predict critical nodes in controlling network dynamics to a pattern of interest $\\bm{\\mathcal{P}}$. A natural next step is to investigate what happens when the critical controllers are actually disrupted. We can disrupt an enput $e$ in an MC set with a variety of dynamic regimes. Here, we adopt the approach proposed by Helikar *et al.* , where a node of interest flips its state at time $t$ with a probability $\\zeta$, which can be seen to represent noise in regulatory and signalling events, as well as the 'concentration' of a gene (its corresponding mRNA) or protein \u2013 thus making it possible to use Boolean networks to study continuous changes in concentration of biochemical systems (see ).\n\nWe start from an initial set of configurations of interest: $\\bm{X}^0$. This can be a single configuration, such as the known initial configuration of the SPN $\\bm{X}^0 \\equiv \\{\\bm{x}_{\\textrm{ini}}\\}$ (as in Figure A), where the enput $e$ is in a specific (Boolean) state. Next, we set the value of the *noise* parameter $\\zeta$, which is the probability that $e$ momentarily flips from its state in $\\bm{X}^0$ at time $t$. This noise is applied at every time step of the simulated dynamics; when a state-flip occurs at time $t$, the node returns to its original state at $t+1$, where noise is applied again with probability $\\zeta$. Noise is applied to $e$ from $t = 0$ to $t = m$.
At time step $t = m+1$ no more noise is applied to $e$ ($\\zeta = 0$) and the network is allowed to converge to an attractor. This process is repeated for $M$ trials. Finally, we record the proportions of the $M$ trials that converged to different attractors.\n\nSince in this paper we only computed enput power for literal enputs (see previous subsection), we also only study literal enput disruption. It is straightforward to disrupt group-invariant enputs; for instance, the group-invariant enput defined by $ci = 1 \\lor$ CI $= 1$ from the two-symbol MC $\\bm{x}''_1$ in Figure can be perturbed by making $ci = 0 \\land$ CI $= 0$. Nonetheless, for simplicity, we present the study of the disruption of group-invariant enputs elsewhere.\n\nThe enput power analysis in the previous subsection revealed that in the wild-type attractor basin ($\\bm{X}_{\\textrm{wt}}$) of the spatial SPN model there are the following critical nodes (or key controllers): across the parasegment, SLP proteins must be inhibited in cells 1 and 2 (SLP$_{1,2} = 0$) and expressed in cells 3 and 4 (SLP$_{3,4} = 1$), as determined by the pair-rule gene family; hedgehog genes (spatial signals) in cells 2 and 4 must be inhibited ($hh_{2,4} = 0$); the patched gene in the anterior cell must also be inhibited ($ptc_1 = 0$). With the *stochastic intervention* procedure just described, we seek to answer two questions about these key controllers: (1) how sensitive are they to varying degrees of stochastic noise? and (2) which and how many other attractors become reachable when they are disrupted? In addition to the seven full power enputs, for comparison purposes, we also test the low power enput CI$_4 = 0$. In the original SPN model the states of SLP$_{1,2,3,4}$ are fixed (the sloppy pair gene constraints). Because these naturally become enputs with full power (see Figure ), it is relevant to include them in this study of enput disruption. However, by relaxing the fixed-state constraint on SLP$_{1,2,3,4}$ through stochastic noise, the dynamical landscape of the spatial SPN model is enlarged from $2^{56}$ to $2^{60}$ configurations. This means that more attractors than the ten identified for the SPN Boolean model (depicted in Figure ) are possible, and are indeed found, as explained below.\n\nWe used $\\bm{X}^0 \\equiv \\{\\bm{x}_{\\textrm{ini}}\\}$ as the initial state of the networks analysed via stochastic interventions, because of its biological relevance. The simulations were performed with the following parameters: $\\zeta \\in [0.05,0.95]$, swept with $\\Delta(\\zeta) = 0.05$, plus extremum values $\\zeta = 0.02$ and $\\zeta = 0.98$; $m = 500$ steps; $M=10^4$. The simulation results are shown in Figure .\n\nThe first striking result is that disruption of SLP$_1 = 0$ makes it possible to drive the dynamics away from wild-type into one of five other attractors (one of which is a variant of wild-type). For $\\zeta > 0.15$ no further convergence to wild-type is observed, and at $\\zeta =0.05$ the proportion of trials that converged to wild-type was already very small. We also found phase transitions associated with the values of $\\zeta$. For $\\zeta \\le 0.15$ most trials converged to wild-type, wild-type (ptc mutant), broad-stripes, or no-segmentation, and a very small proportion to two variants of the ectopic mutant. When $\\zeta = 0.15$ the proportion of trials converging to broad-stripes reaches its peak and then decreases, so that no trial converged to this mutant expression pattern for $\\zeta \\ge 0.55$.
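For concreteness, the stochastic intervention protocol can be sketched as follows. This is an illustrative sketch, not the simulator used for these experiments: `step` (a synchronous Boolean-network update) and `attractor_label` (mapping a steady state to a phenotype name) are hypothetical placeholders, only fixed-point attractors are handled for simplicity, and the targeted enput node is read as pinned to its value in the initial configuration except when noise momentarily flips it.

```python
import random
from collections import Counter

def disrupt_enput(x0, node, zeta, m, M, step, attractor_label, settle=1000):
    """Estimate the attractor distribution when `node` is noisily disrupted."""
    outcomes = Counter()
    for _ in range(M):
        state = dict(x0)
        for t in range(m + 1):                 # noisy phase: t = 0 .. m
            state[node] = x0[node]             # enput at its original state ...
            if random.random() < zeta:
                state[node] = 1 - state[node]  # ... momentarily flipped with probability zeta
            state = step(state)
        for _ in range(settle):                # noise-free phase: run until a fixed point
            nxt = step(state)
            if nxt == state:
                break
            state = nxt
        outcomes[attractor_label(state)] += 1
    return {label: count / M for label, count in outcomes.items()}
```

In the experiments reported here the noise level is swept over $[0.02, 0.98]$, with $m = 500$ steps and $M = 10^4$ trials per setting.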
Finally, for $\\zeta \\ge 0.55$ convergence to the ectopic variants reaches its peak and decreases steadily but does not disappear, while convergence to the no-segmentation mutant increases, becoming almost $100\\%$ when $\\zeta = 0.98$. We thus conclude that SLP$_1 = 0$ is a wild-type attractor enput which is very sensitive to noise.\n\nIn the case of SLP$_{3} = 1$, we observed convergence to an attractor that is not any of the original ten attractors \u2013 characterized by having two engrailed bands in cells 1 and 3 (see *Supporting text S5*). The proportions of trials converging to wild-type and to the new attractor decrease and increase, respectively, reaching similar proportions when $\\zeta=0.5$. When $\\zeta=0.98$, almost every trial converged to the new attractor. We conclude that SLP$_{3} = 1$ is a wild-type attractor enput whose robustness degrades gradually with the amount of noise.\n\nDisruption of SLP$_4 = 1$ resulted in a behaviour similar to SLP$_1$, but with fewer possible attractors reached. As $\\zeta$ is increased, fewer trials converge to wild-type and growing proportions of trials converge to the wild-type $ptc$ mutant pattern (reaching a peak at $\\zeta =0.5$) and the no-segmentation mutant. For more extreme values of $\\zeta$, the majority of trials converged to the no-segmentation mutant. However, an important difference with respect to SLP$_1$ was observed: for $\\zeta \\le 0.5$ the majority of trials converged to wild-type, and convergence to this attractor is observed for the whole range of $\\zeta$. Thus the wild-type phenotype in the SPN model is much more robust to perturbations to the expression of SLP in the posterior cell (SLP$_4 = 1$) than to perturbations to its inhibition in the anterior cell (SLP$_1 = 0$).\n\nWith the parameters chosen, the disruption of SLP$_2= 0$ leads to a remarkably similar behaviour for all noise levels: any disruption (any amount of noise) leads to the same wild-type variant attractor pattern with two wingless stripes (c). Therefore, SLP$_2= 0$ is not robust at all \u2013 though the resulting attractor is always the same and a variant of wild-type. In this case, convergence to a single attractor for all values of $\\zeta$ is the result of setting $m=500$ in our experiments. When we lower the value of $m$ enough in our simulations, for low values of $\\zeta$, there are trials that are not perturbed and thus maintain convergence to the wild-type attractor. But any perturbation of SLP$_2= 0$ that occurs leads the dynamics to the wild-type variant.\n\nDisruption of $hh_{2,4}=0$ increasingly drives dynamics to the broad-stripes mutant. However, disruption of $hh_2$ reveals greater robustness, since a large number of trials still converges to wild-type for $\\zeta \\le 0.15$, and residual convergence to wild-type is observed up to $\\zeta = 0.75$. In contrast, any disruption of $hh_4$ above $\\zeta = 0.05$ leads to the broad-stripes mutant, and even very small amounts of disruption lead to a large proportion of mutants. Similarly, disruption of $e \\equiv ptc_1 = 0$ drives the dynamics to one and the same wild-type variant. Yet, when $\\zeta=0.02$ there is a minute proportion of trajectories that still converge to the wild-type attractor. Therefore, as expected, the wild-type attractor in the SPN model is not very robust to disruptions of the enputs with full power.
Finally, and in contrast, no disruption of the low-power enput CI$_4 = 0$ is capable of altering convergence to the wild-type attractor.\n\n# Discussion\n\nWe introduced wildcard and two-symbol redescription as a means to characterize the control logic of the automata used to model networks of biochemical regulation and signalling. We do this by generalizing the concept of *canalization*, which becomes synonymous with redundancy in the logic of automata. The two-symbol schemata we propose capture two forms of logical redundancy, and therefore of canalization: input redundancy and symmetry. This allowed us to provide a straightforward way to *quantify* canalization of individual automata (micro-level), and to integrate the entire canalizing logic of an automata network into the Dynamics Canalization Map (DCM). A great merit of the DCM is that it allows us to make inferences about collective (macro-level) dynamics of networks from the micro-level canalizing logic of individual automata \u2013 with incomplete information. This is important because even medium-sized automata models of biochemical regulation lead to dynamical landscapes that are too large to compute. In contrast, the DCM scales linearly with the number of automata, and schema redescription \u2013 based on the computation of prime implicants \u2013 is easy to compute for individual automata with the number of inputs typically used in the literature.\n\nWith this methodology, we thus provide a way to link micro- to macro-level dynamics \u2013 a crux of complexity. Indeed, in this paper we showed how to uncover *dynamical modularity*: separable building blocks of macro-level dynamics. This is an entirely distinct concept from community structure in networks, and allows us to study complex networks with node dynamics \u2013 rather than just their connectivity structure. The identification of such modules in the dynamics of networks is entirely novel and provides insight into how the collective dynamics of biochemical networks uses these building blocks to produce its phenotypic behaviour \u2013 towards the goal of explaining how biochemical networks 'compute'.\n\nBy basing our methodology on the redescription of individual automata (micro-level), we also avoid the scaling problems faced by previous schemata approaches, which focused solely on redescription of the dynamical landscape (macro-level) of networks . By implementing the DCM as a threshold network, we show that we can compute the dynamical behaviour of the original automata network from information about the state of just a few network nodes (partial information). In its original formulation, the dynamic unfolding of an automata network cannot be computed unless an initial state of all its nodes is specified. In turn, this allows us to search for minimal conditions (MCs) that guarantee convergence to an attractor of interest. Not only are MCs important to understand how to *control* complex network dynamics, but they also allow us to *quantify macro-level canalization* therein. From this, we get a measurable understanding of the robustness of attractors of interest \u2013 the greater the canalization, the greater the robustness to random perturbations \u2013 and, conversely, the identification of *critical node-states* (enputs) in the network dynamics to those attractors. We provided a measure of the capacity of these critical nodes to control convergence to an attractor of interest (enput power), and studied their robustness to disruptions.
By quantifying the ability of individual nodes to control attractor behaviour, we can obtain a testable understanding of macro-level canalization in the analysed biochemical network. Indeed, we can uncover how robust phenotypic traits are (e.g. robustness of the wild-type attractor), and which critical nodes must be acted upon in order to disrupt phenotypic behaviour.\n\nWe exemplified our methodology with the well-known segment polarity network model (in both the single-cell and the spatial versions). Because this model has been extensively studied, we use it to show that our analysis does not contradict any previous findings. However, our analysis also allowed us to gain new knowledge about its behaviour, from a better understanding of the size of its wild-type attractor basin (larger than previously thought) to the uncovering of new minimal conditions and critical nodes that control wild-type behaviour. We also fully quantified micro- and macro-level canalization in the model, and provided a complete map of its canalization logic, including dynamical modularity. Naturally, our results pertain to this model; we do not claim that our results characterize the real Drosophila segment polarity gene network. However, our results, should they be found to deviate from organism studies, can certainly be used to improve the current model, and thus improve our understanding of Drosophila development. Thus a key use of our methodology in systems biology should be to help improve modelling accuracy. With the methodology now tested on this model, in subsequent work we will apply it to several automata network models of biochemical regulation and signalling available in the systems biology literature.\n\nThe pathway modules we derived by inspection of the DCM for the segment polarity network revealed a number of properties of complex network dynamics that deserve further study. For instance, the dynamical sequence that occurs once each such module is activated is independent of the temporal update scheme utilized. Therefore, if the dynamics of a network is captured exclusively by such modules, its intra-module behaviour will be similar for both synchronous and asynchronous updating \u2013 denoting a particular form of robustness to timing. We will explore this property in future work, but as we showed here, the dynamics of the single-cell version of the SPN model is largely (though not fully) controlled by only two pathway modules. This explains why its dynamical behaviour is quite robust to timing events, as previously reported .\n\nResearch in cellular processes has provided a huge amount of genomic, proteomic, and metabolomic data used to characterize networks of biochemical reactions. All this information opens the possibility of understanding complex regulation of intra- and inter-cellular processes in time and space. However, this possibility is not yet realized because we do not understand the dynamical constraints that arise at the phenome (macro-) level from micro-level interactions. One essential step towards reaching these ambitious goals is to identify and understand the loci of control in the dynamics of complex networks that make up living cells. Towards this goal, we developed the new methodology presented in this paper. Our methodology is applicable to any complex network that can be modelled using binary-state automata \u2013 and easily extensible to multiple-state automata.
We currently focus only on biochemical regulation with the goal of understanding the possible mechanisms of collective information processing that may be at work in orchestrating cellular activity.\n\n# Acknowledgements\n\nWe thank the FLAD Computational Biology Collaboratorium at the Gulbenkian Institute of Science (Portugal) for hosting and providing facilities used for this research. We also thank Indiana University for providing access to its computing facilities. Finally, we are very grateful for the generous and constructive comments we received from reviewers.\n\n# Tables\n\n$$\\begin{array}{|l|l|l|}\n\\hline\n\\text{\\emph{Index}} & \\text{\\emph{Node}} & \\text{\\emph{State-Transition Function}} \\\\\n\\hline\n1 & \\textrm{SLP}_i^{t+1} & \\leftarrow 0 \\text{ if } i = 1 \\lor i = 2; 1 \\text{ if } i = 3 \\lor i = 4; \\\\\n2 & wg_i^{t+1} & \\leftarrow (\\textrm{CIA}_i^t \\land \\textrm{SLP}_i^t \\land \\neg \\textrm{CIR}_i^t)\\lor (wg_i^t \\land (\\textrm{CIA}_i^t \\lor \\textrm{SLP}_i^t)\\land \\neg \\textrm{CIR}_i^t) \\\\\n3 & \\textrm{WG}_i^{t+1} & \\leftarrow wg_i^t \\\\\n4 & en_i^{t+1} & \\leftarrow ({\\textrm{WG}_{i-1}^t} \\lor {\\textrm{WG}_{i+1}^t}) \\land \\neg \\textrm{SLP}_i^t \\\\\n5 & \\textrm{EN}_i^{t+1} & \\leftarrow en_i^t \\\\\n6 & hh_i^{t+1} & \\leftarrow \\textrm{EN}_i^t \\land \\neg \\textrm{CIR}_i^t \\\\\n7 & \\textrm{HH}_i^{t+1} & \\leftarrow hh_i^t \\\\\n8 & ptc_i^{t+1} & \\leftarrow \\textrm{CIA}_i^t \\land \\neg \\textrm{EN}_i^t \\land \\neg \\textrm{CIR}_i^t \\\\\n9 & \\textrm{PTC}_i^{t+1} & \\leftarrow ptc_i^t \\lor (\\textrm{PTC}_i^t \\land \\neg {\\textrm{HH}_{i-1}^t} \\land \\neg {\\textrm{HH}_{i-1}^t}) \\\\\n10 & \\textrm{PH}_i^{t} & \\leftarrow \\textrm{PTC}_i^t \\land ({\\textrm{HH}_{i-1}^t} \\lor {\\textrm{HH}_{i+1}^t}) \\\\\n11 & \\textrm{SMO}_i^{t} & \\leftarrow \\neg \\textrm{PTC}_i^t \\lor ({\\textrm{HH}_{i-1}^t} \\lor {\\textrm{HH}_{i+1}^t}) \\\\\n12 & ci_i^{t+1} & \\leftarrow \\neg \\textrm{EN}_i^t \\\\\n13 & \\textrm{CI}_i^{t+1} & \\leftarrow ci_i^t \\\\\n14 & \\textrm{CIA}_i^{t+1} & \\leftarrow \\textrm{CI}_i^t \\land (\\neg \\textrm{PTC}_i^t \\lor {hh_{i-1}^t} \\lor {hh_{i+1}^t} \\lor {\\textrm{HH}_{i-1}^t} \\lor {\\textrm{HH}_{i+1}^t}) \\\\\n15 & \\textrm{CIR}_i^{t+1} & \\leftarrow \\textrm{CI}_i^t \\land \\textrm{PTC}_i^t \\land \\neg {hh_{i-1}^t} \\land \\neg {hh_{i+1}^t} \\land \\neg {\\textrm{HH}_{i-1}^t} \\land \\neg {\\textrm{HH}_{i+1}^t} \\\\\n\\hline\n\\end{array}$$\n\n| | **s-units** | **t-units** |\n|:--:|:--:|:--:|\n| **incoming fibres** | one or more | one or more |\n| **outgoing fibres** | one per schema of which is enput | one for the transition it causes |\n| **branching out** | yes | no |\n| **fusing in** | no | yes |\n\n**Connectivity rules in canalizing maps**\n\n| **MC set** | $|\\bm{X}''|$ | $e$ (min) | $e$ (max) | $n_\\textrm{e}$ | $n_\\textrm{r}$ | $n_\\textrm{s}$ |\n|:--:|:--:|:--:|:--:|:--:|:--:|:--:|\n| $\\bm{X}'_{\\textrm{wt}}$ | 1745 | 23 | 33 | $24.01 \\pm 0.08$ | 35.99 $\\pm 0.17$ | $0.98 \\pm 0.03$ |\n| $\\bm{X}'_{\\textrm{min}}$ | 32 | 23 | 23 | $23 \\pm 0$ | 37 $\\pm 0$ | 0 |\n| $\\bm{X}'_{\\textrm{bio}}$ | 90 | 25 | 28 | $25.75 \\pm 0.11$ | 34.25 $\\pm 0.11$ | 0 |\n| $\\bm{X}'_{\\textrm{noP}}$ | 24 | 26 | 30 | $26.2 \\pm 0.04$ | 34.8 $\\pm 0.04$ | 0 |\n\n**Macro-level canalization in the wildcard MC sets converging to wild-type in the SPN.** The table lists for every set of MCs reported in the main text: cardinality, minimum number of enputs, maximum number of enputs, estimated canalization. 
Canalization measures were obtained, for each MC set, from $10$ independent samples of $10^4$ configurations, thus $|\\bm{\\hat{X}}| = 10^5$. Values shown refer to the mean plus 95% confidence intervals for the 10 independent measurements.\n\n| **MC set** | $n^*_\\textrm{e}$ | $n^*_\\textrm{r}$ | $n^*_\\textrm{s}$ | | | | | | |\n|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|\n| $\\bm{X}'_{\\textrm{wt}}$ | 0.4 $\\pm 0.001$ | 0.6 $\\pm 0.001$ | 0.016 $\\pm 0.002$ | | | | | | |\n| $\\bm{X}'_{\\textrm{min}}$ | 0.38 | 0.62 | 0 | | | | | | |\n| $\\bm{X}'_{\\textrm{bio}}$ | 0.43 $\\pm 0.001$ | 0.57 $\\pm 0.001$ | 0 | | | | | | |\n| $\\bm{X}'_{\\textrm{noP}}$ | 0.436 $\\pm 0.0007$ | 0.564 $\\pm 0.0007$ | 0 | | | | | | |\n\n**Macro-level canalization in the wildcard MC sets converging to wild-type in the SPN.** The table lists the relative canalization measures for every set of MCs reported in the main text. Canalization measures were obtained, for each MC set, from $10$ independent samples of $10^4$ configurations, thus $|\\bm{\\hat{X}}| = 10^5$. Values shown refer to the mean plus 95% confidence intervals for the 10 independent measurements.\n\n# Figure Legends\n\n# Supporting Information Legends\n\nSupporting text **S1** Glossary and mathematical notation. \nSupporting text **S2** Details about the computation of wildcard and two-symbol schemata. \nSupporting text **S3** Details about the conversion of schemata into a single threshold network. \nSupporting text **S4** Algorithms for the computation of minimal configurations. \nSupporting text **S5** Further details concerning the minimal configurations found for the segment polarity network model. \nSupporting text **S6** Basic notions of the inclusion\/exclusion principle. \nSupporting data **S7** (.csv format) Minimal configurations for the segment polarity network model obtained from biologically-plausible seed configurations. \nSupporting data **S8** (.csv format) Entire set of minimal configurations obtained for the segment polarity network model. \nSupporting data **S9** (.csv format) Minimal configurations for the segment polarity network where no protein is *on*. \nSupporting data **S10** (.csv format) Minimal configurations for the segment polarity network with the smallest number of nodes that need to be specified in a Boolean state. \nSupporting data **S11** (.csv format) Minimal configurations for the segment polarity network with the fewest number of *on* nodes \nSupporting data **S12** (.csv format) Minimal configurations for the segment polarity network with the largest number of *on* nodes \nSupporting data **S13** (.csv format) (Wildcard) minimal configurations for the segment polarity network that were redescribed as two-symbol schemata \nSupporting data **S14** (.csv format) Minimal configurations for the segment polarity network that do not satisfy $wg_4 =1 \\vee en_1 =1 \\vee ci_4 =1$","meta":{"dup_signals":{"dup_doc_count":20,"dup_dump_count":16,"dup_details":{"curated_sources":2,"2022-21":1,"2021-43":1,"2021-17":2,"2020-29":1,"2019-39":2,"2019-22":1,"2019-18":1,"2018-47":2,"2018-34":1,"2018-22":1,"2017-39":1,"2017-22":1,"2014-42":1,"2022-49":1,"2024-22":1}},"filename":"out\/1301.5831_extract_plosOne_canal_control_BN_arxiv.tex.md"},"subset":"arxiv"} +{"text":"abstract: Machine Learning has been a big success story during the AI resurgence. One particular stand out success relates to learning from a massive amount of data. 
In spite of early assertions of the unreasonable effectiveness of data, there is increasing recognition for utilizing knowledge whenever it is available or can be created purposefully. In this paper, we discuss the indispensable role of knowledge for deeper understanding of content where (i) large amounts of training data are unavailable, (ii) the objects to be recognized are complex, (e.g., implicit entities and highly subjective content), and (iii) applications need to use complementary or related data in multiple modalities\/media. What brings us to the cusp of rapid progress is our ability to (a) create relevant and reliable knowledge and (b) carefully exploit knowledge to enhance ML\/NLP techniques. Using diverse examples, we seek to foretell unprecedented progress in our ability for deeper understanding and exploitation of multimodal data and continued incorporation of knowledge in learning techniques.\n .\n .\n .\n Machine Intelligence, Multimodal Exploitation, Understanding Complex Text, Knowledge-enhanced Machine Learning, Knowledge-enhanced NLP, Knowledge-driven Deep Content Understanding, Personalized Digital Health, Semantic-Cognitive-Perceptual Computing, Implicit Entity Recognition, Emoji Sense Disambiguation\nauthor: Amit Sheth; Sujan Perera; Sanjaya Wijeratne; Krishnaprasad Thirunarayan\ntitle: Knowledge will Propel Machine Understanding of Content: Extrapolating from Current Examples\n\n# Introduction\n\nRecent success in the area of Machine Learning (ML) for Natural Language Processing (NLP) has been largely credited to the availability of enormous training datasets and computing power to train complex computational models\u00a0. Complex NLP tasks such as statistical machine translation and speech recognition have greatly benefited from the Web-scale unlabeled data that is freely available for consumption by learning systems such as deep neural nets. However, many traditional research problems related to NLP, such as part-of-speech tagging and named entity recognition (NER), require labeled or human-annotated data, but the creation of such datasets is expensive in terms of the human effort required. In spite of early assertion of the unreasonable effectiveness of data (i.e., data alone is sufficient), there is an increasing recognition for utilizing knowledge to solve complex AI problems. Even though knowledge base creation and curation is non-trivial, it can significantly improve result quality, reliability, and coverage. A number of AI experts, including Yoav Shoham\u00a0, Oren Etzioni, and Pedro Domingos\u00a0, have talked about this in recent years. In fact, codification and exploitation of declarative knowledge can be both feasible and beneficial in situations where there is not enough data or adequate methodology to learn the nuances associated with the concepts and their relationships.\n\nThe value of domain\/world knowledge in solving complex problems was recognized much earlier\u00a0. These efforts were centered around language understanding. Hence, the major focus was towards representing linguistic knowledge. The most popular artifacts of these efforts are FrameNet\u00a0 and WordNet\u00a0, which were developed by realizing the ideas of frame semantics\u00a0 and lexical-semantic relations\u00a0, respectively. 
Both these resources have been used extensively by the NLP research community to understand the semantics of natural language documents.\n\nThe building and utilization of the knowledge bases took a major leap with the advent of the Semantic Web in the early 2000s. For example, it was the key to the first patent on Semantic Web and a commercial semantic search\/browsing and personalization engine over 15 years ago\u00a0, where knowledge in multiple domains complemented ML techniques for information extraction (NER, semantic annotation) and building intelligent applications[^1]. Major efforts in the Semantic Web community have produced large, cross-domain (e.g., DBpedia, Yago, Freebase, Google Knowledge Graph) and domain specific (e.g., Gene Ontology, MusicBrainz, UMLS) knowledge bases in recent years which have served as the foundation for the intelligent applications discussed next.\n\nThe value of these knowledge bases has been demonstrated for determining semantic similarity\u00a0, question answering\u00a0, ontology alignment\u00a0, and word sense disambiguation (WSD)\u00a0, as well as major practical AI services, including Apple's Siri, Google's Semantic Search, and IBM's Watson. For example, Siri relies on knowledge extracted from reputed online resources to answer queries on restaurant searches, movie suggestions, nearby events, etc. In fact, \"question answering\", which is the core competency of Siri, was built by partnering with Semantic Web and Semantic Search service providers who extensively utilize knowledge bases in their applications[^2]. The Jeopardy version of IBM Watson uses semi-structured and structured knowledge bases such as DBpedia, Yago, and WordNet to strengthen the evidence and answer sources to fuel its DeepQA architecture\u00a0. A recent study\u00a0 has shown that Google search results can be negatively affected when it does not have access to Wikipedia. Google Semantic Search is fueled by Google Knowledge Graph[^3], which is also used to enrich search results similar to what the Taalee\/Semagix semantic search engine did 15 years ago[^4]\u00a0.\n\nWhile knowledge bases are used in an auxiliary manner in the above scenarios, we argue that they have a major role to play in understanding real-world data. Real-world data has a greater complexity that has yet to be fully appreciated and supported by automated systems. This complexity emerges from various dimensions. Human communication has added many constructs to language that help people better organize knowledge and communicate effectively and concisely. However, current information extraction solutions fall short in processing several implicit constructs and information that is readily accessible to humans. One source of such complexity is our ability to express ideas, facts, and opinions in an implicit manner. For example, the sentence *\"The patient showed accumulation of fluid in his extremities, but respirations were unlabored and there were no use of accessory muscles\"* refers to the clinical conditions of \"shortness of breath\" and \"edema\", which would be understood by a clinician. However, the sentence does not contain names of these clinical conditions \u2013 rather it contains descriptions that imply the two conditions. Current literature on entity extraction has not paid much attention to implicit entities\u00a0.\n\nAnother complexity in real-world scenarios and use cases is data heterogeneity due to their multimodal nature. 
There is an increasing availability of physical (including sensor\/IoT), cyber, and social data that are related to events and experiences of human interest\u00a0. For example, in our personalized digital health application for managing asthma in children[^5], we use numeric data from sensors for measuring a patient's physiology (e.g., exhaled nitric oxide) and immediate surroundings (e.g., volatile organic compounds, particulate matter, temperature, humidity), collect data from the Web for the local area (e.g., air quality, pollen, weather), and extract textual data from social media (i.e., tweets and web forum data relevant to asthma)\u00a0. Each of these modalities provides complementary information that is helpful in evaluating a hypothesis provided by a clinician and also helps in disease management. We can also relate anomalies in the sensor readings (such as from a spirometer) to asthma symptoms and potential treatments (such as taking rescue medication). Thus, understanding a patient's health and well-being requires integrating and interpreting multimodal data and gleaning insights to provide reliable situational awareness and decisions. Knowledge bases play a critical role in establishing relationships between multiple data streams of diverse modalities, disease characteristics and treatments, and in transcending multiple abstraction levels\u00a0. For instance, we can relate the asthma severity level of a patient, measured exhaled nitric oxide, relevant environmental triggers, and prescribed asthma medications to one another to come up with personalized actionable insights and decisions.\n\nKnowledge bases can come in handy when there is not enough hand-labeled data for supervised learning. For example, emoji sense disambiguation, which is the ability to identify the meaning of an emoji in the context of a message in a computational manner\u00a0, is a problem that can be solved using supervised and knowledge-based approaches. However, there is no hand-labeled emoji sense dataset in existence that can be used to solve this problem using supervised learning algorithms. One reason for this could be that emoji have only recently become popular, despite having been first introduced in the late 1990s\u00a0. We have developed a comprehensive emoji sense knowledge base called EmojiNet\u00a0 by automatically extracting emoji senses from open web resources and integrating them with BabelNet. Using EmojiNet as a sense inventory, we have demonstrated that the emoji sense disambiguation problem can be solved with carefully designed knowledge bases, obtaining promising results\u00a0.\n\nIn this paper, we argue that careful exploitation of knowledge can greatly enhance the current ability of (big) data processing. At Kno.e.sis, we have dealt with several complex situations where:\n\n1. Large quantities of hand-labeled data required for unsupervised (self-taught) techniques to work well are not available, or the annotation effort is significant.\n\n2. The text to be recognized is complex (i.e., beyond simple entities such as person\/location\/organization), requiring novel techniques for dealing with complex\/compound entities\u00a0, implicit entities\u00a0, and subjectivity (emotions, intention)\u00a0.\n\n3.
Multimodal data \u2013 numeric, textual and image, qualitative and quantitative, certain and uncertain \u2013 are available naturally\u00a0.\n\nOur recent efforts have centered around exploiting different kinds of knowledge bases and using semantic techniques to complement and enhance ML, statistical techniques, and NLP. Our ideas are inspired by the human brain's ability to learn and generalize knowledge from a small amount of data (i.e., humans do not need to examine tens of thousands of cat faces to recognize the next \"unseen\" cat shown to them), analyze situations by simultaneously and synergistically exploiting multimodal data streams, and understand more complex and nuanced aspects of content, especially by knowing (through common-sense knowledge) semantics\/identity preserving transformations.\n\n# Challenges in creating and using knowledge bases\n\nThe last decade saw an increasing use of background knowledge for solving diverse problems. While applications such as searching, browsing, and question answering can use large, publicly available knowledge bases in their current form, others like movie recommendation, biomedical knowledge discovery, and clinical data interpretation are challenged by the limitations discussed below. \n**Lack of organization of knowledge bases:** Proper organization of knowledge bases has not kept pace with their rapid growth, both in terms of variety and size. Users find it increasingly difficult to find relevant knowledge bases or relevant portions of a large knowledge base for use in domain-specific applications (e.g., movie, clinical, biomedical). This highlights the need to identify and select relevant knowledge bases, such as those in the linked open data cloud, and to extract the relevant portion of the knowledge from broad-coverage sources such as Wikipedia and DBpedia. We are working on automatically indexing the domains of the knowledge bases\u00a0 and exploiting the semantics of the entities and their relationships to select relevant portions of a knowledge base . \n**Gaps in represented knowledge:** The existing knowledge bases can be incomplete with respect to a task at hand. For example, applications such as computer-assisted coding (CAC) and clinical document improvement (CDI) require comprehensive knowledge about a particular domain (e.g., cardiology, oncology)[^6]. We observe that although the existing medical knowledge bases (e.g., Unified Medical Language System (UMLS)) are rich in taxonomical relationships, they lack non-taxonomical relationships among clinical entities. We have developed data-driven algorithms that use real-world clinical data (such as EMRs) to discover missing relationships between clinical entities in existing knowledge bases, and then get these validated by a domain-expert-in-the-loop\u00a0. Yet another challenge is creating personalized knowledge bases for specific tasks. For example, in\u00a0, personal knowledge graphs are created based on the content consumed by a user, taking into account the dynamically changing vocabulary, and this is applied to improve subsequent filtering of relevant content. \n**Inefficient metadata representation and reasoning techniques:** The scope of what is captured in knowledge bases is rapidly expanding, and involves capturing more subtle aspects such as subjectivity (intention, emotions, sentiments), spatial and temporal information, and provenance.
Traditional triple-based representation languages developed by the Semantic Web community (e.g., RDF, OWL) are unsuitable for capturing such metadata due to their limited expressivity. For example, representation of spatio-temporal context or uncertainty associated with a triple is *ad hoc*, inefficient, and lacks semantic integration for formal reasoning. These limitations and requirements are well-recognized by the Semantic Web community, with some recent promising research to address them. For example, the singleton-property based representation\u00a0 adds the ability to make statements about a triple (i.e., to express the context of a triple), and probabilistic soft logic\u00a0 adds the ability to associate a probability value with a triple and reason over it. It will be exciting to see applications exploiting such enhanced hybrid knowledge representation models that perform 'human-like' reasoning on them.\n\nNext, we discuss several applications that utilize knowledge bases and multimodal data to circumvent or overcome some of the aforementioned challenges due to insufficient manually-created knowledge. \n**Application 1: Emoji sense disambiguation**\n\nWith the rise of social media, \"emoji\" have become extremely popular in online communication. People are using emoji as a new language on social media to add color and whimsy to their messages. Without rigid semantics attached to them, emoji symbols take on different meanings based on the context of a message. This has resulted in ambiguity in emoji use (see Figure\u00a0). Only recently have there been efforts to extend NLP techniques used for machine translation, word sense disambiguation, and search into the realm of emoji\u00a0. The ability to automatically process, derive meaning from, and interpret text fused with emoji will be essential as society embraces emoji as a standard form of online communication. Having access to knowledge bases that are specifically designed to capture emoji meaning can play a vital role in representing, contextually disambiguating, and converting pictorial forms of emoji into text, thereby leveraging and generalizing NLP techniques for processing this richer medium of communication.\n\nAs a step towards building machines that can understand emoji, we have developed EmojiNet\u00a0, the first machine-readable sense inventory for emoji. It links Unicode emoji representations to their English meanings extracted from the Web, enabling systems to link emoji with their context-specific meanings. EmojiNet is constructed by integrating multiple emoji resources with BabelNet, which is the most comprehensive multilingual sense inventory available to date. For example, for the emoji 'face with tears of joy' , EmojiNet lists 14 different senses, ranging from happy to sad. An application designed to disambiguate emoji senses can use the senses provided by EmojiNet to automatically learn message contexts where a particular emoji sense could appear. Emoji sense disambiguation could improve research on sentiment and emotion analysis. For example, consider the emoji , which can take the meanings *happy* and *sad* based on the context in which it has been used. Current sentiment analysis applications do not differentiate between these two meanings when they process . However, finding the meanings of by emoji sense disambiguation techniques\u00a0 can improve sentiment prediction. Emoji similarity calculation is another task that could benefit from knowledge bases and multi-modal data analysis. 
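As a rough illustration of the kind of computation such a sense inventory enables, the sketch below compares two emoji by the overlap of their sense keywords. The sense lists and names here are hypothetical placeholders, not EmojiNet's actual contents or API, and real measures operate over much richer sense definitions.

```python
# Minimal sketch: sense-based emoji similarity as Jaccard overlap of sense keywords.
# The sense keywords below are hypothetical and not taken from EmojiNet.
SENSE_KEYWORDS = {
    'face_with_tears_of_joy': {'laugh', 'funny', 'happy', 'cry', 'tears'},
    'loudly_crying_face': {'sad', 'cry', 'tears', 'upset'},
    'thumbs_up': {'approve', 'agree', 'good', 'ok'},
}

def emoji_similarity(a, b):
    """Jaccard similarity between the sense-keyword sets of two emoji."""
    sa, sb = SENSE_KEYWORDS[a], SENSE_KEYWORDS[b]
    return len(sa & sb) / len(sa | sb)

print(emoji_similarity('face_with_tears_of_joy', 'loudly_crying_face'))  # ~0.29
print(emoji_similarity('face_with_tears_of_joy', 'thumbs_up'))           # 0.0
```

A measure of this kind rewards shared meanings rather than shared usage contexts, which is precisely the contrast with distributional models drawn next.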
Similar to computing similarity between words, we can calculate the similarity between emoji characters. We have demonstrated how EmojiNet can be utilized to solve the problem of emoji similarity\u00a0. Specifically, we have shown that emoji similarity measures based on the rich emoji meanings available in EmojiNet can outperform conventional emoji similarity measures based on distributional semantic models and also help to improve applications such as sentiment analysis\u00a0. \n**Application 2: Implicit entity linking**\n\nAs discussed, one of the complexities in data is the ability to express facts, ideas, and opinions in an implicit manner. As humans, we seamlessly express and infer implicit information in our daily conversations. Consider the two tweets *\"Aren't we gonna talk about how ridiculous the new space movie with Sandra Bullock is?\"* and *\"I'm striving to be +ve in what I say, so I'll refrain from making a comment abt the latest Michael Bay movie\"*. The first tweet contains an implicit mention of the movie 'Gravity', and the second tweet contains an element of sarcasm and negative sentiment towards the movie 'Transformers: Age of Extinction'. Both the sentiment and the movie are implicit in the tweet. While it is possible to express facts, ideas, and opinions in an implicit manner, for brevity, we will focus on how knowledge aids in the automatic identification of implicitly mentioned entities in text.\n\nWe define implicit entities as \"entities mentioned in text where neither its name nor its synonym\/alias\/abbreviation or co-reference is explicitly mentioned in the same text\". Implicit entities are a common occurrence. For example, our studies found that 21% of movie mentions and 40% of book mentions are implicit in tweets, and about 35% and 40% of 'edema' and 'shortness of breath' mentions are implicit in clinical narratives. There are genuine reasons why people tend to use implicit mentions in daily conversations. Here are a few reasons that we have observed:\n\n1. To express sentiment and sarcasm: See the examples above.\n\n2. To provide descriptive information: For example, it is a common practice in clinical narratives to describe the features of an entity rather than simply list its name. Consider the sentence 'small fluid adjacent to the gallbladder with gallstones which may represent inflammation.' This sentence contains an implicit mention of the condition cholecystitis ('inflammation in gallbladder' is recognized as cholecystitis) along with its possible cause. The extra information (i.e., the possible cause) in the description can be critical in understanding the patient's health status and treating the patient. While it is feasible to provide this extra information together with the corresponding explicit entity names, it is observed that clinical professionals prefer this style.\n\n3. To emphasize the features of an entity: Sometimes we replace the name of an entity with its special characteristics in order to give importance to those characteristics. For example, the text snippet \"Mason Evans 12 year long shoot won big in golden globe\" has an implicit mention of the movie 'Boyhood.' There is a difference between this text snippet and its alternative form \"Boyhood won big in golden globe.\" The speaker is interested in emphasizing a distinct feature of the movie, which would have been ignored if he had used the name of the movie as in the second phrase.\n\n4. 
To communicate shared understanding : We do not bother spelling out everything when we know that the other person has enough background knowledge to understand the message conveyed. A good example is the fact that clinical narratives rarely mention the relationships between entities explicitly (e.g., relationships between symptoms and disorders, relationships between medications and disorders), rather it is understood that the other professionals reading the document have the expertise to understand such implicit relationships in the document.\n\nWhenever we communicate, we assume common understanding or shared-knowledge with the audience. A reader who does not know that Sandra Bullock starred in the movie 'Gravity' and that it is a space exploration movie would not be able to decode the reference to the movie 'Gravity' in the first example; a reader who does not know about Michael Bay's movie release would have no clue about the movie mentioned in the second tweet; a reader who does not know the characteristics of the clinical condition 'cholecystitis' would not be able to decode its mention in the clinical text snippet shown above; a reader who is not a medical expert would not be able to connect the diseases and symptoms mentioned in a clinical narrative. **These examples demonstrate the indispensable value of domain knowledge in text understanding**. Unfortunately, state-of-the-art named entity recognition applications do not capture implicit entities\u00a0. Also, we have not seen big data-centric or other approaches that can glean implicit entities without the use of background knowledge (that is already available (e.g., in UMLS) or can be created (e.g., from tweets and Wikipedia)).\n\nThe task of recognizing implicit entities in text demands comprehensive and up-to-date world knowledge. Individuals resort to a diverse set of entity characteristics to make implicit references. For example, references to the movie 'Boyhood' can use phrases like *\"Richard Linklater movie\"*, *\"Ellar Coltrane on his 12-year movie role\"*, *\"12-year long movie shoot\"*, *\"latest movie shot in my city Houston\"*, and *\"Mason Evan's childhood movie\"*. Hence, it is important to have comprehensive knowledge about the entities to decode their implicit mentions. Another complexity is the temporal relevancy of the knowledge. The same phrase can be used to refer to different entities at different points in time. For instance, the phrase *\"space movie\"* referred to the movie 'Gravity' in Fall 2013, while the same phrase in Fall 2015 referred to the movie 'The Martian'. On the flip side, the most salient characteristics of a movie may change over time and so will the phrases used to refer to it. In November 2014 the movie 'Furious 7' was frequently referred to with the phrase *\"Paul Walker's last movie\"*. This was due to the actor's death around that time. However, after the movie release in April 2015, the same movie was often mentioned through the phrase *\"fastest film to reach the \\$1 billion\"*.\n\nWe have developed knowledge-driven solutions that decode the implicit entity mentions in clinical narratives\u00a0 and tweets\u00a0. We exploit the publicly available knowledge bases (only the portions that matches with the domain of interest) in order to access the required domain knowledge to decode implicitly mentioned entities. 
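As a rough, hypothetical sketch of the kind of lookup this knowledge enables (not the authors' actual algorithm), candidate entities can be scored by how well a text's terms overlap with time-stamped entity models:

```python
from datetime import date

# Hypothetical, hand-built entity models: terms associated with an entity and
# the period during which those associations were salient in public discussion.
ENTITY_MODELS = {
    'Gravity': {'terms': {'space', 'movie', 'sandra', 'bullock'},
                'valid': (date(2013, 9, 1), date(2014, 3, 1))},
    'The Martian': {'terms': {'space', 'movie', 'matt', 'damon', 'mars'},
                    'valid': (date(2015, 9, 1), date(2016, 3, 1))},
}

def link_implicit_entity(text, when):
    """Return the entity whose temporally valid model best overlaps the text."""
    tokens = set(text.lower().split())
    best, best_score = None, 0
    for entity, model in ENTITY_MODELS.items():
        start, end = model['valid']
        if not (start <= when <= end):
            continue  # skip knowledge that is not temporally relevant
        score = len(tokens & model['terms'])
        if score > best_score:
            best, best_score = entity, score
    return best

# The same phrase resolves to different movies at different points in time.
print(link_implicit_entity('that new space movie is ridiculous', date(2013, 10, 5)))  # Gravity
print(link_implicit_entity('that new space movie is ridiculous', date(2015, 10, 5)))  # The Martian
```

The entity models used in the actual work are built automatically and are far richer, as described next.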
Our solution models individual entities of interest by collecting knowledge about the entities from these publicly available knowledge bases, which consist of definitions of the entities, other associated concepts, and the temporal relevance of the associated concepts. Figure\u00a0 shows a snippet from generated entity model. It shows the models generated for movies 'Gravity', 'Interstellar', and 'The Martian'. The colored (shaded) nodes (circles) represent factual knowledge related to these movies extracted from DBpedia knowledge base and the uncolored nodes represent the contextual knowledge (time-sensitive knowledge) related to entities extracted from daily communications in Twitter. The implicit entity linking algorithms are designed to carefully use the knowledge encoded in these models to identify implicit entities in the text. \n\n**Application 3: Understanding and analyzing drug abuse related discussions on web forums**\n\nThe use of knowledge bases to improve keyword-based search has received much attention from commercial search engines lately. However, the use of knowledge bases alone cannot solve complex, domain-specific information needs. For example, answering a complex search query such as \"How are drug users overdosing on semi synthetic opioid Buprenorphine?\" may require a search engine to be aware of several facts, including that Buprenorphine is a drug, that users refer to Buprenorphine with synonyms such as 'bupe', 'bupey', 'suboxone', and 'subbies', and the prescribed daily dosage range for Buprenorphine. The search engine should also have access to ontological knowledge as well as other \"intelligible constructs\" that are not typically modeled in ontologies, such as equivalent references to the frequency of drug use, the interval of use, and the typical dosage, to answer such complex search needs. At Kno.e.sis, we have developed an information retrieval system that integrates ontology-driven query interpretation with synonym-based query expansion and domain-specific rules to facilitate analysis of online web forums for drug abuse-related information extraction. Our system is based on a context-free grammar (CFG) that defines the interpretation of the query language constructs used to search for the drug abuse-related information needs and a domain-specific knowledge base that can be used to understand information in drug-related web forum posts. Our tool utilizes lexical, lexico-ontological, ontological, and rule-based knowledge to understand the information needs behind complex search queries and uses that information to expand the queries for significantly higher recall and precision (see Figure )\u00a0. This research\u00a0 resulted in an unexpected finding of abuse of over the counter drug, which led to a FDA warning[^7]. \n**Application 4: Understanding city traffic using sensor and textual observations**\n\nWith increased urbanization, understanding and controlling city traffic flow has become an important problem. Currently, there are over 1 billion cars on the road network, and there has been a 236% increase in vehicular traffic from 1981 to 2001\u00a0. Given that road traffic is predicted to double by 2020, achieving zero traffic fatalities and reducing traffic delays are becoming pressing challenges, requiring deeper understanding of traffic events, and their consequences and interaction with traffic flow. 
Sensors deployed on road networks continuously relay important information about travel speed through certain road networks while citizen sensors (i.e., humans) share real-time information about traffic\/road conditions on public social media streams such as Twitter. As humans, we know how to integrate information from these multimodal data sources: qualitative traffic event information to account for quantitative measured traffic flow (e.g., an accident reported in tweets can explain a slow-moving traffic nearby). However, current research on understanding city traffic dynamics either focuses only on sensory data or only on social media data but not both. Further, we use historical data to understand traffic patterns and exploit the complementary and corroborative nature of these multimodal data sources to provide comprehensive information about traffic.\n\nOne research direction is to create and materialize statistical domain knowledge about traffic into a machine-readable format. In other words, we want to define and establish associations between different variables (concepts) in the traffic domain (e.g., association between 'bad weather' and a 'traffic jam'). However, mining such correlations from data alone is neither complete nor reliable. We have developed statistical techniques based on probabilistic graphical models (PGMs)\u00a0 to learn the structure (variable dependencies), leverage declarative domain knowledge to enrich and\/or correct the gleaned structure due to limitations of a data-driven approach, and finally learn parameters for the updated structural model. Specifically, we use the sensor data collected by 511.org to develop an initial PGM that explains the conditional dependencies between variables in the traffic domain. Then we use declarative knowledge in ConceptNet to add\/modify variables (nodes) and the type and the nature of conditional dependencies (directed edges) before learning parameters, thereby obtaining the complete PGM. Figure\u00a0(a)(i) shows a snippet of ConceptNet and Figure\u00a0(a)(ii) demonstrates the enrichment step of the developed model using the domain knowledge in ConceptNet\u00a0.\n\nAnother research direction is to characterize a normal traffic pattern derived from sensor observations and then detect and explain any anomalies using social media data. We used a Restricted Switching Linear Dynamical System (RSLDS) to model normal speed and travel time dynamics and detect anomalies. Using speed and travel time data from each link, plus our common sense knowledge about the nature of expected traffic variations, we learn the parameters of the RSLDS model for each link. We then use a box-plot of the log likelihood scores of the various average speed traces with respect to the RSLDS model to learn and characterize anomalies for each link in the San Francisco Bay Area traffic data\u00a0. Later, given a new traffic speed trace over a link, we can obtain its log likelihood score with respect to the RSLDS model for the particular day of the week and the hour of the day, to determine whether it is normal or anomalous. This anomalous traffic speed information is further correlated with traffic events extracted from Twitter data (using crawlers seeded with OSM, 511.org and Scribe vocabularies) using their spatio-temporal context to explain the anomalies. Figure\u00a0(b) demonstrates this process. 
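A greatly simplified stand-in for the anomaly-detection step just described, with an independent Gaussian per time slot in place of the RSLDS and a box-plot style cutoff on log-likelihood scores (all numbers below are made up), might look like this:

```python
import numpy as np
from scipy.stats import norm

def fit_link_model(historical_traces):
    """Fit a per-slot Gaussian to historical speed traces (shape: days x slots) for one link."""
    traces = np.asarray(historical_traces, dtype=float)
    return traces.mean(axis=0), traces.std(axis=0) + 1e-6  # avoid zero variance

def log_likelihood(trace, means, stds):
    """Total log-likelihood of one day's speed trace under the fitted model."""
    return norm.logpdf(np.asarray(trace, dtype=float), loc=means, scale=stds).sum()

def anomaly_cutoff(historical_traces, means, stds):
    """Box-plot style threshold: scores below Q1 - 1.5 * IQR are flagged as anomalous."""
    scores = np.array([log_likelihood(t, means, stds) for t in historical_traces])
    q1, q3 = np.percentile(scores, [25, 75])
    return q1 - 1.5 * (q3 - q1)

rng = np.random.default_rng(0)
history = 60 + 5 * rng.standard_normal((30, 24))   # 30 days of hourly average speeds for one link
means, stds = fit_link_model(history)
cutoff = anomaly_cutoff(history, means, stds)

new_day = history[0].copy()
new_day[8:11] = 15                                   # a sharp morning slowdown on this link
print(log_likelihood(new_day, means, stds) < cutoff)  # True -> anomalous, look for an explanation
```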
**This example again demonstrates the vital role of multi-modal data for better interpretation of traffic dynamics, synthesizing probabilistic\/statistical knowledge, and the application of both statistical models such as RSLDS and complementary semantic analysis of Twitter data**. Further exploration of different approaches to represent and exploit semantics appears in\u00a0. Table\u00a0 summarizes the role of knowledge bases in the four applications discussed above.\n\n| **Problem Domain** | **Use of Knowledge Bases** | **Nature of Improvement** |\n|:---|:---|:---|\n| Emoji Similarity and Sense Disambiguation | Generation and application of EmojiNet | Leveraging linguistic knowledge for emoji interpretation |\n| Implicit Entity Linking | Adapted UMLS definitions for identifying medical entities, and Wikipedia and Twitter data for identifying Twitter entities | Recall and coverage |\n| Understanding Drug Abuse-related Discussions | Application of the Drug Abuse Ontology along with slang term dictionaries and grammar | Recall and coverage |\n| Traffic Data Analysis | Statistical knowledge extraction and using ontologies for Twitter event extraction | Anomaly detection and explanation; multi-modal data stream correlation |\n\nSummary of knowledge-based approaches and the resulting improvements for each problem domain.\n\n# Looking forward\n\nWe discussed the importance of domain\/world knowledge in understanding complex data in the real world, particularly when large amounts of training data are not readily available or are expensive to generate. We demonstrated several applications where knowledge plays an indispensable role in understanding complex language constructs and multimodal data. Specifically, we have demonstrated how knowledge can be created to incorporate a new medium of communication (such as emoji), curated knowledge can be adapted to process implicit references (such as in implicit entity and relation linking), statistical knowledge can be synthesized in terms of normalcy and anomaly and integrated with textual information (such as in the traffic context), and linguistic knowledge can be used for more expressive querying of informal text with improved recall (such as in drug-related posts). We are also seeing early efforts in making knowledge bases dynamic and evolving to account for changes in the real world[^8].\n\nKnowledge seems to play a central role in human learning and intelligence, such as in learning from a small amount of data, and in cognition \u2013 especially perception. Our ability to create or deploy just the right knowledge in our computing processes will improve machine intelligence, perhaps in a similar way as knowledge has played a central role in human intelligence. As a corollary to this, two specific advances we expect are: a deeper and nuanced understanding of content (including but not limited to text) and our ability to process and learn from multimodal data at a semantic level (given that concepts manifest very differently at the data level in different media or modalities). The human brain is extremely adept at processing multimodal data \u2013 our senses are capable of receiving 11 million bits per second, and our brain is able to distill that into abstractions that need only a few tens of bits to represent (for further explorations, see\u00a0). 
Knowledge plays a central role in this abstraction and reasoning process known as *the perception cycle*.\n\nKnowledge-driven processing can be viewed from three increasingly sophisticated computational approaches: (1) Semantic Computing, (2) Cognitive Computing, and (3) Perceptual Computing. *Semantic Computing* refers to computing the type of a data value, and relating it to other domain concepts. In the healthcare context, this can involve relating symptoms to diseases and treatments. Ontologies, and Semantic Web technologies provide the foundation for semantic computing. *Cognitive computing* refers to representation and reasoning with data using background knowledge reflecting how humans interpret and process data. In the healthcare context, this requires capturing the experience and domain expertise of doctors through knowledge bases and heuristic rules for abstracting multimodal data into medically relevant abstractions, insights, and actions, taking into account triggers, personal data, patient health history, demographics data, health objectives, and medical domain knowledge. For instance, \"normal'' blood pressure varies with factors such as age, gender, emotional state, activity, and illness; similarly, the \"target\" blood pressure, HBA1C, and cholesterol values a patient is advised to maintain depend on whether the patient is diabetic or not. In the traffic context, this can be used to interpret and label a time-series of traffic sensor data using a traffic event ontology. *Perceptual computing*, which builds on background knowledge created for semantic and cognitive computing, uses deductive reasoning to predict effects and treatments from causes, and abductive reasoning to explain the effects using causes, resolving any data incompleteness or ambiguity by seeking additional data. The knowledge itself can be a hybrid of deterministic and probabilistic rules, modeling both normalcy and anomalies, transcending abstraction levels. This directly contributes to making decisions and taking actions.\n\nWe expect more progress in hybrid knowledge representation and reasoning techniques to better fit domain characteristics and applications. Even though deep learning techniques have made incredible progress in machine learning and prediction tasks, they are still uninterpretable and prone to devious attacks. There are anecdotal examples of misinterpretations of audio and video data through adversarial attacks that can result in egregious errors with serious negative consequences. In such scenarios, we expect hybrid knowledge bases to provide a complementary foundation for reliable reasoning. In the medical domain, the use of interleaved abductive and deductive reasoning (a.k.a., perception cycle) can provide actionable insights ranging from determining confirmatory laboratory tests and disease diagnosis to treatment decisions. Declarative medical knowledge bases can be used to verify the consistency of an EMR and data-driven techniques can be applied to a collection of EMRs to determine and fix potential gaps in the knowledge bases. Thus, there is a symbiotic relationship between the application of knowledge and data to improve the reliability of each other. The traffic scenario shows how to hybridize complementary statistical knowledge and declarative knowledge to obtain an enriched representation (See also\u00a0). 
It also shows how multimodal data streams can be integrated to provide more comprehensive situational awareness.\n\nMachine intelligence has been the holy grail of a lot of AI research lately. The statistical pattern matching approach and learning from big data, typically of a single modality, has seen tremendous success. For those of us who have pursued brain-inspired computing approaches, we think the time has come for rapid progress using a model-building approach. The ability to build broad models (both in terms of coverage as well as variety \u2013 not only with entities and relationships but also representing emotions, intentions and subjectivity features, such as, linguistic, cultural, and other aspects of human interest and functions) will be critical. Further, domain-specific, purpose-specific, personalized declarative knowledge combined with richer representation \u2013 especially probabilistic graph models \u2013 will see rapid progress. These will complement neural network approaches. We may also see knowledge playing a significant role in enhancing deep learning. Rather than the dominance of data-centric approaches, we will see an interleaving and interplay of the data and knowledge tracks, each with its own strengths and weaknesses, and their combinations performing better than the parts in isolation.\n\n# Acknowledgments\n\nWe acknowledge partial support from the National Institutes of Health (NIH) award: 1R01HD087132-01: \"kHealth: Semantic Multisensory Mobile Approach to Personalized Asthma Care\" and the National Science Foundation (NSF) award: EAR 1520870: \"Hazards SEES: Social and Physical Sensing Enabled Decision Support for Disaster Management and Response\". Points of view or opinions in this document are those of the authors and do not necessarily represent the official position or policies of the NIH or NSF.\n\n[^1]: \n\n[^2]: \n\n[^3]: \n\n[^4]: \n\n[^5]: \n\n[^6]: \n\n[^7]: \n\n[^8]: ","meta":{"dup_signals":{"dup_doc_count":50,"dup_dump_count":14,"dup_details":{"curated_sources":2,"2023-23":3,"2022-49":4,"2022-33":9,"2022-27":3,"2022-21":4,"2022-05":3,"2021-49":15,"2021-43":2,"2021-39":1,"2019-43":1,"2019-26":1,"2018-30":1,"2024-26":1}},"filename":"out\/1707.05308_extract_Sheth_2016_Knowledge_will_Propel_Machine_Understanding_of_Content.tex.md"},"subset":"arxiv"} +{"text":"abstract: It is known that Einstein's conceptual base for his theory of relativity was the philosophy formulated by Immanuel Kant. Things appear differently to observers in different frames. However, Kant's Ding-an-Sich leads to the existence of the absolute reference frame which is not acceptable in Einstein's theory. It is possible to avoid this conflict using the ancient Chinese philosophy of Taoism where two different views can co-exist in harmony. This is not enough to explain Einstein's discovery of the mass-energy relation. The energy-momentum relations for slow and ultra-fast particles take different forms. Einstein was able to synthesize these two formulas to create his energy-mass relation. Indeed, this is what Hegelianism is about in physics. Isaac Newton synthesized open orbits for comets and closed orbits for planets to create his second law of motion. Maxwell combined electricity and magnetism to create his four equations to the present-day wireless world. In order to synthesize wave and particle views of matter, Heisenberg formulated his uncertainty principle. Relativity and quantum mechanics are the two greatest theories formulated in the 20th Century. 
Efforts to synthesize these two theories are discussed in detail.\n\n**Historical Approach to Physics according to Kant, Einstein, and Hegel** \n\nY. S. Kim \nCenter for Fundamental Physics, University of Maryland, \nCollege Park, Maryland, 20742, U.S.A.\n\nbased on an invited talk presented at the 32nd Congress of the Italian Society of Historians of Physics and Astronomy (Rome, Italy, September 2012).\n\n# Introduction\n\nEinstein studied the philosophy of Immanuel Kant during his earlier years. It is thus not difficult to see that he was influenced by the Kantian view of the world when he formulated his special theory of relativity. It is also known that, in formulating his philosophy, Kant was heavily influenced by the environment of Koenigsberg, where he spent all eighty years of his life. The first question is which aspect of Kant's city was influential to him. We shall start with this issue in this report.\n\nIn Einstein's theory, one object looks different to observers moving with different speeds. This aspect is quite consistent with Kant's philosophy. According to him, one given object or event can appear differently to observers in different environments or with different mindsets. In order to resolve this issue, Kant had to introduce the concept of \"Ding-an-Sich,\" or thing in itself, meaning an ultimate object of absolute truth. Indeed, Kant had a concept of relativity as Einstein did, but his Ding-an-Sich led to the absolute frame of reference. Here Kantianism breaks down in Einstein's theory. Kant's absolute frame does not exist according to Einstein. In order to resolve this issue, let us go to the ancient Chinese philosophy of Taoism. Here, there are two different observers with two opposite points of view. However, this world works when these two views form a harmony. Indeed, Einsteinism is more consistent with Taoism. The energy-momentum relation is different for a massive, slow particle and for a fast, massless particle. Einstein's relativity achieved the harmony between these two formulas.\n\nThis leads us to Hegel's approach to the world. If there are two opposite things, it is possible to derive a new thing from them. This is what Einsteinism is all about. Einstein derived his $E = mc^2$ from two different expressions of the energy-momentum relation for massive and massless particles.\n\nEinstein thus started with Kantianism, but he developed a Hegelian approach to physical problems. Indeed, this encourages us to see how this Hegelianism played a role in developing new laws of physics. For instance, Newton's equation of motion combines the open orbits for comets and the closed orbits of planets.\n\nIf this Hegelian approach is so natural to the history of physics, there is a good reason. Hegel derived his philosophy by studying history. Hegel observed that Christianity is a product of the Jewish one-God religion and Greek philosophy. Since Hegel did not understand physics, his reasoning was based on the historical development of human relations. It is thus an interesting proposition to interpret Hegel's philosophy using the precise science of physics.\n\nIn Secs.\u00a02 and\u00a03, we review how Kantianism was developed and how Einstein was influenced by Kant. In Sec.\u00a04, it is pointed out that Hegelianism is the natural language for understanding physics. 
In Sec.\u00a05, we examine whether quantum mechanics and relativity can be combined into one theory according to the Hegelian approach to the history of physics.\n\n# Geographic Origin of Kantianism and Taoism\n\nImmanuel Kant (1724-1804) was born in the East Prussian city of Koenigsberg, and there he spent all 80 years of his life. It is agreed that his philosophy was influenced by the lifestyle of Koenigsberg.\n\nThe city of Koenigsberg is located at the Baltic wedge between Poland and Lithuania. As shown in Fig.\u00a01, this place served as the traffic center for maritime traders in the Baltic Sea. In addition, this city lies between the eastern and western worlds (Applebaum 1994). However, there are no natural boundaries such as rivers or mountains. Thus, anyone with a stronger army could come to this area and run the place.\n\nIndeed, Koenigsberg was a meeting place for many people with different ideas and different viewpoints. Kant observed that the same thing can appear different depending on the observer's location or state of mind.\n\nThe basic ingredients of Taoism are known to be two opposite elements, Yang (plus) and Yin (minus). This world works best if these two elements form a harmony. However, the most interesting aspect of Taoism is that its geographic origin is the same as that of Kantianism. Let us look at the map of China given in Fig.\u00a01 during the period from 1600 to 100 BC. China started as a collection of isolated pockets of population. They then came to the banks of the Yellow River, and started to communicate with those from other areas. They drew pictures for written communication, leading eventually to Chinese characters.\n\nHow about different ideas? They grouped many different opinions into two groups, leading to the concept of Yang and Yin. Immanuel Kant considered many different views, but he concluded that there must be one and only one truth.\n\nIndeed, Taoism and Kantianism started from the same kind of environment, but Kant insisted on one truth called Ding-an-Sich, while Taoism ended up with two opposing elements (Kim 2006).\n\n# Kantian Influence on Einstein\n\nDuring his early years, Einstein became quite interested in Kant and studied his philosophy rigorously. This was quite common among young students of his time. Einstein, however, also studied physics, and got the idea that one object could appear different to observers moving with different speeds.\n\nLet us go to Fig.\u00a02. According to Kant, an object or event looks different to different observers depending on their places or states of mind. A Coca-Cola can looks like a circle if viewed from the top. It appears like a rectangle if viewed from the side. The Coca-Cola can itself is the absolute thing, or his Ding-an-Sich. Likewise, the electron orbit of the hydrogen atom looks like a circle for an observer when both the hydrogen atom and the observer are stationary. If the hydrogen atom is on a train, our first guess is that it should look like an ellipse. This is what Einstein inherited from Kant.\n\nHowever, does the hydrogen atom require a Ding-an-Sich? The answer is no. Indeed, Kant attempted to formulate his theory of relativity with an absolute coordinate system corresponding to his Ding-an-Sich. This is the basic departure of Einsteinism from Kantianism.\n\nLet us come back to Einstein. Like Kant, Einstein started from different observers looking at a thing differently, but ended up with a particle at rest and the same particle moving with a speed close to that of light. 
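In modern notation (a standard statement added here for clarity, not taken from the paper's figures), Newtonian mechanics gives the energy of a slow, massive particle as $E = \frac{p^2}{2m}$, while a massless particle obeys $E = cp$. The single relativistic relation

$$E = \sqrt{(mc^2)^2 + (cp)^2}$$

reduces to $cp$ when $m = 0$ and to $mc^2 + \frac{p^2}{2m}$ (the Newtonian kinetic energy on top of a constant rest energy) when $p \ll mc$; setting $p = 0$ leaves only the rest energy $E = mc^2$.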
He then derived his celebrated energy-mass relation, as indicated in Fig.\u00a04 (left). Einstein had to invent a formula applicable to both. This is precisely a Hegelian approach to physics. It is not clear whether Einstein knew he was doing Hegel. This remains an interesting historical problem. As for Einstein's hydrogen atom, we now have hadrons, which are bound states of quarks, while the hydrogen atom is a bound state of a proton and an electron. The proton is a hadron and is a charged particle which can be accelerated to a speed very close to that of light. We shall return in Sec.\u00a05 to the problem presented in the right side of Fig.\u00a03.\n\n# Hegelian Approach to the History of Physics\n\nSince Hegel formulated his philosophy while studying history, it is quite natural to write the history of physics according to Hegel. First of all, Isaac Newton combined hyperbolic-like orbits for comets and elliptic orbits for planets to derive his second-order differential equation, which is known today as the equation of motion.\n\nJames Clerk Maxwell combined the theory of electricity and that of magnetism to formulate his electromagnetic theory, leading to the present world of wireless communication. Max Planck observed that the radiation laws are different for the low- and high-frequency limits. By deriving one formula for both, he discovered Planck's constant.\n\nWerner Heisenberg observed that matter which appears as a particle also appears as a wave, with entirely different properties. He found the common ground for both. In so doing, he found the uncertainty relation, which constitutes the foundation of quantum mechanics.\n\nIndeed, quantum mechanics and relativity were the two most fundamental theories formulated in the twentieth century. They were developed independently. The question is whether they can be combined into one theory. We shall examine how the Hegelian approach is appropriate for this problem in Sec.\u00a05.\n\n# How to combine Quantum Mechanics and Relativity\n\nHere again, the problem gets divided into scattering and bound-state problems. Quantum field theory was developed for scattering problems, and this theory is accepted as a valid theory, as is illustrated in Fig.\u00a04.\n\nFor bound-state problems, Paul A. M. Dirac wrote three important papers on this subject (Dirac 1927, 1945, 1949). 
His 1927 paper tells us there is a time-energy uncertainty relation. In 1945, he attempted to use the harmonic oscillator to formulate quantum mechanics applicable to Einstein's world. If we combine or Hegelize Dirac's 1927 and 1945 papers, we end up with the circle given in Fig.\u00a05.\n\nIn 1949, Dirac showed that the Lorentz boost can be described as a squeeze transformation, as shown in Fig.\u00a05. If we Hegelize the circle and the squeezed rectangle, we arrive at the ellipse (Kim and Noz 2011), which can explain what happens in the real world, including the quark model (Gell-Mann 1964) and the parton model (Feynman 1969). This Hegelian procedure corresponds to Step 1 in Fig.\u00a04.\n\nThe final step in constructing Lorentz-covariant quantum mechanics is to show that the scattering and bound states share the same set of fundamental principles (Han et al. 1981). This Hegelian procedure is illustrated as Step 2 in Fig.\u00a04.\n\n# Kant, Hegel, and Einstein\n\nKant and Hegel are two of the most fundamental thinkers affecting our present-day lifestyles. However, their philosophies were based largely on social events and applicable to the formulation of the social sciences. It is gratifying to note that Einstein gives us a more concrete picture of their approaches to these problems. By building the bridge between Kant and Hegel as illustrated in Fig.\u00a06, Einstein not only gives us a precise description of how physical theories were developed in the past but also tells us how to approach current problems in physics.","meta":{"dup_signals":{"dup_doc_count":16,"dup_dump_count":15,"dup_details":{"curated_sources":1,"2021-10":1,"2020-10":1,"2019-18":1,"2019-13":1,"2019-09":1,"2019-04":1,"2018-43":1,"2018-30":2,"2018-26":1,"2017-34":1,"2017-26":1,"2017-22":1,"2023-50":1}},"filename":"out\/1301.6091_extract_hegel.tex.md"},"subset":"arxiv"} +{"text":"abstract: In this paper, we explore the role that attribution plays in shaping user reactions to content reuse, or remixing, in a large user-generated content community. We present two studies using data from the Scratch online community \u2013 a social media platform where hundreds of thousands of young people share and remix animations and video games. First, we present a quantitative analysis that examines the effects of a technological design intervention introducing automated attribution of remixes on users' reactions to being remixed. We compare this analysis to a parallel examination of \"manual\" credit-giving. Second, we present a qualitative analysis of twelve in-depth, semi-structured interviews with Scratch participants on the subject of remixing and attribution. Results from both studies suggest that automatic *attribution* done by technological systems (i.e., the listing of names of contributors) plays a role that is distinct from, and less valuable than, *credit*, which may superficially involve identical information but takes on new meaning when it is given by a human remixer. We discuss the implications of these findings for the designers of online communities and social media platforms.\nauthor: Andr\u00e9s Monroy-Hern\u00e1ndez$^{1,2}$, Benjamin Mako Hill$^1$ \nJazmin Gonzalez-Rivero$^2$, danah boyd$^2$ \n \n \nbibliography: chi2011.bib\ntitle: Computers Can't Give Credit: How Automatic Attribution Falls Short in an Online Remixing Community\n\n# Introduction\n\nNetworked information technologies have changed the way people use and reuse creative \u2013 and frequently copyrighted \u2013 materials. 
This change has generated excitement, and heated debate, among content-creators, technologists, legal academics, and media scholars. Media theorist Lev Manovich argues that remixing is an ancient cultural tradition (e.g., he has suggested that ancient Rome was a \"remix\" of ancient Greece) but that information technologies have accelerated these processes and made remixing more salient . Sinnreich et al. argue that \"configurable culture\" has been significantly transformed by networked technologies which introduce perfect copying and allow people not only to be inspired by extant creations but to remix the original works themselves . Legal scholars have stressed the importance of remixing in cultural creation broadly and warned that current copyright and intellectual property laws may hinder creativity and innovation .\n\nSeveral of the most influential scholarly explorations of remixing as a cultural phenomenon have focused on youth's remixing practices. For example, work on remixing by Jenkins and Ito has focused on young people's use and re-use of media. Palfrey and Gasser have suggested that the cultural practices of \"digital native\" youth have had a significant transformative effect on our culture . Throughout his book \"Remix,\" Lessig uses youth's reuse practices to support an argument against what he considers excessive copyright legal protection .\n\nYet, despite a wide interest in remixing and authorship, researchers have only recently engaged in empirical research on the subject . Several recent treatments have presented studies of video remixing communities , music remixing communities , collaborative video game communities and social network sites . There is also another quantitative study of our empirical setting focused on characterizing the variety of responses to remixing. These studies have tended to be general and largely descriptive examinations of remixing practice. This work has pointed to the existence of norms and the territoriality of digital creators and has considered issues of motivation . However, empirical work has yet to unpack in detail the key social mechanisms that scholars have suggested drive behavior, norms, and motivation in remixing communities.\n\nPerhaps no mechanism has been more frequently cited as critical for remixing activity than attribution and the related phenomena of plagiarism, reputation, and status. For example, recent survey-based work has suggested that the \"authenticity and legitimacy\" of creative work \"are premised on the explicit acknowledgment of the source materials or 'original creator'\" and that such acknowledgment is a key component of how adults assess the fairness or ethical nature of content reuse . Attribution, in this sense, can be seen as an important way that people distinguish remixing from \"theft.\"\n\nJudge and law professor Richard Posner stresses the importance of attribution and explains that this is important even when there is no monetary benefit to being attributed. For example, he explains that European copyright law is based on a doctrine of \"moral rights\" that \"entitles a writer or other artist to be credited for his original work and this 'attribution right', as it is called, would give him a legal claim against a plagiarist.\" Posner also explains that \"acknowledgment\" of another's contributions to a derivative negates any charge of plagiarism, although it may not establish originality . 
Attribution plays such an important role in remix culture that Creative Commons made a requirement for attribution a component of all their licenses after more than 97% of licensors opted to require attribution when it was offered as a choice .\n\nYoung people's perceptions of attribution and complications around copying have also been examined. An article by Friedman describes that adolescents who allowed \"computer pirating\" \u2013 the unauthorized copying of computer programs \u2013 did so because technological affordances made it difficult for adolescents to identify \"harmful or unjust consequences of computer-mediated actions\" . In a second study, psychologists Olson and Shaw have found that by five years old, \"children understand that others have ideas and dislike the copying of these ideas\" .\n\nYet, despite the fact that researchers in human computer interaction have begun to explore the complexity of attribution and cited its importance to remixing , many designers of online communities pay little attention to issues of attribution in their designs \u2013 a fact that is reflected in user behavior. For example, research on the use of photos from the photo sharing site Flickr , as well as a number of other user-generated content communities , suggests that most re-users fail to attribute re-used content in ways that public-use licenses require. Although theory and survey based work points to a need to design for attribution in user-generated content communities, we still know very little about how attribution works or how designers might go about doing so. Indeed, our study suggests that the most obvious efforts to design for attribution are likely to be ineffective.\n\nIn this paper, we employ a mixed methods approach that combines qualitative and quantitative analyses to explore users' reactions to attribution and its absence in a large remixing community. First, we introduce our empirical setting; using qualitative data from users forums and comments, we present a rich description of remixing and evidence to support our core proposition that credit plays a central role in remixing in our environment. Second, we contextualize and describe a technological intervention in our setting, responding directly to several user suggestions, that automated the attribution of creators of antecedent projects when content was remixed. Third, we present a tentative quantitative analysis of the effect of this intervention along with a parallel analysis of the practice of manual credit-giving. We find that credit-giving, done manually, is associated with more positive reactions but that automatic attribution by the system is not associated with a similar effect. Fourth, we present analysis of a set of in-depth interviews with twelve users which helps confirm, and add nuance and depth to, our quantitative findings.\n\nOur results suggest that young users see an important, if currently under-appreciated and under-theorized, difference between *credit* and *attribution*. Credit represents more than a public reference to an \"upstream\" user's contributions. Coming from another human, credit can involve an explicit acknowledgment, an expression of gratitude, and an expression of deference, in a way that simple attribution can not. Our results suggest that identical attribution information means something very different to users when it comes from a computer, and when it comes from a human \u2013 and that users often feel that acknowledgment is worth much less when it comes from a system. 
We conclude that designers should create affordances that make it easier for users to credit each other, rather than to merely pursue automated means of acknowledgment.\n\nOur study offers two distinct contributions for social scientists and for technology designers. The first is an improved understanding of the way that attribution and credit work in user-generated content communities. The second is a broader contribution to the literature on design that suggests an important limitation to technologists' ability to support community norms and a suggestion for how designers might create affordances. Functionality that allows users to express information that a system might otherwise show automatically may play an important role in successful design for social media environments.\n\n# Scratch: A Community of Young Remixers\n\nThe Scratch online community is a free and publicly available website where young people share their own video games, animated stories, interactive art, and simulations . Participants use the Scratch programming environment , a desktop application, to create these interactive projects by putting together images, music and sounds with programming command blocks (See Figure ).\n\nThe Scratch website was officially announced in 2007 and, as of September 2010, has more than 600,000 user accounts who have shared 1.3 million projects. At the time of writing, Scratch users share on average one new project per minute. Examples of projects range from an interactive virtual cake maker, to a simulation of an operating system, to a Pokemon-inspired video game, to an animation about climate change, to tutorials on how to draw cartoons. Like other user-generated content websites, such as YouTube or Flickr, Scratch projects are displayed on a webpage (See Figure ) where people can interact with them, read metadata and give feedback. Visitors can use their mouse and\/or keyboard to control a video game or other type of interactive projects or simply observe an animation play out in a web browser. Metadata displayed next to projects includes a text-based description of the project, the creator's name, the number of views, downloads, \"love its,\" remixes, and galleries (i.e., sets of projects) that the project belongs to. Users can interact with projects by giving feedback in the form of tags, comments, or clicks on the \"love it\" button, and can flag a project as \"inappropriate\" for review by site administrators.\n\nParticipants' self-reported ages range primarily from 8 to 17 years-old with 12 being the median. Thirty-six percent of users self-report as female. A large minority of users are from the United States (41%) while other countries prominently represented include the United Kingdom, Thailand, Australia, Canada, Brazil, South Korea, Taiwan, Colombia and Mexico. About 28% of all users \u2013 more than 170,000 \u2013 have uploaded at least one project.\n\n## Remixing in Scratch\n\nScratch users can download any project shared on the website, open it up in the Scratch authoring environment, learn how it was made, and \"remix\" it. In Scratch, the term \"remixing\" refers to the creation of any new version of a Scratch program by adding, removing or changing the programming blocks, images or sounds. 
In this section we use qualitative data from the Scratch website to provide social context for remixing and to suggest that credit plays an important role in how users conceive of appropriate remixing practice.\n\nRemixing in Scratch is not only technically possible, it is something that the administrators of the website encourage and try to foster as a way for people to learn from others and collaborate. On every project page, the Scratch website displays a hyperlink with the text \"Some rights reserved\" that points to a child-friendly interpretation of the Creative Commons Attribution-Share Alike license under which all Scratch projects are licensed.[^1] Even the name Scratch is a reference to hip hop DJs' practice of mixing records. A large portion of all projects shared on the Scratch website (28%) are remixes of other projects.\n\nThat said, remixing is not universally unproblematic in Scratch. Previous quantitative analysis of the the Scratch community showed that Scratch participants react both positively and negatively to the remixing of their projects and found that of those users who viewed a remix of their project, about one-fifth left positive comments while the same proportion of users accused the remixer of plagiarism . This ambivalent reaction to remixing is echoed, and given additional texture, in the comments and complaints left by users on the Scratch website and sent to Scratch administrators.\n\nFor example, even before the Scratch website was publicly announced, a number of early adopters became upset when they found remixes of their projects on the website. Indeed, one of the very first complaints about Scratch occurred on the discussion forums where a 13 year-old boy asked:\n\n> Is it allowed if someone uses your game, changes the theme, then calls it 'their creation'? Because I created a game called \"Paddling 1.5\" and a few days later, a user called \"julie\" redid the background, and called it 'her creation' and I am really annoyed with her for taking credit for MY project!![^2]\n\nA similar complaint was sent to the website administrators by a 14-year old boy:\n\n> I think there should be a way to report plagiarized projects I've been seeing a lot of people's projects taken and renamed. This member, named kings651, has 44 projects, and most of them are made by other people. He even has one that I saw my friend make so I know he actually made it.\n\nIn other cases, the disagreements over remixing were more public and involved communication via projects and comments. For example, user koolkid15 wrote the following message in a comment which was left is response to a remix that shows a cat frowning:\n\n> Hi i'm koolkid15 the original creator of luigi disco jayman41 copied me!! and didn't even aknowladge me he didn't change anything !! I wrote or drew!! and jayman...if your reading this think about other people!!!!\n\nDespite the fact that Scratch was conceived, designed, and launched as a platform for remixing, these users expressed their displeasure at remixing. That said, none of these users complained directly about the reuse of their project in general, but in terms of unfair \"taking credit\", plagiarism, and a lack of acknowledgment. Remixing was seen as problematic for koolkid15, for example, because of the non-transformative nature of reuse, the lack of acknowledgment of antecedent contributors, and the confusion about credit that would result.\n\nOf course, other, more positive, scenarios around remixing also played out in Scratch. 
For example, jellogaliboo created a remix of Catham's project and wrote the following in the project notes: \"i kinda copied Catham's \"jetpackcat\" game. i used the kitty, the blocks (i added and changed some), and the fuel thingy.\" Catham later posted his approval of the remix saying, \"I like what you changed about my project!\" Like this example, many of these positive experiences involved explicit credit-giving by a remixer to the creator of the antecedent project.\n\n# Design Intervention: Automating Attribution\n\nSeveral user complaints about remixing and plagiarism also included suggestions for how Scratch's designers might address them. For example, in response to the forum thread mentioned in the previous section, a 16 year-old proposed two potential design-based solutions:\n\n> Make it so you can only download a view of how your game\/story\/animation works. Or make it so downloadable Scratch files have read only protection. Maybe downloaded Scratch files, after being uploaded, are marked with the creators name at the bottom, and then any DIFFERENT people who edit it after are put on the list.\n\nInfluenced by these comments, Scratch administrators came to believe that negative responses towards remixing were often due to the fact that Scratch users did not acknowledge the sources of their remixes. As a result, these administrators implemented an architectural design change to the Scratch community along the lines suggested by the user in the second half of the quotation above.\n\nThe design change in question involved the introduction of a new technological facility that automatically identified and labeled remixes and inserted hyperlink pointers under each remix to the remix's antecedent and the antecedent's author (see Figure ). Two days after the introduction of this feature, functionality was added to link to a comprehensive list of derivative works from the pages of antecedent projects (see Figure ).\n\nThe new feature was announced in the discussion forums by an administrator of the website and user responses were positive. User terminator99 suggested that the change was, \"Awesome.\" Another user, marsUp, posted a comment saying, \"That's a very useful feature! I like that we can do ping-pong like modding in Scratch.\" Users who did not visit the discussion forums also responded well to the the new feature. For example, user greekPlus posted a comment on a remix he created saying, \"i remixed it for you but i do not know how to ad credit to you for thinking of it in the first place.\" A few minutes later he realized that the remix automatically displayed the attribution and posted the a comment saying, \"never mind it did it for me. cool!\"\n\n# Study 1: Human and Machine Attribution\n\nAlthough initial user feedback to the automatic attribution feature was positive, users continued to complain about remixing. In Study 1a, we present a quantitative analysis to more fully evaluate the effect of the technological design change described in the previous section. In Study 1b, we offer a parallel analysis of the relationship between manual crediting-giving by users and users' reactions to being remixed.\n\nBoth studies build on a dataset used in previous work by Hill, Monroy-Hern\u00e1ndez, and Olson . This dataset includes remix-pairs determined by an algorithm using detailed project metadata tracked by the Scratch online community. 
The dataset is limited in that it does not include projects whose concepts were copied by a user who had seen another's work but who did not actually copy code, graphics or sound. Similarly, the dataset contains no measure of the \"originality\" of projects or an indicator based on ideas that were taken from a source outside Scratch (e.g., a user may have created a Pacman clone which would not be considered a remix in our analysis).\n\nThe data presented here include the coded reactions of the authors of antecedent projects (i.e., originators) to remixes of their projects shared by other users on the site during a twelve-week period after Scratch's launch, from May 15 through October 28, 2007. Although 2,543 remixes were shared in this period, we limit our analysis to the 932 projects (37% of the total) that had been viewed at the time of data collection by the project originator \u2013 a necessary prerequisite to any response. Of these 932 remixes that were viewed by a project originator, 388 originators (42%) left comments on the remixes in question. The remainder were coded as \"silence.\" Comments left by originators were coded by two coders, blind to the hypotheses of the study and who were found to be reliable, as being positive, neutral, or negative. They were also coded as containing accusations of plagiarism (projects in which the originator directly accused the remixer of copying, e.g., \"Hello mr plagiarist\", \"Copy-cat!\") or hinting plagiarism (projects in which the originator implied that the remixer had copied but did not state this explicitly, e.g., \"I mostly pretty much made this whole entire game\").\n\nUnless it also contained an explicitly negative reaction, an accusation of plagiarism was not coded as \"negative.\" However, because plagiarism tends to be viewed as negative within Scratch (as suggested by the quotations in the previous section) and more broadly in society, we re-coded accusations of plagiarism (both direct and hinting) as \"negative\" except, as was the case in several comments coded as \"hinting plagiarism,\" when these accusations were in comments that were also coded as positive. Previous published work using this dataset, and subsequent robustness checks, show that our results are substantively unchanged if we exclude these explicit charges of plagiarism from the \"negative\" category or exclude only the weaker \"hinting plagiarism\" accusations.\n\n## Study 1a: Automatic Attribution\n\nTo test the effectiveness of automatic attribution, we consider the effect of the design intervention described in the previous section. The design change took place six weeks after the public launch of the Scratch community and at the precise midpoint in our data collection window. The intervention affected all projects hosted on the Scratch online community, including projects shared before the automatic attribution functionality was activated. As a result, we classify originators' reactions as occurring outside a technological regime of automatic attribution when a project was both uploaded and viewed by its originator before automatic attribution functionality was activated.\n\nA comparison of the distribution of coded comments between positive, neutral, negative, and silent in the periods before and after the intervention suggests that the introduction of automatic attribution had little effect on the distribution of reaction types (See Figure ). 
Although the period after the intervention saw a larger proportion of users remaining silent and a smaller proportion of both positive and negative comments, $\\chi^2$ tests suggest that there is no statistically significant difference in originator reactions between remixes viewed before or after the introduction of automatic attribution ($\\chi^2 = 3.94; df=3; p=0.27$). As a result, we cannot conclude that there is any relationship between the presence, or absence, of an automatic attribution system in Scratch and the distribution of different types of reactions.\n\nThese results suggest that automatic attribution systems may have limited effectiveness in communities like Scratch. Of course, our analysis is not without important limitations. For example, the existence of an automatic attribution regime may also affect the behavior of users preparing remixes. A remixer might avoid making perfect copies of projects if they know that their copies will be attributed and are more likely to be discovered.\n\n## Study 1b: Manual Crediting\n\nWhile the introduction of an automatic attribution feature to Scratch appears to have had a limited effect on originators' responses to remixes of their projects, the presence or absence of credit was a recurring theme in discussions on Scratch online forums \u2013 as shown in the quotes in the previous section \u2013 and in many of the coded reactions from the periods both before and after the introduction of automatic attribution. Indeed, in project descriptions or notes from the periods both before and after the change, remixers frequently \"manually\" gave credit to the originators of their work. Even after remixes were automatically attributed to originators, remixers who did not also give credit manually \u2013 even though doing so would essentially reproduce information already being displayed by the system \u2013 were criticized.\n\nFor example, after the introduction of automatic attribution functionality, a user left the following comment on a remix of their project:\n\n> Bryan, you need to give me Pumaboy credit for this wonderful game that I mostly pretty much kinda totally made this whole entire game ... and that you need to give me some credit for it\n\nFor this user, automatic attribution by the system did not represent a sufficient or valid form of credit-giving. In the following study, we test for this effect of \"manual\" credit-giving by remixers on coded response types using a method that parallels the analysis in Study 1a and that uses the same dataset.\n\nManual crediting can happen in multiple ways. Exploratory coding of 133 randomly selected projects showed that the remixer gave credit in 35 (36%) of the remix pairs. Of these 35 projects, 34 gave credit in the project description field while 1 project only gave credit in a \"credits\" screen inside the game. As a result, the authors of this study split the sample of projects used in Study 1a and coded each of the user-created descriptions for the presence or absence of explicit or manual credit-giving.\n\nTo first establish that we are examining distinct behaviors, we checked that automatic and manual attribution do not act as substitutes for each other. As suggested by our qualitative findings and our results in Study 1a, we found little difference in the rate of explicit credit-giving between projects created in the presence or absence of automatic attribution. 
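The comparisons reported in Study 1a above, and in the remainder of this subsection, are $\\chi^2$ tests of independence on contingency tables of coded reaction counts. A minimal sketch of such a test is given below; the counts are illustrative placeholders rather than the study's actual data, and the scipy library is assumed to be available.

```python
# Illustrative chi-square test of independence, parallel to Studies 1a and 1b.
# The counts below are made-up placeholders, NOT the actual study data.
from scipy.stats import chi2_contingency

#                 positive  neutral  negative  silent
before_counts = [      60,      40,       55,    310]
after_counts  = [      55,      35,       50,    327]

chi2, p, df, expected = chi2_contingency([before_counts, after_counts])
print(f"chi2 = {chi2:.2f}, df = {df}, p = {p:.3f}")  # df = 3 for a 2 x 4 table
```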
Overall, 276 (about 30%) of the 932 projects in our sample offered explicit credit in the description field of the project. Manual credit-giving was a widespread practice both before automatic attribution, when 31% of projects in our sample offered explicit credit, and after, when 27% did so. The difference between these two periods was not statistically significant ($\\chi^2=1.41; df=1; p=0.24$). Previous work studying *Jumpcut*, a video remixing website, supports the idea that automatic and manual credit-giving are not interchangeable phenomena. One Jumpcut user with permission to create derivative works commented that they \"still feel a moral obligation to people as creators who have a moral right to be attributed (and notified) despite the physical design which accomplishes this automatically\".\n\nWe measured the effectiveness of manual credit-giving using an analysis parallel to Study 1a. As in Study 1a, we compared the distribution of originator reactions in the presence, and absence, of manual credit-giving by remixers. We found that negative reactions are less common in the presence of manual credit but that this difference is very small (from 16% without manual credit to 14% with it). However, we see that the proportion of users who react positively almost doubles in the presence of credit-giving (from 16% with no crediting to 31% in its presence). A graph of these results is shown in Figure . Tests show that we can confidently reject the null hypothesis that these differences in the distribution of reactions are due to random variation ($\\chi^2 = 27.60; df=3; p<0.001$).\n\nAlso important to note is a difference in the number of users who are silent after viewing a project (62% in the absence of manual credit versus 49% in its presence). This larger proportion of commenting in general may have an important substantive effect on the discourse and behavior on the site because silent originators may, for obvious reasons, have a more limited effect on attitudes toward remixing and user experience than vocal users do. As a robustness check, we considered the reactions of only those originators who left comments ($n=388$) and found that even with a smaller sample, our results were stronger. In the restricted sample, 41% reacted negatively when they were not given credit. However, only 27% did so when they were credited. Similarly, 42% of users who left comments on projects that did not give credit manually left positive messages. Nearly two thirds of comments (61%) were positive when credit was given. These differences, in the reduced sample that includes only explicit reactions, were also statistically significant ($\\chi^2 = 14.09; df=2; p<0.001$). We include the large number of silent participants because we believe that non-response is an important type of reaction with real effects on the community. Understanding the reasons behind non-response and the effect of silence in response to different types of credit-giving remains an opportunity for further research.\n\nAlthough not presented here due to limited space, we followed the general model of previous work using this dataset and tested logistic regression models on dichotomous variables indicating the presence of negative and positive reactions and found that the basic relationships described above were robust to the introduction of a control for the intervention, to an interaction between these two variables, and to controls for the gender and age of originators and for the antecedent project's complexity. 
Both before and after the intervention, manual crediting resulted in more positive comments by the originators of remixed projects. Of course, the results presented here are uncontrolled, bivariate relationships and we caution that these results, while provocative, should still be viewed as largely tentative. As we show in the subsequent qualitative analysis, attribution and credit-giving are complex social processes. We do not claim that the preceding analyses capture them fully.\n\n# Study 2: Interviews with participants\n\nIn order to explore the reasoning behind young people's remixing behavior and attitudes toward attribution as we observed it in Study 1, we engaged in a second qualitative study and directly asked kids what role attribution and credit play in their moral evaluations of remixing.\n\n```latex\n\\begin{table*} \\begin{tabular}{lccp{4.5in}} \\hline\n\\textbf{Name (pseudonym)} & \\textbf{Age} & \\textbf{Gender} & \\textbf{Relationship to Scratch}\\\\\n\\hline\nNicole & 10 & F & Has created hundreds of Scratch projects, mainly animations and art. \\\\\nKyle & 14 & M & Casual user of Scratch, interested in math\/science simulations and video games. \\\\\nAmy & 15 & F & Avid photographer, has never used Scratch. \\\\\nCharles & 9 & M & Active member of a subgroup of Scratch interested in simulation of operating systems. \\\\\nRyan & 12 & F & Long-time member of the Scratch community. Creates complex video games. \\\\\nJon & 9 & M & Casual user of Scratch, collaborates with Scratch friends in person. \\\\\nJake & 11 & M & Casual user, likes making video games. \\\\\nCody & 16 & M & Creates hip hop accessories, not active in Scratch. \\\\\nPaul & 9 & M & Creates Scratch projects with a focus on engineering and video games. \\\\\nJimena & 17 & F & Highly technical teen with programming experience but no experience with Scratch. \\\\\nMadeline & 14 & F & Very popular animator in the Scratch community. \\\\\nSusie & 10 & F & Has created hundreds of projects including games, animations and art, with a preference for art. \\\\\n\\hline\n\\end{tabular} \\caption{Table listing details of interviewees used in Study 2. ($n=12$)} \\label{tab:ints} \\end{table*}\n```\n\n## Methodology\n\nWe conducted twelve one-hour semi-structured interviews with kids aged 8 to 17. All of the interviewees had experience using computers and had access to the Internet at home. All the interviewees live in the United States except for one who lives in New Zealand. The participants were recruited via the Scratch website and during meet-ups with educators, teachers and young Scratch users. Eight of the interviews were conducted in person, in the Boston area, and the rest over the phone or voice over IP. The interviews were audio-recorded and transcribed before being analyzed. Nine of the interviewees were members of the Scratch community. The remaining three did not use Scratch but were included as a way to check whether people who do not use Scratch have similar views about remixing, attribution, and credit. We found no substantive difference between the Scratch users and non-users in their answers to questions related to the hypothetical automatic and manual mechanisms for attribution.\n\nBefore each interview, subjects completed a survey that elicited demographic information and posed questions about the interviewees' familiarity with other technologies; it was primarily designed to give a sense of their social and technical background. 
Interviews were structured around a protocol that included a set of nine fictional remixing cases intended to elicit conversations about remixing.[^3] The cases were inspired by Sinnreich et al.'s theoretical work and by three years of experience moderating the Scratch community. They were designed to present cases where remixing could be controversial but where there is no clear \"correct\" answer. The goal of the cases was to offer a concrete, and common, set of dilemmas to stimulate broad conversations about attitudes toward remixing.\n\nThe cases were presented in the form of printed screenshots of different project pages from the Scratch website (anonymized to avoid referring to real cases that users might have seen). The printouts were shown to the interviewees (or discussed over the phone) while explaining each case. All the cases included a remix and its corresponding antecedent project. The cases varied in the presence of automatic attribution, manual credit, and the degree of similarity between the remix and its antecedent. For example, the first three cases were:\n\n1. A remix and its antecedent are identical. The project notes only describe how to play the video game. The remix shows the automatic attribution but no manual credit on the notes.\n\n2. A remix and its antecedent are different (as seen visually and in project metadata) but one can clearly see the influence of its antecedent project. The project notes of the remix show manual credit but no automatic attribution. The interviewee was told to imagine the site had a glitch that prevented it from connecting the remix to its antecedent.\n\n3. The same set of remix and antecedent projects as in (2) but this time automatic attribution is displayed but manual credit is not.\n\nEach of the interview logs was coded using inductive codes and grounded theory. The coded responses were analyzed based on categories related to how interviewees answered specific questions about the distinction between automatic attribution and manual credit.\n\n## Results\n\nConfirming the results of Study 1, for users of Scratch, automatic attribution was generally seen as insincere and insufficient. Throughout the interviews, we found that for most of the kids, getting explicit credit from another person was preferred over attribution given automatically by the system. When asked why, kids often responded that knowing that another person had cared enough to give credit was valued more than what the computer system would do on its own. The fact that it takes some work, albeit minimal, to write an acknowledgment statement sends a signal of empathy, authenticity and good intentions. Amy articulated this when explaining why she preferred getting credit from another person:\n\n> I would like it even more if the person did it \\[gave credit\\] on their own accord, because it would mean that \\[...\\] they weren't trying to copy it, pirate it.\n\nSimilarly, Jon explained, \"No \\[the \"Based on\" is not enough\\], because he \\[the remixer\\] didn't put that, it always says that.\" For Jon, automatic attribution is not authentic because it is always there and, as a result, it is clear that it is not coming from the person doing the remix.\n\nMost of the interviewees seemed to have a clear notion of what they think a moral remix should be. For some, it is all about making something different. Jake, for example, defines a \"good\" remix as, \"if it has a bunch of differences then it's a good remix. 
If it has like two, then it's bad.\" In addition to the differences between the remix and its antecedent project, for some, manual credit is part of what makes it moral. Charles said, \"\\[remixing\\] is taking somebody else's project and then changing a lot of it and sharing it and giving credit.\" Continuing, Charles explained:\n\n> If Green had actually said in the project notes, \"This is a remix of Red's project, full credit goes to him,\" then I would consider it a remix. But this \\[pointing at a remix without manual credit\\] is definitely a copy.\n\nLikewise, Ryan mentions that a fictional remix was, \"perfectly fine because they gave credit in the project notes.\"\n\nInterviewees suggested that manual credit also allows users to be more expressive. For example, Susie explained that expressiveness is the reason that she prefers manual credit through the project notes saying, \"I think the manual one is better because you can say 'thank you' and things like that. The automatic one just says 'it's based on.'\" Susie also notes that for her, the project notes are a space where a creator can express her wishes in regards to her intellectual property, independent, and even in contradiction to, the license of the projects:\n\n> If I do a project that has music that I really like, I often download the project, take the music. Unless it says in the project notes, \"Do not take the music.\"\n\nFor Susie and other users of Scratch, the project notes are a space for more than just instructions on how to interact with one's project; they are an expressive space where one can communicate with an audience without having to encumber the creative piece of work with it.\n\nOthers point at the fact that people do not pay as much attention to automatic attribution statement as much they do to the manual credit left in project descriptions. Jake, for example, explains that, while he agrees there is some usefulness to having both, project notes still are more important, \"because, you know, sometimes people just like skim through a project and you don't see it 'til the end.\" Jake continued to say that creators that do not have both should get a \"warning.\"\n\nEven though interviewees value manual credit, they still see the usefulness of the automatic mechanism as some sort of community-building prosthetic device \u2013 an explanation for the positive reactions to the feature's initial introduction. For example, Nicole argues that while manual credit on the notes has more value for her, the automatic attribution is useful as a backup and because it provides a link:\n\n> Well, I think that they should probably write in the notes that \u2013 then it should also say \"Based on blank's project,\" just in case they forget, and also because it gives a link to the original project and it gives a link to the user so you don't have to search for it.\n\nA similar explanation was articulated on a comment exchange on one the website's galleries. A teenage girl that actively participates in Scratch explained the pragmatic value of automatic attribution saying, \"the 'based on' thingy, it gives a link, and we all luv links, less typing,\" before reiterating that manual credit is more valuable:\n\n> at the beginning i thought that you don't have to give credit when the \"based on\" thingy is in there, but i realized a lot of people don't look at that, and i noticed people confused the remix with the original.\n\nCreating a Scratch project is a complicated task. 
A project's sources can be diverse and the creator can easily forget to acknowledge some, as Paul explains when asked to choose between a system of manual credit or automatic attribution:\n\n> The thing is, it would be a lot better if they had both. Because, sometimes people probably just forget to do that. And then people would not know.\n\nThere are also situations where interviewees recognize what Posner calls the \"awkwardness of acknowledgment,\" that is, situations where credit is not really needed and it can be an unnecessary burden or go against the aesthetics of the work. For example, Paul mentioned that sometimes, there are some projects in Scratch that are remixed so much \u2013 like the sample projects that come with Scratch or some \"remix chains\"[^4] \u2013 where credit is not necessary:\n\n> There's this one called \"perfect platformer base\" which a lot of people remix. So I don't think that needs any credit. It's not actually a real game. It's all the levels and stuff are just demonstrations.\n\nSince manual crediting has a higher emotional value, some kids mentioned that conflicts over remixing could be addressed by the administrators of the site by editing the project notes of the remix in question, as a way to enforce credit without transforming it into attribution. Doing so would make it appear that a remixer had credited an antecedent when they had not. Susie offers a suggestion along these lines when asked how the administrators of the website should deal with a complaint over a remix that is a parody of someone else's project. Susie suggested, \"I might remove the project but I might not, you know, maybe I would edit the notes to to give credit.\" Similarly, Charles described his approach for solving conflicts if he were the administrator of the website, suggesting, \"I probably just would stay out of the argument. I probably wouldn't remove it \\[the remix\\], I'd just add something in the project notes \\[like\\] 'based on Gray's project.'\"\n\nThis phenomenon of giving less value to technologically simplified social signals is experienced on other social platforms as well. For example, Amy expressed how, on the social network site Facebook, she loves to get comments on her photographs but dislikes those who do not leave comments but opt instead to press the \"I like it\" button:\n\n> I love when people comment on my pictures. Everybody sees them, because they tell me they have. I'm like, \"Oh really? That's great. Why didn't you comment?\" I don't like it when people just \"like it\", because you know they have something to say about it; they just don't. It's like, if they like it, then \\[they should\\] take the time to say something.\n\nAlthough not designed to be a random sample, these interviews support the proposition that both Scratch participants and other young people share a set of norms about characteristics that determine what a \"good\" or moral remix is. Among these norms, acknowledging one's sources seems to play a central role. However, participants also seem to share the opinion that this norm is not satisfied through an automated process. They clearly understand the pragmatic value of automating acknowledgment-giving, but they do not see it as a substitute for adherence to the social norm of credit-giving. They also see it as void of emotion and expressiveness. For Scratch users, normative constraints are separate from architectural constraints and one cannot replace the other. 
These findings support and enrich the results from our first study and help us better understand how Scratch participants, and perhaps kids in general, experience authorship norms and automation in online spaces.\n\n# Conclusions\n\nOur results from Study 1a called into question the effectiveness of automatic attribution functionality in encouraging more positive user reactions in Scratch. We build on these results in Study 1b to suggest that manual crediting may do the work that Scratch's designers had hoped automatic attribution would. Results from the analysis of user interviews presented in Study 2 help to answer the question of \"why?\" and suggest that users find manual credit to be more authentic and more meaningful because it takes more time and effort. Usually, UI improvements are designed to help reduce the time and effort involved in using a system. But in trying to help users by attributing automatically, Scratch's designers misunderstood the way that attribution as a social mechanism worked for Scratch's users. Our fundamental insight is that while both attribution and credit may be important, they are distinct concepts and that credit is, socially, worth more. A system can *attribute* the work of a user but *credit*, which is seen as much more important by users and which has a greater effect on user behavior, cannot be given automatically. Computers can attribute. Crediting, however, takes a human.\n\nAs we suggested in our introduction, this fundamental result leads to two distinct contributions. First, and more specifically, our analysis offers an improved understanding of the way that attribution and credit work in user-generated content communities over what has been available in previous work. Our two studies suggest that scholars are correct to argue that credit plays an important role in social media communities and offer empirical confirmation for the important role that authenticity plays in how users conceptualize credit. In our in-depth interviews, we explain some of the reasons why this may be the case. Second, through our evaluation of an unsuccessful technological design, our work offers a broader, if more preliminary, contribution in suggesting an important limit of designers' ability to support community norms in social media systems. As the literature on design and social media grows, the importance of good support for communities with healthy norms promoting positive interactions is likely to increase. In attempting to design for these norms, we suspect that researchers will increasingly encounter similar challenges.\n\nWe argue that designers should approach interventions iteratively. This design approach can be understood through the theoretical lens of the social construction of technology: designers cannot fully control technological outcomes, which must instead be built through a close relationship between designers and users. Designers must move away from seeing their profession as providing solutions. They must channel users, work closely with them, and iterate together, to negotiate and achieve a set of shared goals.\n\nThe prevalence of user-generated content sites underscores the importance of how online social spaces deal with issues of attribution, and our results are likely to be immediately relevant to designers. For example, the Semantic Clipboard is a tool built as a system of automatic attribution for content reuse. 
Developed by researchers who found a high degree of Creative Commons license violations around the re-use of Flickr images, the tool is a Firefox plugin that provides, \"license awareness of Web media,\" and enables people to automatically, \"copy \\[media\\] along with the appropriate license metadata.\" Our results suggest one way that this approach may fall short.\n\nHowever, automatic attribution is not the only way that technologists can design to acknowledge others' contributions. Indeed, our results suggest that there may be gains from design changes which encourage credit-giving without simply automating attribution. For example, Scratch's designers might present users with a metadata field that prompts users to credit others and suggests antecedent authors whose work the system has determined may have played a role. This affordance might remind users to credit others, and might increase the amount of crediting, while maintaining a human role in the process and the extra effort that, our research has suggested, imbues manual credit giving with its value. We suggest that in other social media communities, similar affordances that help prompt or remind users to do things that a system might do automatically represent a class of increasingly important design patterns and a template for successful design interventions in support of community norms.\n\n# Acknowledgments\n\nThis research was supported by Microsoft Research. Scratch is a project of the Lifelong Kindergarten Group at the MIT Media Lab with financial support from the National Science Foundation award ITR-0325828, Microsoft Corp., Intel Foundation, Google, the MacArthur Foundation and the MIT Media Lab research consortia.\n\n[^1]: A copy of the current version of the kid-friendly license is available online at http:\/\/scratch.mit.edu\/pages\/license. The version available today encourages users to give credit manually in the project notes. A strong emphasis on credit-giving was added as a result of the findings reported here but was absent during the period of data collection for this study.\n\n[^2]: All usernames and quotes from the website were changed to disguise the identities of participants.\n\n[^3]: Our interview protocol including example cases is available at http:\/\/www.media.mit.edu\/\u00a0andresmh\/chi2011\/interview.html.\n\n[^4]: Remix chains typically start with someone sharing a project inviting others to remix (i.e. \"add your animated avatar to the park.\")","meta":{"dup_signals":{"dup_doc_count":41,"dup_dump_count":32,"dup_details":{"curated_sources":3,"2021-43":1,"2020-05":1,"2019-09":1,"2018-43":1,"2018-26":1,"2018-05":1,"2017-30":1,"2017-17":1,"2017-04":1,"2016-50":1,"2016-44":1,"2016-40":1,"2016-36":1,"2016-30":1,"2016-22":1,"2015-32":1,"2015-27":1,"2015-22":1,"2015-14":1,"2014-52":1,"2014-49":2,"2014-42":3,"2014-41":2,"2014-35":2,"2014-23":2,"2014-15":2,"2023-50":1,"2015-06":1,"2014-10":1,"2013-48":1,"2015-11":1}},"filename":"out\/1507.01285_extract_chi2011.tex.md"},"subset":"arxiv"} +{"text":"abstract: As a response to the trends of the increasing importance of computational approaches and the accelerating pace in science, I propose in this position paper to establish the concept of \"science bots\" that autonomously perform programmed tasks on input data they encounter and immediately publish the results. We can let such bots participate in a reputation system together with human users, meaning that bots and humans get positive or negative feedback by other participants. 
Positive reputation given to these bots would also shine on their owners, motivating them to contribute to this system, while negative reputation will allow us to filter out low-quality data, which is inevitable in an open and decentralized system.\nauthor: \nTobias Kuhn \n \ntitle: Science Bots: \n a Model for the Future of Scientific Computation?\n\n\\[Automation\\]\n\n# Introduction\n\nAs datasets become increasingly important in all branches of science, many have proposed methods and tools to publish data . Nanopublications are an approach to bundle atomic data snippets (in RDF) in small packages together with their provenance and metadata. Such nanopublications can be manually created by scientists and linked to their articles, but they can also be automatically extracted from existing datasets or be directly created by programs that implement scientific methods.\n\nIn general, computer programs form a third kind of scientific contribution, besides narrative articles and datasets. While many such programs are openly available, there are no conventions or standards of how to reliably link data to the software that produced it, including the version of the software and the input it received. Moreover, due to the focus of the scientific life cycle on the publication of articles, scientific software is typically applied only to the data available at the time of writing a paper. It is often not the case that new output data is made public when new input data becomes available. To tackle these problems, I argue that we can encapsulate certain types of scientific algorithms as small independent agents that take inputs of a given type and produce, for example, nanopublications, and they could do this in a real-time and automatic manner as new input data becomes available.\n\n# Background\n\nI borrow the term \"bot\" from Wikipedia, where bots are applied, for example, to revert edits that are the results of vandalism . A prominent example is a bot that has created around 454 000 articles for the Swedish Wikipedia . The fact that bots can be powerful also in a negative sense has become apparent with the rise of botnets , and with the increasing problem of \"social bots\" that pretend to be humans . I argue here that the power of bots could also be harnessed in a positive way for scientific computation. In contrast to the agents in the original Semantic Web paper , such bots would not propose or make decisions as a kind of personal assistant, but they would only publish data snippets and they would do that without any further interaction with humans.\n\nIn previous work, I showed how the concept of nanopublications can be extended and I mentioned the possible use of bots to create them . I also presented an approach to attach cryptographic hash values to nanopublication identifiers to make them verifiable and immutable . Based on that work, I have started to establish a nanopublication server network, with which nanopublications can be published, retrieved, and archived in a reliable, trustworthy, and decentralized manner , which could serve as the basis for the communication for bots.\n\n# Approach\n\nIn this position paper, I propose bots as a general concept for scientific computation. 
For example, a bot could apply text mining to extract relations from the abstracts of the constantly growing PubMed database, another bot could regularly measure the temperature at a given location and publish the results, and yet another one could infer new facts from existing nanopublications by applying specified rules or heuristics (e.g. if disease X is related to gene Y, which is targeted by drug Z then Z might help to treat X). Importantly, these bots can automatically publish the obtained data without double-checking or direct supervision by their creators, and these data can be made immediately accessible to everybody (including other bots).\n\nIn a system that treats bots as first-class citizens, we have to expect that some bots (and humans for that matter) will produce low-quality contributions, and we have to make sure that this does not affect the reliability and trustworthiness of the system. I argue that we can achieve that without introducing a central authority, without making concessions with respect to the openness of the system, and without delaying the publishing of results. We simply need a sufficiently accurate automatic method to discern good contributions from bad ones, which can be achieved by a reputation system. We can let scientists and bots participate in the same reputation system, where they would increase their reputation by receiving positive feedback by other participants on the usefulness and quality of their contributions. Positive reputation of a bot, in turn, would give credit and reputation to the scientist who created it.\n\nTo arrive at a simple exemplary model to explain the approach, we can define a relation \"is contributed by\", where bots can occur on either side: They are contributions, as they were programmed and created by somebody, but they are also contributors, as they can create new digital entities on their own. We can define a second type of relation to represent assessments. For the sake of simplicity, we model here only positive assessments and strip them of all granularity and detail, and we can call the resulting relation \"gives positive assessment for\".\n\nFig.\u00a0 shows a simple example of such a graph with two kinds of edges, representing creatorship and assessments. To determine the reputation or importance of the nodes, we can in the simplest case treat the two types of edges identically and rank the nodes by applying a network measure such as Eigenvector centrality (which is closely related to Google's PageRank algorithm to rank websites), as shown by the red numbers. The person at the top-left has a high reputation because he is endorsed by the person in the middle. The latter has a high reputation because her direct and indirect contributions were positively assessed by others (even though she has not received a direct assessment herself). The third person to the right, however, has not contributed anything that was positively assessed by others (only by his own bot), and therefore his reputation is low. Of course, there are many possible variations and extensions, such as bidirectional contribution edges for the Eigenvector calculation, as indicated by the gray numbers. In general, as one cannot influence *incoming* links from the part of the network that is not under one's control, there is no way to efficiently game the system. 
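To make this exemplary model concrete, the sketch below ranks the nodes of a small, invented contribution-and-assessment graph. It is not the graph of the figure; the node names are made up, and PageRank (the damped variant of Eigenvector centrality mentioned above) is used so that the iteration behaves well on a small, nearly acyclic graph. The networkx library is assumed to be available.

```python
# Minimal sketch: reputation from a contribution/assessment graph.
# Both edge types point toward the entity that should receive credit.
import networkx as nx

G = nx.DiGraph()

# "is contributed by": an edge from the contribution to its contributor,
# so that credit earned by a contribution flows on to whoever created it.
G.add_edge("bot_A", "alice", relation="contributed_by")      # alice wrote bot_A
G.add_edge("nanopub_1", "bot_A", relation="contributed_by")  # bot_A published these
G.add_edge("nanopub_2", "bot_A", relation="contributed_by")
G.add_edge("bot_B", "bob", relation="contributed_by")
G.add_edge("nanopub_3", "bot_B", relation="contributed_by")

# "gives positive assessment for": an edge from the assessor to the assessed entity.
G.add_edge("carol", "nanopub_1", relation="assessment")
G.add_edge("carol", "nanopub_2", relation="assessment")
G.add_edge("alice", "carol", relation="assessment")
G.add_edge("bob", "nanopub_3", relation="assessment")  # bob only praises his own bot's output

scores = nx.pagerank(G, alpha=0.85)
for node, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{node:10s} {score:.3f}")
```

Because both relations point toward the entity that should receive credit, a positively assessed nanopublication passes part of its score on to the bot that generated it, and from there to the bot's owner, in line with the model described above.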
The scalability of such algorithms in open and decentralized systems is demonstrated by their successful application by search engines and peer-to-peer systems .\n\nBots could free scientists from routine tasks and therefore allow them to focus on the interesting questions. Furthermore, this approach could increase the value and appreciation of datasets and software as research products, and give due credit to their creators. With appropriate reputation mechanisms, this can be achieved in a fully open and decentralized environment.","meta":{"dup_signals":{"dup_doc_count":15,"dup_dump_count":12,"dup_details":{"curated_sources":2,"2021-17":1,"2019-18":1,"2018-51":2,"2018-34":1,"2018-13":1,"2017-43":2,"2017-26":1,"2017-09":1,"2023-06":1,"2017-13":1,"2024-18":1}},"filename":"out\/1503.04374_extract_savesd2015_02.tex.md"},"subset":"arxiv"} +{"text":"abstract: The nearly equal lunar and solar angular sizes as subtended at the Earth is generally regarded as a coincidence. This is, however, an incidental consequence of the tidal forces from these bodies being comparable. Comparable magnitudes implies strong temporal modulation, as the forcing frequencies are nearly but not precisely equal. We suggest that on the basis of paleogeographic reconstructions, in the Devonian period, when the first tetrapods appeared on land, a large tidal range would accompany these modulated tides. This would have been conducive to the formation of a network of isolated tidal pools, lending support to A.S. Romer's classic idea that the evaporation of shallow pools was an evolutionary impetus for the development of chiridian limbs in aquatic tetrapodomorphs. Romer saw this as the reason for the existence of limbs, but strong selection pressure for terrestrial navigation would have been present even if the limbs were aquatic in origin. Since even a modest difference in the Moon's angular size relative to the Sun's would lead to a qualitatively different tidal modulation, the fact that we live on a planet with a Sun and Moon of close apparent size is not entirely coincidental: it may have an anthropic basis.\naddress: $^{1}$Astrophysics, Keble Road, Denys Wilkinson Building, University of Oxford, Oxford OX1 3RH\nauthor: Steven A. Balbus\nsubject: Astronomy, Astrobiology\ntitle: Dynamical, biological, and anthropic consequences of equal lunar and solar angular radii\n\n# Introduction\n\nThe fact that the solid angles subtended by the Sun and Moon at the Earth are nearly equal is generally regarded as a coincidence, one that leads to the phenomenon of total solar eclipses of agreeably short duration. The purpose of ths article is to propose an anthropic explanation for why the near equality of solar and lunar angular sizes may not be entirely coincidental, and in the process lend support to an old idea of A. S. Romer \\[1\\] on the evolutionary role of tidal pools. The idea is that the near match of angular sizes is a mathematical, but incidental, by-product of the presence of a strongly modulated tidal forcing by the Sun and Moon. It is the latter that is the true biological imperative.\n\nThe two earliest known tetrapods with more than fragmentary remains, *Acanthostega* and *Ichthyostega*, are thought to have been fully (perhaps only predominantly in the case of *Ichthyostega*) aquatic creatures \\[2\\]. The coastal and estuarial waters such organisms and their immediate ancestors are believed to have inhabited would have been subject to sizeable and irregular tides, leaving an inland network of pools. 
The farthest inland of these pools would on occasion have been left exposed for weeks at a time, ultimately evaporating. A creature caught in one of these isolated inland pools would consequently have faced dehydration or suffocation. But given a sense of direction to its flailing, there would have been plenty of inviting pools closer to the sea. These would be refreshed progressively more often than pools deeper inland. In addition, any fish left stranded in these inland pools would have been easy prey for those predators adapted to directed terrestrial motion \\[2\\]. The exigencies and advantages of large motor control in a network of tidal pools would surely have been an important evolutionary impetus to evolve weight-bearing chiridian limbs. This simple and important argument, which ties together evolutionary biology with established tidal theory, deserves to be much more widely known. In this paper, we illustrate by explicit calculation the sensitivity of the equilibrium tidal height to the relative lunar and solar angular sizes, and point out that the continental configuration associated with Devonian plate tectonics may have been particularly conducive to a large tidal range.\n\nThe argument associating tidal modulation with perceived lunar and solar angular sizes is very simple. Isaac Newton himself was aware of it: in the *Principia,* he shows that one may use the ratio of the spring to neap tides to deduce the fact that the Moon's tidal force is stronger than the Sun's, and that the Moon therefore must be the denser body. The relation is also clearly discussed in an unpublished 1992 manuscript by C. Tolbert and C. Sarazin \\[3\\], which goes on to note some of the biological implications that we discuss more fully in this work.\n\nThe tidal force, which arises from the quadrupole term of the two-body potential of the disturbing source and its host, is proportional to the mass of the disturber and the reciprocal cube of the distance between the centres-of-mass. Assume for the moment that the Sun exerts a tidal force on the Earth which is a fraction $f$ of the Moon's tidal force. Then, $${M_S\\over r_S^3} = f{M_m\\over r_m^3}$$ where $M_S$ is the mass of the Sun, $r_S$ the Earth-to-Sun centre-of-mass distance, and $M_m$ and $r_m$ the corresponding quantities for the Moon. The masses, in turn, are proportional to the average internal density times the cube of the body's diameter. With $\\rho$ and $D$ standing for average density and diameter respectively, subscripts $S$ and $m$ for Sun and Moon, we have $$\\label{XX}\n{\\rho_S}{\\left(D_S\\over r_S\\right)^3}=\n{f \\rho_m}{\\left(D_m\\over r_m\\right)^3}.$$ But $D\/r$ is just the apparent angular size $\\theta$ of the object subtended at the Earth. This means, for example, that the total tidal potential at latitude $l$ and time $t$ may be written $$\\Phi =GR_E^2[ \\rho_S\\theta_S^3 A(l,t) + \\rho_m\\theta_m^3 B(l,t)]$$ where $G$ is the gravitational constant, $R_E$ the radius of the Earth, and $A$ and $B$ angular functions of order unity ($t$ serves as a longitudinal variable). The relative Sun and Moon contributions are thus extremely sensitive to their relative angular sizes. Equation () implies $$\\theta_S = \\left(f\\rho_m\\over \\rho_S\\right)^{1\/3} \\theta_m.$$ Were the densities the same, equal angular sizes would mean equal tides. With $(\\rho_m\/\\rho_S)^{1\/3}= 1.34$, we see that values of $f$ even roughly near 0.5 will translate to nearly equal angular sizes for the Sun and Moon subtended at the Earth. 
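As a quick numerical check of the last relation, the following sketch uses round present-day values (a mean lunar density of roughly 3.34 g cm$^{-3}$, a mean solar density of roughly 1.41 g cm$^{-3}$, and a solar-to-lunar tidal force ratio $f \\approx 0.46$); these inputs should be read as approximate.

```python
# Numerical check of theta_S / theta_m = (f * rho_m / rho_S)**(1/3).
# Round present-day values, to be read as approximate.
rho_moon = 3344.0   # mean lunar density, kg/m^3
rho_sun  = 1408.0   # mean solar density, kg/m^3
f        = 0.46     # present solar tidal force as a fraction of the lunar value

print((rho_moon / rho_sun) ** (1 / 3))       # ~1.34, the density factor quoted above
print((f * rho_moon / rho_sun) ** (1 / 3))   # ~1.03, i.e. nearly equal angular sizes
```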
The question of why these angular sizes should be so closely matched now becomes one of why the Sun's tides should be something like half of the Moon's. The ground has shifted from perception to dynamics, and dynamics has calculable consequences.\n\n# Analysis\n\n## Semi-qualitative considerations\n\nThere are many characteristic frequencies in the tidal problem, the precise number depending upon the level of accuracy and time baseline sought. The three most important frequencies are associated with the diurnal rotation period of the Earth, the sidereal orbital period of the Moon, and the yearly passage of the Sun along the ecliptic. We experience events on the rotating surface of the Earth, which boosts the effective forcing frequencies of the Moon and Sun by $2\\pi\/(1\\ {\\rm day})$. For the problem at hand this is a large number, and the difference between the effective lunar and solar forcing frequencies is small compared to their average. When two processes with close but unequal frequencies superpose, the net response is carried at this average frequency, with a modulation envelope at the difference frequency. In the case of the tides the modulation is particularly rich, because there are many different modulation frequencies that enter. By contrast, were one of the solar or lunar tides heavily dominant, very little of this modulation richness would be present. It is this feature of the problem that compels one to consider the importance of comparable tidal contributions from the Sun and Moon.\n\nThe close match of the Sun and Moon angular sizes combined with the density ratio implies that the solar tidal contribution is somewhat smaller than the lunar. Amplitude modulation of the tidal forcing would still be present if the inequality of magnitudes were reversed; might it just as easily have occurred that the Moon's contribution was smaller than that of the Sun? At an orbital radius of 1 AU the net tide would then of course be considerably smaller, and if the Earth is in fact near the inner edge of the Sun's habitable zone as some current estimates suggest, a more distant orbital location would make the tides yet smaller. If a putative moon with a mass comparable to the true Moon had to form close to the Earth by impact or otherwise, there is a natural link between relative and absolute tidal amplitudes. The early moon's tidal contribution, putative or real, would have been overwhelming, moving the Earth's crust through kilometer-scale upheavals. The lunar orbit would evolve rapidly at first, with the satellite spiraling outward via tidal dissipation to several $10^5$ km, where the orbital recession would slow to a comparative crawl, conveniently measured in centimetres per year. In the process, not only would the tides greatly diminish to environmentally friendly, below-meter-scale displacements, but *at the same epoch* the Sun would also become a player in the tidal game. In other words, for any Moon-like satellite, orbital dynamics lead to something resembling the present tidal environment: the Moon dominant but not overwhelmingly so.\n\nThe scenario might be different if a moon formed farther out and evolved *inward*, but this is not possible if the host planet rotates more rapidly than the moon orbits, as presumably would be the case on a planet capable of supporting life. If the planet rotates more rapidly, the effect of tidal dissipation on the perturber's orbit is to cause outward migration. 
(It is generally thought the absence of any moons associated with the two slowly rotating inner planets Venus and Mercury is due to the resultant *inward* tidal migration of any primordial moons circling these bodies \\[4\\].)\n\n## Tidal Potential\n\n### Coordinate expansion\n\nTo calculate the actual height of the oceanic tides at a particular location is a difficult task. The answer depends on the details of the coastline in question (especially whether resonances are present), the depth of ocean as a function of position (bathymetry), the hydrodynamics of ocean-propagating long waves (shallow water waves modified by the Earth's rotation), and coefficients known as the Love numbers, used to extract the difference between the oceanic and solid crust tidal response. We surely do not know Devonian bathymetry or the Laurussian coastline in sufficient detail to perform a precise calculation of this type.\n\nFortunately, extreme accuracy is not required. A quantity known as the *equilibrium tide* will suffice. This is simply the displacement of a local equipotential surface caused by the introduction of the tidal potential, and is directly proportional to this potential. The displacement is calculated by setting the associated work done against the Earth's gravity equal to (minus) the disturbing tidal potential.\n\nLet $\\Phi_t$ be this potential and $g$ be Earth's gravitational field. The height $h$ of the tide is then simply $$h = -{\\Phi_t\\over g}.$$ The equilibrium tide is a reasonable measure of the scale of the response. In any case, we are less interested in $h$ as an actual height than in the fact that it is proportional to $\\Phi_t$, the driving potential of the problem. We are particularly interested in the temporal behaviour of $\\Phi_t$, and in effect use $h$ as a convenient normalization.\n\nThe calculation of $\\Phi_t$ is a standard exercise \\[4\\] and not difficult. We briefly review it here to keep the presentation self-contained. Let the centre-of-mass distance between the Earth and Moon be $r_m$. At the centre of the Earth we erect a $\\mbox{\\boldmath{$u$}} =(u,v,w)$ coordinate system, where $w$ is the distance along the line connecting the centres-of-mass, and $u$ and $v$ are orthogonal axes oriented so that $u,v,w$ is a standard right-handed Cartesian system. The potential at the coordinate location $u,v,w$ is $$\\Phi = - {GM_m\\over [(r_m+w)^2 +u^2+ v^2]^{1\/2}}$$ It is important to note that while the origin of our $uvw$ system is fixed to the Earth's centre, the axes are not fixed to the Earth. They are defined by the Moon's instantaneous location (see below).\n\nSince $r_m$ is large compared with $u$, $v$, or $w$, we expand $\\Phi$ through second order in small quantities: $$\\label{t1}\n\\Phi = {GM_m \\over r_m}\\left[ -1 +{w\\over r_m}+{1\\over 2r_m^2} (u^2+v^2-2w^2)\\right]$$ The first term is an additive constant contributing nothing to the force $- \\mbox{\\boldmath{$\\nabla$}} \\Phi$. The gradient of the second term returns the dominant $1\/r^2$ force along the line of centres-of-mass (the $w$ axis), and the third term is the desired leading order tidal potential $\\Phi_t$, giving rise to a force vector proportional to $(-u, -v, 2w)$. Relative to the Earth's centre, the $w$ force is repulsive and the $uv$ forces are attractive. 
We thus find $$\\label{YYY}\nh_m = {GM_m\\over 2gr_m^3} \\left( 2w^2-u^2-v^2\\right)$$\n\nThe more cumbersome calculation arises when one relates these $uvw$ coordinates, defined by and moving with the Moon's orbit, to coordinate axes *fixed* to the rotating Earth. Let us refer to these Earth body axes as $\\mbox{\\boldmath{$x_b$}} =(x_b, y_b, z_b)$, with their origin at the Earth's centre and the $x_b$-axis parallel to the North Celestial Pole. With $\\phi_m(t)$ equal to the azimuthal angle of the Moon in its own orbital plane, $i_m$ the Moon's orbital inclination relative to the equator, $\\alpha$ the shift in the azimuth of the line of nodes (i.e., the line formed by the intersection of the equatorial and lunar orbital planes), $\\Omega_E$ the Earth's diurnal angular rotation rate, and $T$ denoting transpose (so the vectors are in column form) the transformation is given by $$\\label{YY}\n \\mbox{\\boldmath{$u^T$}} = \\mbox{\\boldmath{$\\cal R_X$}} (\\phi_m) \\mbox{\\boldmath{$\\cal R_Y$}} (i_m) \\mbox{\\boldmath{$\\cal R_X$}} (\\alpha) \\mbox{\\boldmath{$\\cal R_X$}} (-\\Omega_Et) \\mbox{\\boldmath{$x^T_b$}}$$ where $\\mbox{\\boldmath{$\\cal R_X$}} (\\theta)$ is a $3\\times 3$ rotation matrix about the $x$ axis $$\\mbox{\\boldmath{$\\cal R_X$}} (\\theta) =\n \\begin{pmatrix} 1&0&0\\\\\n0&\\cos\\theta&\\sin\\theta \\\\\n0&-\\sin\\theta &\\cos\\theta \n\\end{pmatrix}$$ and $\\mbox{\\boldmath{$\\cal R_Y$}} (\\theta)$ is a $3\\times 3$ rotation matrix about the $y$ axis $$\\mbox{\\boldmath{$ \\cal R_Y$}} (\\theta) =\n \\begin{pmatrix} \\cos\\theta&0&\\sin\\theta\\\\0&1&0\\\\\n-\\sin \\theta &0&\\cos \\theta\\end{pmatrix}$$ (Note that $i_m$, $\\alpha$ and $\\phi_m$ are in effect Euler angles.) By substituting () into (), we may determine the potential at a particular terrestrial location. Exactly analogous formulae hold for the solar orbit.\n\n### Eccentricity\n\nA complication of practical importance is the eccentricity of the Moon's orbit. Its present value is \\[5\\]: $$\\epsilon_m = 0.0549.$$ The $1\/r_m^3$ behaviour of the tidal amplitude means that there is modulation from this effect alone. A more minor modification is that the temporal advancement of the azimuth in the orbital plane is no longer uniform. With $\\Omega_m$ equal to the average orbital frequency, the first order correction is: $$\\phi_m(t) = \\Omega_m t - \\varpi + 2\\epsilon_m\\sin(\\Omega_m t- \\varpi)$$ where $t$ is time and $\\varpi$ is the longitude of the pericentre. The current solar eccentricity is $\\epsilon_s=0.0167$ \\[5\\]. The separation $r_m$ is given by the usual formula \\[23\\]: $$r_m = {r_0\\over 1 + \\epsilon\\cos\\phi_m(t)}$$ where $r_0$ is the semilatus rectum of the orbit. Analogous formulae hold for the Sun.\n\n## Equilibrium tide\n\nIn figure (1), we show the total equilibrium tide, $h( \\mbox{\\boldmath{$x_b$}} )= h_m( \\mbox{\\boldmath{$x_b$}} )+h_S( \\mbox{\\boldmath{$x_b$}} )$. For this canonical case, we have chosen parameters for the solar orbit to be those of today. For the lunar orbit we use an estimated Devonian semilatus rectum of 365000 km \\[4,6\\]. (The Earth's rotational frequency has accordingly been increased to conserve total angular momentum.) We have used the current orbital inclinations and eccentricities. It is certainly possible that these have evolved for the Moon \\[7\\], and the Milankovitch cycles affect the Earth's orbit about the Sun, but our results are not sensitive to nonpathological values. 
The lunar and solar longitudes of the pericentres have been chosen arbitrarily ($0.63\\pi$ and $-0.17\\pi$ respectively); the precise values are inconsequential. The $\\alpha$ angle currently rotates with an 18.6-year period. Its precise value is also not critical, though small values show particularly sharp modulation. The canonical value is thus $\\alpha=0$. The latitude chosen, $35^\\circ$ S, is supposed to be representative of the late Devonian Laurussian coast. The most striking feature of $h$ is the presence of many different incommensurate frequencies, the most important of which are diurnal, semi-diurnal and twice the lunar orbital frequency. Note the asymmetry between the diurnal and semi-diurnal components, which results in a \"shading\" effect in the lower portion of the figure. (The asymmetry is caused by the orbital inclinations.) A higher time resolution detail is shown in figure (2) between 1000 and 2000 hours. The strength of the diurnal equilibrium high tide can vary by a factor of 5: it is highly modulated.\n\nThere is a self-evident repeating pattern of a build-up of the tidal strength to a maximum, corresponding to the deepest inland penetration of the sea line, followed by a recession, which can be sharp, in which the inland penetration diminishes each day. It is not the height of the spring tide (say, at 600 hours) that is relevant here. Rather, it is the rapid rate of change of the \"modulation envelope\" (at about 700 hours). Being a maximum, the spring tide changes little from one local peak to the next, but the changes of the high water mark just after the spring tide become treacherous. The waxing and waning shoreline would have left behind a series of tidal pools of increasing depth (or at least more frequent replenishment) with decreasing distance from the sea. Any aquatic tetrapod stranded in a shallow inland pool that was able to squirm to a nearby reservoir would clearly have been favoured over less mobile individuals. This selection pressure was likely to have presented itself relentlessly.\n\nThis should be contrasted with the equilibrium tide that would obtain if there were no (or a tidally unimportant) Moon (figure (3)). This tide would be one-third of the amplitude and much less variable. Local topography (and weather!) might still lead to the stranding of aquatic life forms of course, but probably not with an extensive network of pools leading back to the sea. Whether this would lead to a different course of evolution is a matter of speculation, but there is a real qualitative difference in tidal ponds and estuarial flooding between the two scenarios.\n\nFinally, in figure (4) we show for comparison the equilibrium tide for the case corresponding to a lunar angular diameter half that of the Sun's, without changing the Moon's average density. There is some modulation, but it is far more gentle than, and qualitatively different from, our canonical case. It is only when the angular sizes become close that we start to see highly sculpted modulation.\n\n# Discussion\n\n## Tidal receptivity\n\nAny discussion of continental reconstruction prior to 200 Ma must begin by noting that this is an unavoidably speculative undertaking.\n\nAt the time of the middle Silurian (430 Ma), the broad intercontinental seaway comprising the Rheic Ocean separated the (southern) Gondwanan and (northern) Laurussian land masses. 
The early Devonian marked the beginning of the closure of this seaway, a protracted geological event. The squeezing of the Rheic was not uniform along its length, however. Some reconstructions show the eastern Rheic squeezed down to brackish swampland, while in the west the Rheic maintained a very broad opening of several thousand kilometers before giving on to the great Panthalassa Ocean. More recent reconstructions suggest a western closure preceding the eastern \\[8\\]. In either scenario, a tapered, horn-shaped configuration of the intercontinental seaway was maintained throughout most of the Devonian, before the Rheic became closed off at the start of the Carboniferous, eventually uniting the Laurussian and Gondwanan land masses, forming the Pangaean supercontinent of the Permian.\n\nFrom the point of view of tidal dynamics, the Devonian is distinctive. The intercontinental seaway dividing Laurussia from Gondwana had the same generic form as the current Bay of Fundy, Bristol Channel, or northwest coast of Australia: a broad opening onto a deep ocean, tapering to shallower seas. These are all regions known for their large tidal range. The propagation speed of water waves is greater in deep water than in shallow seas, going roughly as the square root of the water's depth \\[9\\]. In the case of interest, this means that the shallows would not have responded as rapidly to tidal forcing as would the deeper water in the opening to the ocean. The resulting tidal surge propagates into the shallows, the flow convergent because of the tapering of the channel. The consequence of this behaviour is a greatly enhanced rise of the propagating tide. A correspondingly rapid egress occurs at low tide. This behaviour accounts for the famously large tidal ranges of the modern regions noted above.\n\nThe Devonian, we therefore expect, was likely to have seen dramatic tides along the Rheic Ocean, with both significant modulation of the high tide level and a great tidal range. It is not difficult to imagine the resulting rich network of coastal tidal pools that is likely to have been present. It is therefore highly suggestive that the earliest identified tetrapod trackways are thought to have originated in the Eifelian stage of the Devonian, significantly predating body fossil tetrapod remains, at a paleogeographic location corresponding to a tight constriction of the central Rheic seaway \\[10\\]. The southern tropical coasts of the Devonian Earth may well have been a massive swamp.\n\n## A brief overview of Devonian tetrapodomorph habitats\n\nOne of the richest sources of Devonian tetrapodomorph fossils is a huge area extending through parts of modern day Lithuania, Latvia, Estonia and Russia. In Devonian times, it was a massive delta region, with clear evidence of tidal influence in the form of currents and facies \\[11\\]. Yet more telling, from the point of view of this work, is the evidence of interruption of river current sediment deposition by tidal currents. The delta plain was graded. In the upper plain region, the evidence indicates sporadic interruption during spring tides; the lower plain region shows a much more regular pattern of tidal currents. (This is analogous to the modern day River Severn, which hosts tidal bores at spring tides \\[9\\].) The Baltic Devonian Delta thus preserves explicit evidence for the environmental influences of *modulated* tidal forcing.
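As a quick numerical illustration of the depth dependence invoked above (long surface waves travel at roughly $\sqrt{gh}$), the sketch below compares wave speeds for a deep ocean mouth and a shallow shelf; the depths are round illustrative numbers, not reconstructions of the Rheic bathymetry.

```python
import math

# Long-wave (shallow-water) phase speed c = sqrt(g*h): the deep opening of a
# tapering seaway responds to tidal forcing much faster than its shallows.
g = 9.81                            # m s^-2
for h in (4000.0, 200.0, 20.0):     # illustrative depths in metres
    print(f"depth {h:6.0f} m  ->  wave speed {math.sqrt(g * h):5.1f} m/s")
```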
The formation existed for some 30 million years, and for much of this time provided habitats for tetrapods and near-tetrapods \\[11, 24\\].\n\nMore generally, the distribution of Devonian tetrapodomorph fossils indicates a preference for marginal marine environments \\[2\\]; the *extent* of the distribution implies an ability to cross narrow seaways. It is of interest to consider also elpistostegids, such as *Panderichthys,* *Livoniana* and *Tiktaalik,* the fish group from which the tetrapods are thought to have evolved. While they retained paired fins (not limbs), they probably enjoyed some terrestrial maneuverability. Elpistostegid fossils extend further back in time than the earliest true tetrapod body fossils, but do not predate the earliest known tetrapod trackways \\[10\\]. Evidently, there was an extended period of coexistence between the two groups. From the mid-Devonian, fossils of the more primitive tetrapodomorph *Eusthenopteron* have been found in coastal marine sediments \\[2, 13\\] (Eastern Canada), as have *Panderichthys* and *Livoniana* fossils (Latvia) \\[12\\]; *Tiktaalik* remains are found in non-marine fluvial deposits (Ellesmere Island, Canada) from the same period \\[14\\]. The former environment would very likely have been subject to tidal influences; the latter is less certain but by no means impossible. The case for a tidally-influenced environment also applies to the tetrapods *Elginerpeton* \\[15\\] (non-marine fluvial deposits, Scotland) and *Tulerpeton* \\[16\\] (coastal lagoon, Russia). By contrast, *Ichthyostega* \\[17\\] and *Acanthostega* \\[18\\], the earliest known tetrapods with fairly complete body fossils (later than *Elginerpeton*, for which only fragmentary remains are known) are associated with a non-marine inland basin (Greenland), a more ambiguous tidal zone. Remains of the approximately contemporaneous tetrapod *Ventastega* \\[19\\] were found in river bed tidal deposits (Latvia). Note that *Ventastega,* *Panderichthys,* and *Livoniana* all left remains in the massive Baltic Devonian Delta system described above. Finally, the Eifelian trackway is an important datum, as it represents the earliest known evidence of tetrapod activity. The trackway was found in sediments associated with a coastal lagoon (Poland) \\[10\\].\n\nThe fossil record generally supports the notion that tidal modulations contributed to the shaping of the environment of tetrapodomorphs.\n\n# Conclusion\n\nThe very near angular sizes of the Moon and Sun as seen from the Earth are a mathematical by-product of the existence of half-meter, highly modulated, quasi-periodic equilibrium tides associated with a planet of order 1 AU from a Sun-like star. These conditions have been examined quantitatively in this work by explicit calculation of the equilibrium tides under a variety of different assumptions. It is probably rare for a planet to harbour highly complex macroscopic organisms (though hard data on this are of course scarce!), and it must also be unlikely for a planet to have a large moon nearly matching the central star in angular diameter. If these outlandish features are unrelated, why should the same planet just happen to have both? What is certain is that the Sun and Moon both are able to contribute significantly to the net tide, that this introduces very strong amplitude modulation effects that would otherwise be absent, and that early tetrapods would have had to cope with becoming stranded in a constantly changing network of shallow inland tidal pools.
The uncertainty is whether these ineluctable consequences of strong tidal modulation were essential, or merely incidental, to creating an evolutionary pathway leading to a contemplative species.\n\nIt may be just a coincidence that our planet has all these features in common for no particular reason. But this line of argument doesn't sit well, and is in any case totally sterile. In terms of the sheer number of phyla and diversity of species, the Earth's intertidal zones are among the richest habitats on the planet \\[2\\]. Despite their ostensible stranding hazards, these regions stimulated diversity, not avoidance. It seems reasonable to consider the notion that not just the existence of the tide, but its particular form, may have influenced the course of evolution, selecting for (among other things), efficient maneuverability and motility in networks of shallow tidal pools. A.S. Romer's classic vision of trapped tetrapods striving for accommodating pools is supported by the apparent coincidence in angular sizes of the Sun and Moon. The fact that the resulting tidal pools would not have been in arid zones\u2014one of the early criticisms of part of Romer's theory that has allowed it to fall into disfavour\u2014is irrelevant, if isolated shallow pools are common because of the tidal dynamics. As has been noted elsewhere \\[20\\], aridity is really not an issue. Puddles can drain or be rendered unsuitable for habitation under humid conditions as well. For Romer's purposes, what is really needed is a developed network of ponds, and this is what the dynamics of modulated tides provides. A striking contemporary example of pond-searching is evinced by the so-called \"climbing perches\", *Anabas testudineus* \\[21\\], air breathing fish who literally save themselves by terrestrial locomotion from one drying puddle to a deeper pool. These fish, whose behaviour constitutes a sort of Romerian ideal, inhabit wetlands in southeast Asia, hardly an arid climate. It is not a great conceptual leap to envision a similar survival imperative (with no contemporaneous land-based predators) in Devonian swamps.\n\nThere may also be interesting geophysical consequences of noncommensurate modulated tides acting over billions of years that have yet to be explored\u2014the Earth's Love numbers, by which one measures the solid planetary response are by no means tiny. Yet further afield, it is of interest to note that the search for the moons of extrasolar planets (\"exomoons\") is now in its infancy, and is expected to return significant results in the next few years \\[22\\]. The discussion of this paper suggests a special role for those moons providing a tidal force comparable to the planet's host star. For if it is necessary to have the sort of heavily modulated tides we experience on the Earth in order to influence a planet's evolutionary course in a manner constructive for evolving complex land-based organisms, the mystery of nearly equal angular sizes of the Sun and Moon would evaporate, rather like an inland Devonian tidal pool.\n\n# Acknowledgements\n\nThe author has benefitted greatly from interactions with many colleagues over the course of this work. He is deeply grateful to A. Lister for his willingness to guide an outsider through pertinent biological literature and for critical advice, and to P. Ahlberg for an exceptionally meticulous review and extended correspondence. He would also like to thank D. Lynden-Bell for drawing the author's attention to Newton's early work on tides; C. 
Sarazin for sending his unpublished manuscript; D. Balbus, R. Harvey and M. Rees for stimulating conversations; and J. Clack and R. Dawkins for their helpful advice and active encouragement. Support from the Royal Society in the form of a Wolfson Research Merit Award is gratefully acknowledged.","meta":{"dup_signals":{"dup_doc_count":18,"dup_dump_count":16,"dup_details":{"curated_sources":2,"2018-34":1,"2018-22":1,"2018-09":1,"2017-47":1,"2017-39":1,"2017-34":1,"2017-26":1,"2017-22":1,"2017-09":1,"2017-04":1,"2016-50":1,"2016-44":2,"2016-36":1,"2018-43":1,"2017-13":1}},"filename":"out\/1406.0323_extract_Balbus2.tex.md"},"subset":"arxiv"} +{"text":"author: Pushpa Khare \n*Physics Department, Utkal University, Bhubaneswar, 751004, India* \nand \nS. Ikeuchi \n*Department of Earth and Space Science, Osaka University, Toyonaka, Osaka 560, Japan*\ntitle: The ionization and abundance of C and Si in QSO absorbers\n\nC and Si in QSO absorbers\n\n# Introduction\n\nQuasar absorption lines have proved to be extremely useful probes of the high redshift Universe. In recent years important information has been obtained, among other things, about the chemical abundances in galaxies at high redshifts. Evidence for evolution in chemical abundances in these galaxies, the abundance increasing with decreasing redshift, has been obtained (i) directly, through abundance determination in damped Lyman alpha systems (DLAS) at different redshifts ( Pettini etal 1995; Lu etal 1996), and (ii) indirectly, from the variation of the number of C IV systems and Lyman limit systems (LLS) per unit redshift interval, per line of sight as a function of redshift ( Steidel, Sargent & Boksenberg, 1988; Khare & Rana, 1993). The absorption line studies have also given clues for understanding the history of stellar nucleosynthesis (Lauroesch etal 1996). Lu etal (1996), through an analysis of the observed abundances of 23 DLAS, have shown that the relative abundance patterns of several elements are consistent with their formation in Type II supernovae. In particular, they find evidence for \\[Si\/Fe\\] $\\simeq$ 0.4, which is very similar to the overabundance of Si found in the Galactic halo stars. Evidence for overabundance of Si with respect to C, by a factor of three, has also been found in Lyman alpha forest clouds (LAFCs) (Songaila & Cowie, 1996; hereafter SC96) as well as in some intervening and associated systems (Petitjean, Rauch & Carswell 1994, Savaglio etal 1996).\n\nThe shape of the UV background radiation is important in deciding the ionization balance of various elements in QSO absorbers. This radiation most likely originates in AGNs, however, a significant contribution from young star forming galaxies can not be ruled out (Bechtold etal 1987; Bajtlik, Duncan $\\&$ Ostriker, 1988, Madau & Shull, 1996). The shape of the background radiation not only depends on the relative contribution from these two sources but is also decided by the absorption by the material giving rise to QSO absorption lines. Recent observation of a significant Gunn-Peterson optical depth ($\\tau\\ge$``{=html}1.7 for redshift $>$ 3.0) at the wavelength of the Lyman alpha line of He II towards a QSO (Jackobsen etal 1994) indicates the presence of a large break by factors $>$ 25 (Madau & Meiksin, 1994) at the He II ionization edge. Similar break was also found necessary in order to explain the observations of column density of C and Si ions in the Lyman alpha forest lines (SC96) as well as in some heavy element systems (Savaglio etal 1996). 
The shape of the background UV field due to AGNs, at various redshifts, taking into account the absorption and reemission from the QSO absorbers has been recently determined by Haardt & Madau (1996).\n\nA lower He II optical depth ($\\tau\\simeq$``{=html}1.0) at somewhat lower redshift (z $\\simeq$ 2.5) has been observed by Davidsen etal ( 1996) towards HS1700+64, indicating a higher degree of ionization of He below z=3.0. Evidence for the lowering of the optical depth of the Universe to the He II ionizing photons below z=3.1 has also been obtained by SC96, from the observations of the column density ratios of Si IV and C IV in the LAFCs. They find an abrupt change in the values of this ratio at z = 3.1, the ratio being higher at higher redshifts. As the absorbers producing heavy element systems in the QSO spectra are also most likely ionized by the intergalactic UV background (Srianand & Khare, 1995), it may be of interest to look for signatures of a change in opacity of the Universe in the absorption line data of heavy element systems.\n\nA correlation with redshift of the ratio, of equivalent widths of lines of Si IV ($\\lambda$``{=html}1393) and C IV ($\\lambda$``{=html}1548) in the heavy element systems, observed at intermediate resolution, the ratio increasing with increasing redshift, was found by Bergeron & Ikeuchi (1990). High resolution data (FWHM $\\le$ 23 km s$^{-1}$) for the column densities of C IV and Si IV are now available for a number of QSOs. In this paper we present the analysis of these data in order to understand the ionization of these elements at different redshifts as well as to study the overabundance of Si with respect to C in these systems. In section 2 we investigate the presence of a correlation between ratios of column densities of Si IV and C IV and redshift. In section 3 we present the results of photoionization models and compare these with the observations. Conclusions are presented in section 4.\n\n# Correlation Analysis \n\nWe have collected from the literature (Cristiani etal 1995, Savaglio etal 1996, Petitjean, Rauch & Carswell 1994, Petitjean & Bergeron 1994, Giallongo etal 1993, Fan & Tytler 1994, Tripp, Lu & Savage 1996, Prochaska & Wolfe 1996, Wampler 1991, Wampler, Petitjean & Bergeron 1993) the column densities of C IV and Si IV observed in the heavy element systems towards several QSOs observed with FWHM between 8 to 23 km s$^{-1}$. The total sample consists of 30 non-DLA intervening systems including 10 for which only upper limits on the Si IV column density are available. In addition we have 23 components of intervening DLAS.\n\nWe note that the data are rather inhomogeneous in the sense that the observations have been made at somewhat different resolutions and with different S\/N values. This in principle can introduce an incompleteness in the sample as the lower limit for detection of lines for different QSOs may be different. This could affect the correlation analysis if the incompleteness introduces a redshift dependence of the minimum detectable column density, as a result, introducing an artificial redshift dependence of the column densities in the sample. In order to check this we performed Spearman rank correlation tests for redshift dependence of column densities of C IV for the non-DLA and DLA intervening systems separately. No such correlation was found, the chance probabilities being 0.53 and 0.38 respectively. 
A similar exercise for the column densities of Si IV also rules out any correlation with redshift, the chance probabilities being 0.84 and 0.74 respectively. We thus believe that our data, though inhomogeneous, are not biased and can be used for the correlation analysis.\n\nIn Fig. 1, we have plotted the ratios of column densities of Si IV and C IV, R, and the upper limits on R as a function of redshift for both categories of intervening absorption lines mentioned above. The non-DLA intervening systems are believed to be produced by the gas in the halos of absorbing galaxies and are most likely irradiated by intergalactic UV background radiation (Srianand & Khare, 1995). Most DLAS have components with high as well as low ionization states. These presumably arise in the halo and disk components of the absorbing galaxy (Wolfe etal 1996). Lu etal (1996) have pointed out that the Si IV and C IV absorption profiles of DLAS always resemble each other and have a different appearance from the low-ion absorption lines. They suggest that the bulk of the high-ions could arise from the halo clouds. The DLA components showing high-ions thus, very likely, belong to the same population as that of the non-DLA intervening systems. We can therefore combine the non-DLA intervening systems and the DLA components showing high-ions together, while looking for the redshift dependence of column density ratios of high-ions.\n\nSpearman rank correlation tests for both categories of intervening systems taken together as well as taken separately rule out any correlation between R and z, the values of chance probability being 0.61, 0.229 and 0.213 for all intervening systems, non-DLA intervening systems and DLA intervening systems respectively. The generalized Kendall test (Isobe, Feigelson & Nelson, 1986) applied to the 53 values, including upper limits, for all intervening systems and separately to the 30 values, including upper limits, for the non-DLA intervening systems gives probabilities of 0.684 and 0.174 respectively, which again indicates the absence of any correlation. We also tried to study the correlation for systems restricted to smaller redshift ranges. No correlation was, however, seen. KS tests rule out any abrupt change in R values at any particular redshift. Note that the Spearman rank correlation test applied to the values of R observed in LAFCs by SC96 gives chance probability equal to 4.27$\\times10^{-3}$, showing a good correlation.\n\n# Ionization State of the Absorbers and the Background Field\n\nFor 14 systems (7 DLA intervening, 5 non-DLA intervening and 2 associated systems) the column density of Si II is also known. We can thus try to investigate the shape of the UV radiation field as well as the Si overabundance with respect to C in these systems. The neutral hydrogen column density is not known for several of these systems. However, as will be seen below, the exact value of this column density is not very important. We have constructed a grid of photoionization models using the code 'cloudy 84' written by Prof. G. Ferland, for three values of neutral hydrogen column density, N$\\rm_{H\\;I}$, typical of DLAS (10$^{20}$ cm$^{-2}$), of LLS (3.0$\\times 10^{17}$ cm$^{-2}$) and of high ionization intervening systems ($\\le 10^{16}$ cm$^{-2}$), for different shapes of the UV radiation field. Heavy element abundance was assumed to be one thirtieth of the solar value, with solar values of relative abundances of different chemical elements.
The results, which we present in the form of column density ratios, however, are not sensitive to the heavy element abundances or to the particle density. The ratios are also independent of N$\\rm_{H\\; I}$ for N$\\rm_{H\\; I}\\;\\le\\; 10^{16}\\; cm^{-2}$. The ratios for these three values of N$\\rm_{H\\; I}$ bracket the ion ratios for all the intervening and associated systems considered here.\n\nHeavy element absorbers have complex structures and it has been argued that the lines of different ions may be produced in different regions of the absorbers and also it is possible that a hot collisionally ionized phase may exist in the absorbers (Giroux, Sutherland & Shull, 1994). However, here, we are restricting our analysis to high resolution observations. We can therefore assume that lines produced in individual clouds have been resolved and that the results of 'cloudy' models, which take into account the radiation transfer inside a cloud, can be applied to the column densities seen in individual clouds. Also, as we are restricting ourselves to the ions of Si II, Si IV, C II and C IV only, the contributions from the collisionally ionized phase may not be very important.\n\nIn Fig 2 we have plotted R vs. the column density ratios of Si II to Si IV, S, for the three values of neutral hydrogen column densities mentioned above. Fig 2a, 2b and 2c are for different shapes of the background UV radiation field; these are (a) power law with a slope of -1.5, which corresponds to typical unprocessed AGN spectra, (b) AGN background filtered through the intervening galactic and intergalactic absorbers as given by Haardt & Madau (1996, hereafter HM96) for the redshift of 2.5 and (c) power law with slope of -1.5 and with a cutoff at 4 Ryd, which will be the case if the He II ionization fronts of different QSOs have not yet overlapped at the redshift of the absorbers as suggested by SC96. We have also plotted the observed ratios with their error bars. The observed values have large errors. These are essentially a result of the fact that the lines are often saturated and profile fitting analysis can yield acceptable fits to the observed profiles over a range of column density values. However, as will be seen below, definite conclusions regarding the overabundance and the background can be drawn from the comparison of the observed values with the results of the photoionization models. As seen from the figure, the shape of the spectrum makes a significant difference to the lower values of the ratios. A pure power law produces very low values of R for S$\\;<\\;$0.1. Values of R for S$\\;<\\;$0.1 increase with increasing magnitude of the break at the He II ionization edge; however, the maximum value of R is around 0.1 for an infinite break. For larger values of S, the HM96 spectrum gives higher values of R than all other spectral shapes. This is due to the decrement in the HM96 spectrum shortward of the hydrogen ionization edge.
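For orientation, the sketch below builds the two simpler incident continuum shapes compared in Fig. 2 on a photon-energy grid: an unprocessed power law of slope $-1.5$ and the same power law truncated above 4 Ryd. The HM96 spectrum is tabulated and is not reproduced here; the grid and normalisation are arbitrary.

```python
import numpy as np

# Two of the ionizing continuum shapes used for the photoionization grid:
# an unfiltered AGN-like power law and the same slope with no flux above
# 4 Ryd (no He II ionizing photons).  Normalisation is arbitrary.
E = np.logspace(0.0, 2.0, 500)                 # photon energy [Ryd]

J_powerlaw = E ** -1.5                         # slope -1.5 power law
J_cutoff   = np.where(E < 4.0, E ** -1.5, 0.0) # same slope, cut off at 4 Ryd

# Hardness above the He II edge relative to 1 Ryd, the quantity that the
# models trade against the Si IV / C IV ratio.
idx = np.searchsorted(E, 4.0)
print("f(4 Ryd)/f(1 Ryd):", J_powerlaw[idx] / J_powerlaw[0], J_cutoff[idx] / J_cutoff[0])
```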
Steidel (1995), from his study of a large sample of galaxies associated with QSO absorption lines, finds these galaxies to be normal in the sense of their star formation rates. We have therefore taken the shape of the galactic field to be that given by Bruzual (1983). A pure galactic background cannot reproduce the observed ratios, as it produces much higher values of R than observed. A contribution to the radiation field from the intergalactic AGN background is necessary. In Figure 2d we have plotted the column density ratios calculated for the case when the flux due to galactic radiation is 90% of the total flux at 1 Ryd, the remaining 10% being contributed by the AGN background (HM96). The observed values of R for all the intervening systems are lower than the results of this model. We can thus conclude that a minimum of 10% (and possibly a much larger fraction, as will be seen below) of the background radiation incident on the intervening absorbers is contributed by the AGNs.\n\n## Non-DLA intervening systems\n\nAssuming the shape of the radiation field incident on the absorbers producing these lines to be that given by HM96, the observed ion ratios for two of the systems, with redshifts 2.1 and 2.77, are consistent with the results of the photoionization models. For the other three systems, one with redshift 2.1 and two with redshift 2.7, the observed values of R, even after allowing for the errors, are considerably smaller than the model predictions. Adding a galactic radiation field to that of HM96 can only increase R, thereby increasing the discrepancy. For other shapes of the background radiation, the observed ratios for four of the five systems are either consistent with or lower than the model values. For three of these systems we can definitely rule out overabundance of Si w.r.t. C for any shape and intensity of the background radiation. For these systems we can also rule out any contribution of the galactic field to the radiation field. This also holds for the remaining two systems if the shape of the background is that given by HM96. Overabundance of Si by factors $>$ 1.5 is necessary only for one system for other shapes of the background. Alternatively, a small contribution from the stellar sources is required.\n\n## DLA intervening systems\n\nFor two of the systems, with redshifts 3.08 and 3.39, we can rule out the overabundance of Si for any shape of the background. For the three shapes of the background radiation considered, the observed ion ratios for the other three systems, with redshifts between 1.7 and 3.38, are consistent with the results of the photoionization models, without requiring any overabundance of Si or any contribution from the galactic sources. For the remaining two systems, with redshifts 2.84 and 3.39, the observed values of R are higher than the model results for all three shapes of the background. An overabundance of Si by factors $>$ 1.5 or a significant contribution (Fig. 2d) from the galactic sources is needed.\n\n## Associated systems\n\nFor the two associated systems with redshifts around 3.0, S is smaller than 0.1 and R is larger than 0.3. Associated systems, being close to the QSO, are expected to be ionized by a pure power law radiation field. Such a field produces (Fig. 2a) smaller values of R. The errors in the observed ratios are, however, too large and within those error bars the ratios may be consistent with the model results.
However, though no overabundance is warranted by the observations due to the large uncertainty in the observed values, one cannot rule out the possibility of Si being overabundant w.r.t. C by a large ($>$10) factor in these systems.\n\n## Lyman alpha forest clouds\n\nIn Fig. 3 we have plotted the observed ratios, R, vs. the ratio of column densities of C II to C IV in Lyman alpha forest lines observed by SC96. The theoretical results, assuming the UV background near the Lyman alpha absorbers is purely from AGNs, for four spectral shapes, (i) a power law with a slope of -1.0, (ii) a power law with a slope of -1.5, (iii) a power law with a slope of -1.5 with a cutoff at 4 Ryd and (iv) the field of HM96, are also plotted for N$\\rm_{H\\;I}\\;=\\;10^{16}$ cm$^{-2}$. These results are independent of N$\\rm_{H\\;I}$ for N$\\rm_{H\\;I}\\;\\le\\;10^{16}$ cm$^{-2}$. No overabundance of Si is required for half of the LAFCs which are mostly at redshifts smaller than 3.1 (SC96). The column densities for these systems are consistent with a radiation field without any break at 4 Ryd. The amount of overabundance of Si needed for the other systems (mostly at redshifts larger than 3.1) does depend on the shape of the background and is smaller than a factor of 2 if a complete cutoff beyond 4 Ryd is assumed. We thus agree with the conclusion of SC96 that the data do indicate a break in the background at 4 Ryd at high redshifts, indicating a higher He II opacity at these redshifts.\n\n# Summary\n\nWe have analyzed high resolution observations of Si and C absorption lines in the QSO spectra in order to study the ionization state of the absorbers and its change with redshift as well as to understand the overabundance of Si w.r.t. C.\n\nThe column density ratios of Si IV to C IV in LAFCs show a correlation with redshift, as already noted by SC96. The observations of this ratio in 30 non-DLA intervening absorbers, as well as in 23 intervening DLAS, however, fail to show any correlation with redshift. The data do not show any abrupt change in the ionic ratios at any particular redshift, unlike the case of LAFCs noted by SC96. Thus there is no evidence for the change in the opacity of the Universe beyond 4 Ryd from the column density ratios of the heavy element line systems in the QSO spectra. It may be argued that the radiation field incident on the heavy element absorbers gets a significant contribution from local stellar sources, thereby diluting the effect which is seen in the LAFCs. Our analysis of ion ratios, presented in the previous section, argues against such a possibility. Srianand and Khare (1995) have also presented several arguments against such a possibility.\n\nObserved column density ratios of Si IV to C IV and of Si II to Si IV in several intervening and associated systems have been compared with the results of photoionization models with different shapes of the incident radiation. In spite of a large uncertainty in the observed values, definite conclusions can be drawn about the overabundance. We find that for all the non-DLA intervening systems the overabundance of Si can be ruled out if the shape of the background is as given by HM96. For other shapes of the background also, the overabundance can be ruled out for three of the five systems. The other two systems may allow an overabundance by factors $>$ 1.5 if no contribution from galactic sources to the background is assumed to be present. For two of the DLA systems also, overabundance can be ruled out for any shape of the background.
Three other DLA systems are consistent with the model predictions for the shape of the background given by HM96. The remaining two DLA systems, however, either require an overabundance by factors $>$ 1.5 or a significant contribution from the galactic radiation to the background. The possibility of overabundance by factors $>$ 10 cannot be ruled out for the associated systems. Lyman alpha forest clouds at high ($>$3) redshifts do indicate an overabundance of Si over C as well as a higher opacity of the Universe to radiation beyond the He II ionization edge at these redshifts.\n\nThe authors are grateful to the Japan Society for Promotion of Science and the Department of Science and Technology (Government of India) for sponsoring the collaboration. PK thanks the members of the Theoretical Astrophysics group of the University of Osaka for warm hospitality and R. Srianand for discussion. This work was partially supported by a grant (No. SP\/S2\/013\/93) by the Department of Science and Technology, Government of India.\n\n# References\n\nBahcall, J. N. et al 1993, ApJ, 87, 1\nBajtlik, S., Duncan, R. C. & Ostriker, J. P. 1988, ApJ, 327, 57\nBechtold, J., Weyman, R. J., Lin, Z. & Malkan, M. A. 1987, ApJ, 315, 180\nBergeron, J. & Ikeuchi, S. 1990, A&A, 235, 8\nBruzual, G. 1983, ApJS, 53, 497\nCristiani, S., D'Odorico, S., Fontana, A., Giallongo, E. & Savaglio, S. 1995, MNRAS, 273, 1016\nDavidsen, A. F., Kriss, G. A. & Zheng, W. 1996, Nature, 380, 47\nFan, X. & Tytler, D. 1994, ApJS, 94, 17\nGiallongo, E., Cristiani, S., Fontana, A. & Trevese, D. 1993, ApJ, 416, 137\nGiroux, M. L., Sutherland, R. S. & Shull, J. M. 1994, ApJ, 435, L101\nHaardt, F. & Madau, P. 1996, ApJ, 461, 20\nIsobe, Feigelson & Nelson 1986, ApJ, 306, 490\nJackobsen, P., Boksenberg, A., Deharveng, J. M., Greenfield, P., Jedrzejewski, R. & Paresce, F. 1994, Nature, 370, 35\nKhare, P. & Rana, N. C. 1993, Journal of Astron & Astrophys, 14, 83\nLauroesch, J. T., Truran, J. W., Welty, D. E. & York, D. G. 1996, PASP, 108, 641\nLu, L., Sargent, W. L. W., Barlow, T. A., Churchill, C. W. & Vogt, S. S. 1996, ApJS, 107, 475\nMadau, P. & Meiksin, A. 1994, ApJ, 433, L53\nMadau, P. & Shull, J. M. 1996, ApJ, 457, 551\nPetitjean, P. & Bergeron, J. 1994, A&A, 283, 759\nPetitjean, P., Rauch, M. & Carswell, R. F. 1994, A&A, 291, 29\nPettini, M., King, D. L., Smith, L. J. & Hunstead, R. W. 1995, in 'QSO Absorption Lines', Ed: G. Meylan (Springer), p. 71\nProchaska, J. X. & Wolfe, A. M. 1996, preprint\nSargent, W. L. W., Boksenberg, A. & Steidel, C. C. 1988, ApJS, 68, 539\nSavaglio, S., Cristiani, S., D'Odorico, S., Fontana, A., Giallongo, E. & Molaro, P. 1996, preprint\nSongaila, A. & Cowie, L. L. 1996, AJ, 112, 335\nSrianand, R. & Khare, P. 1995, ApJ, 444, 643\nSteidel, C. C. 1995, in 'QSO Absorption Lines', Ed: G. Meylan (Springer), 139\nSteidel, C. C., Sargent, W. L. W. & Boksenberg, A. 1988, ApJ, 333, L5\nStorrie-Lombardi, L. J., McMahon, M. J., Irwin, M. J. & Hazard, C. 1996, ApJ, 468, 121\nTripp, T. M., Lu, L. & Savage, B. 1996, ApJS, 102, 239\nWampler, E. J. 1991, ApJ, 368, 40\nWampler, E. J., Petitjean, P. & Bergeron, J. 1993, A&A, 273, 15\nWolfe, A., Fan, X., Tytler, D., Vogt, S., Keane, M. J. & Lanzetta, K. M. 1994, ApJ, 435, 101\n\n**Figure Captions**\n\nFig. 1: Plot of the ratio of column densities of Si IV and C IV with redshift. Triangles and circles represent non-DLA and DLA intervening systems respectively. Upper limits are indicated by T. \nFig. 2: Theoretical and observed column density ratios of Si IV and C IV vs that of Si II and Si IV.
The dotted line is for N$_{\\rm\nH\\;I}\\le10^{16}$ cm$^{-2}$, solid line is for N$_{\\rm H\\;I}=\n3.0\\times10^{17}$ cm$^{-2}$ and the dash-dotted line is for N$_{\\rm\nH\\;I}= 10^{20}$ cm$^{-2}$. Circles correspond to the DLAS, squares correspond to the associated systems and triangles correspond to the non-DLA intervening systems. The shape of the UV continuum for Fig 2a, 2b and 2c is power law with slope =-1.5, Haardt & Madau (96) spectra for redshift of 2.5 and power law with slope =-1.5, with a cutoff at 4 Ryd, respectively. Fig 2d shows the results for a combined background field due to galaxies (90$\\%$ at 1 Ryd) and that given by Haardt & Madau (96) at the redshift of 2.5 (10$\\%$ at 1 Ryd). Horizontal dashed lines indicate the range of observed values. \nFig. 3: Theoretical and observed (SC96) column density ratios of Si IV and C IV vs that of C II and C IV for Lyman alpha forest lines for different shapes of UV background for N$_{\\rm H\\;I}\\le10^{16}$ cm$^{-2}$. Solid line is for Haardt & Madau (96) spectra for redshift of 2.5; dashed line is for power law with slope of -1.0; dotted line is for power law with slope of -1.5 and dash-dotted line is for power law with slope of -1.5 with a cutoff at 4 Ryd.","meta":{"dup_signals":{"dup_doc_count":12,"dup_dump_count":3,"dup_details":{"curated_sources":1,"2017-13":1,"unknown":10}},"filename":"out\/astro-ph9712168_extract_write2.tex.md"},"subset":"arxiv"} +{"text":"author: F. Ligni\u00e8res ; P. Petit ; T. B\u00f6hm ; M. Auri\u00e8re\ndate: Received March 6, 2009\/ Accepted April 29, 2009\nsubtitle: Towards a new class of magnetic A-type stars[^1]\ntitle: First evidence of a magnetic field on Vega\n\nWe report the detection of a magnetic field on Vega through spectropolarimetric observations. We acquired 257 Stokes V, high signal-to-noise and high-resolution echelle spectra during four consecutive nights with the NARVAL spectropolarimeter at the 2-m Telescope Bernard Lyot of Observatoire du Pic du Midi (France). A circularly polarized signal in line profiles is unambiguously detected after combining the contribution of about $1200$ spectral lines for each spectrum and summing the signal over the 257 spectra. Due to the low amplitude of the polarized signal, various tests have been performed to discard the possibility of a spurious polarized signal. They all point towards a stellar origin of the polarized signal. Interpreting this polarization as a Zeeman signature leads to a value of $-0.6 \\pm 0.3$\u00a0G for the disk-averaged line-of-sight component of the surface magnetic field. This is the first strong evidence of a magnetic field in an A-type star which is not an Ap chemically peculiar star. Moreover, this longitudinal magnetic field is smaller by about two orders of magnitude than the longitudinal magnetic field (taken at its maximum phase) of the most weakly magnetic Ap stars. Magnetic fields similar to the Vega magnetic field could be present but still undetected in many other A-type stars.\n\n# Introduction\n\nDespite recent progress in stellar magnetic field measurements, spectropolarimetric surveys of early-type stars indicate that photospheric magnetic fields can only be detected in a small fraction of these stars. Without direct constraints on the magnetic field of the vast majority of early-type stars, our understanding of the role of magnetic fields on the structure and evolution of intermediate mass and massive stars is necessarily limited. 
In this Letter, we report the detection of a magnetic field on Vega and argue that Vega is probably the first member of a new class of yet undetected magnetic A-type stars.\n\nThe proportion of stars hosting a detectable magnetic field is more firmly established for main sequence stars of intermediate mass (late-B and A-type stars) than for massive stars (early B and O-type stars) or intermediate mass pre-main-sequence stars (Herbig Ae\/Be stars). Magnetic A-type stars are indeed identified with the group of Ap-Bp chemically peculiar stars (excluding the subgroup of HgMn stars) since all known magnetic A-type stars belong to this group and, when observed with sufficient precision, Ap\/Bp stars always show photospheric magnetic fields . The incidence of the Ap\/Bp chemical peculiarity among A-type stars then leads to a $5-10\\%$ estimate of magnetic stars . Note that magnetic field detections have been reported for a few Am and HgMn stars but remain debated because they could not be confirmed by further investigations . Thanks to new high-resolution spectropolarimeters, magnetic fields are now also detected in pre-main-sequence stars and in massive stars. According to recent surveys, the fraction of magnetic stars among Herbig Ae\/Be stars is $7 \\%$ , while the rate of detection for early B and O-type stars is also small .\n\nThe magnetic fields of Ap\/Bp stars are characterized by a strong dipolar component, a long-term stability and dipolar strengths ranging from a lower limit of about 300 Gauss to tens of kilo-Gauss . Thus, if a population of weak dipolar-like fields corresponding to a weak field continuation of Ap\/Bp stars exists, a longitudinal component of the magnetic field in the range of $10$ to $100$ Gauss should have been detected by recent spectropolarimetric surveys of non Ap\/Bp stars . Instead, these surveys suggest there is a dichotomy between the population of strong, stable and dipolar-like magnetic fields corresponding to the Ap\/Bp stars and the rest of A-type stars, whose magnetic properties remain unknown, except that their surface longitudinal magnetic field should be very small.\n\nVega is well suited for the search of magnetic fields among A-type non Ap\/Bp stars. Its brightness and its low equatorial projected velocity ensure high signal-to-noise V spectra, while the number of spectral lines of an A0-type star is important enough to allow a very large multiplex gain by gathering the polarimetric signal of all the lines using a cross-correlation technique (Least-Squares Deconvolution, , LSD hereafter). Another advantage of Vega's brightness is that its fundamental parameters are well known relative to other more anonymous stars . In particular, spectral analysis and interferometric observations have shown that Vega is a rapidly rotating star seen nearly pole-on .\n\nVega was already included in a previous spectropolarimetric survey of A-type non Ap\/Bp stars using NARVAL at the Telescope Bernard Lyot of Pic du Midi, but the analysis of its 11 Stokes V spectra was not conclusive. Here we present the results of a four night observing run fully dedicated to Vega, during which more than 300 Stokes V spectra were obtained. Summing the information over a large number of these spectra leads to an unambiguous detection of a polarized signal.\n\nThe observations are described and interpreted in the next section. 
In section 3, the origin of a $\\sim \\!1$\u00a0G longitudinal magnetic field in an A-type non Ap\/Bp star is discussed and some of the perspectives opened by this field detection are considered. Our conclusions are given in section 4.\n\n# Instrumental setup, data reduction and multi-line extraction of Zeeman signatures\n\nThe observing material was gathered at the Telescope Bernard Lyot (Observatoire du Pic du Midi, France) using the NARVAL spectropolarimeter. As a strict copy of ESPaDOnS , NARVAL spectra provide a simultaneous coverage of the whole optical domain (from 370 nm to 1,000 nm) at high spectral resolution ($R=$``{=html}65,000). The instrument consists of a bench-mounted spectrograph and a Cassegrain-mounted polarimeter, with an optical fiber carrying the light between the two units. A series of 3 Fresnel rhombs (two half-wave rhombs that can rotate about the optical axis and one fixed quarter-wave rhomb) are used in the polarimeter, followed by a Wollaston prism which splits the incident light into two beams, respectively containing light linearly polarized perpendicular\/parallel to the axis of the prism. Each Stokes V spectrum is obtained from a combination of four sub-exposures taken with the half-wave rhombs oriented at different azimuths . The data reduction is performed by Libre-Esprit, a dedicated, fully automated software described by .\n\nThe data were collected during 4 consecutive nights in the summer of 2008, from July 25 to July 28, using 6\u00a0sec integration times for each sub-exposure of the Stokes V sequences (except the first two sequences of the run, for which exposure times of 15 and 10 sec were adopted). We retained the 257 Stokes V spectra with a typical peak signal-to-noise ratio (S\/N hereafter) of 1,500 per 1.8 \u00a0km\u2006s$^{-1}$, around $\\lambda = 600$\u00a0nm.\n\nFor each spectrum, both Stokes I & V parameters were processed using the LSD cross-correlation method . Using a line mask computed from a stellar atmospheric model with T$_{\\rm eff}$=10,000\u00a0K and $\\log g$ = 4.0 , we calculated LSD line profiles from a total of 1,200 photospheric lines. The multiplex gain in the S\/N from the raw spectra to the LSD mean profiles is about 30, reducing the noise level of the cross-correlation profiles to between $\\sigma = 3$ and $7\\times 10^{-5}I_{c}$, where $\\sigma$ is the standard deviation of the noise and $I_{c}$ stands for the intensity of continuum. Since no signature was observed above noise level in individual Stokes V LSD profiles, we then calculated an average of the 257 profiles, where each profile is weighted by the square of its S\/N. In this global profile, the noise in the Stokes V parameter is further decreased to $\\sigma = 2\\times 10^{-6}I_{c}$ (Fig. ) and a signature is now observed in circular polarization with an amplitude of $10^{-5}I_{c}$ (that is $5$ times the noise level). Running a $\\chi^2$ test on the signature , we found a reduced $\\chi^2$ of 3.5, which corresponds to a false-alarm probability of $3\\times 10^{-11}$.\n\nVarious tests have been performed to make sure that the polarization is of stellar origin and not due to an artefact of the instrument or the reduction process. This is particularly important in the present case, as the amplitude of the polarized signal is the lowest detected by NARVAL to date. A strong test to discard the possibility of a spurious signal is the \"null\" profile calculated from a different combination of the four sub-exposures constituting the polarimetric sequence . 
As shown in Fig. , no detectable counterpart of the Stokes V signal is seen in the \"null\" profile (note that a similar conclusion is reached by calculating another null profile (not shown here) from another possible combination of the sub-exposures). We then checked that the signal possesses the expected properties of a stellar polarized signal. First, we split the whole time-series into two independent subsets, containing respectively the first and second half of the observing run, both subsets having equivalent signal-to-noise ratios. As can be seen in Fig. a, the polarized signal is present in both sets, the false-alarm probabilities based on the $\\chi^2$ test being $5\\times 10^{-6}$ and $6\\times 10^{-3}$, respectively. Second, we built two line-lists from the atmospheric model, containing all spectral lines with Land\u00e9 factors respectively higher and lower than $g_c = 1.2$. The Stokes V profiles computed from the two line-lists are plotted in Fig. b. As expected, the amplitude of the polarized signal appears higher for the high Land\u00e9 factor lines than for the low Land\u00e9 factor lines. The peak-to-peak amplitudes of the polarized signals taken inside the line profile are respectively $2.8\n\\times 10^{-5}I_c$ and $2.0\\times 10^{-5}I_c$. Their difference slightly exceeds the noise level ($\\sigma = 6\\times 10^{-6}I_c$ and $5\\times 10^{-6}I_c$, respectively) and their ratio is roughly consistent with the ratio of the average Land\u00e9 factors of the line-lists ($g_m=1.51$ and $g_m=0.94$ respectively), taking into account that the corresponding Stokes I LSD profiles have a similar depth, within 10%. Third, we checked that the signal was still consistently recovered when other ways of splitting our line-list were considered (low versus high excitation potential or low versus high wavelengths). Finally, we tested the effect of changing the line mask. Indeed, spectroscopic and interferometric studies of Vega have shown that its surface temperature is inhomogeneous, due to the gravity darkening effect induced by its rapid rotation. According to the latest model based on interferometric results , the equatorial velocity of Vega is 274\u00a0km\u2006s$^{-1}$ and the effective temperature and the gravity decrease from 9988\u00a0K and $\\log g = 4.07$ at the pole to 7600\u00a0K and $\\log g = 3.5$ at the equator. There is a significant discrepancy with the spectroscopic analysis of , who find that the polar to equator temperature difference is only $\\sim\\!900$\u00a0K while the equatorial velocity is reduced to 175\u00a0km\u2006s$^{-1}$. Taking the extreme case of a T$_{\\rm eff}$=7500\u00a0K and $\\log g$ = 3.5 stellar atmosphere, we computed the LSD line profiles (not shown here) for the associated line mask and this time obtained a false-alarm probability of $10^{-1}$. The polarized signal is consistent with the one derived from our previous (hotter) atmospheric model but the low significance of the detection is due to the inadequacy of the line mask that results in a much higher noise level.\n\nThese tests strongly support that the signal is of stellar origin and therefore that Vega possesses a magnetic field.\n\nThe circularly polarized signal has the typical anti-symmetric shape of a Zeeman signature (Fig. ). However, as compared to the width of the Stokes I line profile, it only shows up within a limited range of radial velocities about the line-center. This suggests that the magnetic field distribution is axisymmetric and confined in the polar region. 
However, a more detailed analysis will be needed to specify the surface field distribution of Vega. First, as the 257 spectra at our disposal cover a range of rotation phases, Zeeman signatures from non-axisymmetric magnetic features, if any, are mostly averaged out from the time-averaged line profile. Second, due to Vega's temperature inhomogeneities, the weak line profiles range from flat-bottomed to \"V\" shapes . As the LSD profile is obtained by assuming that all lines have a common profile, its interpretation in terms of the surface field distribution is not straightforward in the present context.\n\nWe use the center-of-gravity method to estimate the longitudinal magnetic field $B_l$ : $$B_l = -2.14\\times10^{11}\\frac{\\int vV(v)dv}{\\lambda_0 g_m c\\int(I_c - I(v))dv}$$ where $v$ (km\u2006s$^{-1}$) is the radial velocity, $\\lambda_0$ (nm) the mean wavelength of the line-list used to compute the LSD profiles, $g_m$ the mean Land\u00e9 factor and $c$ (km\u2006s$^{-1}$) the light velocity. The integration limits cover a $\\pm30$\u00a0km\u2006s$^{-1}$ velocity range around the line centroid. Using this equation, we obtain $B_l = -0.6 \\pm 0.3$\u00a0G.\n\n# Discussion\n\nThree basic features distinguish the present detection from previous measurements of magnetic fields in main-sequence stars of intermediate mass : (i) It is the first time a magnetic field is detected in an A-type star which is not an Ap\/Bp chemically peculiar star (if we exclude the debated field detections in a few Am and HgMn stars discussed in ). (ii) The longitudinal magnetic field of Vega is smaller by about two orders of magnitude than the field of the most weakly magnetic Ap\/Bp stars. Indeed, the longitudinal field of a 300\u00a0G dipolar field aligned with the stellar rotation axis and viewed pole-on is close to 100\u00a0G, that is about two orders of magnitude larger than the $0.6$\u00a0G field of Vega. The longitudinal component of a dipolar field actually depends on its angle with respect to the rotation axis. But, whatever this angle, the amplitude of the circular polarization in the LSD Stokes V profile of a 300\u00a0G dipolar field will be more than one order of magnitude larger than that of Vega. (iii) The LSD Stokes V profile of Vega is also qualitatively distinct from LSD Stokes V profiles of Ap\/Bp stars since the polarized signal of Vega is concentrated in the weakly Doppler shifted regions of the projected stellar disk.\n\nThese marked observational differences between Vega and the Ap\/Bp magnetic stars suggest that we should consider Vega as a new type of magnetic A-type star. As there is no reason to believe Vega is unique among A-type stars, Vega should be considered as the first member of a new class of magnetic A-type stars.\n\nThe existence of such a new class might help understand some otherwise puzzling observations of the pre-main-sequence and post-main-sequence intermediate mass stars. The Herbig Ae\/Be stars show a strong activity which has led investigators to suspect a widespread presence of magnetic fields in these stars . Nevertheless, these magnetic fields have not been found, since only a small fraction of Herbig Ae\/Be stars appears to host one . Note that a similar discrepancy between widespread activity and a small fraction of detected fields exists in OB stars . A new class of magnetic A-type stars would shed new light on this issue. 
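Returning to the centre-of-gravity estimate given earlier in this section, the formula is straightforward to evaluate once LSD profiles are in hand. The sketch below is illustrative only: the Stokes I and V arrays are synthetic placeholders, and the mean wavelength and Landé factor are nominal values (the text quotes $g_m$ only for the high- and low-Landé sub-lists). The S/N$^2$-weighted co-addition of individual profiles described earlier is included for completeness.

```python
import numpy as np

# Sketch of the centre-of-gravity longitudinal field estimate (equation above),
# applied to an S/N^2-weighted mean of LSD Stokes V profiles.  Profile shapes,
# lambda0 and g_m below are placeholders, not the actual Vega data.
C_KMS = 2.998e5                                    # speed of light [km/s]

def weighted_mean(profiles, snr):
    """Co-add LSD profiles weighted by the square of their peak S/N."""
    return np.average(profiles, axis=0, weights=np.asarray(snr, float) ** 2)

def longitudinal_field(v, V, I, Ic=1.0, lambda0=540.0, g_m=1.2):
    """B_l in gauss from Stokes I and V on a uniform velocity grid v [km/s]."""
    dv  = v[1] - v[0]
    num = np.sum(v * V) * dv
    den = lambda0 * g_m * C_KMS * np.sum(Ic - I) * dv
    return -2.14e11 * num / den

# Toy usage with synthetic, noise-free profiles over the +/- 30 km/s window.
v = np.linspace(-30.0, 30.0, 121)
I = 1.0 - 0.4 * np.exp(-(v / 12.0) ** 2)              # fake mean intensity profile
V = -1e-5 * (v / 12.0) * np.exp(-(v / 12.0) ** 2)     # fake antisymmetric signature
stack = np.vstack([V, V, V])                          # stand-in for many spectra
Vmean = weighted_mean(stack, [1500, 1400, 1600])
print(f"B_l ~ {longitudinal_field(v, Vmean, I):+.2f} G")
```

With a Zeeman-like signature of amplitude $10^{-5}I_c$, this toy case returns a sub-gauss field, the same order as the value reported for Vega.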
Indeed, the progenitors of these magnetic A-type stars could be the Herbig Ae\/Be stars where magnetic fields have not been detected yet, this non-detections being compatible with the fact that magnetic fields of the same intensity are much more difficult to detect in the faint Herbig Ae\/Be stars than in a bright A-type star like Vega. On the post-main-sequence side, the study of the white dwarf magnetic fields suggests that Ap\/Bp stars are not sufficient to be the progenitors of magnetic white dwarfs . A new class of magnetic A-type stars might also help to resolve this issue.\n\nThe consequences for Vega itself should also be considered. Its magnetic field could indeed trigger active phenomena in its atmosphere. Signs of spectroscopic variability have been reported, but have not been confirmed since . On the other hand, despite its status as a photometric standard, a photometric variability of $1-2 \\%$ with occasional excursions to $4\\%$ has been reported . This might be produced by photospheric temperature inhomogeneities induced by its magnetic field. However, because of the near pole-on configuration of Vega, the variability would rather be due to intrinsic changes of the magnetic field than to rotational modulation.\n\nThe origin of Vega's magnetic field could be attributed to one of the three mechanisms generally invoked for early-type stars, namely (i) the fossil field hypothesis, (ii) the envelope dynamo, (iii) the convective core dynamo. Let us first consider the fossil field hypothesis, whereby the ISM magnetic field is confined and amplified during stellar formation. It is regarded as the most consistent explanation of the magnetic fields observed in Ap\/Bp stars , but, as proposed by , it could also account for another population of stars hosting weak longitudinal magnetic fields. Their argument is based on the fact that large-scale, organized magnetic field configurations are subjected to a pinch-type instability driven by differential rotation when the magnetic field drops below a critical value. Consequently, for a distribution of large scale organized fields of different strengths issued from the star formation process, the instability would produce a magnetic dichotomy between a population of strong and stable large scale fields like in Ap\/Bp stars and another population where the destabilized configuration is now structured at small length scales, thus resulting in a weak longitudinal field. A simple estimate of the critical field has been found to be consistent with the reported lower limit of Ap\/Bp stars. Here, both the detection of a very small longitudinal field in Vega and the gap between this field and the lowest magnetic fields of Ap\/Bp stars reinforce this scenario. Nevertheless, this scenario is not complete as it does not say what happens to the destabilized field configuration, which could either decay or be regenerated by a dynamo.\n\nThe magnetic field of Vega could indeed be generated by an envelope dynamo where the energy source is the rotation of the star. Following , the dynamo loop initiated by the differential rotation could be closed by the pinch-type instability mentioned above. This interesting possibility has been investigated by numerical simulations in a simplified cylindrical configuration and in a solar context , leading to opposite outcomes. Simulations in more realistic conditions for A-type stars are clearly needed to test this envelope dynamo. 
An important issue concerns the origin of the envelope differential rotation, which is a basic ingredient of this dynamo but which is not forced by a strong stellar wind in A-type stars, contrary to what is expected to occur in OB and Herbig Ae\/Be stars . The third possibility is a dynamo in the convection core. While magnetic fields are likely to be generated there, an efficient mechanism to transport it throughout the radiative envelope to the star surface has not yet clearly been identified . We note that in the three cases considered, the magnetic field is expected to be structured at small scales and also probably variable in time. This calls for a spectropolarimetric monitoring of Vega that will investigate the surface distribution and the temporal variation of its magnetic field.\n\n# Conclusion\n\nA circularly polarized signal has been detected by accumulating a large number of high-quality echelle spectra of Vega with the NARVAL spectropolarimeter. The data analysis strongly supports a stellar origin of the polarization and thus the presence of a magnetic field on Vega. Due to the unprecedented low level of the detected polarization, new independent measurements will still be important to confirm this result. A magnetic field on Vega suggests that other A-type stars which are not Ap\/Bp stars host weak magnetic fields and that their study can shed a new light on early-type star magnetism. While a spectropolarimetric survey of bright A-type stars will be necessary to find these stars, a detailed investigation of Vega's magnetic field should also provide clues to the origin of this magnetism.\n\n[^1]: Based on observations at Telescope Bernard Lyot of Observatoire du Pic du Midi, CNRS\/INSU and Universit\u00e9 de Toulouse, France","meta":{"dup_signals":{"dup_doc_count":11},"filename":"out\/0903.1247_extract_Lignieres.tex.md"},"subset":"arxiv"} +{"text":"author: Donald Arseneau [^1]\ndate: 2013-09-16\ntitle: `url.sty` version 3.4\n\nThe package defines a form of command that allows linebreaks at certain characters or combinations of characters, accepts reconfiguration, and can usually be used in the argument to another command. It is intended for formatting email addresses, hypertext links, directories\/paths, etc., which normally have no spaces. The font used may be selected using the command, and new url-like commands may be defined using . This package does not make hyper-links! For that purpose, see the hyperref package (or some other deprecated ones).\n\n| Usage | Conditions |\n|:---|:---|\n| `\\url{ }` | The argument must not contain unbalanced braces. If used in the argument to another command, the argument cannot contain any \"`%`\",+\", \"`#`\", or \"`^^`+\", or end with \"`\\`+\". |\n| `\\url| |` | where \"`|`\" is any character not used in the argument and not \"`{`\" or a space. The same restrictions apply as above except that the argument may contain unbalanced braces. |\n| | for the defined-url \"\"; such a command can be used anywhere, no matter what characters it contains. |\n\nThe \"\" command is fragile, and its argument is likely to be very fragile, but a defined-url is robust.\n\n# Package options\n\nPackage Option: `obeyspaces`\n\nOrdinarily, all spaces are ignored in the url-text. The \"`[obeyspaces]`\" option allows spaces, but may introduce spurious spaces when a url containing \"\" characters is given in the argument to another command. 
So if you need to obey spaces you can give the \"`[obeyspaces]`\" option, and if you need both spaces and backslashes, use a defined-url.\n\nPackage Option: `hyphens`\n\nOrdinarily, breaks are not allowed after \"`-`\" characters because this leads to confusion. (Is the \"`-`\" part of the address or just a hyphen?) The package option \"`[hyphens]`\" allows breaks after explicit hyphen characters. The command will **never ever** hyphenate words.\n\nPackage Option: `spaces`\n\nLikewise, given the \"`[obeyspaces]`\" option, breaks are not usually allowed after the spaces, but if you give the options \"`[obeyspaces,spaces]`\", `\\url` will allow breaks at those spaces.\n\n> Note that it seems logical to allow the sole option \"`[spaces]`\" to let input spaces indicate break points, but not to display them in the output. This would be easy to implement, but is left out to avoid(?)\u00a0confusion.\n\nPackage Option: `lowtilde`\n\nNormal treatment of the `~` character is to use the font's \"`~`\" character, if it has one (or claims to). Otherwise, the character is faked using a mathematical \"`\\sim`\". The \"`[lowtilde]`\" option causes a faked character to be used always (and a bit lower than usual).\n\nPackage Option: `allowmove`\n\nThis option suppresses the test for `\\url` being used in a so-called moving argument (check \"fragile command\"). Using it will enable `\\url` to function in more contexts, but when it does fail, the error message may be incomprehensible.\n\n# Defining a defined-url\n\nTake for example the email address \"firstname.lastname@example.com\", which could not be given (using \"`\\url{ }`\" or \"`\\url| |`\") in a caption or parbox if it contained characters like a percent sign. This address can be predefined with\n\n> `\\urldef{\\myself}\\url{email@example.com}` or \n> `\\urldef{\\myself}\\url|firstname.lastname@example.com|`\n\nand then you may use \"`\\myself`\" instead of \"`\\url{firstname.lastname@example.com}`\" in an argument, and even in a moving argument like a caption because a defined-url is robust.\n\n# Style\n\nYou can switch the style of printing using \"`\\urlstyle{`$xx$`}`\", where \"$xx$\" can be any defined style. The pre-defined styles are \"`tt`\", \"`rm`\", \"`sf`\" and \"`same`\" which all allow the same linebreaks but use different fonts\u00a0\u2014 the first three select a specific font and the \"`same`\" style uses the current text font. You can define your own styles with different fonts and\/or line-breaking by following the explanations below. The \"`\\url`\" command follows whatever the currently-set style dictates.\n\n# Alternate commands\n\nIt may be desirable to have different things treated differently, each in a predefined style; e.g., if you want directory paths to always be in typewriter and email addresses to be roman, then you would define new url-like commands as follows:\n\n> `\\DeclareUrlCommand\\command{settings}` \n> `\\DeclareUrlCommand\\email{\\urlstyle{rm}}` \n> `\\DeclareUrlCommand\\directory{\\urlstyle{tt}}`.\n\nIn fact, this example is exactly the definition which might be pre-defined by the package. Furthermore, the basic `\\url` is defined with\n\n> `\\DeclareUrlCommand\\url{}`,\n\nwithout any *settings*, so it uses whatever `\\urlstyle` and other settings are already in effect.\n\nYou can make a defined-url for these other styles, using the usual `\\urldef` command as in this example:\n\n> `\\urldef{\\myself}{\\email{firstname.lastname@example.com}}`\n\nwhich makes `\\myself` act like `\\email{firstname.lastname@example.com}`, if the `\\email` command is defined as above. 
The command would then be robust.\n\n# Defining styles\n\nBefore describing how to customize the printing style, it is best to mention something about the unusual implementation of . Although the material is textual in nature, and the font specification required is a text-font command, the text is actually typeset in *math* mode. This allows the context-sensitive linebreaking, but also accounts for the default behavior of ignoring spaces. (Maybe that underlying design will eventually change.) Now on to defining styles.\n\nTo change the font or the list of characters that allow linebreaks, you could redefine the commands , , , etc., directly in the document, but it is better to define a new 'url-style' (following the example of and ) which defines all of , , , , and .\n\n## Changing font\n\nThe command selects the font. The definition of done by the pre-defined styles varies to cope with a variety of LaTeX font selection schemes, but it could be as simple as `\\def\\UrlFont{\\tt}`. Depending on the font selected, some characters may need to be defined in the list because many fonts don't contain all the standard input characters.\n\n## Changing linebreaks\n\nThe list of characters after which line-breaks are permitted is given by the two commands (list macros) and . They consist of repeating for each relevant character `c`.\n\nThe differences are that 'BigBreaks' typically have a lower penalty (more easily chosen) and do not break within a repeating sequence (e.g., \"`DEC::NODE`\"). (For gurus: 'BigBreaks' are treated as mathrels while 'Breaks' are mathbins; see *The TeXbook*, p.\u2006170.) The result is that a series of consecutive 'BigBreak' characters will break at the end and only at the end; a series of 'Break' characters will break after the first and after every following *pair*; there will be no break between a 'Break' character and a following 'BigBreak' char; breaks are permitted when a 'BigBreak' character is followed by 'Break' or any other char. In the case of `http:\/\/` it doesn't matter whether `:` is a 'Break' or 'BigBreak'\u00a0\u2014 the breaks are the same in either case; but for (now ancient) *DECnet* addresses using `::` it was important to prevent breaks *between* the colons, and that is why colons are 'BigBreaks'. (The only other 'BigBreak' character is, optionally, the hyphen; slashes are regular 'Break's.)\n\nIt is possible for characters to prevent breaks after the next following character (this is used for parentheses). Specify these in .\n\nYou can allow some spacing around the breakable characters by assigning\n\n> `\\Urlmuskip = 0mu plus 1mu`\n\n(with `mu` units because of math mode). You can change the penalties used for BigBreaks and Breaks by assigning\n\n> `\\mathchardef\\UrlBreakPenalty=100` \n> `\\mathchardef\\UrlBigBreakPenalty=100`\n\nThe default penalties are and . These have such odd non-LaTeX syntax because I don't expect people to need to change them often. (The `\\mathchardef` does not relate to math mode; it is only a way to store a number without consuming registers.)\n\n### Arbitrary character actions\n\nYou can do arbitrarily complex things with characters by specifying their definition(s) in . This makes them 'active' in math mode (mathcode `\"8000`). 
The format for setting each special character `c` is `\\do\\c{...}`, where the braces hold its definition; other definitions not following this style can also be included.\n\nHere is an example to make \"`!`\"\u00a0inside `\\url` force a line break instead of being treated verbatim (it uses LaTeX's `\\g@addto@macro`):\n\n> `\\makeatletter \\g@addto@macro\\UrlSpecials{\\do\\!{\\newline}}`\n\nHere is another overly-complicated example to put extra flexible muglue around each \"`\/`\" character, except when followed by another \"`\/`\", as in \"`http:\/\/`\", where extra spacing looks poor.\n\n> % what we'll insert before and after each (lone) slash:\n> \\newmuskip\\Urlslashmuskip \n> \\Urlslashmuskip=2mu plus2mu minus2mu\n>\n> % change what \/ does:\n> \\g@addto@macro\\UrlSpecials{\\do\\\/{\\Urlspaceyslash}}\n>\n> % need to look ahead:\n> \\def\\Urlspaceyslash{\\futurelet\\Urlssnext\\finishUrlspaceyslash}\n>\n> \\def\\finishUrlspaceyslash{%\n> \\mskip\\Urlslashmuskip % extra space before\n> \\mathchar8239 % \"202f, i.e., binary op, \\fam, \/ char\n> % if we see \/\/, eliminate the extra space to taste:\n> \\ifx\\Urlssnext\/\\mskip-\\Urlslashmuskip\n> \\else\\mskip\\Urlslashmuskip \\fi\n> }\n\nIf this sounds confusing\u00a0\u2026 well, it is! But I hope you won't need to redefine breakpoints\u00a0\u2014 the default assignments seem to work well for a wide variety of applications. If you do need to make changes, you can test for breakpoints using regular math mode and the characters \"`+=(a`\".\n\n# Yet more flexibility\n\nYou can also customize the presentation of verbatim text by defining `\\UrlLeft` and\/or `\\UrlRight`. An example for ISO formatting of urls surrounded by `< >` is\n\n> \\DeclareUrlCommand\\url{\\def\\UrlLeft{<url: }\\def\\UrlRight{>}%\n> \\urlstyle{tt}}\n\nThe meanings of `\\UrlLeft` and `\\UrlRight` are *not* reproduced verbatim. This lets you use formatting commands there, but you must be careful not to use TeX's special characters (`\\^_%~#$&{}` etc.)\u00a0improperly. You can also define `\\UrlLeft` to reprocess the verbatim text, but the format of the definition is special:\n\n> `\\def\\UrlLeft#1\\UrlRight{`\u2006\u2026\u00a0do things with `#1` \u2026\u2006`}`\n\nYes, that is `#1` followed by `\\UrlRight`, then the definition (a TeX\u00a0macro with delimited arguments). For example, to produce a hyperTeX\u00a0hypertext link:\n\n> `\\def\\UrlLeft#1\\UrlRight{%` \n> ` \\special{html:<a href=\"#1\">}#1\\special{html:<\/a>}}`\n\nUsing this technique, a url command can provide a convenient interface for performing various operations on verbatim text. You don't even need to print out the argument! For greatest efficiency in such obscure applications, you can define a null url-style where all the lists of break characters and specials are empty.\n\nPlease note that this method is *not* how the hyperref package manages urls for its `\\url` command, even though it makes use of this package. Instead, hyperref's `\\url` reads its argument in a less-verbatim manner than described above, produces its hyperlink, and invokes the formatting machinery described herein to typeset the text.\n\n[^1]: Thanks to Robin Fairbairns for documentation conversion!","meta":{"dup_signals":{"dup_doc_count":16,"dup_dump_count":2,"dup_details":{"curated_sources":8,"unknown":8}},"filename":"out\/1407.7041_extract_url.tex.md"},"subset":"arxiv"} +{"text":"abstract: We introduce new algebro-topological invariants of directed networks, based on the topological construction of the directed clique complex. 
The shape of the underlying directed graph is encoded in a way that can be studied mathematically to obtain network invariants such as the Euler characteristic and the Betti numbers.\n .\n Two different cases illustrate the application of the Euler characteristic. We investigate how the evolution of a Boolean recurrent artificial neural network is influenced by its topology in a dynamics involving pruning and strengthening of the connections, and show that the topological features of the directed clique complex influence the dynamical evolution of the network. The second application considers the directed clique complex in a broader framework, to define an invariant of directed networks, the network degree invariant, which is constructed by computing the topological invariant on a sequence of sub-networks filtered by the minimum in- or out-degree of the nodes.\n .\n The application of the Euler characteristic presented here can be extended to any directed network and provides a new method for the assessment of specific functional features associated with the network topology.\naddress: Neuroheuristic Research Group, Faculty of Business and Economics (HEC), University of Lausanne, Quartier Dorigny, 1015 Lausanne, Switzerland\nauthor: Paolo Masulli and Alessandro E. P. Villa\ntitle: The topology of the directed clique complex as a network invariant\n\n# Background\n\nThe main interest of algebraic topology is to study and understand the functional properties of spatial structures. Algebro-topological constructions have been applied successfully in the field of data science through the framework of persistent homology, which has proved to be a powerful tool to understand the inner structure of a data set by representing it as a sequence of topological spaces. A network is a set of points satisfying precise properties of connectedness, which can be used to define a class of topological spaces. Network theory aims to understand and describe the shape and the structure of networks, and the application of the tools developed within the framework of algebraic topology can provide new insights into network properties in several research fields.\n\nThe directed clique complex is a rigorous way to encode the topological features of a network in the mathematical framework of a simplicial complex, allowing the construction of a class of invariants which have only recently been applied in the context of network theory . Active nodes are those nodes whose state depends on a set of precise rules determined by the network topology and dynamics. In a highly interconnected network of such nodes, the activity of each node is necessarily related to the combined activity of the afferent nodes transmitted by the connecting edges. Due to the presence of reciprocal connections between certain nodes, re-entrant activity occurs within such a network. Hence, selected pathways through the network may emerge because of dynamical processes that may produce activity-dependent connection pruning. The overall goal of these studies is to understand the properties of a network given the topology described by its link structure.\n\nNeuronal networks are complex systems characterized by coupled nonlinear dynamics. This topic is a long-standing scientific program in mathematics and physics . In general, the synchronization of two systems means that their time evolution is periodic, with the same period and, perhaps, the same phase . 
This notion of synchronization is not sufficient in a context where the systems are excited by non-periodic signals, representing their complex environment. Synchronization of chaotic systems has been discovered and since then it has become an important research topic in mathematics , physics and engineering . In interconnected cell assemblies embedded in a recurrent neural network, some ordered sequences of intervals within spike trains of individual neurons, and across spike trains recorded from different neurons, will recur whenever an identical stimulus is presented. Such recurring, ordered, and precise interspike interval relationships are referred to as \"preferred firing sequences\". One such example can be represented by brain circuits shaped by developmental and learning processes . The application of tools from algebraic topology to the study of these systems and networks will be of great use for determining deterministic chaotic behavior in experimental data and develop biologically relevant neural network models that do not wipe out temporal information\n\nIn the current study we introduce a mathematical object, called directed clique complex, encoding the link structure of networks in which the edges (or links) have a given orientation. This object is a simplicial complex that can be studied with the techniques of algebraic topology to obtain invariants such as the Euler characteristic and the Betti numbers. We propose general constructions valid for any directed network, but we present an application to evolvable Boolean recurrent neural networks with convergent\/divergent layered structure with an embedded dynamics of synaptic plasticity. The Euler characteristic, which is defined given the network connectivity, is computed during the network evolution. We show evidence that this topological invariant predicts how the network is going to evolve under the effect of the pruning dynamics. Despite being just a toy-example of the dynamics observed in biological neuronal networks, we suggest that algebraic topology can be used to investigate the properties of more refined biologically-inspired models and their temporal patterns. We show also that, for a directed network, the Euler characteristic computed on a sequence of networks generated by filtrating its nodes by in- and out-degrees can provide a general metric helpful for a network classification. Hence, the topological invariants computed for each network in the filtration give a sequence of numbers that may be interpreted as a fingerprint of the complete network.\n\n## Acknowledgements\n\nThis work was partially supported by the Swiss National Science Foundation grant CR13I1-138032. We wish to thank Prof. Kathryn Hess, Prof. Ran Levi and Pawe\u0142 D\u0142otko for suggesting the idea of using oriented cliques in order to define simplices in a directed graph, as it was shown in the talk given at the SIAM DS15 Conference in Snowbird (USA) in May 2015.\n\n# Results and discussion\n\n## Dynamics of artificial neural networks\n\nWe considered a directed graph representing a simplified model of feedforward neural network with convergent\/divergent layered structure with few embedded recurrent connections. In this model, the nodes represent individual neurons and the connections between them are oriented edges with a weight given by the connection strength. 
We have computed the Euler characteristic and its variation during the evolution of such networks, both for the entirety of the nodes in the network and for the sub-network induced by the nodes that are active at each time step, in order to detect how the structure changes as the network evolves. The Betti numbers and their variation during the network evolution were also computed, but we do not discuss this topological measurement further. Notice that the activation of the networks follows a very simple dynamics. The nodes of the input layer are activated at regular time intervals, which is not meant to be biologically realistic, but has been adopted to favor the simplicity of the model. It was shown elsewhere that a stable activity level in a network like this could be achieved only with an appropriate balance of excitatory and inhibitory connections. The networks studied here are oversimplified and formed only by excitatory nodes. We selected the ranges of the parameters such that the simulations maintained a level of activity for $100$ steps with neither saturation nor extinction of the activity, thus suggesting that connection pruning enabled topological changes. However, notice that even within selected areas of the parameter space of the simulations we observed that the activity level tended either to increase towards paroxysmal activation (i.e., saturation) or to decrease towards complete inactivation (i.e., extinction).\n\nWe observed that the Euler characteristic of the entire network could detect the pruning activity during the neural network evolution (Figure\u00a0A). In particular, the step-to-step variation of the Euler characteristic matched the number of connections pruned over time. When we considered only the sub-network of the active nodes, we observed that the Euler characteristic decreased or increased if the number of active nodes increased or decreased, respectively (Figure\u00a0B). Thus, the Euler characteristic is a good estimator of the activity level within the network. These results confirmed that the Euler characteristic gives a precise measure of the topological changes in a network associated with connection pruning for the entire network and associated with activation patterns for the active sub-network.\n\nThe type of dynamics governing the neural network evolution and the structure of the directed clique complex of that network at the very beginning of the simulation (i.e. before the occurrence of connection pruning) were correlated. This was possibly the most unexpected and significant result regarding the dynamics of artificial neural networks. In the simulations leading to the activation of at least $5\\%$ of the nodes, the average number of active units was correlated with the number of simplices, in the directed clique complex, of dimension two (Pearson correlation coefficient $r_{(370)} = 0.560$, $p < 0.001$) and dimension three ($r_{(370)} = 0.445$, $p < 0.001$). This may appear surprising because the topology of the directed clique complex of a network *a priori* ignores any dynamics of pruning, the evolution of the network topology and how this is going to influence the activation level. However, the rationale is that directed cliques are fully connected sub-networks, i.e. sub-networks with an initial and a final node that are connected in the highest possible number of ways. Hence, a high number of directed cliques leads to a higher chance of propagation of the activation through the network. 
Notice the fact that it is essential to consider here only *directed* cliques, because the activation of a node occurs only if the connected upstream nodes are activated. Activation is indeed a phenomenon happening in a directional way prescribed by the connectivity pattern. The invariant presented should also be considered as a complementary measurement of complexity for the assessment of the computational power of Boolean recurrent neural networks .\n\n## Network filtrations and invariants\n\nThe in- and out-degrees of nodes are important factors in shaping the network topology. We applied our topological construction to devise invariants for any directed networks. We compute the Euler characteristic on a sequence of sub-networks defined by *directed degeneracy* of their nodes, or in other words the in- and out-degrees of the vertices, as described in detail in the Methods section. Two separate sequences are defined because in- and out-degrees represent different aspects in the network connectivity. The sequences of sub-networks of a given network are a *filtration* in the sense that each network appearing in the sequence is contained in all those that follow. The values of the Euler characteristic for each network of the sequences gives rise to two separate sequences of integers that give a measure of the shape and the topology of the complete network. We propose this invariant to describe general directed networks.\n\nThe sequences of the Euler characteristic of the in-degree and the out-degree filtrations are plotted as a function of the normalized minimum degree of vertices for representative types of a scale-free network (SF) , a random network (RN) , and a small world network (SW) (Figure\u00a0). This normalization is necessary to compare networks of different sizes at each filtration level posing the maximum degree of the vertices in the network to 1, as described in the Methods section. Each network type was simulated 50 times using different random seeds. In the case of a SF network the values of the Euler characteristic of the in-degree filtration (dotted red line) is always larger than the curve of the out-degree filtration (solid blue line, Figure\u00a0A). Moreover, for SF networks the curve of the in-degree increases sharply at near $0.4$ and reaches a maximum value of the Euler characteristic approximately at $0.6$ of the normalized maximum value of the vertex minimum degree. The out-degree curve increases monotonically after this level of vertex minimum degree but does not reach the in-degree curve. In the case of a RN network the curves of in- and out-degrees overlap at all levels of the filtration (Figure\u00a0B). It is interesting to notice that the maximum value of the Euler characteristic is observed for the smallest values of the vertex minimum degree. Then, for RN networks, both curves decrease to a minimum at approximately $0.8$ of the normalized maximum value of the vertex minimum degree, followed by a monotonic increase. For a SW network both curves of in- and out-degrees start from the minimal value of the Euler characteristic with the least vertex minimum degree, followed by a non monotonic increase and a tendency of overlap between the two curves (Figure\u00a0C). 
The monotonicity of the curves and the differences between in- and out-degree filtration differ greatly for the three types of networks, thus suggesting that this invariant is a good descriptor of network topology.\n\nA distinct topological invariant defined for non-directed networks, referred to the Betti curves, was recently proposed by following the idea of filtering the network by the weight of connections. This invariant appears well suited for continuously distributed connection weights, for instance when the weights are related to the distances of points and represent a symmetric relation between nodes. In the case of directed networks with modifiable values of connection weights restricted to a limited set , the network dynamics evolves towards a bimodal distribution of the connection weights densely grouped near the minimum and maximum values of the range. This is a general behaviour in neuronal networks . In this kind of networks, filtering the network by the connection weights following is not suitable, because most connections would have the same weight. Our approach for directed networks is to filter the connections by the in- and out-degrees separately in order to measure how the nodes of each degree shape the topology of the network. It is important to point out that other methods are based on spectral properties of the adjacency matrix and therefore only make sense if all the transformations of the network data are linear .\n\nThe results presented here open the way to further applications of the topological invariants. The analytic study of the values of the Euler characteristic in the filtrations framework can provide a metric of similarity between networks which is only dependent on their internal topology, thus allowing the application of clustering algorithms for the detection of distinct functional classes of networks. The study of brain complex networks in clinical neuroscience offers as a particularly promising field of application of the new topological invariant, as suggested by other studies using different techniques to the same aim . Another promising application is the study of the temporal dynamics in neural activity. The finding of precise and repeating firing sequences in experimental and simulated spike train recordings has been discussed with respect to the existence of synfire chains or chaotic attractors . In both cases the underlying network structure is assumed to be a directed graph. This hypothesis together with the assumption of spike-timing modifiable connections provide a rational basis for the application of topological invariants towards understanding the association between topological structures and neural coding.\n\n# Conclusions\n\nWe have developed new invariants for directed networks using techniques derived from algebraic topology, showing that this subject provides a very useful set of tools for understanding networks and their functional and dynamical properties. Simple invariants such as the Euler characteristic can already detect the changes in the network topology. The promising results shown here are a contribution to the application of algebraic topology to the study of more complex networks and their dynamics, including models of neuronal networks that are biologically inspired. 
We believe that the framework presented here may open the way to many computational applications to unveil data structures in data and network sciences.\n\n# Methods\n\n## Graphs and clique complexes\n\nAn *abstract oriented simplicial complex* $K$ is the data of a set $K_0$ of vertices and sets $K_n$ of lists $\\sigma = (x_0, \\dots, x_n)$ of elements of $K_0$ (called *$n$-simplices*), for $n \\geq 1$, with the property that, if $\\sigma = (x_0, \\dots, x_n)$ belongs to $K_n$, then any sublist $(x_{i_0}, \\dots, x_{i_k})$ of $\\sigma$ belongs to $K_k$. The sublists of $\\sigma$ are called *faces*.\n\nWe consider a finite directed weighted graph $G = (V,E)$ with vertex set $V$ and edge set $E$ with no self-loops and no double edges, and denote with $N$ the cardinality of $V$. Associated to $G$, we can construct its *(directed) clique complex* $K(G)$, which is the directed simplicial complex given by $K(G)_0 = V$ and $$\\label{eq:directed_clique_complex}\nK(G)_n = \\{(v_0, \\dots, v_n) \\colon (v_i, v_j) \\in E \\textrm{ for all } i < j \\} \\quad \\textrm{ for } n \\geq 1.$$ In other words, an $n$-simplex contained in $K(G)_n$ is a directed $(n+1)$-clique, that is, a completely connected directed subgraph with $n+1$ vertices. Notice that an $n$-simplex is thought of as an object of dimension $n$ and consists of $n+1$ vertices.\n\nBy definition, a directed clique (or a simplex in our complex) is a fully-connected directed sub-network (Figure ); this means that the nodes are ordered and there is one source and one sink in the sub-network, and the presence of the directed clique in the network means that the former is connected to the latter in all the possible ways within the sub-network.\n\n## The topological invariants\n\nThe directed clique complex is the basic topological object that allows us to introduce invariants of the graph: the *Euler characteristic* of the directed clique complex $K(G)$ of $G$ is the integer defined by $$\\chi(K(G)) = \\sum_{n=0}^N (-1)^n \\ \\vert K(G)_n \\vert,$$ or in other words the alternating sum of the number of simplices that are present in each dimension.\n\nLet us now consider, for each $n$, the vector space $\\mathbf{Z}\/2\\langle K(G)_n\\rangle$ given by the linear combinations of $n$-simplices with coefficients in the field of two elements $\\mathbf{Z}\/2$. We can define the *boundary maps* $\\partial_n \\colon \\mathbf{Z}\/2\\langle K(G)_n\\rangle \\to \\mathbf{Z}\/2\\langle K(G)_{n-1}\\rangle$ which are given by mapping each simplex to the sum of its faces. Then we can define the quantities: $$\\beta_n(K(G)) = \\mathrm{dim}(\\mathrm{ker}\\ \\partial_n) - \\mathrm{dim}(\\mathrm{Im}\\ \\partial_{n+1}),$$ given by the difference of the dimension of the space of the $n$-simplices whose boundary is zero and the dimension of the space of boundaries of $(n+1)$-simplices. It can be checked that, if we apply a boundary map twice on any linear combination of simplices, we get zero, and so the quantities $\\beta_n(K(G))$ are always non-negative integers. These classically known numbers take the name of *Betti numbers* and, for each $n$, the $n$-th Betti number $\\beta_n(K(G))$ corresponds to the dimension of the $n$-th homology space (with $\\mathbf{Z}\/2$-coefficients) of the clique complex $K(G)$ of $G$.\n\nThe intuitive sense of this construction is to count the \"holes\" that remain in the graph after we have filled all the directed cliques. In particular, the $n$-th Betti number counts the $n$-dimensional holes. 
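To make these definitions concrete, the following is a minimal brute-force sketch in Python; it is not the authors' `igraph`-based implementation described later in the Methods, and the small vertex and edge sets are hypothetical, chosen only for illustration. It enumerates the directed cliques of a toy directed graph and evaluates the Euler characteristic as the alternating sum of simplex counts.

```python
from itertools import combinations, permutations

def directed_cliques(vertices, edges):
    # Dimension n holds the n-simplices: ordered tuples (v0, ..., vn) such that
    # (vi, vj) is an edge for every i < j, i.e. directed (n+1)-cliques.
    edge_set = set(edges)
    simplices = {0: {(v,) for v in vertices}}
    n = 1
    while True:
        found = set()
        for subset in combinations(vertices, n + 1):
            for order in permutations(subset):
                if all((order[i], order[j]) in edge_set
                       for i in range(n + 1) for j in range(i + 1, n + 1)):
                    found.add(order)
        if not found:
            break
        simplices[n] = found
        n += 1
    return simplices

def euler_characteristic(simplices):
    # Alternating sum of the number of simplices in each dimension.
    return sum((-1) ** n * len(s) for n, s in simplices.items())

# Toy example: one directed 3-clique (1 -> 2, 2 -> 3, 1 -> 3) plus an edge 3 -> 4.
V = [1, 2, 3, 4]
E = [(1, 2), (2, 3), (1, 3), (3, 4)]
K = directed_cliques(V, E)
print({n: len(s) for n, s in K.items()})  # {0: 4, 1: 4, 2: 1}
print(euler_characteristic(K))            # 4 - 4 + 1 = 1
```

For networks of realistic size this brute-force enumeration is impractical, which is why the implementation subsection below relies on a dedicated clique-finding algorithm adapted to directed cliques.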
One can also see that $\\beta_0$ counts the number of connected components of the graph. A classical result in topology shows a connection between the Euler characteristic and the Betti numbers, expressed by the identity: $\\chi(K(G)) = \\sum_{n=0}^N (-1)^n \\beta_n(K(G))$, which gives another way of computing the Euler characteristic.\n\nNotice that the construction of the directed clique complex of a given network $G$ does not involve any choice, and therefore, since the Betti numbers and the Euler characteristic are well-defined quantities for a simplicial complex , our constructions produce quantities that are well-defined for the network $G$, and we shall refer to them simply as the Euler characteristic and the Betti numbers of $G$.\n\n## Boolean recurrent artificial neural networks\n\n### Network structure and dynamics\n\nThe artificial recurrent neural networks consist of a finite number of Boolean neurons organized in layers with a convergent\/divergent connection structure . The networks are composed of $50$ layers, each with $10$ Boolean neurons. The first layer is the input layer and all its $10$ neurons are activated at the same time at a fixed frequency of $0.1$, i.e. every $10$ time steps of the history. Each neuron in a layer is connected to a randomly and uniformly distributed number $f$ of target neurons belonging to the next downstream layer. The networks include *recurrence* in their structure, meaning that a small fraction $g$ of the neurons appears in two different layers. This means that a neuron $k$ that is also identified as neuron $l$ is characterized by the union of the input connections of neurons $k$ and $l$, as well as by the union of their respective efferent projections.\n\nThe state $S_i(t)$ of a neuron $i$ takes values $0$ (inactive) or $1$ (active) and all Boolean neurons are set inactive at the beginning of the simulation. The state $S_i(t)$ is a function of its activation variable $V_i(t)$ and a threshold $\\theta$, such that $S_i(t) = \\mathcal{H}(V_i(t)-\\theta)$. $\\mathcal{H}$ is the Heaviside function, $\\mathcal{H}(x)=0 : x<0$, $\\mathcal{H}(x)=1 : x\\ge0$. At each time step, the value $V_i(t)$ of the activation variable of the $i^{th}$ neuron is calculated such that $V_i(t+1) = \\sum_{j} S_j(t) w_{ji}(t)$, where $w_{ji}(t)$ are the weights of the directed connections from any $j^{th}$ neuron projecting to neuron $i$. The connection weights can only take four values, i.e. $w_1 = 0.1$, $w_2 = 0.2$, $w_3 = 0.4$, $w_4 = 0.8$. At the beginning of the simulations all connection weights are randomly uniformly distributed among the four possible values. The weights of all the neurons are computed synchronously at each time step.\n\nThe network dynamics implements activity-dependent plasticity of the connection weights. Whenever the activation of a connection does not lead to the activation of its target neuron during an interval lasting $a$ time steps, its weight is weakened to the level immediately lower than the current one. Whenever the weight of a connection reaches the lowest level, the connection is removed from the network . Thus, the pruning of the connections drives the selection of the most significant ones and changes the topology of the network. Similarly, whenever a connection with a weight $w_m$ is activated for at least $m+1$ consecutive time steps, the connection weight is strengthened to the level immediately higher than the current one. 
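As a complement to the verbal description of the update rule, here is a schematic Python sketch (not the authors' simulation code); the weight matrix, threshold and initial state are hypothetical values chosen only to show one possible propagation of activity, and the plasticity and pruning rules are omitted.

```python
import numpy as np

# Schematic update of the Boolean network described above:
# S_i(t+1) = H(V_i(t+1) - theta) with V_i(t+1) = sum_j S_j(t) * w_ji(t).
# W[j, i] holds w_ji; a zero entry encodes the absence of a connection.

def update_step(S, W, theta):
    V_next = S @ W                        # V_i(t+1) = sum_j S_j(t) * W[j, i]
    return (V_next >= theta).astype(int)  # Heaviside threshold: active iff V >= theta

# Hypothetical 3-neuron chain 0 -> 1 -> 2 plus a weaker shortcut 0 -> 2.
W = np.array([[0.0, 0.8, 0.4],
              [0.0, 0.0, 0.8],
              [0.0, 0.0, 0.0]])
S = np.array([1, 0, 0])                   # only the input neuron is active
for t in range(3):
    S = update_step(S, W, theta=0.8)
    print(t + 1, S.tolist())              # activity travels down the chain and dies out
```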
Hence, the parameter space of our simulations was defined by four parameters: the number $f$ of layer-to-layer downstream connections in the range $3$\u2013$10$ by steps of 1, the small fraction $g$ of the neurons appearing in two different layers in the range $1$\u2013$3$% by steps of 1%, the threshold of activation $\\theta$ in the range $0.8$\u2013$1.4$ by steps of 0.1, and the interval $a$ of the weakening dynamics of the connections in the range $7$\u2013$9$ by steps of 1.\n\n### Implementation of the simulations\n\nThe simulation software was implemented from scratch in Python. The network evolved with the dynamics explained above and the program computed the directed clique complex at each change of the network topology. For the entire network, the directed clique complex was computed each time the connectivity changed because of pruning. For the sub-network of the active nodes, the computation was carried out at each step of the simulation.\n\nThe computed directed clique complexes were used to compute the Euler characteristic both for the complexes representing the entire network and for the sub-complexes of the active nodes. To compute the directed clique complex of a network we used the implementation of the algorithm of in the `igraph` Python package , adapted to find directed cliques. The experiments were run in parallel on several CPUs using the tool GNU Parallel .\n\n## Network filtrations\n\n### Network structures\n\nMany essential topological features of a network are determined by the distribution of edges over its graph. Different types of distributions result in different types of networks. For instance, pure random networks (RN) are formed assuming that edges in the network are independent of each other and they are equally likely to occur . For RN we have used the algorithm implemented in the Python package 'NetworkX' (https:\/\/networkx.github.io\/) with the function 'erdos_renyi_graph' with parameters number of nodes $n=40$ and the probability for edge creation $p=0.2$.\n\nThese simple construction assumptions are generally not followed in networks obtained experimentally from ecological or gene systems, telecommunication networks or the Internet which are characterized by short average path lengths and high clustering, resulting in the so called small-world topology (SW) . For SW we used the same Python package 'NetworkX' with the function 'newman_watts_strogatz_graph' with parameters number of nodes $n=40$ and the number of connected neighbours in ring topology $k=20$ and the probability for adding a new edge $p=0.4$.\n\nOther real-world networks such as brain, social networks, power grids and transportation networks exhibit topologies where more connected nodes, hubs, are more likely to receive new edges. The presence of these hubs and a power law distribution for the degree of the nodes defines scale-free networks (SF) . For SF we used the same Python package 'NetworkX' with the function 'barabasi_albert_graph' with parameters number of nodes $n=40$ and the number of edges to attach from a new node $m=10$.\n\n### Network degree invariant\n\nGiven a directed network $G$, we define two filtrations by sub-networks (ordered sequences of networks in which each network is a sub-network of all the following ones) using the in- and out-degree of nodes. 
Let $ODF(G)$ be the out-degree filtration of $G$: the $i$-th network $ODF(G)_i$ in this filtration is the sub-network of $G$ induced by the vertices having out-degree at least $i$ and all the target nodes of their outgoing connections. In the same way we define the in-degree filtration $IDF(G)$: the $i$-th network $IDF(G)_i$ in this filtration is the sub-network of $G$ induced by the vertices having in-degree at least $i$ and all the source nodes of their incoming connections.\n\nWe computed the Euler characteristic for each network of the two filtrations, obtaining two sequences of integers, which are plotted to display a measure of the network topology, as a function of the degree levels of the filtration, normalized by the maximum degree present in the network. For example, let us consider the case illustrated in Figure B: one of the random networks with $n=40$ vertices that we have generated with a parameter $p=0.20$, as described above, had a maximum out-degree of its vertices equal to $19$. Therefore all the filtration levels have been divided by this value to normalize them (between 0 and 1).\n\nFor each network family (SF, RN, SW), we generated $N=50$ distinct networks with different seeds for the random numbers generator (the seeds were uniformly distributed integers in the interval $[1, 10000]$). We calculated the network degree filtration invariant sequences for in- and out-degree, which were then averaged for each network family and represented in Figure with the $95\\%$ pointwise confidence bands.\n\n# Competing interests\n\nThe authors declare that they have no competing interests.\n\n# Author's contributions\n\nConceived and designed the experiments: PM, AEPV. Developed the mathematical construction, implemented the simulation, analyzed the results and drafted the manuscript: PM, AEPV. Both authors reviewed, read and approved the final manuscript.","meta":{"dup_signals":{"dup_doc_count":21,"dup_dump_count":18,"dup_details":{"curated_sources":2,"2023-14":1,"2022-49":1,"2022-27":2,"2022-05":1,"2021-39":1,"2021-25":1,"2021-10":1,"2020-50":1,"2020-40":1,"2020-29":2,"2020-16":1,"2020-05":1,"2019-47":1,"2019-39":1,"2023-40":1,"2024-10":1,"2024-26":1}},"filename":"out\/1510.00660_extract_algtop_invariant_v7_arXiv.tex.md"},"subset":"arxiv"} +{"text":"abstract: Multifrequency media access control has been well understood in general wireless ad hoc networks, while in wireless sensor networks, researchers still focus on single frequency solutions. In wireless sensor networks, each device is typically equipped with a single radio transceiver and applications adopt much smaller packet sizes compared to those in general wireless ad hoc networks. Hence, the multifrequency MAC protocols proposed for general wireless ad hoc networks are not suitable for wireless sensor network applications, which we further demonstrate through our simulation experiments. In this article, we propose MMSN, which takes advantage of multifrequency availability while, at the same time, takes into consideration the restrictions of wireless sensor networks. Through extensive experiments, MMSN exhibits the prominent ability to utilize parallel transmissions among neighboring nodes. When multiple physical frequencies are available, it also achieves increased energy efficiency, demonstrating the ability to work against radio interference and the tolerance to a wide range of measured time synchronization errors.\nauthor: GANG ZHOU YAFENG WU TING YAN TIAN HE CHENGDU HUANG JOHN A. STANKOVIC TAREK F. 
ABDELZAHER\nbibliography: bibliography.bib\ntitle: A Multifrequency MAC Specially Designed for Wireless Sensor Network Applications\n\n# Introduction\n\nAs a new technology, Wireless Sensor Networks (WSNs) has a wide range of applications \\[Culler 2001,Bahl 2002,Akyildiz 2001\\], including environment monitoring, smart buildings, medical care, industrial and military applications. Among them, a recent trend is to develop commercial sensor networks that require pervasive sensing of both environment and human beings, for example, assisted living \\[Akyildiz 2002,Harvard 2001,CROSSBOW\\] and smart homes \\[Harvard 2001,Adya 2001,CROSSBOW\\].\n\n> \"For these applications, sensor devices are incorporated into human cloths \\[Natarajan 2001,Zhou 2006,Bahl 2002,Adya 2001\\] for monitoring health related information like EKG readings, fall detection, and voice recognition\".\n\nWhile collecting all these multimedia information \\[Akyildiz 2002\\] requires a high network throughput, off-the-shelf sensor devices only provide very limited bandwidth in a single channel: 19.2Kbps in MICA2 \\[Bahl 2002\\] and 250Kbps in MICAz.\n\nIn this article, we propose MMSN, abbreviation for Multifrequency Media access control for wireless Sensor Networks. The main contributions of this work can be summarized as follows.\n\n- To the best of our knowledge, the MMSN protocol is the first multifrequency MAC protocol especially designed for WSNs, in which each device is equipped with a single radio transceiver and the MAC layer packet size is very small.\n\n- Instead of using pairwise RTS\/CTS frequency negotiation \\[Adya 2001,Culler 2001; Tzamaloukas 2001; Zhou 2006\\], we propose lightweight frequency assignments, which are good choices for many deployed comparatively static WSNs.\n\n- We develop new toggle transmission and snooping techniques to enable a single radio transceiver in a sensor device to achieve scalable performance, avoiding the nonscalable \"one control channel + multiple data channels\" design \\[Natarajan 2001\\].\n\n# MMSN Protocol\n\n## Frequency Assignment\n\nWe propose a suboptimal distribution to be used by each node, which is easy to compute and does not depend on the number of competing nodes. A natural candidate is an increasing geometric sequence, in which $$\\label{eqn:01}\nP(t)=\\frac{b^{\\frac{t+1}{T+1}}-b^{\\frac{t}{T+1}}}{b-1},$$ where $t=0,{\\ldots}\\,,T$, and $b$ is a number greater than $1$.\n\nIn our algorithm, we use the suboptimal approach for simplicity and generality. We need to make the distribution of the selected back-off time slice at each node conform to what is shown in Equation (). It is implemented as follows: First, a random variable $\\alpha$ with a uniform distribution within the interval $(0, 1)$ is generated on each node, then time slice $i$ is selected according to the following equation: $$i=\\lfloor(T+1)\\log_b[\\alpha(b-1)+1]\\rfloor.$$ It can be easily proven that the distribution of $i$ conforms to Equation ().\n\nSo protocols \\[Bahl 2002,Culler 2001,Zhou 2006,Adya 2001,Culler 2001; Tzamaloukas-01; Akyildiz-01\\] that use RTS\/CTS controls[^1] for frequency negotiation and reservation are not suitable for WSN applications, even though they exhibit good performance in general wireless ad hoc networks.\n\n### Exclusive Frequency Assignment\n\nIn exclusive frequency assignment, nodes first exchange their IDs among two communication hops so that each node knows its two-hop neighbors' IDs. 
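(An aside before continuing with the two-hop ID exchange: the slotted back-off selection described earlier in this section is straightforward to implement. The following Python sketch is illustrative only and is not taken from the MMSN implementation; the values of $T$ and $b$ are arbitrary. It draws back-off slices by inverse-transform sampling and empirically compares their frequencies with the target distribution $P(t)$ given above.)

```python
import math
import random

def pick_backoff_slice(T, b):
    # Inverse-transform sampling: draw alpha ~ Uniform(0, 1) and map it to
    # i = floor((T + 1) * log_b(alpha * (b - 1) + 1)), so that P(i = t)
    # equals the geometric-increase distribution P(t) defined above.
    alpha = random.random()
    return int((T + 1) * math.log(alpha * (b - 1) + 1, b))

T, b, trials = 7, 3.0, 100000
counts = [0] * (T + 1)
for _ in range(trials):
    counts[pick_backoff_slice(T, b)] += 1

for t in range(T + 1):
    p = (b ** ((t + 1) / (T + 1)) - b ** (t / (T + 1))) / (b - 1)
    print(t, round(counts[t] / trials, 3), round(p, 3))  # empirical vs. analytical P(t)
```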
In the second broadcast, each node beacons all neighbors' IDs it has collected during the first broadcast period.\n\n#### Eavesdropping\n\nEven though the even selection scheme leads to even sharing of available frequencies among any two-hop neighborhood, it involves a number of two-hop broadcasts. To reduce the communication cost, we propose a lightweight eavesdropping scheme.\n\n## Basic Notations\n\nAs states, for each frequency number, each node calculates a random number (${\\textit{Rnd}}_{\\alpha}$) for itself and a random number (${\\textit{Rnd}}_{\\beta}$) for each of its two-hop neighbors with the same pseudorandom number generator.\n\nBus masters are divided into two disjoint sets, $\\mathcal{M}_{RT}$ and $\\mathcal{M}_{NRT}$.\n\nRT Masters\n\n: $\\mathcal{M}_{RT}=\\{ \\vec{m}_{1},\\dots,\\vec{m}_{n}\\}$ denotes the $n$ RT masters issuing real-time constrained requests. To model the current request issued by an $\\vec{m}_{i}$ in $\\mathcal{M}_{RT}$, three parameters\u2014the recurrence time $(r_i)$, the service cycle $(c_i)$, and the relative deadline $(d_i)$\u2014are used, with their relationships.\n\nNRT Masters\n\n: $\\mathcal{M}_{NRT}=\\{ \\vec{m}_{n+1},\\dots,\\vec{m}_{n+m}\\}$ is a set of $m$ masters issuing nonreal-time constrained requests. In our model, each $\\vec{m}_{j}$ in $\\mathcal{M}_{NRT}$ needs only one parameter, the service cycle, to model the current request it issues.\n\nHere, a question may arise, since each node has a global ID. Why don't we just map nodes' IDs within two hops into a group of frequency numbers and assign those numbers to all nodes within two hops?\n\n# Simulator\n\nIf the model checker requests successors of a state which are not created yet, the state space uses the simulator to create the successors on-the-fly. To create successor states the simulator conducts the following steps.\n\n1. Load state into microcontroller model.\n\n2. Determine assignments needed for resolving nondeterminism.\n\n3. For each assignment.\n\n 1. either call interrupt handler or simulate effect of next instruction, or\n\n 2. evaluate truth values of atomic propositions.\n\n4. Return resulting states.\n\nshows a typical microcontroller C program that controls an automotive power window lift. The program is one of the programs used in the case study described in . At first sight, the programs looks like an ANSI\u00a0C program. It contains function calls, assignments, if clauses, and while loops.\n\n## Problem Formulation\n\nThe objective of variable coalescence-based offset assignment is to find both the coalescence scheme and the MWPC on the coalesced graph. We start with a few definitions and lemmas for variable coalescence.\n\n*Proof.* C-MWPC can be easily reduced to the MWPC problem assuming a coalescence graph without any edge or a fully connected interference graph. Therefore, each C-node is an uncoalesced live range after value separation and C-PC is equivalent to PC. A fully connected interference graph is made possible when all live ranges interfere with each other. Thus, the C-MWPC problem is NP-complete.\u00a0\u25fb\n\n*Proof.* Simply, any solution to the MWPC is also a solution to the C-MWPC. But some solutions to C-MWPC may not apply to the MWPC (if any coalescing were made).\u00a0\u25fb\n\n# Performance Evaluation\n\nDuring all the experiments, the Geographic Forwarding (GF) \\[Akyildiz 2001\\] routing protocol is used. GF exploits geographic information of nodes and conducts local data-forwarding to achieve end-to-end routing. 
Our simulation is configured according to the settings in . Each run lasts for 2 minutes and repeated 100 times. For each data value we present in the results, we also give its 90% confidence interval.\n\n```latex\n\\tbl{Simulation Configuration\\label{tab:one}}{%\n\\begin{tabular}{|l|l|}\n\\hline\nTERRAIN{$^a$} & (200m$\\times$200m) Square\\\\\\hline\nNode Number & 289\\\\\\hline\nNode Placement & Uniform\\\\\\hline\nApplication & Many-to-Many\/Gossip CBR Streams\\\\\\hline\nPayload Size & 32 bytes\\\\\\hline\nRouting Layer & GF\\\\\\hline\nMAC Layer & CSMA\/MMSN\\\\\\hline\nRadio Layer & RADIO-ACCNOISE\\\\\\hline\nRadio Bandwidth & 250Kbps\\\\\\hline\nRadio Range & 20m--45m\\\\\\hline\n\\end{tabular}}\n```\n\n# Conclusions\n\nIn this article, we develop the first multifrequency MAC protocol for WSN applications in which each device adopts a single radio transceiver. The different MAC design requirements for WSNs and general wireless ad-hoc networks are compared, and a complete WSN multifrequency MAC design (MMSN) is put forth. During the MMSN design, we analyze and evaluate different choices for frequency assignments and also discuss the nonuniform back-off algorithms for the slotted media access design.\n\n# Typical references in new ACM Reference Format\n\nA paginated journal article , an enumerated journal article , a reference to an entire issue , a monograph (whole book) , a monograph\/whole book in a series (see 2a in spec. document) , a divisible-book such as an anthology or compilation followed by the same example, however we only output the series if the volume number is given (so Editor00a's series should NOT be present since it has no vol. no.), a chapter in a divisible book , a chapter in a divisible book in a series , a multi-volume work as book , an article in a proceedings (of a conference, symposium, workshop for example) (paginated proceedings article) , a proceedings article with all possible elements , an example of an enumerated proceedings article , an informally published work , a doctoral dissertation , a master's thesis: , an online document \/ world wide web resource , , , a video game (Case 1) and (Case 2) and and (Case 3) a patent , work accepted for publication , 'YYYYb'-test for prolific author and . Other cites might contain 'duplicate' DOI and URLs (some SIAM articles) . Boris \/ Barbara Beeton: multi-volume works as books and .\n\n# APPENDIX\n\nIn this appendix, we measure the channel switching time of Micaz \\[CROSSBOW\\] sensor devices. In our experiments, one mote alternatingly switches between Channels 11 and 12. Every time after the node switches to a channel, it sends out a packet immediately and then changes to a new channel as soon as the transmission is finished. We measure the number of packets the test mote can send in 10 seconds, denoted as $N_{1}$. In contrast, we also measure the same value of the test mote without switching channels, denoted as $N_{2}$. We calculate the channel-switching time $s$ as $$\\begin{aligned}\n%\ns=\\frac{10}{N_{1}}-\\frac{10}{N_{2}}. \\nonumber\n\\end{aligned}$$ By repeating the experiments 100 times, we get the average channel-switching time of Micaz motes: 24.3$\\mu$s.\n\n# This is an example of Appendix section head\n\nChannel-switching time is measured as the time length it takes for motes to successfully switch from one channel to another. 
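A brief note on why the expression for $s$ in the previous appendix isolates the switching time (the symbol $t_{tx}$ below is introduced here only for this explanation and does not appear in the original text): if a single transmission itself takes $t_{tx}$ seconds, the switching mote spends $t_{tx}+s$ seconds per packet while the non-switching mote spends $t_{tx}$, so

$$\\frac{10}{N_{1}}-\\frac{10}{N_{2}} = (t_{tx}+s) - t_{tx} = s.$$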
This parameter impacts the maximum network throughput, because motes cannot receive or send any packet during this period of time, and it also affects the efficiency of toggle snooping in MMSN, where motes need to sense through channels rapidly.\n\nBy repeating experiments 100 times, we get the average channel-switching time of Micaz motes: 24.3 $\\mu$s. We then conduct the same experiments with different Micaz motes, as well as experiments with the transmitter switching from Channel 11 to other channels. In both scenarios, the channel-switching time does not have obvious changes. (In our experiments, all values are in the range of 23.6 $\\mu$s to 24.9 $\\mu$s.)\n\n# Appendix section head\n\nThe primary consumer of energy in WSNs is idle listening. The key to reduce idle listening is executing low duty-cycle on nodes. Two primary approaches are considered in controlling duty-cycles in the MAC layer.\n\n[^1]: RTS\/CTS controls are required to be implemented by 802.11-compliant devices. They can be used as an optional mechanism to avoid Hidden Terminal Problems in the 802.11 standard and protocols based on those similar to \\[Akyildiz 2001\\] and \\[Adya 2001\\].","meta":{"dup_signals":{"dup_doc_count":14,"dup_dump_count":2,"dup_details":{"curated_sources":7,"unknown":7}},"filename":"out\/1702.00686_extract_template_sample.tex.md"},"subset":"arxiv"} +{"text":"abstract: In sparse linear bandits, a learning agent sequentially selects an action and receive reward feedback, and the reward function depends linearly on a few coordinates of the covariates of the actions. This has applications in many real-world sequential decision making problems. In this paper, we propose a simple and computationally efficient sparse linear estimation method called $\\textsc{PopArt}$ that enjoys a tighter $\\ell_1$ recovery guarantee compared to Lasso (Tibshirani, 1996) in many problems. Our bound naturally motivates an experimental design criterion that is convex and thus computationally efficient to solve. Based on our novel estimator and design criterion, we derive sparse linear bandit algorithms that enjoy improved regret upper bounds upon the state of the art (Hao et al., 2020), especially w.r.t. the geometry of the given action set. Finally, we prove a matching lower bound for sparse linear bandits in the data-poor regime, which closes the gap between upper and lower bounds in prior work.\nauthor: Kyoungseok Jang \nUniversity of Arizona \n`email@example.com` \nChicheng Zhang \nUniversity of Arizona \n`email@example.com ` \nKwang-Sung Jun \nUniversity of Arizona \n`email@example.com` \nbibliography: library-shared.bib\ntitle: PopArt: Efficient Sparse Regression and Experimental Design for Optimal Sparse Linear Bandits\n\n```latex\n\\begin{bibunit}[plainnat]\n \n\n\\setlength{\\abovedisplayskip}{5pt}\n\\setlength{\\belowdisplayskip}{5pt}\n\\setlength{\\abovedisplayshortskip}{5pt}\n\\setlength{\\belowdisplayshortskip}{5pt}\n\n\n%-- for reducing space after algorithm environment\n\n\\maketitle\n\n% \\vspace{-10pt}\n\\begin{abstract}\n% \\vspace{-10pt}\nIn sparse linear bandits, a learning agent sequentially selects an action and receive reward feedback, and the reward function depends linearly on a few coordinates of the covariates of the actions. \nThis has applications in many real-world sequential decision making problems. \n%, such as online advertising and personalized medicine. 
\nIn this paper, we propose a simple and computationally efficient sparse linear estimation method called \\ensuremath{\\textsc{PopArt}} that enjoys a tighter $\\ell_1$ recovery guarantee compared to Lasso (Tibshirani, 1996) in many problems. \nOur bound naturally motivates an experimental design criterion that is convex and thus computationally efficient to solve.\n% and a new experimental design criterion that promotes tighter estimation guarantees compared to Lasso.\n% our bound naturally motivates an experimental design criterion that is convex and thus computationally efficient to solve. \nBased on our novel estimator and design criterion, we derive sparse linear bandit algorithms that enjoy improved regret upper bounds upon the state of the art (Hao et al., 2020), especially w.r.t. the geometry of the given action set.\n% for data-poor regime and data-rich regime, respectively, \nFinally, we prove a matching lower bound for sparse linear bandits in the data-poor regime, which closes the gap between upper and lower bounds in prior work.\n% showing that our sparse linear bandit algorithm is optimal.\n% close the gap between upper and lower bounds\n\\end{abstract}\n%\\chicheng{sometimes tighter?} \\kj{I changed the wording.}\n% \\kj{would-be-great: \n% \\begin{itemize}\n% \\item (optional) maybe no need to explain what a sparse linear bandit is \n% \\item mention explicit regret bounds. $\\tilde O( (H^2 sn)^{2\/3})$ where $H^2$ is our novel quantity that captures the geometry of the action set. (also needs to explain $s$ and $n$).\n% \\end{itemize}\n\n% }\n\n\n\\textfloatsep=.6em\n\n% \\vspace{-20pt}\n\\section{Introduction}\n\\label{sec:intro}\n% \\vspace{-8pt}\n\nIn many modern science and engineering applications, high-dimensional data naturally emerges, where the number of features significantly outnumber the number of samples. \nIn gene microarray analysis for cancer prediction~\\cite{ramaswamy2001multiclass}, for example, tens of thousands of genes expression data are measured per patient, far exceeding the number of patients. Such practical settings motivate the study of high-dimensional statistics, where certain structures of the data are exploited to make statistical inference possible. One representative example is sparse linear models~\\cite{hastie2015statistical}, where we assume that a linear regression task's underlying predictor depends only on a small subset of the input features.\n\n%for example, in personalized medicine, high dimensional features of patients can \n\nOn the other hand, online learning with bandit feedback, due to its practicality in many applications such as online news recommendations~\\cite{li10acontextual} or clinical trials~\\cite{liao2016sample, woodroofe1979one}, has attracted a surge of research interests. \nOf particular interest is linear bandits, where in $n$ rounds, the learner repeatedly takes an action $A_t$ (e.g., some feature representation of a product or a medicine) from a set of available actions $\\cA \\subset \\RR^d$ and receives a reward $r_t = {\\langle\\theta^*,A_t\\rangle} + \\eta_t$ as feedback where $\\eta_t \\in \\RR$ is an independent zero-mean, $\\sigma$-sub-Gaussian noise. \nSparsity structure is abundant in linear bandit applications: for example, customers' interests on a product depend only on a number of its key specs; the effectiveness of a medicine only depends on a number of key medicinal properties, which means that the unknown parameter $\\theta^*$ sparse; i.e., it has a small number of nonzero entries. 
\n\n%$$r_t = \\inner{\\theta^*}{A_t} + \\eta_t, $$ \n%\\kj{maybe ``''?}\n%\\chicheng{I rewrote the paragraph above. Kyoungseok and Kwang, please check if you have a chance!} \\kj{done!}\n%\\chicheng{Set $\\sigma = 1$ here, to be consistent with the preliminaries section?}\n%\\chicheng{Small comment: it would be good if we are consistent with using $a^\\T b$ or $\\inner{a}{b}$.}\n\n%In many applications of linear bandits, \n%\\cz{I recommend compressing the first two paragraphs into just one, and start introducing examples that are related to experimental design \/ bandits. How about something like this: We can use e.g. \\url{https:\/\/hastie.su.domains\/StatLearnSparsity_files\/SLS_corrected_1.4.16.pdf} to find more examples \n%} \\ja{I also think your introduction is much better than my one. I replaced the starting of intro to yours}\n\n\n%The bandit field was no exception, and the problem called sparse linear bandit is now one of the mainstream in the linear bandit field. Let $\\mathcal{A} \\subset \\mathbb{R}^{d}$ be the action set. For each round $t$, the agent chooses an actions $a_t \\in \\mathcal{A}$ and then receives a reward $r_t$ as a noisy linear function:\n% $$r_t = a_t^\\top \\theta^* + \\eta_t $$\n%where $\\eta_t \\in \\mathbf{R}$ is a $\\sigma$ sub-Gaussian noise. The objective of the agent is to maximize its cumulative rewards.\n\n%Starting from \n% people notice that\n%bandit algorithms can achieve some informational advantage from sparsity. \n\n\n%\\cz{Here, we can also discuss the difference of our setup with other lines of works, e.g. the ones with stochastic contexts~\\cite{oh2021sparsity,kim19doubly,bastani2020online,sivakumar2020structured,liu2020smoothed,pmlr-v80-wang18j,wang2020nearly} - basically those that appear in Hao et al's Table 1 and follow up works.\n%}\n%\\ja{I used it on the related works section now...... However, currently I am considering about the structure of the introduction...... should I have to include related works in introduction part?} \\cz{I see - I agree that these may be more suitable for ``related work'' section.}\n\n%\\begin{itemize}\n% \\item $\\sqrt{sdn} $ is optimal, but it requires $n \\ge ds$, and there is no known computationally efficient procedure for the generic arm set.\n% \\item however, we know that a good set of measurements can accelerate the recovery. for example, L1 estimation error or the minimum signal.\n% \\item thus the idea of having a nonvacuous regret bound for $n = o(d)$ and a computationally efficient procedure is quite attractive. here, the regret bound must naturally be arm set dependent .\n% \\item hao et al. did this but there were a few open problems. first, the gap between the upper and lower bounds. second, the arm set dependent quantity.\n%\\end{itemize}\n\nEarly studies~\\cite{ay12online,carpentier2012bandit,lattimore2015linear} on sparse linear bandits have revealed that leveraging sparsity assumptions yields bandit algorithms with lower regret than those provided by full-dimensional linear bandit algorithms~\\cite{abe99associative,auer02using,dani08stochastic,abbasi2011improved}. 
\nHowever, most existing studies either rely on a particular arm set (e.g., a norm ball), which is unrealistic in many applications, or use computationally intractable algorithms.\nIf we consider an arbitrary arm set, however, the optimal worst-case regret is ${\\Theta}(\\sqrt{sdn})$ where $s$ is the sparsity level of $\\th^*$, which means that as long as $n = O(sd)$, there exists an instance for which the algorithm suffers a linear regret~\\cite{lattimore18bandit}.\nThis is in stark contrast to supervised learning where it is possible to enjoy nontrivial prediction error bounds for $n = o(d)$~\\cite{foster94risk}.\nThis motivates a natural research question: Can we develop computationally efficient {sparse linear bandit} algorithms that allow a generic arm set yet enjoy nonvacuous bounds in the data-poor regime by exploiting problem-dependent characteristics?\n\nThe seminal work of~\\citet{hao2020high} provides a positive answer to this question.\nThey propose algorithms that enjoy nonvacuous regret bounds with an arbitrary arm set in the data poor regime using Lasso.\nSpecifically, they have obtained a regret bound of $\\tilde O({\\Cmin}^{-2\/3} s^{2\/3}n^{2\/3})$ where ${\\Cmin}$ is an arm-set-dependent quantity. \nHowever, their work still left a few open problems.\nFirst, their regret upper bound does not match with their lower bound $\\Omega({\\Cmin}^{-1\/3} s^{1\/3}n^{2\/3})$.\nSecond, it is not clear if ${\\Cmin}$ is the right problem-dependent constant that captures the geometry of the arm set. \n\n\n\n%In this work, \n%\n%\n%To get around this issue and still enjoy a meaningful regret bound for the data poor regime of $n=o(d)$, one may go beyond the minimax regret and attempt to derive a regret bound that depends on the geometry of the given arm set or the known parameter $\\th^*$.\n%\n%\n%existing high-dimensional regression algorithms like best subset selection (BSS) or Lasso can enjoy \n%\n\n%\\chicheng{I made a pass on this part - some (usused) texts are commented out here.}\n%Though it is a special case, Recalling that the limit in the linear bandit, which is the parent topic of the sparse linear bandit, is $\\Theta(d\\sqrt{n})$\\cite{dani08stochastic, abbasi2011improved, lattimore18bandit}, we can say that those results are ideal result as much as the algorithm already knows where the nonzero coordinate is.\n%design one of the earliest analysis about the sparse linear bandit studies which\n%the matching upper bound to the known regret lower bound of the\n%for the exploration \n%\\red{\n%Early studies of~\\citet{ay12online,carpentier2012bandit,lattimore2015linear} reveal that leveraging sparsity assumptions yields bandit algorithms with lower regret than those provided by full-dimensional linear bandit algorithms~\\cite{dani08stochastic, abbasi2011improved, lattimore18bandit}. \n%\\citet{ay12online} devised the \\textrm{SeqSEW} algorithm which has a near-optimal $\\Omega(\\sqrt{sdn})$ regret bound for sparse linear bandit. However, their algorithm was computationally intractable. \\cite{carpentier2012bandit, lattimore2015linear} studied the special cases where the action set is the unit $\\ell_2$ ball or a scaled hypercube, respectively and proved $\\tilde{O}(s\\sqrt{n})$ regret bounds. 
\n%}\n\n%interesting \n%studies is\n%sparse linear bandit\n%In high-dimensional regression problems, usually the feature dimension $d$ is a big quantity that can be even larger than the number of samples.\n%Therefore it could be beneficial to remove the $d$ term from the regret order analysis. \n%number of interaction rounds\n%\\red{\n%A recent work~\\citep{hao2020high} initiated the research on designing\n%sparse linear bandit algorithms with general (fixed) action sets that have dimension-free regret guarantees. This is motivated by high-dimensional linear bandit applications where the feature dimension $d$ can even be larger than $n$, the time horizon length, in which case even a $O(\\sqrt{sdn})$ regret bound is unaffordable.\n%\\cite{hao2020high} proposed an explore-then-commit-based algorithm, ESTC, for this setting. They show a dimension-free regret upper bound that depends on $\\mathcal{C}_{\\min}$, a constant that captures the geometry of the action set. In addition, they proved the first $\\Omega(n^{2\/3})$ dimension-free regret lower bound for sparse linear bandits.\n%relieved the dimension dependency of the regret upper bound by introducing\n%for the fixed action set setting\n% constant instead\n% geometry\n\n%points for improvement\n%The pioneering study~\\cite{hao2020high} left out several open questions. First, its regret upper and lower bounds are not matching, and it was left as an open problem that what is the minimax optimal regret bound order. Moreover, there was a gap between theoretical estimation error and practical exploration planning for the bandit algorithm. The algorithm ESTC is mainly based on Lasso \\cite{tibshirani96regression}, which has a $\\ell_1$ norm bound error based on the compatibility constant \\cite{wainwright2019high,bv11}. However, it is difficult to compute the compatibility constant with current techniques, and they used $\\mathcal{C}_{\\min}$ instead, which is easier to compute but provides a looser analysis. See Table \\ref{table: results} for a recap of the results in \\citet{hao2020high}.}\n\n%In particular, as it was known that Lasso~\\cite{tibshirani96regression}, a celebrated $\\ell_1$-regularization method, has the effect of naturally and efficiently controlling the number of non-zero coordinates in sparse linear models, many research methods are based on Lasso. Plus, due to the mathematical characteristics of the Lasso, people generally assume eigenvalue-related conditions such as restricted eigenvalue condition or compatibility condition for better results.\n\n%However, by focusing only on the decomposition of the dot product through Holder's inequality and the $\\ell_1$ norm guarantee of Lasso, many other papers failed to consider the predictive error bound for a fixed point. 
In particular, although Lasso has its own disadvantages such as bias, previous sparse linear bandit researches did not consider much about other estimators other than Lasso.\n%\\cz{I think the above discussion on lasso is better put to the ``comparison with lasso'' section.}\n\n\n\\begin{table}[t]\n%--------- \"\\ding{55} \\ding{51}\"\n% \\vskip 0.15in\n\\begin{center}\n\\begin{small}\n\\begin{sc}\n% \\setlength\\tabcolsep{4.1pt}\n\\begin{tabular}{lcccr}\n\\toprule\n & Regret Bound & Data-poor & Assumptions \\\\\n\\midrule\n\\citet{hao2020high} & $\\tilde{O}(s^{2\/3}\\mathcal{C}_{\\min}^{-2\/3}n^{2\/3})$ & \\ding{51} & $\\mathcal{A}$ spans $\\mathbb{R}^d$ \\\\\n\\citet{hao2020high} & ${\\Omega(s^{1\/3}\\kappa^{-2\/3}n^{2\/3})}$ &\\ding{51} & $\\cA$ spans $\\RR^d$ \\\\\n% (\\autoref{sec:etc-wpopart})\nAlgorithm~\\ref{alg:etc-sparse} (Ours) & $\\tilde{O}(s^{2\/3} H_*^{2\/3}n^{2\/3})$ & \\ding{51} & $\\mathcal{A}$ spans $\\mathbb{R}^d$\\\\\n% (\\autoref{sec:phased-elim-wpopart})\nTheorem~\\ref{thm:lower} (Ours) & $\\Omega(s^{2\/3}\\kappa^{-2\/3}n^{2\/3})$ &\\ding{51} & $\\cA$ spans $\\RR^d$\\\\\n\\hline\n\\citet{hao2020high} & $\\tilde{O}(\\sqrt{\\mathcal{C}_{\\min}^{-1}sn})$ & \\ding{55} & $\\mathcal{A}$ spans $\\mathbb{R}^d$, Min. Signal \\\\\nAlgorithm~\\ref{alg:phase-elim} (Ours) & $\\tilde{O}(\\sqrt{sn})$ & \\ding{55} & $\\mathcal{A}$ spans $\\mathbb{R}^d$, Min. Signal\\\\\n%\\xmark \\cmark\n\\bottomrule\n\\end{tabular}\n\\end{sc}\n\\caption{\n Regret bounds of our work and the prior art where $s$, $d$, $n$ are the sparsity level, the feature dimension, and the number of rounds, respectively. \n The quantities $\\mathcal{C}_{\\min}$ and $H_*^2$ are the constants that captures the geometry of the action set (see Eq.~\\eqref{def: Cmin} and~\\eqref{def: H2}), and $\\kappa$ is a parameter for a specific family of arm sets that satisfies $ \\kappa^{-2} = \\Theta( \\mathcal{C}_{\\min}^{-1}) = \\Theta( H^2_* )$. \n% \\kj{do we want to say $\\kappa^2 \\le H^2_*$ as well?}\n In general, $ H_*^2 \\leq \\mathcal{C}_{\\min}^{-1} \\le \\mathcal{C}_{\\min}^{-2}$ (Propositon~\\ref{prop:H2 vs Cmin}).\n%\\kj{TODO: use $\\kappa$ but explain that $\\kappa=\\mathcal{C}_{\\min}=H^2$}\n%\\kj{$T$ to $n$}\n% \\kj{lower order terms are omitted for brevity.}\n}\n\\label{table: results}\n\\end{small}\n\\end{center}\n% \\vskip -0.1in\n\\end{table}\n\n%\\chicheng{Ideally (if we have time) we should also mention some historical contexts of sparse linear regression and experimental design, and how we improve over those. 
But it is perhaps secondary.}\nIn this paper, we make significant progress in high-dimensional linear regression and sparse linear bandits, which resolves or partly answers the aforementioned open problems.\n\n\\textbf{First} (Section~\\ref{sec:popart}), we propose a novel and computationally efficient estimator called \\ensuremath{\\textsc{PopArt}}(POPulation covariance regression with hARd Thresholding) that enjoys a tighter $\\ell_1$ norm recovery bound than the de facto standard sparse linear regression method Lasso in many problems.\nMotivated by the $\\ell_1$ norm recovery bound of \\ensuremath{\\textsc{PopArt}}, we develop a computationally-tractable design of experiment objective for finding the sampling distribution that minimize the error bound of \\ensuremath{\\textsc{PopArt}}, which is useful in settings where we have control on the sampling distribution (such as compressed sensing).\nOur design of experiments results in an $\\ell_1$ norm error bound that depends on the measurement set dependent quantity denoted by $H_*^{2}$ (see Eq.~\\eqref{def: H2} for precise definition) that is provably better than ${\\Cmin}^{-1}$ that appears in the $\\ell_1$ norm error bound used in \\citet{hao2020high}, thus leading to an improved planning method for {sparse linear} bandits.\n\\textbf{Second} (Section~\\ref{sec:bandits}), Using \\ensuremath{\\textsc{PopArt}}, we design new algorithms for the sparse linear bandit problem, and improve the regret upper bound of prior work~\\cite{hao2020high}; see Table~\\ref{table: results} for the summary.\n\\textbf{Third} (Section~\\ref{sec:lower-bound}), We prove a matching lower bound in data-poor regime, showing that the regret rate obtained by our algorithm is optimal. \nThe key insight in our lower bound is a novel application of the algorithmic symmetrization technique \\cite{simchowitz2017simulator}. {Unlike the conjecture of \\citet[Remark 4.5]{hao2020high}, the improvable part was not the algorithm but the lower bound for sparsity $s$.}\n\nWe empirically verify our theoretical findings in Section~\\ref{sec:expr} where \\ensuremath{\\textsc{PopArt}} shows a favorable performance over Lasso.\nFinally, we conclude our paper with future research enabled by \\ensuremath{\\textsc{PopArt}} in Section~\\ref{sec:conclusion}.\nFor space constraint, we discuss related work in Appendix~\\ref{sec:related} but closely related studies are discussed in depth throughout the paper.\n\n%\\red{Finally, we verify our theoretical improvement of the algorithm by the experiment. } \\kj{in terms of L1 recovery guarantee of \\popart and the sparse linear bandit problems.}\n\n%\\kj{====}\n\n%\n%Specifically,\n%\\begin{itemize}\n% \\item We propose a novel and computationally efficient estimator called \\popart (POPulation covariance regression with hARd Thresholding) that enjoys a tighter $\\ell_1$ norm recovery bound than the de facto standard sparse linear regression method Lasso in many problems. 
% compared to prior approaches like Lasso.\n% \\item Motivated by the $\\ell_1$ norm recovery bound of \\popart, we develop a computationally-tractable design of experiment objective for finding the sampling distribution that minimize the error bound of \\popart, which can is useful in settings where we have control on the sampling distribution (such as compressed sensing).\n% %and \n% %make the following\n% % which achieves a performance improvement by calculating variance for each coordinate and thresholding, instead of the traditional lasso-based estimation.\n% \\item Using \\popart, we design new algorithms for the sparse linear bandit problem, and improve the regret upper bound of prior work~\\cite{hao2020high}. \n% \\item We prove a matching lower bound in data-poor regime, showing that the regret rate obtained by our algorithm is optimal. \n% The key insight in our lower bound is a novel application of the algorithmic symmetrization technique \\cite{simchowitz2017simulator}. \\red{Unlike the speculation of \\citet[Remark 4.5]{hao2020high}, the improvable part was not the algorithm but the lower bound. }\\kj{see if hao et al. is saying UB can be improved; if true, then say `unlike their speculation'}\n% \\item \\red{Finally, we verify our theoretical improvement of the algorithm by the experiment. } \\kj{in terms of L1 recovery guarantee of \\popart and the sparse linear bandit problems.}\n% % which is about the optimal order of the instance dependent constant. \n%\\end{itemize}\n%We summarize our theoretical results and their comparison with~\\citet{hao2020high} in Table~\\ref{table: results}.\n\n\\vspace{-7pt}\n\\section{Problem Definition and Preliminaries} \\label{sec: prelim}\n\\vspace{-6pt}\n\n\\noindent\\textbf{Sparse linear bandits.} We study the sparse linear bandit learning setting, where the learner is given access to an action space ${\\cA} \\subset \\{a\\in\\RR^d: \\|a\\|_\\infty \\le 1\\}$, and repeatedly interacts with the environment as follows: at each round $t = 1,\\ldots,n$, the learner chooses some action $A_t \\in \\cA$, and receives reward feedback $r_t = {\\langle\\theta^*,A_t\\rangle} + \\eta_t$, where ${\\theta^*} \\in \\RR^d$ is the underlying reward predictor, and ${\\eta_t}$ is an independent zero-mean $\\sigma$-subgaussian noise.\nWe assume that $\\theta^*$ is $s$-sparse; that is, it has at most $s$ nonzero entries. \nThe goal of the learner is to minimize its pseudo-regret defined as \n\\[\n\\Reg(n) = n \\max_{a \\in \\cA} {\\langle\\theta^*,a\\rangle} - \\sum_{t=1}^n {\\langle\\theta^*,A_t\\rangle}.\n\\]\n%maximize its cumulative reward \\sum_{t=1}^T \n%\\subsection{Experimental design}\n%(called experiments)\n% \\cz{Adding this for more background.}\n\\noindent\\textbf{Experimental design for linear regression.} In the experimental design for linear regression problem, one has a pool of unlabeled examples $\\cX$, and some underlying predictor $\\theta^*$ to be learned. \n%Each example $x$ is associated with a label . \nQuerying the label of $x$, i.e. conducting experiment $x$, reveals a random label $y = {\\langle\\theta^*,x\\rangle} + \\eta$ associated with it, where $\\eta$ is a zero mean noise random variable. The goal is to accurately estimate $\\theta^*$, while using as few queries $x$ as possible. \n\n%nice \n% with\n\n%In the following description, the experimental design plays an important role. 
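Before the formal definitions below, a small numerical sketch may help fix ideas: for a hypothetical finite pool and a sampling distribution over it, the second-moment matrix induced by the design is exactly the object denoted $Q(\mu)$ in the next definition, and queried labels follow the model just described. All names and sizes here are placeholders.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
pool = rng.choice([-1.0, 1.0], size=(20, 8))   # hypothetical finite pool of experiments
mu = np.full(len(pool), 1.0 / len(pool))       # a design: a distribution over the pool
Q = (pool * mu[:, None]).T @ pool              # sum_x mu(x) x x^T, i.e. Q(mu) below

theta_star = np.zeros(8); theta_star[:2] = 1.0 # unknown (here sparse) predictor
def query(x, sigma=1.0):
    # conducting experiment x reveals y = <theta*, x> + eta
    return x @ theta_star + sigma * rng.normal()

idx = rng.choice(len(pool), size=100, p=mu)    # draw experiments i.i.d. from the design
X = pool[idx]
Y = np.array([query(x) for x in X])
\end{verbatim}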
\n\n% The following definition describes some of the key designs and constants.\n\n\n\\begin{definition}(Population covariance matrix $Q$) \nLet $\\mathcal{P}({\\cX})$ be the space of probability measures over $\\mathcal{X}$ with the Borel $\\sigma$-algebra, and define the population covariance matrix for the distribution $\\mu \\in \\mathcal{P}(\\cX)$ as follows:\n\\begin{equation}\nQ(\\mu):=\\int_{a \\in \\mathcal{X}} a a^\\top d \\mu(a)\n\\end{equation}\nClassical approaches for experimental design focus on finding a distribution $\\mu$ such that its induced population covariance matrix $Q(\\mu)$ has properties amenable for building a low-error estimator, such as D-, A-, G-optimality~\\cite{fedorov2013theory}.\n\n\\textbf{Compatibility condition for Lasso.~}\n%\nFor a positive definite matrix $\\Sigma \\in \\RR^{d\\times d}$ and a sparsity level $s\\in[d] := \\{1,\\ldots,d\\}$, % set of indices $S \\subset [d]$, \nwe define its compatibility constant $\\phi_0^2 (\\Sigma, s)$\n% \\cz{Should this be the compatibility constant of a particular subset $S$ or all subset of size $s$? Also, shall we emphasize this dependence using notation $\\phi_0^2(\\Sigma, s)$, similar to~\\cite{bickel2009simultaneous}? }\\ja{done. Sorry for the confusion}\nas follows:\n\\begin{equation}\n\\phi_0^2 (\\Sigma, s):=\\min_{S\\subseteq[d]: |S| = s}~ \\min_{v: \\| v_{S} \\|_1 \\leq 3\\| v_{-S} \\|_1}\\frac{s v^\\top \\Sigma v}{\\| v_S \\|_1^2},\\label{def: comp const}\n\\end{equation}\nwhere $v_S \\in\\RR^d$ denotes the vector that agrees with $v$ in coordinates in $S$ and $0$ everywhere else and $v_{-S}\\in\\RR^d$ denotes $v - v_S$. \n\n\\textbf{Notation.~}\nLet $e_i$ be the $i$-th {canonical basis} vector.\nWe define $[x] = \\{1,2,\\ldots,x\\}$.\nLet $\\supp(\\theta)$ be the set of coordinate indices $i$ where $\\theta_i \\neq 0$.\nWe use $a \\lesssim b $ to denote that there exists an absolute constant $c$ such that $a \\le cb$.\n\n%\\cz{As discussed in the meeting, perhaps use \n%\\[\n%\\phi_0^2\n%:=\n%\\min_{S: |S| = s} \\min_{v: \\| v_{-S} \\|_1 \\leq 3\\| v_{-S} \\|_1}\n%\\frac{s v^\\top \\Sigma v}{\\| v_S \\|_1^2},\n%\\]\n%for more explicit expression.\n%}\\ja{Change applied. Thanks for the fix.}\n\\end{definition}\n\n\\section{Improved Linear Regression and Experimental Design for Sparse Models}\n\\label{sec:popart}\n% algorithm\n%\\cz{Proposed change of title: } \\ja{Change applied. Thanks for your idea. }\n%namely \n%namely \n\nIn this section, we discuss our novel sparse linear estimator \\ensuremath{\\textsc{PopArt}} for the setting where the population covariance matrix is known and show its strong theoretical properties.\nWe then present a variation of \\ensuremath{\\textsc{PopArt}} called \\ensuremath{\\textsc{Warm-PopArt}} that amends a potential weakness of \\ensuremath{\\textsc{PopArt}}, followed by our novel experimental design for \\ensuremath{\\textsc{PopArt}} and discuss its merit over prior art.\n\n%\\ja{ Thanks for your writing about the section introduction. I write it on here officially.}\n\n%\\cz{Now that I think about organizing this section, does it make more sense to use the following order: (1) \\popart (with general $\\mu$); (2) \\wpopart (still with general $\\mu$; (3) new experimental design criterion? This way, we make \\wpopart and experimental design decoupled. And in the subsequent bandit algorithm, we can let the algorithm for solve the optimal experimental design problem to get $\\mu^*$ and run $\\wpopart$ on $\\mu^*$.} \\ja{Done. 
Changed the order of the section and few words}\n\n%\\popart\n\\textbf{\\ensuremath{\\textsc{PopArt}}(POPulation covariance regression with hARd Thresholding).}\n% \\kj{remove the subsection; but word smithing around the beginning of the next paragraph.}\nUnlike typical estimators for the statistical learning setup, our main estimator \\ensuremath{\\textsc{PopArt}} described in Algorithm~\\ref{alg:popart} takes the population covariance matrix denoted by $Q$ as input.\nWe summarize our assumption for \\ensuremath{\\textsc{PopArt}}.\n% Specifically, in \\popart, we assume that the given data points ${(\\cova_t,\\resp_t)}_{t=1}^n$ satisfy that $\\cova_t \\sr{\\text{i.i.d.}}{\\sim} \\mu$ from some distribution $\\mu$ and that $Q = Q(\\mu) := \\EE_{X\\sim \\mu}[X X^\\T]$.\n% Furthermore, we assume that $\\resp_t = \\inner{\\theta^*}{\\cova_t} + \\eta_t$ with $\\eta_t$ being zero-mean $\\sig$-subgaussian noise.\n% % where $\\cova_t$'s are drawn iid from $\\mu$ and $\\resp_t = \\inner{\\theta^*}{\\cova_t} + \\eta_t$ with $\\eta_t$ being a zero-mean 1-subgaussian noise;\n\n\\begin{assumption}\\label{ass:popart}\n(Assumptions on the input of \\ensuremath{\\textsc{PopArt}})\nThere exists $\\mu$ such that the input data points $\\{(X_t,Y_t)\\}_{t=1}^n$ satisfy that $X_t \\sr{\\text{i.i.d.}}{\\sim} \\mu$ and $Q=Q(\\mu) := \\EE_{X\\sim \\mu}[X X^\\T]$.\nFurthermore, $Y_t = {\\langle\\theta^*,X_t\\rangle} + \\eta_t$ with $\\eta_t$ being zero-mean $\\sig$-subgaussian noise.\nAlso, $R_0 \\geq \\max_{a \\in \\mathcal{A}} |\\langle a, \\theta^* - \\theta_0 \\rangle|$.\n\\end{assumption}\n\n% \\chicheng{I see the phrase ``pilot estimator'' used in \\url{https:\/\/arxiv.org\/abs\/1604.08098v3} - so it is not completely ungrounded.}\n\n%\\chicheng{For experimental design for linear regression, shall we use $(A_t, r_t)$ notation or shall we use $(x_t, y_t)$ notation?}\n\n%\\chicheng{Rewrote the algorithm's different stages and some motivations on algorithm design - please check.}\n\\begin{algorithm}[h]\n\\caption{ \\ensuremath{\\textsc{PopArt}}(POPulation covariance regression with hARd Thresholding)}\n\\label{alg:popart}\n\\begin{algorithmic}[1]\n% A_2, \\cdots, A_t\n%number of samples $n$, \n% \\STATE \\textbf{Input:} Samples $(\\cova_t, \\resp_t)_{t=1}^n$, where $(\\cova_t)_{t=1}^n$ are drawn iid from $\\mu$, population covariance matrix $Q(\\mu) = \\EE_{\\cova \\sim \\mu} [\\cova \\cova^\\top]$, pilot estimator $\\theta_0 \\in \\mathbb{R}^d$, failure rate $\\delta$, $R_0$, an upper bound of $\\max_{a \\in \\mathcal{A}} |\\langle a, \\theta^* - \\theta_0 \\rangle|$\n\\STATE \\textbf{Input:} Samples $\\{(X_t, Y_t)\\}_{t=1}^n$, the population covariance matrix $Q\\in\\RR^{d\\times d}$, pilot estimator $\\theta_0 \\in \\mathbb{R}^d$, an upper bound $R_0$ of $\\max_{a \\in \\mathcal{A}} |\\langle a, \\theta^* - \\theta_0 \\rangle|$, failure rate $\\delta$. 
\n\\STATE \\textbf{Output:} estimator $\\hat{\\theta}$\n\\FOR{$t=1,\\ldots,n$ } \n\\STATE $\\tilde{\\theta}_t = Q^{-1}X_t (Y_t-\\langle X_t, \\theta_0\\rangle )+ \\theta_0$\n\\label{step:one-sample-estimator}\n\\ENDFOR\n% \\FOR{$i=1,\\ldots, d$}\n\\STATE $\\forall i\\in[d], {\\theta}_i'=\\textsf{Catoni}(\\{\\tilde{\\theta}_{ti}:=\\langle\\tilde{\\theta}_{t}, e_i \\rangle\\}_{t=1}^n , \\alpha_i, \\frac{\\delta}{2d})$ where $\\alpha_i:= \\sqrt{\\frac{2\\log \\frac{2d}{\\delta}}{n(R_0^2 + \\sigma^2)(Q^{-1})_{ii}(1+ \\frac{2\\log \\frac{2d}{\\delta}}{n-2 \\log \\frac{2d}{\\delta}})}}$\n\\label{step:catoni-i}\n% \\ENDFOR\n\\STATE $\\hat{\\theta}\\leftarrow \\textsf{clip}_\\lambda ({\\theta'}):= [{\\theta}'_i \\one(|{\\theta}'_i|>\\lambda_i)]_{i=1}^d$ where $\\lambda_i$ is defined in Proposition~\\ref{prop:individual conf bound}.\n\\label{step:hard-threshold}\n\\RETURN $\\hat{\\theta}$\n\\end{algorithmic}\n\\end{algorithm}\n\n\\ensuremath{\\textsc{PopArt}} consists of several stages. In the first stage, for each $(X_t, Y_t)$ pair, we create a one-sample estimator $\\tilde{\\theta}_t$ (step~\\ref{step:one-sample-estimator}).\nThe estimator, $\\tilde{\\theta}_t$, can be viewed as a generalization of doubly-robust estimator~\\cite{chernozhukov2019semi,dudik2011doubly} for linear models. %$\\theta^*$.\nSpecifically, it is the sum of two parts: one is the pilot estimator $\\theta_0$ that is a hyperparameter of \\ensuremath{\\textsc{PopArt}}; the other is $Q(\\mu)^{-1} X_t (Y_t-\\langle X_t, \\theta_0 \\rangle )$, an unbiased estimator of the difference $\\theta^* - \\theta_0$. \nThus, it is not hard to see that $\\tilde{\\theta}_t$ is an unbiased estimator of $\\theta^*$.\nAs we will see in Theorem~\\ref{thm: main bounds of estimator}, the variance of $\\tilde{\\theta}_t$ is smaller when $\\theta_0$ is closer to $\\theta^*$, showing the advantage of allowing a pilot estimator $\\theta_0$ as input.\nIf no good pilot estimator is available a priori, one can set $\\theta_0=0$.\n\n%This stage\n%instead of the already \nFrom the discussion above, it is natural to take an average of $\\tilde{\\theta}_t$. Indeed, when $n$ is large, the population covariance matrix $Q(\\pi)$ is close to empirical covariance matrix $\\hat{Q} := \\frac1n \\sum_{t=1}^n X_t X_t^\\top$, which makes $\\hat{\\theta}_{\\text{avg}} := \\frac1n \\sum_{t=1}^n \\tilde{\\theta}_t$ close to the ordinary least squares estimator $\\hat{\\theta}_{\\text{OLS}} = \\hat{Q}^{-1} (\\frac1n\\sum_{t=1}^n X_t Y_t)$. \nHowever, for technical reasons, the concentration property of $\\tilde{\\theta}_{\\text{avg}}$ is hard to establish.\nThis motivates \\ensuremath{\\textsc{PopArt}}'s second stage (step~\\ref{step:catoni-i}), where, for each coordinate $i \\in [d]$, we employ\nCatoni's estimator \\cite{lugosi2019mean} (see Appendix~\\ref{sec:catoni} for a recap) to obtain an intermediate estimate for each $\\theta_i^*$, namely $\\theta_i'$. \n\n%H^2 (Q)\n\n%; for details\nTo use Catoni's estimator, we need to have an upper bound of the variance of $\\theta_i'$ for its $\\alpha_i$ parameter. A direct calculation yields that, for all $i\\in[d]$ and $t\\in[n]$, $$\\textrm{Var}(\\tilde{\\theta}_{ti})\\leq \\del{ \\max_{a \\in \\cA} {\\langle\\theta^*-\\theta_0,a\\rangle}^2 + \\sigma^2 } \\max_i (Q(\\mu)^{-1})_{ii}$$ where $\\tilde{\\theta}_{ti}:=\\langle \\tilde{\\theta}_t,e_i\\rangle$. This implies that $\\del{ R_0^2 + \\sigma^2 } \\max_i (Q(\\mu)^{-1})_{ii}$ is an upper bound of $\\textrm{Var}(\\tilde{\\theta}_{ti})$. 
By the standard concentration inequality of Catoni's estimator (see Lemma~\\ref{lem:catoni-error}), we obtain the following estimation error guarantee for $\\theta_i'$; the proof can be found in Appendix~\\ref{appendix:proof-prop}.\nHereafter, all proofs are deferred to appendix unless noted otherwise.\n\n%(see Proposition~\\ref{prop:individual conf bound})\n%which in turn on $\\max_{a \\in \\cA} \\inner{\\theta^*-\\theta_0}{a}$; we choose to use \\chicheng{todo for CZ: explain $H^2(Q)$.}\n\n%The parameter $\\alpha$ in Catoni's estimator, relies on \n\n% Definition \\ref{def: Catoni}\n% for the definition\n%\\kj{write some connecting sentence, introducing this result.}\n% at least\n\\begin{proposition} \\label{prop:individual conf bound} \nSuppose Assumption~\\ref{ass:popart} holds.\nIn \\ensuremath{\\textsc{PopArt}}, for $i\\in [d]$, if $n\\geq 2 \\ln \\frac{2d}{\\delta}$, the following inequality holds with probability $1-\\frac{\\delta}{d}$:\n$$|\\theta_i' - \\theta_i^*|< \\sqrt{\\frac{4(R_0^2 + \\sigma^2) (Q(\\mu)^{-1})_{ii}^2}{n} \\log \\frac{2d}{\\delta}} =: \\lambda_i$$\n\\end{proposition}\n%\\begin{proof}\n%See .\n%\\end{proof}\n%The proof proceeds by combining concentration bound of the Catoni's esimator with the computation of the variance of our estimator, we get the following proposition, which shows the main advantage of this approach. The full proof can be found in\n\n%\\kj{thanks to Proposition 1, it is natural to perform hard thresholding ; mention that ``easy to see that with high probability we do not have false positives''}\n\n%The second stage ensures that $\\theta'$ and $\\theta^*$ are close in $\\ell_\\infty$ norm. \n%To exploit the sparsity of $\\theta^*$,\n\n%By the coordinate-wise error guarantee of $\\theta'$, \n%\\kj{how about just ``confidence interval''?}\n%notation?} \nProposition~\\ref{prop:individual conf bound} shows that,\nfor each coordinate $i$, \n$(\\theta_i' - \\lambda_i, \\theta_i' + \\lambda_i)$ forms a confidence interval for $\\theta_i^*$. \nTherefore, if $0 \\notin (\\theta_i' - \\lambda_i, \\theta_i' + \\lambda_i)$, we can infer that $\\theta_i^* \\neq 0$, i.e., $i \\in \\supp(\\theta^*)$. \nBased on the observation above, \\ensuremath{\\textsc{PopArt}}'s last stage (step~\\ref{step:hard-threshold})\nperforms a hard-thresholding for each of the coordinates of $\\theta'$, using the threshold $\\lambda_i$ for coordinate $i$.\nThanks to the thresholding step, with high probability, $\\hat{\\theta}$'s support is contained in that of $\\theta^*$, which means that all coordinates $i$ outside the support of $\\th^*$ (typically the vast majority of the coordinates when $s \\ll d$) satisfy $\\hat\\theta_i = \\theta^*_i = 0$.\nMeanwhile, for coordinate $i$'s in $\\supp(\\theta^*)$, the estimated value $\\hat{\\theta}_i$ is not too far from $\\theta^*_i$. \n\n\n%We remark that replacing hard thresholding with soft thresholding also enjoys a similar $\\ell_1$-closeness guarantee.\n\n%%an additionally achieves an $\\ell_1$-closeness to $\\theta^*$. \n%$\\hat{\\theta}$ preserves the $\\ell_\\infty$ closeness to $\\theta^*$,\n%$\\theta_i'$, with threshold to obtain its final estimator, $\\hat{\\theta}$. 
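To make the three stages concrete, the following is a compact, purely illustrative Python sketch of Algorithm~\ref{alg:popart}: the Catoni step is implemented by root-finding with the standard wide influence function, and the thresholds $\lambda_i$ are taken proportional to $\sqrt{(R_0^2+\sigma^2)(Q^{-1})_{ii}\log(2d/\delta)/n}$, matching the variance bound above. It assumes $n > 2\log\frac{2d}{\delta}$ and is not tuned for numerical efficiency.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def catoni_mean(z, alpha):
    # Catoni's M-estimator: the root (in m) of sum_t psi(alpha * (z_t - m)) = 0,
    # with psi(x) = sign(x) * log(1 + |x| + x^2 / 2).
    psi = lambda x: np.sign(x) * np.log1p(np.abs(x) + 0.5 * x * x)
    f = lambda m: np.sum(psi(alpha * (z - m)))
    return brentq(f, z.min() - 1.0, z.max() + 1.0)   # f changes sign on this bracket

def popart(X, Y, Q, theta0, R0, sigma, delta):
    n, d = X.shape
    Qinv = np.linalg.inv(Q)
    # Stage 1: one-sample estimators tilde_t = Q^{-1} X_t (Y_t - <X_t, theta0>) + theta0.
    tilde = (X @ Qinv) * (Y - X @ theta0)[:, None] + theta0
    # Stage 2: coordinate-wise Catoni estimates with alpha_i as in Algorithm 1.
    L = np.log(2 * d / delta)                        # assumes n > 2 * L
    var_i = (R0 ** 2 + sigma ** 2) * np.diag(Qinv)   # per-coordinate variance proxy
    alpha = np.sqrt(2 * L / (n * var_i * (1 + 2 * L / (n - 2 * L))))
    theta_p = np.array([catoni_mean(tilde[:, i], alpha[i]) for i in range(d)])
    # Stage 3: hard thresholding, which removes false positives outside supp(theta*).
    lam = 2 * np.sqrt(var_i * L / n)
    return np.where(np.abs(theta_p) > lam, theta_p, 0.0)
\end{verbatim}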
\n% with essentially the same guarantee\n%\\chicheng{Minor comment: soft thresholding also works here with essentially the same guarantee.}\n% and its corresponding reward $r_1, r_2, \\cdots, r_t$\n% perspective\nThe following theorem states \\ensuremath{\\textsc{PopArt}}'s estimation error bound in terms of its output $\\hat{\\theta}$'s $\\ell_\\infty$, $\\ell_0$, and $\\ell_1$ errors, respectively. We remark that replacing hard thresholding in the last stage with soft thresholding enjoys similar guarantees.\n%\n%Let $H^2(Q):=\\max_{i \\in [d]} (Q^{-1})_{ii}$, , $\\beta := \\sqrt{\\frac{4 (R_{0}^2 + \\sigma^2)H^2(Q)}{n}\\log \\frac{2d}{\\delta}}$, and $\\alpha:= \\sqrt{\\frac{2\\log \\frac{1}{\\delta}}{n(R_0^2 + \\sigma^2)H^2(Q)(1+ \\frac{2\\log \\frac{1}{\\delta}}{n-2 \\log \\frac{1}{\\delta}})}} $. \n%If the threshold $\\lambda > \\beta$ and $n \\geq 2\\ln\\frac{2d}{\\delta}$,\n\\begin{theorem}\\label{thm: main bounds of estimator}\nTake Assumption~\\ref{ass:popart}.\nLet $H^2(Q):=\\max_{i \\in [d]} (Q^{-1})_{ii}$.\n%If \\popart receives inputs $\\cbr{\\cova_t, \\resp_t}_{t=1}^n$ drawn from $\\mu$, $Q(\\mu)$, \\gray{pilot estimator $\\theta_0$, failure rate $\\delta$,}\\kj{seems unnecessary?} and $R_0$ such that $R_0 \\geq \\max_{a \\in \\mathcal{A}} |\\langle a, \\theta^* - \\theta_0 \\rangle|$, then all the following items hold \nThen, \\ensuremath{\\textsc{PopArt}} has the following guarantees\nwith probability at least $1-\\delta$: %\\kj{similar to my comment in proposition 1, we can just define the assumption with theorem environment and then refer to it}\n%\\chicheng{As discussed in today's meeting, let's assume $n \\geq 2\\ln\\frac{2d}{\\delta}$, and use the observation that $n - 2\\ln\\frac{2d}{\\delta} \\geq \\frac n 2$ to simplify the above theorem's presentation.} \\ja{Done.}\n\\begin{enumerate}[label=(\\roman*)]\n \\item $\\forall i \\in [d], |\\hat{\\theta}_i - \\theta_i^*|< 2 \\sqrt{\\frac{4 (R_0^2 + \\sigma^2)(Q(\\mu)^{-1})_{ii}}{n}\\log \\frac{2d}{\\delta}}$ so $\\|\\hat{\\theta}-\\theta^*\\|_\\infty < 2 \\sqrt{\\frac{4 (R_0^2 + \\sigma^2)H^2(Q(\\mu))}{n}\\log \\frac{2d}{\\delta}}, $ \n \\item $\\textrm{supp}(\\hat{\\theta})\\subset \\textrm{supp}({\\theta}^*)$ so $\\|\\hat{\\theta}-\\theta^*\\|_0 \\leq s$,\n \\item $\\|\\hat{\\theta}-\\theta^*\\|_1 \\leq 2s \\sqrt{\\frac{4 (R_0^2 + \\sigma^2)H^2(Q(\\mu))}{n}\\log \\frac{2d}{\\delta}}$\n %\\chicheng{$\\lambda := \\sqrt{\\frac{4 (\\bar{R}^2 + \\sigma^2)H^2(Q)}{n}\\log \\frac{2d}{\\delta}}$}\n\\end{enumerate}\n\\end{theorem}\nInterestingly, \\ensuremath{\\textsc{PopArt}} has no false positive for identifying the sparsity pattern and enjoys an $\\ell_\\infty$ error bound, which is not available from Lasso, to our knowledge.\nUnfortunately, a direct comparison with Lasso is nontrivial since the largest compatibility constant $\\phi_0^2 (\\hat{\\Sigma}, s)$ is defined as the solution of the optimization problem~\\eqref{def: comp const}, let alone the fact that $\\phi_0^2 (\\hat{\\Sigma}, s)$ is a function of the empirical covariance matrix.\nWhile we leave further investigation as future work, our experiment results in Section~\\ref{sec:expr} suggest that there might be a case where \\ensuremath{\\textsc{PopArt}} makes a meaningful improvement over Lasso.\n\\vspace{-0.25cm}\n\\begin{proof}[Proof of Theorem~\\ref{thm: main bounds of estimator}]\n{Let $\\lambda := \\max_i \\lambda_i = \\sqrt{\\frac{4(R_0^2 + \\sigma^2) H^2(Q(\\mu))}{n} \\log \\frac{2d}{\\delta}}$}\nFrom Proposition \\ref{prop:individual conf bound} and the 
union bound, one can check that \\begin{equation} \\label{eqn: result of prop1}\n \\|{\\theta'}-\\theta^*\\|_\\infty < \\lambda\n\\end{equation} with probability $1-\\delta$. Therefore, the coordinates in $\\textrm{supp}(\\theta^*)^c$ will be thresholded out because of $\\|{\\theta'}-\\theta^*\\|_\\infty \\leq \\lambda$. Therefore, (ii) holds and for all $i \\in \\textrm{supp}(\\theta^*)^c$, $|\\hat{\\theta}_i - \\theta_i^*|=0$. \n\nBy definition, $\\hat{\\theta}=\\textsf{clip}_\\lambda ({\\theta'})$, we can say that $\\|\\hat{\\theta}-\\theta'\\|_\\infty \\leq \\lambda$. Plus, by Eq. (\\ref{eqn: result of prop1}), $\\|\\theta'-\\theta^* \\|_\\infty \\leq \\lambda$. By the triangle inequality, $\\|\\theta^*-\\hat{\\theta}\\|_\\infty \\leq 2 \\lambda$. Therefore, (i) holds. \n\n%Now for the coordinates in $\\textrm{supp}(\\hat{\\theta})$, they are not thresholded out so for $i \\in \\textrm{supp}(\\hat{\\theta})$, $\\hat{\\theta}_i = \\tilde{\\theta}_i$. Therefore, $|\\hat{\\theta}-\\theta_i^*|\\leq \\beta$. It remains to check the coordinates in $\\textrm{supp}(\\hat{\\theta})^c \\cap \\textrm{supp}(\\theta^*)$. Since $\\|\\tilde{\\theta}-\\theta^* \\|_\\infty < \\lambda$, for $j \\in \\textrm{supp}(\\hat{\\theta})^c \\cap \\textrm{supp}(\\theta^*)$, $|\\hat{\\theta}_j - \\theta_j^* | \\leq \\|\\hat{\\theta}_j - \\tilde{\\theta}_j\\|_\\infty + \\|\\tilde{\\theta}_j - \\theta_j^* \\| \\leq \\lambda+ \\beta$. Therefore, (i) holds. \n% summing up the above result makes\nLastly, (iii) can be argued as follows:\n%\\begin{align*}\n\\[\n \\|\\hat{\\theta}-\\theta^*\\|_1 = \\sum_{i \\in [d]} |\\hat{\\theta}_i - \\theta_i^*|\n \\leq \\sum_{i \\in \\textrm{supp}(\\theta^*)^c} 0 + \\sum_{i \\in \\textrm{supp}(\\theta^*)} 2\\lambda \\leq 2 s \\lambda.\n %\\qquad\n \\qedhere\n\\]\n%\\end{align*}\n\\end{proof}\n\\vspace{-8pt}\n\n% \\subsection{\\wpopart: improving the guarantees of \\popart by a two-stage procedure}\n% \\label{sec:wpopart}\n% % estimator\n\n\\textbf{\\ensuremath{\\textsc{Warm-PopArt}}: Improved guarantee by warmup.~}\n%\nOne drawback of the \\ensuremath{\\textsc{PopArt}} estimator is that its estimation error scales with $\\sqrt{R_0^2 + \\sigma^2 }$, which can be very large when $R_0$ is large.\nOne may attempt to use the fact that \\ensuremath{\\textsc{PopArt}} allows a pilot estimator $\\theta_0$ to address this issue since $R_0$ gets smaller as $\\th_0$ is closer to $\\th^*$.\nHowever, it is a priori unclear how to obtain a $\\theta_0$ close to $\\theta^*$ as $\\theta^*$ is the unknown parameter that we wanted to estimate in the first place. %needs to be estimated.\n\n%, is the algorithm that \nTo get around this ``chicken and egg'' problem, we propose to introduce a warmup stage, which we call \\ensuremath{\\textsc{Warm-PopArt}}(Algorithm \\ref{alg:warm-popart}). \n\\ensuremath{\\textsc{Warm-PopArt}} consists of two stages. \nFor the first warmup stage, the algorithm runs $\\ensuremath{\\textsc{PopArt}}$ with the zero vector as the pilot estimator and with the first half of the samples to obtain a coarse estimator denoted by $\\hat\\theta_0$ which guarantees that for large enough $n_0$, $\\|\\hat\\theta_0 - \\theta^*\\|_1 \\leq \\sigma$. In the second stage, using $\\hat\\theta_0$ as the pilot estimator, it runs $\\ensuremath{\\textsc{PopArt}}$ on the remaining half of the samples.\n\n% initial point\n%We summarize the details in Algorithm \\ref{alg:warm-popart}.\n%algorithm \n%, not $\\sigma$ as other confidence bound theorems do. 
Usually this is a minor issue compare to the order of $n, d$, or $s$, but one can avoid this problem if $R_0 \\approx \\sigma$, or $\\|\\theta^* - \\theta_0\\|_1 \\leq \\sigma$.\n%\\chicheng{}\n\n\n%This can be done by the proper choice of the $\\theta_0$ with spending only few samples on it. \n\n%Input: Samples $\\cova_1, \\cova_2, \\cdots, \\cova_{n_0} \\sim \\mu$ and its corresponding reward $\\resp_1, \\resp_2, \\cdots, \\resp_{n_0}$,\n\\begin{algorithm}[h]\n\\caption{\\ensuremath{\\textsc{Warm-PopArt}}}\n\\begin{algorithmic}[1]\n% \\STATE \\gray{\\textbf{Input:} \n% Samples $(\\cova_t, \\resp_t)_{t=1}^{n_0}$, where $(\\cova_t)_{t=1}^n$ are drawn iid from $\\mu$,\n% population covariance matrix $Q(\\mu) = \\EE_{A \\sim \\mu} [A A^\\top]$ (abbrev. $Q$),\n% number of samples $n_0$, failure rate $\\delta$, $R_{\\max}$, an upper bound of $\\max_{a \\in \\cA} |\\langle \\theta^* , a \\rangle| $.}\n\n\\STATE \\textbf{Input:} Samples $\\{(X_t, Y_t)\\}_{t=1}^{n_0}$, the population covariance matrix $Q\\in\\RR^{d\\times d}$, an upper bound \n$R_{\\max}$ of $\\max_{a \\in \\cA} |\\langle \\theta^* , a \\rangle| $, number of samples $n_0$, failure rate $\\delta$.\n\n%which is \n%reward upper bound\n%\\{r_i\\}_{i=1}^{n_0},\n%,\\{r_i\\}_{i=\\lfloor \\frac{n_0}{2} \\rfloor +1}^{n_0}\n\\STATE \\textbf{Output:} $\\hat{\\theta}$, an estimate of $\\theta^*$\n\\STATE Run $\\ensuremath{\\textsc{PopArt}}( \\{( X_i, Y_i )\\}_{i=1}^{\\lfloor n_0\/2 \\rfloor}, Q, \\undefined {0}, \\delta, R_{\\max})$ to obtain ${\\hat\\theta_0}$, a coarse estimate of $\\theta^*$ for the next step.\n\\label{step:coarse-estimation}\n\n\\STATE Run $\\ensuremath{\\textsc{PopArt}}( \\{( X_i, Y_i )\\}_{i= \\lfloor n_0\/2 \\rfloor +1}^{n_0},Q, \\hat\\theta_0, \\delta, \\sigma)$ to obtain $\\hat{\\theta}$, an estimate of $\\theta^*$.\n\\end{algorithmic}\n\\label{alg:warm-popart}\n\\end{algorithm}\n\nThe following corollary states the estimation error bound of the output estimator $\\hat{\\theta}$. Compared with \\ensuremath{\\textsc{PopArt}}'s $\\ell_1$ recovery guarantee, \\ensuremath{\\textsc{Warm-PopArt}}'s $\\ell_1$ recovery guarantee (Equation~\\eqref{eqn:warm-popart-l1}) has no dependence on $R_{\\max}$; its dependence on $R_{\\max}$ only appears in the lower bound requirement for $n_0$. \n\n%Note that there's no $R_{\\max}$ term in the regret bound now. \n\n%\\chicheng{Note: the current version still requires the algorithm to have the knowledge of $R_{\\max}$. To get rid of this requirement, one standard idea is to set $\\lambda_0$ to something like $\\sigma^2 (1\/n_0)^{\\frac13}$ (the exponent needs to be strictly smaller than $\\frac12$); so long as $\\lambda_0 \\geq 2\\sqrt{\\frac{2(R_{\\max}^2 + \\sigma^2) H^2(Q)}{n_0}\\log \\frac{2d}{\\delta}}$, and $s \\lambda_0 \\leq \\sigma^2$ (which can be simultaneously satisfied when $n_0$ is larger than some constant threshold), we can still guarantee that the first stage gives $\\hat{\\theta}$ such that $\\| \\hat{\\theta} - \\theta^* \\|_1 \\leq \\sigma$, ensuring the success of the second stage. But perhaps this is a bit too much - we should just assume $R_{\\max}$ is known? Or add an remark such that we know how to get rid of rmax. 
\n%}\n\n\\begin{corollary}\\label{cor:warm-popart}\nTake Assumption~\\ref{ass:popart} without the condition on $R_0$.\n% Suppose that \\wpopart receives inputs $\\cbr{\\cova_t, \\resp_t}_{t=1}^{n_0}$ drawn from $\\mu$, $Q(\\mu)$, failure rate $\\delta$, and $R_{\\max}$ such that \nAssume that $R_{\\max} \\geq \\max_{a \\in \\mathcal{A}} |\\langle a, \\theta^* \\rangle|$, and $n_0>\\frac{32s^2(R_{\\max}^2 + \\sigma^2)H^2 (Q(\\mu))}{\\sigma^2} \\log \\frac{2d}{\\delta}$.\nThen, \\ensuremath{\\textsc{Warm-PopArt}} has, with probability at least $1-2\\delta$,\n%all the following items hold \n%Let $H^2(Q):=\\max_{i \\in [d]} (Q^{-1})_{ii}$, $\\lambda_0 := 2\\sqrt{\\frac{2(R_{\\max}^2 + \\sigma^2) H^2(Q)}{n_0}\\log \\frac{2d}{\\delta}}$ and $\\lambda_1 := 4\\sigma\\sqrt{\\frac{H^2(Q)}{n_0}\\log \\frac{2d}{\\delta}}$. If $n_0>\\max(2\\frac{4(R^2 + \\sigma^2)H_*^2 (Q)}{\\sigma^2} \\log \\frac{2d}{\\delta}, 4\\log \\frac{2d}{\\delta}))$, \\kj{$R$ to $R_{\\max}$.} then all the following bound holds with probability at least $1-2\\delta$: \n\\begin{equation}\n \\|\\hat{\\theta}-\\theta^*\\|_1 \\leq 8 s \\sigma \\sqrt{\\frac{H^2(Q(\\mu))\\ln \\frac{2d}{\\delta}}{n_0}}.\n \\label{eqn:warm-popart-l1}\n\\end{equation}\n\\end{corollary}\n%\\kj{should we use $H^2(Q(\\pi))$?}\n%. For example, it could be the traditional\n% based exploration - in this case the first exploration phase length changes to $\\frac{8 s^2 \\sigma^2 \\log \\frac{2d}{\\delta}}{\\lambda_{\\min} (Q)^2}$ where \n\\begin{remarks}\nIn Algorithm~\\ref{alg:warm-popart}, we choose $\\ensuremath{\\textsc{PopArt}}$ as our coarse estimator, but we can freely change the coarse estimation step (step~\\ref{step:coarse-estimation}) to other principled estimation methods (such as Lasso) without affecting the main estimation error bound~\\eqref{eqn:warm-popart-l1}; the only change will be the lower bound requirement of $n_0$ to another problem-dependent constant.\n\\end{remarks}\n%On the other hand, one could also just remove the first warm-up phase ($n_0=0$) and let $\\theta_0=\\vec{0}$. This saves the exploration time especially when $R_{\\max} = {O}(\\sigma)$.\n\n\n%all the following items hold \n%knowledge of \n\\begin{remarks} \\ensuremath{\\textsc{Warm-PopArt}} requires the knowledge of $R_{\\max}$, an upper bound of $\\max_{a \\in \\cA} |\\langle \\theta^* , a \\rangle|$; this requirement can be relaxed by changing the last argument of the coarse estimation step (step~\\ref{step:coarse-estimation}) from $R_{\\max}$, to some function $f(n_0)$ such that $f(n_0) = \\omega(1)$ and $f(n_0) = o(\\sqrt{n_0})$ (say, $\\sigma n_0^{\\frac14}$);\nwith this change, a result analogous to Corollary~\\ref{cor:warm-popart} can be proved with a different lower bound requirement of $n_0$.\n\\end{remarks}\n%\\chicheng{Make a pass on remark 2}\n\n%, \n%for deciding the threshold $\\lambda_0$, and even though this is quite a standard in linear bandit studies,\n% \\subsection{A new optimal experimental design criterion for sparse linear estimation}\n% \\label{sec:opt-expt-design}\n\\textbf{A novel and efficient experimental design for sparse linear estimation.~}\n%\n%We can enjoy these results more \n%when we can control the distribution of the sample $\\mu$, and naturally, \n%the population covariance matrix $Q(\\mu)$. 
We can think about the experimental design for minimizing $H^2 (Q)$, or formally, \n%Now, define the quantities of the action set geometry as follows:\nIn the experimental design setting where the learner has freedom to design the underlying sampling distribution $\\mu$, the $\\ell_1$ error bound of \\ensuremath{\\textsc{PopArt}} and \\ensuremath{\\textsc{Warm-PopArt}} naturally motivates a design criterion.\nSpecifically, we can choose $\\mu$ that minimizes $H^2(Q(\\mu))$, which gives the lowest estimation error guarantee. \nWe denote the optimal value of $H^2(Q(\\mu))$ by\n\\begin{gather}\n H_*^2 := \\underset{\\mu \\in \\mathcal{P}(\\cA)}{\\textrm{min}}\\max_{i\\in[d]} (Q(\\mu)^{-1})_{ii} ~. \\label{def: H2}\n\\end{gather}\n\nThe minimization of $H^2(Q(\\mu))$ is a convex optimization problem, which admits efficient methods for finding the solution. \nIntuitively, $H_*^2$ captures the geometry of the action set $\\cA$.%: when $\\cA$ approximately lies in an low-dimensional subspace, $H_*^2$ will be large, as for all $\\mu$, $Q(\\mu)^{-1}$ will be large entrywise. \\kj{I feel it is rather misleading our selling point. We expect to have a small value of $H_*^2$ for the rademacher arm set, but that one does not lies in a low dimensional subspace!}\n%so we can approximate efficiently\n\n\n%This approach provides not only a control over $\\ell_\\infty$ norm and support, but also a tighter $\\ell_1$ bound in terms of the action set geometry.\n%perform when composed with \n\n%\\gray{How does the $\\ell_1$ recovery guarantee of \\popart with this experimental design, compare with those of prior works, specifically Lasso?}\n\nTo compare with previous studies that design a sampling distribution for Lasso, we first review the standard $\\ell_1$ error bound of Lasso.\n\\begin{theorem}\n\\label{thm:lasso}\n(\\citet[Theorem 6.1]{bv11}) With probability at least $1-2\\delta$, the $\\ell_1$-estimation error of the optimal Lasso solution $\\hat{\\theta}_{\\Lasso}$ \\cite[Eq. (2.2)]{bv11} with $\\lambda = \\sqrt{2\\log(2d\/\\delta)\/n}$ satisfies \n$$ \\| \\hat{\\theta}_{\\Lasso}-\\theta^*\\|_1 \\leq \\frac{s \\sigma}{\\phi_0^2 (\\hat{\\Sigma}, s)}\\sqrt{\\frac{2 \\log (2d\/\\delta) }{n}},$$\n\nwhere $\\phi_0 (\\hat{\\Sigma}, s)^2$ is the compatibility constant with respect to the empirical covariance matrix $\\hat{\\Sigma} = \\frac{1}{n} \\sum_{t=1}^n X_t X_t^\\top$ and the sparsity $s$ in Eq. \\eqref{def: comp const}.\n\\end{theorem}\n\nIdeally, for Lasso, experiment design which minimizes the compatibility constant will guarantee the best estimation error bound within a fixed number of samples $n$. 
However, naively, the computation of the compatibility constant is intractable since Eq.~\\eqref{def: comp const} is a combinatorial optimization problem which is usually difficult to compute.\nOne simple approach taken by~\\citet{hao2020high} is to use the following computationally tractable surrogate of $\\phi_0^2 (\\hat{\\Sigma}, s)$:\n\\begin{gather}\n \\mathcal{C}_{\\min} := \\underset{\\mu \\in \\mathcal{P}(\\cA)}{\\textrm{max}}\\lambda_{\\tmin} (Q(\\mu)) \\label{def: Cmin}\n\\end{gather}\nwhere $\\lambda_{\\tmin}(A)$ denotes the minimum eigenvalue of a matrix $A$.\nWith the choice of sampling distribution $\\mu = \\underset{\\mu \\in \\mathcal{P}(\\cA)}{\\textrm{argmax}}\\lambda_{\\tmin} (Q(\\mu))$, and $n \\geq \\tilde{\\Omega}(\\frac{s \\cdot \\mathrm{polylog}(d)}{{\\Cmin}^2})$, with high probability, $\\phi_0^2 (\\hat{\\Sigma},s) \\geq \\Cmin\/2$ holds~\\cite[][Theorem 1.8]{rudelson2012reconstruction},\n%\\cite{javanmard2014confidence}\nand one can replace $\\phi_0 (\\hat{\\Sigma}, s)$ to ${\\Cmin}\/2$ in Theorem \\ref{thm:lasso} to get the following corollary:\n\\begin{corollary}\n\\label{cor:lasso with Cmin} With probability at least $1-\\exp(-c n)-2\\delta$ for some universal constant $c$, the $\\ell_1$-estimation error of the optimal Lasso solution $\\hat{\\theta}_{\\Lasso}$ satisfies\n\\begin{equation}\\label{eqn:lasso with Cmin}\n \\| \\hat{\\theta}_{\\Lasso}-\\theta^*\\|_1 \\leq \\frac{2s \\sigma}{{\\Cmin}}\\sqrt{\\frac{2 \\log (2d\/\\delta) }{n}},\n\\end{equation} \n\\end{corollary}\n\n%%\\chicheng{Original result is from Rudelson and Zhou? I'll check.}\n\n%$\\mathcal{C}_{\\min}$ has several desirable properties - With high probability the inequality $\\frac{\\mathcal{C}_{\\min}}{2}<\\phi_0^2$ holds, and $\\mathcal{C}_{\\min}$ is concave so it is easy to approximate. \n%$\\mathcal{C}_{\\min}$ as a\n%, defined as follows\n\n%In terms of this perspective, \n%Compared with the guarantees obtained by experimental design for Lasso, our new estimator has a better estimation error bound compare to the previous approaches, as shown the proposition below whose proof can be found in Appendix \\ref{appendix: example of H2 and Cmin}. \\kj{to me, it is easier to read if we say ``\nThe following proposition shows that our estimator has a better error bound compared to the surrogate experimental design for Lasso of~\\citet{hao2020high}.\n\n%\n\\begin{proposition}\\label{prop:H2 vs Cmin}\nWe have $ H_*^2 \\leq \\mathcal{C}_{\\min}^{-1} \\leq d H_*^2$. Furthermore, there exist arm sets for which either of the inequalities is tight up to a constant factor.\n\\end{proposition}\n\nTherefore, our new estimator has $\\ell_1$ error guarantees at least a factor $\\mathcal{C}_{\\min}^{-1\/2}$ better\nthan that provided by~\\cite{hao2020high}, as follows: when we choose the $\\mu$ as the solution of the Eq. 
\\eqref{def: H2}, then\n$$ (\\text{RHS of \\eqref{eqn:warm-popart-l1}}) \\lesssim s \\sigma H_* \\sqrt{\\frac{\\ln(2d\/\\delta)}{n}} \\lesssim s \\sigma {\\Cmin}^{-1\/2} \\sqrt{\\frac{\\ln(2d\/\\delta)}{n}} \\lesssim s\\sigma {\\Cmin}^{-1} \\sqrt{\\frac{\\ln(2d\/\\delta)}{n}} \\lesssim (\\text{RHS of \\eqref{eqn:lasso with Cmin}})$$\n\nIn addition, we also prove that there exists a case where our estimator has an $d\/s$-order better error bound compared to the traditional lasso bound in Theorem~\\ref{thm:lasso}, although this is not in terms of the compatibility constant of the empirical covariance matrix $\\hat{\\Sigma}$.\n\n\\begin{proposition}\\label{prop:H2 vs compat} There exists an action set $\\mathcal{A}$ and an absolute constant $C_1>0$ such that $$H_* 16\\sqrt{2}\\frac{R_{\\max} (R_{\\max}^2 + \\sigma^2)^{3\/2} H_*^2 s^2}{\\sigma^4} \\log \\frac{2d}{\\delta}$, action set $\\cA \\subset [-1,+1]^d$, and exploration length $n_0 = 4(s^2 \\sigma^2 H_*^2 n^2 \\log \\frac{2d}{\\delta}R_{\\max}^{-2})^{\\frac{1}{3}}$, $\\lambda_1 = 4\\sigma \\sqrt{\\frac{H_*^2}{n_0}\\log \\frac{2d}{\\delta}}$, then with probability at least $1-2\\delta$,\n$\n\\Reg(n) \\leq 8R_{\\max}^{1\/3}(s^2 \\sigma^2 H_*^2 n^2 \\log \\frac{2d}{\\delta})^{\\frac{1}{3}}\n$.\n\\end{theorem}\n%the regret is bounded as follows \n\n\\begin{proof}\nFrom Corollary \\ref{cor:warm-popart}, $ \\|\\hat{\\theta} - \\theta^*\\|_1 \\leq 2s \\lambda_1$ with probability at least $1-2\\delta$. Therefore, with probability $1-2\\delta$, \n\\begin{align*}\n \\textrm{Reg}(n) &\\leq R_{\\max} n_0 + (n-n_0)\\|\\hat{\\theta} - \\theta^*\\|_1 \\leq R_{\\max} n_0 + 2sn\\lambda_1 = R_{\\max} n_0 + 8sn\\sigma \\sqrt{\\frac{H_*^2}{n_0}\\log \\frac{2d}{\\delta}}\n\\end{align*}\nand optimizing the right hand side with respect to $n_0$ leads to the desired upper bound. \n\\end{proof}\n\n%Overall, the regret upper bound is of the order ,\n\n\nCompared with~\\citet{hao2020high}'s regret bound $\\tilde{O}((R_{\\max} s^2 \\sigma^2 {\\Cmin}^{-2} n^2)^{1\/3})$\\footnote{This is implicit in~\\cite{hao2020high} -- they assume that $\\sigma=1$ and do not keep track of the dependence on $\\sigma$.}\n, Algorithm~\\ref{alg:etc-sparse}'s regret bound $\\tilde{O}((R_{\\max} s^2 \\sigma^2 H_*^2 n^2)^{1\/3})$\nis at most $\\tilde{O}((R_{\\max} s^2 \\sigma^2 {\\Cmin}^{-1} n^2)^{1\/3})$, which is at least a factor ${\\Cmin}^{\\frac13}$ smaller. As we will see in Section~\\ref{sec:lower-bound}, we show that the regret upper bound provided by Theorem~\\ref{thm:etc-sparse} is unimprovable in general, answering an open question of~\\cite{hao2020high}.\n\n\n%is a factor of $(H_* C_{\\min})^{\\frac23}$ smaller, which is $\\leq (\\sqrt{C_{\\min}})^{\\frac23} \\leq 1$. \n\n%Recall that $1 \\leq H_* \\leq \\frac{1}{\\sqrt{C_{\\min}}}$\n\n%$\\mathcal{C}_{\\min}^{-2\/3}$ to $H_*^{2\/3}$, at least of the order $\\mathcal{C}_{\\min}^{-1\/3}$. This gap of the instance-dependent constant order was one of the open problems for this sparse linear bandit in data-poor regime, and we optimized the regret upper bound. 
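Both design criteria, Eq.~\eqref{def: H2} and Eq.~\eqref{def: Cmin}, are small convex programs over distributions on $\cA$, so for a concrete finite action set one can compute $H_*^2$ and $\mathcal{C}_{\min}$ numerically and check the ordering of Proposition~\ref{prop:H2 vs Cmin}. The following CVXPY sketch does this for a hypothetical random sign action set; it is illustrative only and is not part of the proposed algorithms.
\begin{verbatim}
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(2)
A = rng.choice([-1.0, 1.0], size=(30, 6))     # hypothetical action set, rows in [-1, 1]^d
k, d = A.shape

# H_*^2 = min_mu max_i (Q(mu)^{-1})_{ii}; matrix_frac(e_i, Q) = e_i^T Q^{-1} e_i is convex.
mu = cp.Variable(k, nonneg=True)
Q = sum(mu[j] * np.outer(A[j], A[j]) for j in range(k))
h2 = cp.Problem(cp.Minimize(cp.maximum(*[cp.matrix_frac(np.eye(d)[:, i], Q)
                                         for i in range(d)])),
                [cp.sum(mu) == 1]).solve()

# C_min = max_mu lambda_min(Q(mu)) is a concave maximization over the same simplex.
nu = cp.Variable(k, nonneg=True)
Q2 = sum(nu[j] * np.outer(A[j], A[j]) for j in range(k))
cmin = cp.Problem(cp.Maximize(cp.lambda_min(Q2)), [cp.sum(nu) == 1]).solve()

print(f"H_*^2 = {h2:.3f}  <=  1 / C_min = {1.0 / cmin:.3f}")
\end{verbatim}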
\n%previous research for the sparse linear bandit problem\n% \\label{sec:phased-elim-wpopart}\n%\n\\textbf{Improved upper bound with minimum signal condition.~} Our second new algorithm, Algorithm~\\ref{alg:phase-elim}, similarly uses \\ensuremath{\\textsc{Warm-PopArt}} under an additional minimum signal condition.\n\n\\begin{assumption}[Minimum signal]There exists a known lower bound $m > 0$ such\nthat $\\min_{j\\in \\textrm{supp}(\\theta^*)} |\\theta_j^*| > m$.\n\\end{assumption}\n\nAt a high level, Algorithm~\\ref{alg:phase-elim} uses the first $n_2$ rounds for identifying the support of $\\theta^*$; the $\\ell_\\infty$ recovery guarantee of $\\ensuremath{\\textsc{Warm-PopArt}}$ makes it suitable for this task. Under the minimal signal condition and a large enough $n_2$, it is guaranteed that $\\hat{\\theta}_2$'s support equals exactly the support of $\\theta^*$. After identifying the support of $\\theta^*$, Algorithm~\\ref{alg:phase-elim} treats this as a $s$-dimensional linear bandit problem by discarding the remaining $d-s$ coordinates of the arm covariates, \nand perform phase elimination algorithm \\citep[Section 22.1]{lattimore18bandit} therein. The following theorem provides a regret upper bound of Algorithm \\ref{alg:phase-elim}. \n\n\\begin{algorithm}[h]\n\\caption{Restricted phase elimination with \\ensuremath{\\textsc{Warm-PopArt}}}\n\\begin{algorithmic}[1]\n\\STATE Input: time horizon $n$, finite action set $\\cA$, minimum signal $m$, failure rate $\\delta$, reward threshold parameter $R_{\\max}$, an upper bound of $\\max_{a \\in \\cA} |\\langle \\theta^* , a \\rangle|$\n\\STATE Solve the optimization problem in Eq. \\ref{def: H2} and denote the solutions as $Q$ and $\\mu_*$, respectively.\n\\STATE Let $n_2 = \\max(\\frac{256\\sigma^2 H_*^2}{m^2} \\log \\frac{2d}{\\delta} , \\frac{32s^2(R_{\\max}^2 + \\sigma^2)H_*^2}{\\sigma^2} \\log \\frac{2d}{\\delta})$\n\\FOR{$t=1,\\ldots,n_2$}\n \\STATE Independently pull the arm $A_t$ according to $\\mu_*$ and receives the reward $r_t$\n\\ENDFOR \n\\STATE $\\hat{\\theta}_2 = \\ensuremath{\\textsc{Warm-PopArt}}(\\{ A_t\\}_{t=1}^n, \\{ R_t\\}_{t=1}^n, Q, \\delta,R_{\\max})$\n\\STATE Identify the support $\\hS = \\mathrm{supp}(\\hat{\\theta}_2)$\n\\FOR{$t=n_2+1,\\ldots,n$}\n\\STATE Invoke phased elimination algorithm for linear bandits on $\\hS$\n\\ENDFOR\n\\end{algorithmic}\n\\label{alg:phase-elim}\n\\end{algorithm}\n\n\n\n\\begin{theorem} \\label{thm:with minimum signal}\nIf Algorithm \\ref{alg:phase-elim} has input time horizon $n>\\max(\\frac{2^8\\sigma^2 H_*^2}{m^2} , \\frac{2^5 s^2(R_{\\max}^2 + \\sigma^2)H_*^2}{\\sigma^2} )\\log \\frac{2d}{\\delta}$, action set $\\cA \\subset [-1,1]^d$, upper bound of the reward $R_{\\max}$, then with probability at least $1-2\\delta$, the following regret upper bound of the Algorithm \\ref{alg:phase-elim} holds: for universal constant $C>0$, \n$$ \\textrm{Reg} (n) \\leq \\max(\\frac{2^8\\sigma^2 H_*^2}{m^2} \\log \\frac{2d}{\\delta} , \\frac{2^5s^2 (R_{\\max}^2 + \\sigma^2)H_*^2}{\\sigma^2} \\log \\frac{2d}{\\delta}) + C\\sigma\\sqrt{sn \\log (|\\cA|n)}$$\n\n\\end{theorem}\n%\\chicheng{why do we need this?}\n\n%In the special case that $m >16\\sigma H_* (sn)^{-1\/4} \\log^{1\/2} \\frac{2d}{\\delta}$\nFor sufficiently large $n$, the second term dominates, and we obtain an $O(\\sqrt{sn})$ regret upper bound. Theorem~\\ref{thm:with minimum signal} provides two major improvements compared to~\\citet[][Algorithm 2]{hao2020high}. 
First,\nwhen $m$ is moderately small (so that the first subterm in the first term dominates), \nit shortens the length of the exploration phase $n_2$ by a factor of $s \\cdot \\frac{{\\Cmin}}{H_*^2}$. Second, compared with the regret bound \n$\\tilde{O}( \\sqrt{\\frac{9\\lambda_{\\max}(\\sum_{i=1}^{n_2}A_i A_i^\\top\/n_2)}{\\mathcal{C}_{\\min}}} \\sqrt{sn} )$ provided by~\\cite{hao2020high},\nour main regret term $\\tilde{O}(\\sqrt{sn})$ is more interpretable and can be much lower. \n\n%$, and , a factor of $\n%improves over the regret bound of \n%~\\cite{hao2020high}, which is random, highly probable to be greater than $1$, and could be extraordinarily problematic. \n%We could achieve these improvements thanks to the superiority of $\\textrm{PopArt}$. $\\textrm{PopArt}$ assures the confidence bound of each coordinate, so there's no need to add redundant analysis for calculating the confidence about the support of $\\hat{\\theta}$. \n% action set dependent constant $\\frac{9\\lambda_{max}(\\sum_{i=1}^{n_2}A_i A_i^\\top\/n_2)}{\\mathcal{C}_{\\min}}$ from\n\n\\vspace{-6pt}\n\\section{Matching lower bound}\n\\label{sec:lower-bound}\n\\vspace{-6pt}\nWe show the following theorem that establishes the optimality of Algorithm~\\ref{alg:etc-sparse}. This solves the open problem of \\citet[][Remark 4.5]{hao2020high} on the optimal order of regret in terms of sparsity and action set geometry in sparse linear bandits. \n\\begin{theorem}\\label{thm:lower}\nFor any algorithm, any $s, d, \\kappa$ that satisfies\n$d \\geq \\max (n^{1\/3} s^{4\/3} \\kappa^{-4\/3},(s+1)^2)$ and $n>8\\kappa s^2$, there exists a linear bandit environment an action set $\\cA$ and a $s$-sparse $\\theta \\in \\RR^d$, such that $\\mathcal{C}_{\\min}^{-1} \\leq \\kappa^{-2}$, $R_{\\max} \\leq 2$, $\\sigma = 1$, and \n%\\max\\del{ {H_*^2(\\cA)}, \\mathcal{C}_{\\min}(\\cA)^{-1} } \n\\[\n\\textrm{Reg}_n \\geq \\Omega( \\kappa^{-2\/3} s^{2\/3} n^{2\/3})~.\n\\]\n\\end{theorem}\n%\\chicheng{Additionally discuss that this implies Theorem~\\ref{thm:etc-sparse} has a matching lower bound.} \n\n%\\begin{theorem}\n%\\chicheng{Add the requirement for $n, d, s, H_*$}\n%for any policy $\\pi$, \n%\\chicheng{(optional) for any $R_{\\max}$ - it is also OK to stick to current version with $R_{\\max} = O(1)$}\n%there exists an action set $\\mathcal{A}$ with $H_*^2 > 0$ and $s$-sparse parameter $\\th \\in \\mathbb{R}^d$ such that $$ \\textrm{Reg}_n \\geq \\Omega( \\red{R_{\\max}^{1\/3}} H_*^{2\/3} s^{2\/3} n^{2\/3})$$\n%\\end{theorem}\nWe give an overview of our lower bound proof techniques, and defer the details to Appendix \\ref{sec:proof-lb}.\n% and technical lemmas are in\n%Appendix: lower bound\n\n%One of the recent technique for proving the lower bound is\n\\noindent\\textbf{Change of measure technique.~}\nGenerally, researchers prove the lower bound by comparing two instances based on the information theory inequalities, such as Pinsker's inequality, or Bregtanolle-Huber inequality. In this proof, we also use two instances $\\theta$ and $\\theta'$, but we use the change of measure technique, to help lower bound the probability of events more freely. 
Specifically, for any event $A$,\n\\begin{align}\\label{eqn:change-of-measure}\n \\PP_\\th (A) \n = \\EE_{\\th} [\\one_A]\n = \n \\EE_{\\th'}\\sbr{ \\one_A \\prod_{t=1}^n\\frac{p_\\th(r_t |a_t)}{p_{\\theta'}(r_t |a_t)} } \n \\gtrsim \\EE_{\\theta'} \\sbr{ \\one_A \\exp\\del{-\\sum_{t=1}^n \\langle A_t , \\theta-\\theta' \\rangle^2} }~.\n\\end{align}\n\n\\noindent\\textbf{Symmetrization.~}\nWe utilize the algorithmic symmetrization technique of~\\citet{simchowitz2017simulator, bubeck11pure-tcs}, which makes it sufficient to focus on proving lower bounds against symmetric algorithms.\n\n\\begin{definition}[Symmetric Algorithm]\nAn algorithm $\\textsf{Alg}$ is \\emph{symmetric} if for any permutation $\\pi \\in \\textit{Sym}(d)$, $\\theta \\in \\mathbb{R}^{d}$, $\\{a_t\\}_{t=1}^n \\in \\mathcal{A}^n$,\n$$ \\PP_{\\theta, \\textsf{Alg}} (A_1 = a_1, \\cdots , A_n = a_n) = \\PP_{{\\pi} (\\theta) , \\textsf{Alg}}(A_1 = \\pi(a_1 ), \\cdots , A_n = \\pi(a_n ))$$\nwhere for a vector $v$, $\\pi(v) \\in \\RR^d$ denotes its permuted version that moves $v_i$ to the $\\pi(i)$-th position.\n\\end{definition}\nThis approach helps us exploit the symmetry of $\\theta'$ to lower bound the right hand side of~\\eqref{eqn:change-of-measure}; below, $\\Pi := \\cbr{\\pi: \\pi(\\theta') = \\theta'}$ is the set of permutations that keep $\\theta'$ invariant, and $A$ is an event invariant under $\\Pi$:\n\\begin{align*}\n \\text{~\\eqref{eqn:change-of-measure}}\n \\geq\n \\frac{1}{|\\Pi|} \\sum_{\\pi \\in \\Pi} \n \\EE_{ \\theta'} \\sbr{ \\one_A \\exp(-\\sum_{t=1}^n \\langle \\pi^{-1}(A_t) , \\theta-\\theta' \\rangle^2) } \\geq \n \\EE_{ \\theta'} \\sbr{ \\one_A \\exp\\del{ -\\sum_{t=1}^n \\frac{1}{|\\Pi|} \\sum_{\\pi \\in \\Pi} \\langle \\pi^{-1}(A_t) , \\theta-\\theta' \\rangle^2} }\n\\end{align*}\nwhich helps us use combinatorial tools over the actions for the lower bound proof. 
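As a sanity check on the exact change-of-measure identity above (the equalities in~\\eqref{eqn:change-of-measure}, before the $\\gtrsim$ step), the following minimal Python sketch compares a direct Monte Carlo estimate of $\\PP_\\theta(A)$ with the importance-weighted estimate computed under $\\theta'$, for unit-variance Gaussian rewards and a single repeatedly pulled action. The event $A$ and all numerical values are illustrative assumptions, not part of our analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 20, 200_000
a = 1.0                       # a single fixed action, pulled every round (1-d toy instance)
theta, theta_alt = 0.3, 0.0   # the two instances being compared

def event(rewards):
    # Event A: the empirical mean reward after n rounds exceeds 0.15.
    return rewards.mean(axis=1) > 0.15

# Direct estimate of P_theta(A).
r_theta = rng.normal(theta * a, 1.0, size=(trials, n))
p_direct = event(r_theta).mean()

# Change of measure: simulate under theta', reweight by the likelihood ratio
# prod_t p_theta(r_t | a_t) / p_theta'(r_t | a_t) for unit-variance Gaussian rewards.
r_alt = rng.normal(theta_alt * a, 1.0, size=(trials, n))
log_ratio = 0.5 * ((r_alt - theta_alt * a) ** 2 - (r_alt - theta * a) ** 2).sum(axis=1)
p_reweighted = (event(r_alt) * np.exp(log_ratio)).mean()

print(f"direct: {p_direct:.3f}, reweighted: {p_reweighted:.3f}")  # agree up to Monte Carlo error
```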
Keen readers will notice that, for a symmetric algorithm $\\textsf{Alg}$ and a permutation $\\pi$ which satisfies $\\pi(\\theta)=\\theta$, we have $\\PP_{\\theta, \\textsf{Alg}} (\\bA) = \\PP_{\\theta, \\textsf{Alg}} (\\pi (\\bA))$; it is this invariance that justifies averaging over the permutations in $\\Pi$ in the symmetrization display above.\n\n\\vspace{-6pt}\n\\section{Experimental results}\n\\label{sec:expr}\n\\vspace{-6pt}\n\nWe evaluate the empirical performance of \\ensuremath{\\textsc{PopArt}} and our proposed experimental design, along with their impact on sparse linear bandits. Our code is available at \\url{https:\/\/github.com\/jajajang\/sparse}. \n\n\\begin{figure}[h] \n \\centering\n \\begin{tabular}{cc}\n \\toprule\n Case 1 & Case 2 \\\\\n \\midrule\n \\begin{tabular}{l}\n \\includegraphics[width=0.4\\linewidth]{figures\/l1reg_threefigures_largerfont.png}\n \\end{tabular}&\n \\begin{tabular}{l}\n \\includegraphics[width=0.4\\linewidth]{figures\/d30_l1reg_largerfont_three.png}\n \\end{tabular}\n \\\\\n \\begin{tabular}{l}\n \\includegraphics[width=0.4\\linewidth]{figures\/bandit_t400000_largerfont.png}\\end{tabular}&\\begin{tabular}{l} \\includegraphics[width=0.4\\linewidth]{figures\/d30_bandit_largerfont.png}\n \\end{tabular}\n \\\\\\bottomrule\n \\end{tabular}\n \\caption{Experiment results on $\\ell_1$ estimation error and cumulative regret.}\n \\label{fig:table_of_figures}\n\\end{figure}\n\nFor sparse linear regression and experimental design, we compare our algorithm \\ensuremath{\\textsc{PopArt}}, with $\\mu$ being the solution of~\\eqref{def: H2}, against two baselines.\nThe first baseline, denoted by $C_{\\min}$-Lasso, is the method proposed by~\\citet{hao2020high} that uses Lasso with the sampling distribution $\\mu$ defined by~\\eqref{def: Cmin}.\nThe second baseline, $H^2$-Lasso, uses Lasso with the sampling distribution $\\mu$ defined by~\\eqref{def: H2}; it is meant to test whether Lasso performs better with our experimental design, and to compare \\ensuremath{\\textsc{PopArt}} with Lasso as an estimator, since the two are given the same data. 
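For concreteness, the following sketch illustrates what such a Lasso baseline looks like: actions are drawn i.i.d. from a sampling distribution over a finite action set, rewards follow the sparse linear model, and Lasso is fit to the exploration data. The uniform design, the action set, and the regularization level below are placeholders (the paper instead uses the optimized designs of Eq.~\\eqref{def: Cmin} and Eq.~\\eqref{def: H2} and theoretically prescribed parameters); scikit-learn is assumed.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
d, s, sigma, n = 30, 2, 0.1, 2000

# Illustrative finite action set and sampling design mu (uniform here as a placeholder).
actions = rng.standard_normal((3 * d, d))
actions /= np.linalg.norm(actions, axis=1, keepdims=True)
mu = np.full(len(actions), 1.0 / len(actions))

# Ground-truth s-sparse parameter.
theta_star = np.zeros(d)
theta_star[rng.choice(d, size=s, replace=False)] = 1.0

# Exploration data: actions drawn i.i.d. from mu, noisy linear rewards.
idx = rng.choice(len(actions), size=n, p=mu)
X = actions[idx]
y = X @ theta_star + sigma * rng.standard_normal(n)

# Lasso estimate; alpha is a placeholder, not the theoretically prescribed value.
theta_hat = Lasso(alpha=0.01, fit_intercept=False).fit(X, y).coef_
print("l1 estimation error:", np.abs(theta_hat - theta_star).sum())
```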
\nOf course, this experimental design is favored towards \\ensuremath{\\textsc{PopArt}}, as we have optimized the design for it; our intention is to observe whether there exists a case where \\ensuremath{\\textsc{PopArt}} works better than Lasso.\n\nFor sparse linear bandits, we run a variant of our Algorithm~\\ref{alg:etc-sparse} that uses \\ensuremath{\\textsc{Warm-PopArt}} in place of \\ensuremath{\\textsc{PopArt}} for simplicity.\nAs a baseline, we use ESTC~\\cite{hao2020high}.\nFor both methods, we use the exploration length prescribed by theory.\nWe consider two cases:\n\\begin{itemize}\n \\item \\textbf{Case 1: Hard instance where $H_*^2 \\ll \\mathcal{C}_{\\min}^{-1}$.~} We use the action set constructed in Appendix \\ref{example:worst case of Cmin and H2}, where $H_*^2$ and $\\mathcal{C}_{\\min}$ show a gap of $\\Theta(d)$. We choose $d=10$, $s=2$, $\\sigma=0.1$.\n \\item \\textbf{Case 2: General unit vectors.~} In this case, we choose $d=30$, $s=2$, $\\sigma=0.1$, and the action set $\\mathcal{A}$ consists of $|\\mathcal{A}|=3d=90$ uniformly random vectors on the unit sphere. \n\\end{itemize}\n\nWe run each method 30 times and report the average and standard deviation of the $\\ell_1$ estimation error and the cumulative regret in Figure~\\ref{fig:table_of_figures}.\n\n\\noindent\\textbf{Observation.} As expected from the theoretical analysis, our estimator and bandit algorithm outperform the baselines. \nIn terms of the $\\ell_1$ error, for both cases, we see that \\ensuremath{\\textsc{PopArt}} converges much faster than ${\\Cmin}$-Lasso for large enough $n$.\nInterestingly, $H^2$-Lasso also improves by just using the design computed for \\ensuremath{\\textsc{PopArt}} in Case 1.\nAt the same time, $H^2$-Lasso is inferior to \\ensuremath{\\textsc{PopArt}} even though they are given the same data points.\nWhile the comparison is not entirely fair, since the design was optimized for \\ensuremath{\\textsc{PopArt}} and \\ensuremath{\\textsc{PopArt}} has the benefit of using the population covariance, it is still interesting to observe a significant gap between \\ensuremath{\\textsc{PopArt}} and Lasso.\nFor the sparse linear bandit experiments, while ESTC requires an exploration phase of length close to the total time horizon, ours requires a significantly shorter exploration phase in both cases and thus suffers much lower regret.\n\n\\vspace{-8pt}\n\\section{Conclusion}\n\\label{sec:conclusion}\n\\vspace{-8pt}\n\nWe have proposed a novel estimator \\ensuremath{\\textsc{PopArt}} and experimental design for high-dimensional linear regression. 
\n\\ensuremath{\\textsc{PopArt}} has not only enabled accurate estimation with computational efficiency but also led to improved sparse linear bandit algorithms.\nFurthermore, we have closed the gap between the lower and upper regret bounds on an important family of instances in the data-poor regime.\n\nOur work opens up numerous future directions.\nFor \\ensuremath{\\textsc{PopArt}}, we speculate that $(Q(\\mu)^{-1})_{ii}$ is the statistical limit for testing whether $\\theta^*_i = 0$ or not; it would be a valuable investigation to prove or disprove this.\nWe believe this will also help investigate whether the dependence on $H_*^2$ in our regret upper bound is unimprovable (note that our matching lower bound is only for a particular family of instances).\nFurthermore, it would be interesting to investigate whether we can use \\ensuremath{\\textsc{PopArt}} without relying on the population covariance; e.g., use an estimated covariance from an extra set of unlabeled data or find ways to use the empirical covariance directly.\nFor sparse linear bandits, it would be interesting to develop an algorithm that achieves the data-poor regime optimal regret and the data-rich regime optimal regret $\\sqrt{sdn}$ simultaneously.\nFinally, it would be interesting to extend our results to changing arm sets, which pose a great challenge in planning.\n\n\\begin{ack}\nWe thank Ning Hao for helpful discussions on theoretical guarantees of Lasso.\nKwang-Sung Jun is supported by Data Science Academy and Research Innovation \\& Impact at University of Arizona. \n\\end{ack}\n\n\\putbib[library-shared]\n\\end{bibunit}\n```\n\n### Checklist\n\nThe checklist follows the references. 
Please read the checklist guidelines carefully for information on how to answer these questions. For each question, change the default to , , or . You are strongly encouraged to include a **justification to your answer**, either by referencing the appropriate section of your paper or providing a brief inline description. For example:\n\n- Did you include the license to the code and datasets?\n\n- Did you include the license to the code and datasets?\n\n- Did you include the license to the code and datasets?\n\nPlease do not modify the questions and only use the provided macros for your answers. Note that the Checklist section does not count towards the page limit. In your paper, please delete this instructions block and only keep the Checklist section heading above along with the questions\/answers below.\n\n1. For all authors...\n\n 1. Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?\n\n 2. Did you describe the limitations of your work?\n\n 3. Did you discuss any potential negative societal impacts of your work?\n\n 4. Have you read the ethics review guidelines and ensured that your paper conforms to them?\n\n2. If you are including theoretical results...\n\n 1. Did you state the full set of assumptions of all theoretical results?\n\n 2. Did you include complete proofs of all theoretical results?\n\n3. If you ran experiments...\n\n 1. Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?\n\n 2. Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?\n\n 3. Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)?\n\n 4. Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?\n\n4. If you are using existing assets (e.g., code, data, models) or curating\/releasing new assets...\n\n 1. If your work uses existing assets, did you cite the creators?\n\n 2. Did you mention the license of the assets?\n\n 3. Did you include any new assets either in the supplemental material or as a URL?\n\n 4. Did you discuss whether and how consent was obtained from people whose data you're using\/curating?\n\n 5. Did you discuss whether the data you are using\/curating contains personally identifiable information or offensive content?\n\n5. If you used crowdsourcing or conducted research with human subjects...\n\n 1. Did you include the full text of instructions given to participants and screenshots, if applicable?\n\n 2. Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable?\n\n 3. Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation?\n\n# Appendix","meta":{"dup_signals":{"dup_doc_count":12,"dup_dump_count":2,"dup_details":{"curated_sources":6,"unknown":6}},"filename":"out\/2210.15345_extract_main.tex.md"},"subset":"arxiv"} +{"text":"abstract: Ties between individuals on a social network can represent different dimensions of interactions, and the spreading of information and innovations on these networks could potentially be driven by some dimensions more than by others. 
In this paper we investigate this issue by studying the diffusion of microfinance within rural India villages and accounting for the whole multilayer structure of the underlying social networks. We define a new measure of node centrality, diffusion versatility, and show that this is a better predictor of microfinance participation rate than previously introduced measures defined on aggregated single-layer social networks. Moreover, we untangle the role played by each social dimension and find that the most prominent role is played by the nodes that are central on layers concerned with trust, shedding new light on the key triggers of the diffusion of microfinance.\nauthor: Elisa Omodei and Alex Arenas \nDepartment of Mathematics and Computer Science, Rovira i Virgili University\ntitle: Untangling the role of diverse social dimensions in the diffusion of microfinance\n\n# Introduction\n\nUnderstanding the mechanisms driving the diffusion of information, behaviours, and innovations is a question of great interest for the social and economic sciences\u00a0.\n\nIn his seminal book, Rogers identified four key elements for the diffusion of an innovation: the characteristics of the innovation itself, the communication channels, time, and the social systems within which the diffusion occurs\u00a0. The role played by the social structure of the system has since then been widely investigated using the mathematical formalism of networks\u00a0, and a fundamental question has been the identification of the most influential individuals therein\u00a0. This is especially important in the context of *network interventions*, which is concerned with understanding how social networks influence behaviours and their diffusion\u00a0. In particular, induction interventions are designed to stimulate peer-to-peer interaction to trigger cascades in information or behavioural diffusion. Studies have shown that their success is critically dependent on the choice of influencers\u00a0 but also on their position in the network\u00a0.\n\nIn this paper, by taking advantage of the framework of multilayer networks, we investigate how the choice of opinion leaders can be improved in the context of the diffusion of microfinance in rural villages.\n\nBuilding on the seminal study of Banerjee *et al.*\u00a0, we rely on a unique dataset on social network structure and participation in microfinance of 43 villages in Karnataka, a state of southern India\u00a0. Between 2007 and 2011, a microfinance institution, Bharatha Swamukti Samsthe (BSS), entered these villages, which previously had almost no exposure to any microfinance institution or other types of formal credit. Before BSS entered the villages, Banerjee and collaborators administered detailed surveys to households, covering a wide range of interactions, in order to reconstruct the structure of the social network. When entering a village, BSS selected a number of pre-defined individuals that they expected to be well connected within the village (teachers, shopkeepers, leaders of self-help groups, etc.), and held a private meeting with them to introduce the microfinance programme. These individuals, hereafter simply called leaders, then played a fundamental role in spreading the information about microcredit opportunities. Banerjee and collaborators investigate the correlation between the village level of participation in microfinance and the average centrality of its leaders in the social network. 
Their goal is to find the centrality measure that best predicts participation, so that in future interventions the most central individuals in the network could be selected as leaders to potentially maximise participation. To the best of our knowledge, no study other than\u00a0 exists on applying the ideas of network interventions in the context of microfinance, the choice of the opinion leaders being left to credit institution criteria such as those just mentioned.\n\nBanerjee and collaborators define, for each village, a social network of households as an undirected unweighted network linking two households if any of their members are in at least one of the relations covered by the survey. They then introduce a new measure, called *diffusion centrality*, to evaluate the importance of households within the network, with the ultimate goal of predicting the rate of village participation in microfinance on the basis of the centrality of the households that were firstly informed about it. Given the network adjacency matrix $\\mathbf{A}$, a passing probability $q$ and $T$ iterations, the diffusion centrality of node $i$ is the $i^{\\text{th}}$ entry of the vector $$DC(\\mathbf{A};q,T) := \\Big[ \\sum_{t=1}^{T} (q\\mathbf{A})^t \\Big] \\cdot \\vec{\\mathbf{1}}$$ where $q$ is set as the inverse of the first eigenvalue of the adjacency matrix, and $T$ to the number of trimesters during which the village was exposed to BSS (6.6 on average). Essentially, diffusion centrality measures how effective a household would be as injection point of a new piece of information. By means of multivariate linear regression (including 5 village-level controls, i.e. number of households, self-help group participation rate, savings participation rate, caste composition, and fraction of village households designated as leaders), they show that the average diffusion centrality of the pre-selected leaders outperforms other existing measures of centrality in predicting the village eventual rate of participation in microfinance.\n\nThe administered surveys, used to reconstruct the social network, cover 8 different dimensions: names of those whose homes the respondent visits or receives visits by, kins in the village, nonrelatives with whom the respondent socializes, those from whom the respondent receives medical help, those from which and to whom the respondent would borrow or lend money, those from which and to whom the respondent would borrow or lend material goods (such as kerosene or rice), those from or to whom the respondent gets or gives advice, and those with whom the respondent goes to pray (at a temple, church, or mosque). In this paper, we show that taking into account the multilayer structure emerging from the different dimensions covered by the surveys leads to an improved prediction of microfinance participation. Moreover we investigate the relative role played by the different kinds of tie. These results can be used in future network interventions in the context of microfinance, and beyond, to select opinion leaders in function of their position in the multilayer network, so to maximise participation in the programme. The study is motivated by the recent growing literature on multiplex networks showing that taking into account the multilayer structure of social networks \u2013 which consist of different kinds of ties, from kinship, to friendship and professional relations\u00a0 \u2013 can shed new light into its topological and dynamical properties\u00a0. 
Therefore in this paper we reconsider the question of how innovations diffuse by asking: do all kinds of tie play the same role or are some dimensions more influential than others in fostering the adoption of an innovation?\n\n# Materials and Methods\n\n## Data\n\nIn each village, about half of the households completed surveys in which each member was asked to list the names of people in the village with whom they had a certain relationship. Households were selected through random sampling and stratification by religion and geographic sub-regions. For further information about data collection we refer the reader to the original paper\u00a0, and the publicly available dataset\u00a0. Individuals were asked the following questions:\n\n1. Name the 4 non-relatives whom you speak to the most.\n\n2. In your free time, whose house do you visit?\n\n3. Who visits your house in his or her free time?\n\n4. If you needed to borrow kerosene or rice, to whom would you go?\n\n5. Who would come to you if he\/she needed to borrow kerosene or rice?\n\n6. If you suddenly needed to borrow Rs. 50 for a day, whom would you ask?\n\n7. Who do you trust enough that if he\/she needed to borrow Rs. 50 for a day you would lend it to him\/her?\n\n8. Who comes to you for advice?\n\n9. If you had to make a difficult personal decision, whom would you ask for advice?\n\n10. If you had a medical emergency and were alone at home whom would you ask for help in getting to a hospital?\n\n11. Name any close relatives, aside those in this household, who also live in this village.\n\n12. Do you visit temple\/mosque\/church? Do you go with anyone else? What are the names of these people?\n\nWe observe that some pairs of questions are symmetric, as for instance \"In your free time, whose house do you visit?\" and \"Who visits your house in his or her free time?\". The two questions, jointly considered, allow to reconstruct a network describing who visits whom within each village. The same stands for questions 4-5, 6-7 and 8-9, which allow to reconstruct, respectively, the network of potential material good loans, of potential money loans, and of advice relationships. Therefore, from the 12 questions we identify 8 different dimensions: nonrelative socialisation (1), house visits (2-3), material good potential loans (4-5), money potential loans (6-7), advice exchange (8-9), help in a medical emergency (10), kinship (11), and praying company (12).\n\n## Methods\n\nThe social network defined by Banerjee and collaborators is the product of an aggregation over different types of social ties, from kinship to medical help. It was recently shown that accounting for the whole multilayer structure of networks that are intrinsically composed of different kinds of relations has important consequences in the definition of the most central nodes, and allows to identify the more versatile ones\u00a0. We call this extended notion of centrality *versatility*. Here, we are interested in understanding if measuring leaders' versatility in a multilayer network that accounts for all dimensions separately can improve the prediction of microfinance participation. To this end, for each village we build a multilayer network composed of $N$ nodes, corresponding to the number of households in the village, and $L=8$ layers, each encoding one of the dimensions defined above. Moreover, each node on a given layer is connected to its replica on all the other layers. Figure\u00a0 shows the visualisation of the multilayer social network for one of the villages. 
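Computationally, such a multilayer network can be represented as an $NL \\times NL$ supra-adjacency matrix, with one $N \\times N$ block per layer on the diagonal and identity blocks coupling each household to its replicas on the other layers. The sketch below illustrates this construction under hypothetical per-dimension edge lists; it is not the code used in this study.

```python
import numpy as np

def supra_adjacency(edge_lists, n_households):
    """Build the (N*L) x (N*L) supra-adjacency matrix of the multilayer network.

    edge_lists: one list of (i, j) household pairs per social dimension (layer).
    Replicas of the same household are linked across every pair of layers.
    """
    L, N = len(edge_lists), n_households
    A = np.zeros((N * L, N * L))
    for k, edges in enumerate(edge_lists):              # intra-layer blocks
        for i, j in edges:
            A[k * N + i, k * N + j] = 1.0
            A[k * N + j, k * N + i] = 1.0
    for h in range(L):                                  # inter-layer replica coupling
        for k in range(L):
            if h != k:
                A[h * N:(h + 1) * N, k * N:(k + 1) * N] += np.eye(N)
    return A

# Toy usage: 2 layers, 4 households, placeholder edges.
A = supra_adjacency([[(0, 1), (1, 2)], [(0, 3)]], n_households=4)
print(A.shape)   # (8, 8)
```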
Following the mathematical framework introduced in\u00a0, we describe this network by means of the rank-4 tensor $A^{\\alpha\\tilde{\\gamma}}_{\\beta\\tilde{\\delta}}$. This was shown to be a natural generalisation of the adjacency matrix, and allows for a simple mathematical definition of multilayer networks\n\n, as we will now describe.\n\nLet us first consider a standard network, composed of $N$ nodes and of only one single type of edge. Such graph can be represented by means of the adjacency matrix $$\\mathbf{W}=\\sum_{i,j=1}^{N}w_{ij}\\mathbf{E}_{ij}=\\sum_{i,j=1}^{N}w_{ij}\\mathbf{e}_{i}\\otimes \\mathbf{e}_{j}^{\\dag}\\,, \\quad \\mathbf{A}\\in\\mathbb{R}^{N}\\otimes\\mathbb{R}^{N}=\\mathbb{R}^{N\\times N}\\,,$$ where $w_{ij}$ indicates the intensity of the relationship between node $i$ and node $j$, $\\mathbf{e}_{i}$ is the canonical vector in the vector space $\\mathbb{R}^{N}$, that is the $i^{\\text{th}}$ component of $\\mathbf{e}_{i}$ is 1, and all of its other components are $0$, and $\\dag$ is the transposition operator, which transforms the column vector $\\mathbf{e}_{j}$ into a row vector. $\\mathbf{E}_{ij}=\\mathbf{e}_{i}\\otimes \\mathbf{e}_{j}^{\\dag}$ is the $2^{\\text{nd}}$-order (i.e. rank-2) canonical tensor defined as the tensor product $\\otimes$ of the two canonical vectors.\n\nLet us now introduce the language of tensors, that we need to generalise the notion of adjacency matrix to the more general notion of adjacency tensor needed to describe multilayer networks. We will use the covariant notation, in which a row vector $\\mathbf{v}\\in\\mathbb{R}^{N}$ is given by a covariant vector $v_{\\alpha}$ (where $\\alpha=1,2,\\ldots,N$), and the corresponding column vector $\\mathbf{v}^{\\dag}$ is given by the contravariant vector $v^{\\alpha}$. Moreover, we will use Latin letters to denote the $i^{\\text{th}}$ vector or the $(ij)^{\\text{th}}$ tensor, and Greek letters to indicate the components of a vector or a tensor. Using this notation, $e_{\\alpha}(i)$ is the $\\alpha^{\\text{th}}$ component of the $i^{\\text{th}}$ covariant canonical vector $\\mathbf{e}_{i}$ in $\\mathbb{R}^{N}$, and $e^{\\alpha}(j)$ is the $\\alpha^{\\text{th}}$ component of the $j^{\\text{th}}$ contravariant canonical vector in $\\mathbb{R}^{N}$. The adjacency matrix $\\mathbf{W}$ can now be represented as rank-2 adjacency tensor $W^{\\alpha}_{\\beta}$ (1-covariant and 1-contravariant) as a linear combination of tensors in the canonical basis $$W^{\\alpha}_{\\beta}=\\sum_{i,j=1}^{N}w_{ij}e^{\\alpha}(i)e_{\\beta}(j)=\\sum_{i,j=1}^{N}w_{ij}E^{\\alpha}_{\\beta}(ij)$$ where $E^{\\alpha}_{\\beta}(ij)\\in\\mathbb{R}^{N\\times N}$ indicates the tensor in the canonical basis corresponding to the tensor product of the canonical vectors assigned to nodes $i$ and $j$, i.e. it is $\\mathbf{E}_{ij}$.\n\nIn a multilayer network, each type of relation between nodes is embedded in a different layer $\\tilde{k}$ (where $\\tilde{k}=1,2,\\ldots,L$ and we use the tilda symbol to denote indices that correspond to layers). For each of the layers, we construct the *intra-layer* adjacency tensor $W^{\\alpha}_{\\beta}(\\tilde{k})$ encoding information about relations between nodes within the same layer $\\tilde{k}$. Moreover, to encode information about connections between nodes in different layers, we construct the *inter-layer* adjacency tensors $C^{\\alpha}_{\\beta}(\\tilde{h}\\tilde{k})$. 
Note that, when $\\tilde{h} = \\tilde{k}$, we retrieve the intra-layer adjacency tensors $C^{\\alpha}_{\\beta}(\\tilde{k}\\tilde{k})=W^{\\alpha}_{\\beta}(\\tilde{k})$. Following the same approach as above, we define the covariant and contravariant vectors $e_{\\tilde{\\delta}}(\\tilde{k})$ and $e^{\\tilde{\\gamma}}(\\tilde{h})$ (where $\\tilde{\\delta}$, $\\tilde{\\gamma}$, $\\tilde{k}$,$\\tilde{h}$ all range in $(1,2,\\ldots,L)$) of the canonical basis in the space $\\mathbb{R}^{L}$. From these, we construct the $2^{\\text{nd}}$-order tensors $E^{\\tilde{\\gamma}}_{\\tilde{\\delta}}(\\tilde{h}\\tilde{k})=e^{\\tilde{\\gamma}}(\\tilde{h})e_{\\tilde{\\delta}}(\\tilde{k})$ that represent the canonical basis of the space $\\mathbb{R}^{L\\times L}$. Finally, we can now write the multilayer adjacency tensor as the tensor product between the adjacency tensors $C^{\\alpha}_{\\beta}(\\tilde{h}\\tilde{k})$ and the canonical tensors $E^{\\tilde{\\gamma}}_{\\tilde{\\delta}}(\\tilde{h}\\tilde{k})$: $$\\begin{aligned}\n\\begin{split}\n A^{\\alpha\\tilde{\\gamma}}_{\\beta\\tilde{\\delta}} &= \\sum_{\\tilde{h},\\tilde{k}=1}^{L}C^{\\alpha}_{\\beta}(\\tilde{h}\\tilde{k})E^{\\tilde{\\gamma}}_{\\tilde{\\delta}}(\\tilde{h}\\tilde{k})\\\\\n &= \\sum_{\\tilde{h},\\tilde{k}=1}^{L} \\Bigg[ \\sum_{i,j=1}^{N}w_{ij}(\\tilde{h}\\tilde{k})E^{\\alpha}_{\\beta}(ij) \\Bigg] E^{\\tilde{\\gamma}}_{\\tilde{\\delta}}(\\tilde{h}\\tilde{k})\\\\ \n &= \\sum_{\\tilde{h},\\tilde{k}=1}^{L}\\sum_{i,j=1}^{N}w_{ij}(\\tilde{h}\\tilde{k})\\mathcal{E}^{\\alpha\\tilde{\\gamma}}_{\\beta\\tilde{\\delta}}(ij\\tilde{h}\\tilde{k})\n\\end{split}\n\\end{aligned}$$ where $w_{ij}(\\tilde{h}\\tilde{k})$ are scalars that indicate the existence or not of a relationship between nodes $i$ and $j$, and $\\mathcal{E}^{\\alpha\\tilde{\\gamma}}_{\\beta\\tilde{\\delta}}(ij\\tilde{h}\\tilde{k})\\equiv e^{\\alpha}(i)e_{\\beta}(j)e^{\\tilde{\\gamma}}(\\tilde{h})e_{\\tilde{\\delta}}(\\tilde{k})$ is the $4^{\\text{th}}$-order (i.e., rank-4) tensors of the canonical basis in the space $\\mathbb{R}^{N\\times N\\times L\\times L}$.\n\nIn our particular case, we define $w_{ij}(\\tilde{h}\\tilde{k})$ as follows. We set $w_{ij}(\\tilde{k}\\tilde{k}) = 1$ if there exists at least one member of household $i$ that indicated a relationship of type $\\tilde{k}$ with any member of household $j$, or vice-versa, where $\\tilde{k}$ refers to any of the socio-economic dimensions defined above. Moreover, to take into account the fact that the $L$ replicas of node $i$, one per layer, represent in fact the same household, we set $w_{ii}(\\tilde{h}\\tilde{k})=1$ for all $i=1,2,\\ldots,N$ and all pairs of layers $(\\tilde{h}\\tilde{k})$. All others $w_{ij}(\\tilde{h}\\tilde{k})$ are set equal to 0.\n\nWe then generalise the definition of diffusion centrality by considering a diffusion process on the multilayer network, and introduce a new metrics that we call *diffusion versatility*. 
We define the layer-dependent diffusion versatility of node $\\alpha$ in layer $\\tilde{\\gamma}$ as the $(\\alpha\\tilde{\\gamma})^{\\text{th}}$ component of the rank-2 tensor $$\\label{eq:diffverslayer}\nDV_{\\alpha\\tilde{\\gamma}}(A^{\\alpha\\tilde{\\gamma}}_{\\beta\\tilde{\\delta}};q,T) := \\Big[ \\sum_{t=1}^{T} q(A^t)^{\\alpha\\tilde{\\gamma}}_{\\beta\\tilde{\\delta}} \\Big] u^{\\beta\\tilde{\\delta}}$$ where $(A^t)^{\\alpha\\tilde{\\gamma}}_{\\beta\\tilde{\\delta}}$ is the $t$-th power of the rank-4 tensor, and $u^{\\beta\\tilde{\\delta}}=\\sum_{\\tilde{h}=1}^{L}\\sum_{i=1}^{N}e^\\beta(i)e^{\\tilde{\\delta}}(\\tilde{h})$ is the $N \\times L$ rank-2 tensor with all components equal to 1. We then obtain the diffusion versatility of node $\\alpha$ independently of the layer by contracting the index of the tensor with the contravariant vector $u^{\\tilde{\\gamma}}$ whose entries are all equal to 1, and normalising by dividing by $L$: $$\\label{eq:diffvers}\nDV_\\alpha(A^{\\alpha\\tilde{\\gamma}}_{\\beta\\tilde{\\delta}};q,T) = \\frac{1}{L} DV_{\\alpha\\tilde{\\gamma}}(A^{\\alpha\\tilde{\\gamma}}_{\\beta\\tilde{\\delta}};q,T) u^{\\tilde{\\gamma}} \\mbox{ .}$$\n\nLet us note that the layer-dependent diffusion versatility $DV_{\\alpha\\tilde{\\gamma}}(A^{\\alpha\\tilde{\\gamma}}_{\\beta\\tilde{\\delta}};q,T)$ is not equivalent to computing diffusion centrality on a network composed only by layer $\\alpha$, because here we are taking into account the whole multilayer network in its computation. Therefore diffusion versatility $DV_\\alpha(A^{\\alpha\\tilde{\\gamma}}_{\\beta\\tilde{\\delta}};q,T)$ is not equivalent to computing diffusion centrality on the single layers separately and then taking their average for each node.\n\nConceptually, the diffusion versatility of a node measures how far a diffusion process starting on the node can spread on the multilayer network in a given amount of time $T$ (in our case, the number of trimesters during which the village was exposed to the microfinance institution). Accounting for the whole multilayer structure allows to capture along which kind of ties the diffusion is more likely to take place, and to assess whether the importance of nodes in the network as seeds of a diffusion process is more dependent on a dimension or another. For instance, a household that is very central in the aggregated network because it has several kinship ties with other households in the village, might have lower diffusion versatility in the multilayer network than another household that has the same centrality in the aggregated network but whose ties span over different dimensions because there live a very trusted person to whom people go to ask for advice, money and material goods.\n\n# Results\n\n## Comparing centrality and versatility node rankings\n\nFirst, we show that ranking nodes according to their diffusion versatility is significantly different than ranking them according to their diffusion centrality in the aggregated network. Figure\u00a0 shows a density map of the two rankings, for the 100 top ranked nodes in each village (i.e. about half of the nodes, on average). We selected the top 100 to avoid biases in the rank comparison due to the fact that pairs or groups of less central (or versatile) nodes might present the same value of centrality (or versatility) and therefore the same rank cardinal number, thus biasing the comparison between two different rankings. 
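A minimal sketch of how the two scores and their rankings can be computed and compared is given below; it assumes an aggregated $N \\times N$ adjacency matrix and the corresponding supra-adjacency matrix for the same village (see the construction sketched earlier), and mirrors the diffusion centrality recursion in both cases. Variable names are illustrative, and NumPy and SciPy are assumed.

```python
import numpy as np
from scipy.stats import spearmanr

def diffusion_score(A, T, q=None):
    """Row sums of sum_{t=1}^T (q A)^t, following the diffusion centrality definition.

    Applied to the aggregated N x N adjacency matrix this gives diffusion centrality;
    applied to the (N*L) x (N*L) supra-adjacency matrix it gives the layer-dependent
    diffusion versatility components.
    """
    A = np.asarray(A, dtype=float)
    if q is None:
        q = 1.0 / np.max(np.abs(np.linalg.eigvals(A)))   # inverse of the largest eigenvalue
    total = np.zeros_like(A)
    power = np.eye(A.shape[0])
    for _ in range(T):
        power = power @ (q * A)
        total += power
    return total @ np.ones(A.shape[0])

# Illustrative usage (A_agg and A_supra would come from the village data; L layers, N nodes):
# dc = diffusion_score(A_agg, T=7)                                # diffusion centrality
# dv = diffusion_score(A_supra, T=7).reshape(L, N).mean(axis=0)   # diffusion versatility
# rho, _ = spearmanr(dc, dv)                                      # rank correlation of the scores
```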
We observe that the two rankings are positively correlated as expected (using the multilayer network structure should capture some different aspects but not drastically change the whole ranking), but also that indeed the ranking is significantly different for several nodes. More specifically, $96\\%$ of the nodes do not occupy the same position in the two rankings, and $28\\%$ of them present a rank difference greater than or equal to 10. This result suggests that diffusion versatility provides different information with respect to diffusion centrality, and in the following sections we explore whether this information can lead to a better prediction of microfinance participation, and, more importantly, to the detection of which kinds of tie play the most important role.\n\n## Improving microfinance participation prediction\n\nWe investigate the correlation between the average diffusion versatility of leaders (as defined in Eq.\u00a0) and the rate of microfinance participation in the village, and compare the results with those obtained using diffusion centrality. As shown in Table\u00a0, we find that diffusion versatility is more strongly correlated to microfinance participation rate than diffusion centrality ($R^2=0.470$ for versatility, versus $R^2=0.442$ for centrality). To test the significance of the difference between the two models, we generate 1000 bootstrapped samples of the data, perform the linear regressions on them, and then compare the two resulting distributions of the coefficient of determination using the paired samples t-test. We find that we can accept with a $99\\%$ confidence level the alternative hypothesis that the the average $R^2$ of the model that uses versatility is higher than the average $R^2$ of the model that uses centrality. These results show that accounting for the whole multilevel structure of the different dimensions provides a better framework to identify the pre-defined set of leaders that microfinance agencies should initially inform in order to maximise participation. However, given that the improvement in prediction is significant but relatively small, we are interested in understanding if some kinds of tie play a more fundamental role than others in the diffusion, and leaders should therefore be chosen according to their layer-dependent versatility in some particular layers.\n\n```latex\n\\begin{table*}[h!]\\caption{\\textbf{Microfinance participation versus centrality and versatility of leaders.} Values shown are coefficients from ordinary least-squares regression. Each column represents a different regression. The dependent variable is the microfinance participation rate of nonleader households in a village. The covariates are diffusion centrality (regression 1) and diffusion versatility (regression 2), averaged over the set of leaders, as well as 5 control variables: number of households, self-help group participation rate, savings participation rate, caste composition, and fraction of village households designated as leaders. 
Standard errors (in parenthesis) are robust to heteroskedasticity.}\n\\label{regression}\n\\begin{center}\n\\begin{tabular}{ l c c }\n & \\multicolumn{2}{c}{\\textbf{Regression}} \\\\\n\\textbf{Measure} & 1 & 2 \\\\\n\\hline\n\\multirow{ 2}{*}{diffusion centrality} & 0.022 (0.007) & \\\\\n& P=0.002 & \\\\\n\\multirow{2}{*}{diffusion versatility} & & 0.030 (0.011) \\\\\n& & P=0.001\\\\\n$R^2$ & 0.442 & 0.470 \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table*}\n```\n\n## Untangling the importance of the different dimensions\n\nWe investigate whether the diverse dimensions contribute evenly, or rather play different roles, by considering the layer-dependent components of the diffusion versatility tensor, i.e. $DV_{\\alpha\\tilde{\\gamma}}(A^{\\alpha\\tilde{\\gamma}}_{\\beta\\tilde{\\delta}};q,T)$. For each dimension, we compute the average leaders' versatility taking into account only the components of the corresponding layer $\\tilde{\\gamma}$, thus obtaining 8 different average leaders' versatilities, each corresponding to a given dimension. Let us note that this is not the same as computing diffusion centrality on each layer separately, because in this case each versatility value is computed taking into account the whole multilayer structure. We perform 8 linear regressions, each using as covariate one of the 8 versatility measures (as well as the same control variables as above), and microfinance participation as the dependent variable. The results are reported in Table\u00a02, from the least to the most predictive, as indicated by $R^2$ values. To assess the statistical significance of the difference between each of these models and the model based on diffusion centrality, we use paired samples t-test on 1000 bootstrapped samples of the data, as already described in the previous section. We find that we can accept with a $99\\%$ confidence level the alternative hypothesis that the average $R^2$ of the the models that use layer-dependent versatility based on the layers *material good*, *kinship*, *praying company*, *advice*, *money* and *medical help* is higher than the average $R^2$ of the model that uses centrality. For the model based on the *nonrelative socialisation* the confidence level is $90\\%$. Instead, the average $R^2$ of the the model based on the *visits* layer is smaller than the average $R^2$ of the model that uses centrality ($99\\%$ confidence level). Moreover, we find that we can accept with a $99\\%$ confidence level the alternative hypothesis that the average $R^2$ of the the models that use layer-dependent versatility based on the layers *money* and *medical help* is also higher than the average $R^2$ of the model that uses overall versatility. The same holds also for the *advice* layer, but with a confidence level of $90\\%$. These results indicate that the most predictive dimensions are all related to trust: asking for help in a medical emergency, asking for money if in need, and asking for advice. These results mean that the versatility of leaders in these layers is what best correlates with the final rate of participation in microfinance in the village. 
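A minimal sketch of the bootstrap procedure used for these comparisons is shown below: the $R^2$ of an ordinary least-squares regression is recomputed on resampled sets of villages for two competing covariates, and the resulting $R^2$ distributions are compared with a paired t-test. The data frame and column names are placeholders; statsmodels and SciPy are assumed.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import ttest_rel

def bootstrap_r2(df, covariate, controls, target="participation", n_boot=1000, seed=0):
    """R^2 of OLS(target ~ covariate + controls) over n_boot bootstrap resamples of villages."""
    rng = np.random.default_rng(seed)
    r2 = np.empty(n_boot)
    for b in range(n_boot):
        sample = df.iloc[rng.integers(0, len(df), len(df))]   # resample villages with replacement
        X = sm.add_constant(sample[[covariate] + controls])
        r2[b] = sm.OLS(sample[target], X).fit().rsquared
    return r2

# Illustrative usage on a pandas DataFrame `villages` with one row per village:
# controls = ["n_households", "shg_rate", "savings_rate", "caste", "leader_fraction"]
# r2_dc = bootstrap_r2(villages, "leader_diffusion_centrality", controls)
# r2_dv = bootstrap_r2(villages, "leader_versatility_medical_help", controls)
# t, p = ttest_rel(r2_dv, r2_dc)   # halve p for the one-sided alternative if t > 0
```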
This could serve as an indication for microfinance institutions for leader selection, which could be done on the basis of diffusion versatility, but with a particular focus on individuals belonging to households which are particularly versatile on these specific layers.\n\n```latex\n\\begin{table*}[h!]\\caption{\\textbf{Microfinance participation versus layer-dependent versatility of leaders.} Values shown are coefficients from ordinary least-squares regression. Each column represents a different regression. The dependent variable is the microfinance participation rate of nonleader households in a village. The covariates are the layer-dependent diffusion versatility of the given layer, averaged over the set of leaders, as well as the 5 control variables.}\n\\begin{center}\n\\tiny\n\\begin{tabular}{ l c c c c c c c c }\n & \\multicolumn{8}{c}{\\textbf{Regression}} \\\\\n\\textbf{Dimension} & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\\\\n\\hline\n\\multirow{ 2}{*}{visits} & 0.016(0.007) & & & & & & & \\\\\n& P=0.019 & & & & & & & \\\\\nnonrelative & & 0.021(0.007) & & & & & & \\\\\nsocialisation & & P=0.004 & & & & & & \\\\\n\\multirow{2}{*}{material goods} & & & 0.031(0.010) & & & & & \\\\\n& & & P=0.004 & & & & & \\\\\n\\multirow{2}{*}{kinship} & & & & 0.034(0.010) & & & & \\\\\n& & & & P=0.002 & & & & \\\\\npraying & & & & & 0.044(0.013) & & & \\\\\ncompany & & & & & P=0.002 & & & \\\\\n\\multirow{2}{*}{advice} & & & & & & 0.025(0.008) & & \\\\\n& & & & & & P=0.003 & & \\\\\n\\multirow{2}{*}{money} & & & & & & & 0.028(0.009) & \\\\\n& & & & & & & P=0.003 & \\\\\n\\multirow{2}{*}{medical help} & & & & & & & & 0.034(0.009) \\\\\n& & & & & & & & P=0.000 \\\\\n$R^2$ & 0.365 & 0.443 & 0.447 & 0.468 & 0.472 & 0.474 & 0.487 & 0.511 \\\\\n\\hline\n\\end{tabular}\n%\\normalsize\n\\label{regression2}\n\\end{center}\n\\end{table*}\n```\n\n# Conclusions\n\nIn this paper we have shown that taking into account the multilayer structure of social networks of rural India villages allows for a better identification of the individuals who are more likely to help the spreading of microfinance in the community. Firstly, we have introduced a new measure, diffusion versatility, as an extension of diffusion centrality to multilayer networks. We have shown that the diffusion versatility of leaders is a better predictor of the microfinance participation rate in the village than diffusion centrality. Secondly, we have used the layer-dependent components of diffusion versatility to untangle the role played by each dimension in the diffusion of microfinance. We have found that the most predictive dimensions are related with trust: asking for help in a medical emergency or for a money loan if in need.\n\nThese results show that diffusion versatility could be used by microfinance institutions to identify opinion leaders so to maximise participation, focusing in particular on those with high versatility in specific layers. Further field research could validate these results, for instance by means of randomised field experiments. Leaders in a set of villages could be chosen according to their layer-dependent diffusion versatility ranking relative to a given dimension, and in another set of villages according to a different dimension, and then compare participation. 
Moreover, future work should involve sociologists and anthropologists in order to combine methods of multilayer network analysis with detailed investigations of the sociological meaning of the different dimensions in the context of rural India, to gain a deeper understanding of these social systems and how innovations diffuse therein.\n\n# Acknowledgements\n\nThe authors would like to thank Matthew O. Jackson for the fruitful discussions. AA and EO were supported by the James S.\u00a0McDonnell Foundation through grant 220020325. AA also acknowledges financial support from the European Commission FET-Proactive project MULTIPLEX (Grant No. 317532), the ICREA Academia and by Spanish government grant FIS2015-38266.","meta":{"dup_signals":{"dup_doc_count":16,"dup_dump_count":14,"dup_details":{"curated_sources":3,"2023-14":1,"2022-49":1,"2022-27":1,"2022-05":1,"2021-39":1,"2021-25":1,"2021-04":1,"2020-45":1,"2020-34":1,"2020-05":1,"2019-43":1,"2023-50":1,"2024-22":1}},"filename":"out\/1609.01455_extract_microfinance-arxiv.tex.md"},"subset":"arxiv"} +{"text":"abstract: Sperm traverse their microenvironment through viscous fluid by propagating flagellar waves; the waveform emerges as a consequence of elastic structure, internal active moments, and low Reynolds number fluid dynamics. Engineered microchannels have recently been proposed as a method of sorting and manipulating motile cells; the interaction of cells with these artificial environments therefore warrants investigation. A numerical method is presented for large-amplitude elastohydrodynamic interaction of active swimmers with domain features. This method is employed to examine hydrodynamic scattering by a model microchannel backstep feature. Scattering is shown to depend on backstep height and the relative strength of viscous and elastic forces in the flagellum. In a 'high viscosity' parameter regime corresponding to human sperm in cervical mucus analogue, this hydrodynamic contribution to scattering is comparable in magnitude to recent data on contact effects, being of the order of $5$\u2013$10^\\circ$. Scattering can be positive or negative depending on the relative strength of viscous and elastic effects, emphasising the importance of viscosity on the interaction of sperm with their microenvironment. The modulation of scattering angle by viscosity is associated with variations in flagellar asymmetry induced by the elastohydrodynamic interaction with the boundary feature.\nauthor: T. D. Montenegro-Johnson^1,2,3^[^1], H. Gad\u00ealha^4,3^ and D. J. Smith^2,3,5^.\nbibliography: nl_refs.bib\ntitle: **Spermatozoa scattering by a microchannel feature: an elastohydrodynamic model**\n\n| | |\n|:---|:---|\n| \u00a0\u00a0$^1$ | Department of Applied Mathematics and Theoretical Physics, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge, CB3 0WA, U.K. |\n| \u00a0\u00a0$^2$ | School of Mathematics, University of Birmingham, Edgbaston, Birmingham, B15 2TT, U.K. |\n| \u00a0\u00a0$^3$ | Centre for Human Reproductive Science, Birmingham Women's NHS Foundation Trust, Mindelsohn Way, Edgbaston, Birmingham, B15 2TG, U.K. |\n| \u00a0\u00a0$^4$ | Wolfson Centre for Mathematical Biology, University of Oxford, Mathematical Institute, Woodstock Road, OX2 6GG, U.K. |\n| \u00a0\u00a0$^5$ | School of Engineering and Centre for Scientific Computing, University of Warwick, Coventry, CV4 7AL, U.K. 
|\n\nKey index words: *Stokesian swimming, fluid-structure interaction, human sperm*\n\n# Introduction\n\nHuman sperm propel themselves by propagating a travelling wave along a single, active flagellum; this motility is essential for migration through the female reproductive tract and natural fertilisation. Recent work with microfluidic devices has suggested the ability to direct and sort cells through their own motility, a potentially valuable advance in assisted reproduction therapy and in the livestock industry. Cell scattering at simple geometric features, such as the outside of a corner, appear to be dependent on viscosity and temperature; developing mechanical models to understand, interpret and optimise these effects for their exploitation is therefore of considerable interest. We will develop a mathematical model of a cell interacting with its environment, and its computational implementation, and study the dynamics of a realistic model sperm swimming over a backstep feature to study the effect of elastic, viscous and geometric parameters. The model will combine geometric nonlinearity of the elastic flagellum with nonlocal hydrodynamic interactions, and will be solved numerically via an implicit finite difference method for the elastohydrodynamic equations, combined with a hybrid slender body theory\/boundary integral method for the low Reynolds number fluid dynamics.\n\nThe motor apparatus driving the flagellar waveform is a remarkably phylogenetically conserved structure known as the axoneme. The axoneme in human sperm comprises $9$ doublet microtubules, linked to each other and a central pair by passive elastic structures, with additional stiffening from outer dense fibres and the fibrous sheath (for recent review focused on mechanically-relevant features, see ref.\u00a0). Motor proteins bound to microtubules exert forces on adjacent doublets in a coordinated manner to induce bending moments along the length of the flagellum, causing bending, which is in turn resisted by the surrounding fluid. The fluid mediates interactions with surrounding surfaces and other cells; the flagellar waveform emerges from this nonlinear coupling.\n\nMachin\u00a0 showed that in order to generate experimentally observed waveforms, the flagellum must actively bend along its length, developing a linearised theory that has formed the basis of many subsequent studies. The theory that bending is produced by relative sliding of internal microtubules was subsequently proposed by Satir\u00a0, and the sliding mechanism was modelled in early studies by Brokaw\u00a0, using the formalism of an active internal moment per unit length in an elastic filament. The regulation of the active motor proteins that cause this sliding, and their oscillatory behaviour, is however a subject of continuing enquiry\u00a0, with modelling playing an important role in comparing regulatory theories . A number of studies since the 1970s have provided significant insights into how potential mechanisms of dynein regulation can produce the types of bending waves observed in nature (see for example\u00a0).\n\nThe importance of large-amplitude elastohydrodynamic flagellar modelling was established by Gad\u00ealha et al.\u00a0, who delineated the range of validity of small-amplitude elastic theory and showed that for sufficiently high viscosity relative to flagellar stiffness, a buckling instability can give rise to waveform asymmetry without domain boundaries or asymmetric internal actuation. 
The numerical implementation of Gad\u00ealha et al.'s study built on a model of passive flexible fibres in shear flow\u00a0, although replacing the nonlocal hydrodynamics of the latter with a local drag-velocity law. The combination of three-dimensional, time-dependent flow with the hydrodynamic interactions arising from fixed and moving boundaries, with active filament mechanics is computationally demanding; the majority of sperm models until the last decade made similar approximations for the fluid dynamics, or small-amplitude linearisation of the flagellar wave.\n\nLiron, Gueron and colleagues (see for example\u00a0) modelled cilia arrays, taking both nonlocal fluid dynamics and geometric nonlinearity into account, building on earlier work by for example Lighthill\u00a0 and Hines & Blum\u00a0. However this formalism, expressed in terms of bending angles rather than flagellar position, does not appear to have been generalised to a free-swimming cell with the associated boundary condition resulting from the presence of a head. More recent work using the finite element and finite volume methods and cluster computing has also been focused on cilia\u00a0; another successful recent approach is the regularised stokeslet method combined with a generalised immersed boundary method\u00a0.\n\nWhile the fluid dynamic interaction of sperm with plane boundaries has received significant attention since the work of Rothschild over 50 years ago\u00a0, motivating a number of experimental and theoretical studies , the interaction of sperm with 'non-trivial' geometric obstacles involving angles and curves or complex interfaces is a subject of growing recent interest .\n\nDenissenko et al.\u00a0 showed how sperm scatter at a range of angles when encountering the outside of a corner in an artificial microchannel maze, and that the scattering angle is modulated by viscosity; Kantsler et al.\u00a0 studied the effect of very close interactions of sperm and the biflagellate algae *Chlamydomonas* with these features. The geometric nature of the female reproductive tract is also highly convoluted, further motivating the need for models which can accommodated complex wall shapes into account. These studies suggest tantalising opportunities to direct and sort motile sperm on passive microdevices, however a better understanding of the subtle nonlinear physics of how flagellated swimmers interact with geometric features must be developed; to aid with this understanding we will develop a mathematical and computational approach which accounts for elasticity, viscosity and their interaction, without the need for large scale computational resources. To this end we will bring together the active elastic formulation of Gad\u00ealha *et al.* with the Lighthill-Gueron-Liron theorem for nonlocal slender body theory and the boundary element and regularised stokeslet methods to capture the influence of a non-trivial nearby surface. We will use this approach to explore how sperm scatter near geometric features due to elastohydrodynamic interaction over hundreds of flagellar beats with a single computer core, and quantify how the balance of viscosity and elasticity modulates this effect via changes to the flagellar waveform.\n\n# Mathematical model\n\nThe mathematical model of a sperm interacting with a geometric feature will be derived from, (i) the Stokes flow equations, with a nonlocal hydrodynamic model, and (ii) geometrically nonlinear elasticity for an internally actuated flagellum. 
We will first derive the equations for the two parts of the problem, before describing (iii) the numerical implementation.\n\n## Hydrodynamics\n\nAt microscopic scales, fluid dynamics can be modelled by the incompressible Stokes flow equations, $$0=-\\boldsymbol{\\nabla}p+\\mu\\nabla^2\\mathbf{u} \\mbox{,} \\quad\n\\boldsymbol{\\nabla}\\cdot\\mathbf{u} = 0,$$ where $\\mathbf{u}$ is velocity, $p$ is pressure and $\\mu$ is dynamic viscosity. For our problem, these equations will be augmented with the no-slip, no-penetration condition $\\mathbf{u}(\\mathbf{X})=\\mathbf{X}_t$ for points $\\mathbf{X}$ on the solid boundary, where subscript $t$ denotes the time derivative.\n\nThe linearity of the Stokes flow equations enables the construction of solutions to satisfy boundary conditions via discrete and\/or continuous sums of suitably-weighted fundamental solutions. These techniques replace solid surfaces, such as the sperm flagellum, head, and its surrounding microenvironment, by line or surface distributions of immersed forces. A concentrated point force located at $\\mathbf{y}$ with strength $\\mathbf{F}$, produces a velocity field (the 'stokeslet'), $$u_j(\\mathbf{x})=S_{jk}(\\mathbf{x},\\mathbf{y})F_k \\mbox{,} \\quad \\mbox{where} \\quad S_{jk}(\\mathbf{x},\\mathbf{y})=\\frac{1}{8\\pi\\mu}\\left(\\frac{\\delta_{jk}}{|\\mathbf{x}-\\mathbf{y}|}+\\frac{(x_j-y_j)(x_k-y_k)}{|\\mathbf{x}-\\mathbf{y}|^3}\\right)\\mbox{,}\\label{eq:stokeslet}$$ the symbol $\\delta_{jk}$ being the Kronecker delta tensor and the summation convention being used. The symbol $\\boldsymbol{\\mathsf{S}}(\\mathbf{x},\\mathbf{y})$ will be used to denote the 2nd rank tensor in equation\u00a0. It will also be convenient to make use of the regularised stokeslet $\\boldsymbol{\\mathsf{S}}^\\epsilon$ of Cortez\u00a0, which corresponds to a spatially smoothed force; a frequently-used implementation in three dimensions takes the form, $$S_{jk}^\\epsilon(\\mathbf{x},\\mathbf{y})=\\frac{1}{8\\pi\\mu}\\frac{\\delta_{jk}(|\\mathbf{x}-\\mathbf{y}|^2+2\\epsilon^2)+(x_j-y_j)(x_k-y_k)}{(|\\mathbf{x}-\\mathbf{y}|^2+\\epsilon^2)^{3\/2}}.$$ The parameter $\\epsilon>0$ defines the length scale over which the point force is smoothed; this smoothness property is particularly convenient for the formulation of boundary integral methods.\n\nThe LGL theorem , an extension of the work of Lighthill\u00a0, derives, from a line distribution of singular stokeslets and source dipoles, an approximate expression for the flow field at the surface of a moving slender body, accurate to $O(\\sqrt{b\/L})$, where $b$ is the radius and $L$ is the flagellar length.
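As a concrete illustration of the two kernels defined above, the following minimal Python sketch evaluates the singular and regularised stokeslets for a unit point force. The viscosity, force direction and evaluation points are arbitrary illustrative choices, and this is a sketch only, not the authors' Fortran/BEMLIB implementation.

```python
import numpy as np

MU = 1.0  # dynamic viscosity; an arbitrary placeholder value for illustration


def stokeslet(x, y):
    """Singular stokeslet S_jk(x, y): velocity at x due to a unit point force at y."""
    r = x - y
    rn = np.linalg.norm(r)
    return (np.eye(3) / rn + np.outer(r, r) / rn**3) / (8.0 * np.pi * MU)


def reg_stokeslet(x, y, eps):
    """Regularised stokeslet of Cortez: the point force is smoothed over a scale eps."""
    r = x - y
    r2 = np.dot(r, r)
    denom = (r2 + eps**2) ** 1.5
    return (np.eye(3) * (r2 + 2.0 * eps**2) + np.outer(r, r)) / (8.0 * np.pi * MU * denom)


# Compare the two kernels far from and close to the singularity.
F = np.array([0.0, 0.0, 1.0])                        # unit point force
y = np.zeros(3)
for x in (np.array([1.0, 0.0, 0.0]), np.array([0.02, 0.0, 0.0])):
    print(x[0], stokeslet(x, y) @ F, reg_stokeslet(x, y, eps=0.01) @ F)
```

Far from the force the two kernels agree closely, while near it the regularised kernel remains bounded, which is the property exploited below in the boundary integral treatment of the wall.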
Ignoring image systems, which are not required in our formulation, and using the properties of the stokeslet to reorder the source and field points, we have the expression for the approximate velocity field produced by the slender body $\\mathbf{v}$, $$\\begin{aligned}\n\\mathbf{v}(\\mathbf{X}(s_0,t)) &= -\\frac{1}{\\xi_{\\parallel}}(\\mathbf{f}_{\\mathrm{vis}}\\cdot \\hat{\\mathbf{s}})\\hat{\\mathbf{s}}-\\frac{1}{\\xi_{\\perp}}(\\mathbf{f}_{\\mathrm{vis}}\\cdot \\hat{\\mathbf{n}})\\hat{\\mathbf{n}}-\\frac{1}{\\xi_{\\perp}}(\\mathbf{f}_{\\mathrm{vis}}\\cdot \\hat{\\mathbf{b}})\\hat{\\mathbf{b}}\\nonumber \\\\\n& -\\int_{|s-s_0|>q} \\boldsymbol{\\mathsf{S}}(\\mathbf{X}(s_0,t),\\mathbf{X}(s,t))\\cdot \\mathbf{f}_{\\mathrm{vis}}(s)\\,ds.\\label{eq:lgl}\n\\end{aligned}$$ Here and in what follows, $0\\leqslant s \\leqslant L$ is an arclength parameterisation for the flagellum, and $\\mathbf{f}_{\\mathrm{vis}}$ is the viscous force per unit length exerted by the fluid on the flagellum. The coefficients $\\xi_{\\parallel}$ and $\\xi_{\\perp}$ are parallel and perpendicular resistance coefficients similar to those of Gray & Hancock\u00a0 and take the form, $$\\xi_{\\perp} = \\frac{8\\pi\\mu}{1 + 2 \\ln (2q\/b)}, \\quad \\xi_{\\parallel} =\n\\frac{8\\pi\\mu}{-2 + 4\\ln (2q\/b)}, \\quad \\gamma = \\frac{\\xi_{\\perp}}{\\xi_{\\parallel}}\\mbox{,}$$ the parameter $q$ being a length scale chosen intermediate in magnitude between $b$ and $L$. The symbols $\\hat{\\mathbf{s}}$, $\\hat{\\mathbf{n}}$ and $\\hat{\\mathbf{b}}$ are unit tangent, normal and binormal. Whereas Gueron and Liron\u00a0 considered the dynamics of a cilium projecting from a plane boundary, and hence the associated image systems, in this study we will not require these terms because surfaces will be represented via boundary integrals.\n\nEquation\u00a0 can be considered a nonlocal extension of resistive force theories, which retain only the first three terms. To couple LGL to the elastohydrodynamic model of Gad\u00ealha et al.\u00a0 we will rewrite these terms in another commonly-used form, $-(1\/\\xi_{\\perp})(\\boldsymbol{\\mathbf{I}}+(\\gamma-1)\\hat{\\mathbf{s}}\\hat{\\mathbf{s}})\\cdot\n\\mathbf{f}_{\\mathrm{vis}}$, with $\\gamma=\\xi_{\\perp}\/\\xi_{\\parallel}$ playing a similar role to the drag anisotropy ratio of resistive force theory, but depending on the choice of $q$. The precise value of $q$ is not critical provided that $b\\ll q \\ll L$ because changes to the resistance coefficients are accompanied by changes to the integrals; for our study with $b=0.01L$, we choose $q=0.1L$, leading to $\\gamma\\approx 1.4$.\n\nTo model a sperm, we will consider a cell with a rigid head as well as a flagellum, swimming near a rigid step-like surface. The linearity of Stokes flow equations means that a solution satisfying the additional no-slip boundary conditions associated with the head and the wall may be constructed by linear superposition. Moreover, the Lorentz reciprocal relation, and its regularised analogue enable the representation of these surfaces by boundary integrals; rigidity of the surfaces enables the use of single layer boundary integral representations . In the present study we will use a hybrid approach, representing the head via a surface distribution of singular stokeslets with stress $\\boldsymbol{\\phi}^{\\mathrm{H}}$, discretised via BEMLIB , and the wall by regularised stokeslets and boundary elements, with stress $\\boldsymbol{\\phi}^{\\mathrm{W}}$ . 
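As a quick numerical check of the coefficients just defined, the sketch below evaluates the resistance coefficients and the drag anisotropy ratio for the choices quoted above (radius 0.01 of the flagellar length and cutoff 0.1 of the flagellar length); the viscosity value is immaterial for the ratio, which depends only on the ratio of cutoff to radius, and the example force and tangent vectors are illustrative only.

```python
import numpy as np


def resistance_coefficients(mu, b, q):
    """Gray & Hancock-type coefficients used in the local part of the LGL expression."""
    xi_perp = 8.0 * np.pi * mu / (1.0 + 2.0 * np.log(2.0 * q / b))
    xi_par = 8.0 * np.pi * mu / (-2.0 + 4.0 * np.log(2.0 * q / b))
    return xi_perp, xi_par


b, q = 0.01, 0.1                        # b = 0.01 L and q = 0.1 L, with L = 1
xi_perp, xi_par = resistance_coefficients(mu=1.0, b=b, q=q)
gamma = xi_perp / xi_par
print(f"gamma = {gamma:.2f}")           # 1.43, i.e. the value of about 1.4 quoted above

# Local (resistive-force) contribution to the velocity for a given force per unit length.
s_hat = np.array([1.0, 0.0, 0.0])       # unit tangent (illustrative direction)
f_vis = np.array([0.3, 1.0, 0.0])       # force per unit length (illustrative values)
v_local = -(np.eye(3) + (gamma - 1.0) * np.outer(s_hat, s_hat)) @ f_vis / xi_perp
print(v_local)
```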
The full fluid dynamic model for the velocity field on the surface of the flagellum is therefore, $$\\begin{aligned}\n\\mathbf{u}(\\mathbf{X}(s_0,t))& = -\\frac{1}{\\xi_{\\perp}}(\\boldsymbol{\\mathbf{I}}+(\\gamma-1)\\hat{\\mathbf{s}}\\hat{\\mathbf{s}})\\cdot \\mathbf{f}_{\\mathrm{vis}} -\\int_{|s-s_0|>q} \\boldsymbol{\\mathsf{S}}(\\mathbf{X}(s_0,t),\\mathbf{X}(s,t))\\cdot \\mathbf{f}_{\\mathrm{vis}}(s)\\,ds \\nonumber \\\\\n&-\\iint_{H(t)} \\boldsymbol{\\mathsf{S}}(\\mathbf{X}(s_0,t),\\mathbf{y})\\cdot\\boldsymbol{\\phi}^{\\mathrm{H}}(\\mathbf{y})\\,dS_{\\mathbf{y}}-\\iint_W \\boldsymbol{\\mathsf{S}}^\\epsilon(\\mathbf{y},\\mathbf{X}(s_0,t))\\cdot\\boldsymbol{\\phi}^{\\mathrm{W}}(\\mathbf{y})\\,dS_{\\mathbf{y}}.\\label{eq:fluiddyn}\n\\end{aligned}$$ Similar equations, but with the first two terms replaced by a single slender body integral $-\\int_0^L\\boldsymbol{\\mathsf{S}}\\cdot\\mathbf{f}_{\\mathrm{vis}}\\,ds$, hold on the surface of the head and the wall. In the next section we will discuss the equations of an internally-driven elastic flagellum, and their coupling to the fluid mechanics.\n\n## Elastohydrodynamics\n\nThe elastohydrodynamic formulation we will work with was derived by Tornberg & Shelley , and extended to an internally-driven flagellum by Gad\u00ealha et al.\u00a0; the central feature of this approach is to formulate the problem in terms of the flagellar position $\\mathbf{X}(s,t)$ and line tension $T(s,t)$. Alternative approaches based on bending angles and curvatures have also been pursued, as has complex curvature . The internal elastic contact force $\\mathbf{F}_{\\mathrm{int}}$ and moment $\\mathbf{M}_{\\mathrm{int}}$ exerted on the proximal flagellum $[0,s_0)$ by the distal flagellum $(s_0,L)$, respectively are given by, $$\\mathbf{F}_{\\mathrm{int}}=-E\\mathbf{X}_{sss}+m\\hat{\\mathbf{n}}+T\\mathbf{X}_s \\mbox{,} \\quad \\mathbf{M}_{\\mathrm{int}}\\wedge \\mathbf{X}_s = E\\mathbf{X}_{ss} \\mbox{,} \\label{eq:elasticity}$$ where $E$ is constant elastic modulus and $m(s,t)$ is a prescribed active moment density representing the internal flagellar motors. Balancing elastic and viscous forces acting on a segment of flagellum $(s_0,s_0+\\delta s)$ and taking the limit as $\\delta s\\rightarrow 0$ yields, $$\\mathbf{f}_{\\mathrm{vis}} + \\partial_s(-E\\mathbf{X}_{sss}+m\\hat{\\mathbf{n}}+T\\mathbf{X}_s) = 0.$$ Noting that $\\hat{\\mathbf{s}}=\\mathbf{X}_s$, the local term of equation\u00a0 can then be written, $$\\begin{aligned}\n-\\frac{1}{\\xi_{\\perp}}(\\boldsymbol{\\mathbf{I}}+(\\gamma-1)\\hat{\\mathbf{s}}\\hat{\\mathbf{s}})\\cdot \\mathbf{f}_{\\mathrm{vis}}\n&=\n-E(\\mathbf{X}_{ssss}+(\\gamma-1)(\\mathbf{X}_s\\cdot\\mathbf{X}_{ssss})\\mathbf{X}_s)+T\\mathbf{X}_{ss}+\\gamma T_s\\mathbf{X}_s\\nonumber \\\\\n& +m_s\\hat{\\mathbf{n}}+\\gamma m \\hat{\\mathbf{n}}_s.\n\\end{aligned}$$ For brevity we will write the nonlocal (integral) velocities from equation\u00a0 as $\\mathbf{V}$ (written out explicitly in the appendix, equation\u00a0). 
Nondimensionalising with scales $L$ for position, $1\/\\omega$ for time, $\\omega L$ for velocity and $E\/L^2$ for tension and moment density, yields the following dimensionless elastohydrodynamic equation,\n\n$$\\mathrm{Sp}^4 (\\mathbf{X}_t-\\mathbf{V}) =\n-\\mathbf{X}_{ssss}-(\\gamma-1)(\\mathbf{X}_s\\cdot \\mathbf{X}_{ssss})\\mathbf{X}_s +\nT\\mathbf{X}_{ss} + \\gamma T_s \\mathbf{X}_s + m_s \\hat{\\mathbf{n}} + \\gamma m \\hat{\\mathbf{n}}_s.\\label{eq:elastohydro}$$ The parameter $\\mathrm{Sp}=L(\\xi_{\\perp}\\omega\/E)^{1\/4}$ is the *sperm number*, which quantifies the relative importance of viscous and elastic effects. This model can be seen as an extension of linear models (such as Camalet et al.\u00a0) by the inclusion of the nonlinear terms on the right hand side, and an extension of hydrodynamically local models (such as Gad\u00ealha et al.\u00a0) by the inclusion of the $\\mathbf{V}$ term on the left hand side.\n\nSimilarly to Gad\u00ealha et al.\u00a0, the inextensibility constraint $\\partial_t(\\mathbf{X}_s\\cdot\\mathbf{X}_s)=0$ can be used with the elastohydrodynamic equation\u00a0 to deduce an ordinary differential equation which must be satisfied by the line tension $T$, $$\\begin{aligned}\n-\\mathrm{Sp}^4\\mathbf{V}_s\\cdot\\mathbf{X}_s &= \\gamma T_{ss} - \\mathbf{X}_{ss}\\cdot\\mathbf{X}_{ss}T + 3\\gamma \\mathbf{X}_{sss}\\cdot\\mathbf{X}_{sss}+(1+3\\gamma)\\mathbf{X}_{ss}\\cdot\\mathbf{X}_{ssss} \\nonumber \\\\\n& +(\\gamma +1)m_s\\hat{\\mathbf{n}}_s\\cdot\\mathbf{X}_s + m\\hat{\\mathbf{n}}_{ss}\\cdot\\mathbf{X}_s. \\label{eq:aux}\n\\end{aligned}$$ The above equation is derived via the identity $3\\mathbf{X}_{ss}\\cdot\\mathbf{X}_{sss}+\\mathbf{X}_s\\cdot\\mathbf{X}_{ssss}=0$ and its derivative with respect to $s$. As previously we introduce the term $\\lambda \\mathrm{Sp}^4(1-\\mathbf{X}_s\\cdot\\mathbf{X}_s)$ to the left hand side of equation\u00a0 to dampen numerical errors in flagellar length. The value used in the present study is $\\lambda = 80$, though as found by Gad\u00ealha *et al.* the solution is insensitive to the precise value of $\\lambda$.\n\nThe final part of the mathematical model is the specification of the boundary conditions for equations\u00a0 and . The assumption of zero contact force and moment at the distal ($s=1$) tip of the flagellum combined with the elasticity equations\u00a0 yield (in dimensionless variables), $$0=-\\mathbf{X}_{sss}+m\\hat{\\mathbf{n}}+T\\mathbf{X}_s \\mbox{,}\\quad 0=\\mathbf{X}_{ss} \\quad \\mbox{at} \\; s=1. \\label{eq:distbc}$$ Taking the dot product of the first equation with $\\mathbf{X}_s$, using the identity $\\mathbf{X}_s\\cdot\\mathbf{X}_{sss}=-\\mathbf{X}_{ss}\\cdot\\mathbf{X}_{ss}$ and the second equation yields the distal tension boundary condition, $T=0$.\n\nAt the proximal end of the flagellum, the boundary conditions are given by considering the force and moment exerted by the fluid on the head. We denote these quantities $\\mathbf{F}^\\mathrm{H}$ and $\\mathbf{M}^\\mathrm{H}$ and nondimensionalise them with the elastic scalings $E\/L^2$ and $E\/L$ respectively. In the inertialess Stokes flow regime, the total force and moment acting on the head are zero, so by Newton's third law, the force and moment on the flagellum at $s=0$ are also given by $\\mathbf{F}^\\mathrm{H}$ and $\\mathbf{M}^\\mathrm{H}$ respectively. 
With the appropriate scalings, the proximal boundary conditions are then, $$\\mathbf{F}^\\mathrm{H}=\\mathbf{X}_{sss}-m\\hat{\\mathbf{n}}-T\\mathbf{X}_s \\quad \\mbox{and} \\quad \\mathbf{M}^\\mathrm{H}\\wedge\\mathbf{X}_s=-\\mathbf{X}_{ss}+M\\hat{\\mathbf{n}} \\mbox{,} \\quad \\mbox{at} \\; s=0 \\mbox{,} \\label{eq:proxbc}$$ where $M=\\int_0^1 m \\,ds$. From these equations we also derive the tension condition at the proximal end, $\\mathbf{F}^\\mathrm{H}\\cdot \\mathbf{X}_s =\n-\\mathbf{X}_{ss}\\cdot \\mathbf{X}_{ss} -T$. The calculation of the quantities $\\mathbf{F}^\\mathrm{H}$ and $\\mathbf{M}^\\mathbf{H}$ with nonlocal hydrodynamic interaction is described in more detail in the next section and the appendix. Finally we introduce the translational and angular velocity $\\mathbf{U}^\\mathrm{H}$ and $\\boldsymbol{\\Omega}^\\mathrm{H}$ of the head; while $\\mathbf{U}^\\mathrm{H}$ and two components of the angular velocity are constrained by knowledge of the function $\\mathbf{X}$, there is an independent rotational component of the motion that defines the principal bending plane of the flagellum. These quantities will be determined by kinematic considerations and the implementation of the boundary conditions.\n\nTo complete the mathematical model it is necessary to specify the internal active moment $m(s,t)$. Gad\u00ealha et al.\u00a0 used travelling waves of internal moment, which calculations from experiment confirm are a good model. We therefore specify in dimensionless units, $m(s,t)=m_0\\cos(ks- t)$.\n\n## Numerical implementation\n\nThe elastohydrodynamic equation\u00a0 is treated with a Crank-Nicolson type finite difference discretisation, with the second order central differences in the interior, and third order one-sided difference for the boundary conditions, using coefficients taken from Fornberg\u00a0. The higher-order boundary stencil produced comparable errors to the central stencil on polynomial test functions. Both linear and nonlinear terms are treated implicitly; nonlinearity of these equations is dealt with by performing an iterative process on every timestep, with the operator on the left hand side at $t+dt$ being linearised as, $$-\\mathbf{X}_{ssss}-(\\gamma-1)(\\tilde{\\mathbf{X}}_s\\cdot\\mathbf{X}_{ssss})\\tilde{\\mathbf{X}}_s+T\\tilde{\\mathbf{X}}_{ss}+\\gamma\nT_s\\tilde{\\mathbf{X}}_s+m_s\\tilde{\\mathbf{n}}+\\gamma m\\tilde{\\mathbf{n}}_s \\mbox{,}$$ variables with tildes denoting that values from the previous iteration are taken.\n\nThe nonlocal hydrodynamic term $\\mathbf{V}$ in equation\u00a0 is approximated by forming the slender body\/boundary integral problem of determining $\\mathbf{f}_{\\mathrm{vis}}$, $\\boldsymbol{\\phi}^\\mathrm{H}$ and $\\boldsymbol{\\phi}^\\mathrm{W}$ using the most recent approximations to $\\tilde{\\mathbf{X}}$ and $\\tilde{\\mathbf{X}}_t$ available; details are given in the appendix.\n\nAt the first iteration of each timestep the converged values from the previous timestep are used as starting guesses for all variables, except for $\\mathbf{X}$ which is approximated via linear extrapolation. The nonlinear iteration is terminated when the maximum difference in position between successive iterations relative to the distance travelled by the flagellum over the timestep falls below $0.5\\%$. 
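The control flow of the nonlinear iteration just described can be summarised in a short sketch. The function `solve_linearised_system` below is a hypothetical stand-in for the assembly and solution of the linearised Crank-Nicolson/slender-body system, and the array shapes and extrapolation details are illustrative only; this is not the authors' Fortran code.

```python
import numpy as np

TOL = 0.005  # 0.5% relative tolerance on the change in position between iterates


def advance_one_timestep(X_prev, X_prev2, T_prev, dt, solve_linearised_system):
    """Skeleton of the per-timestep fixed-point iteration described in the text.

    X_prev, X_prev2: (N, 3) flagellar positions at the two previous timesteps;
    T_prev: (N,) converged tension at the previous timestep.
    solve_linearised_system(X_tilde, T_tilde, X_prev, dt) -> (X_new, T_new) is a
    hypothetical callable returning the solution of the linearised problem.
    """
    # Starting guess: linear extrapolation for X, previous converged values otherwise.
    X_tilde = 2.0 * X_prev - X_prev2
    T_tilde = T_prev.copy()
    while True:
        X_new, T_new = solve_linearised_system(X_tilde, T_tilde, X_prev, dt)
        # Maximum change between successive iterates, relative to the distance
        # travelled by the flagellum over the timestep.
        change = np.max(np.linalg.norm(X_new - X_tilde, axis=-1))
        travelled = max(np.max(np.linalg.norm(X_new - X_prev, axis=-1)), 1e-30)
        X_tilde, T_tilde = X_new, T_new
        if change / travelled < TOL:
            return X_new, T_new
```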
Similarly, the auxiliary equation for the tension at $t+dt$ is linearised as, $$\\begin{aligned}\n\\mathrm{Sp}^4(\\lambda(1-\\tilde{\\mathbf{X}}_s\\cdot\\mathbf{X}_s) -\\tilde{\\mathbf{V}}_s\\cdot\\tilde{\\mathbf{X}}_s) &= \\gamma T_{ss} - (\\tilde{\\mathbf{X}}_{ss}\\cdot\\tilde{\\mathbf{X}}_{ss})T + 3\\gamma \\tilde{\\mathbf{X}}_{sss}\\cdot\\mathbf{X}_{sss}+(1+3\\gamma)\\tilde{\\mathbf{X}}_{ss}\\cdot\\mathbf{X}_{ssss} \\nonumber \\\\\n& +(\\gamma +1)(\\tilde{\\mathbf{n}}_s\\cdot\\tilde{\\mathbf{X}}_s)m_s + (\\tilde{\\mathbf{n}}_{ss}\\cdot\\tilde{\\mathbf{X}}_s)m. \\label{eq:auxlinearised}\n\\end{aligned}$$ Each iteration requires the solution of a linear system for the unknown discrete values of $\\mathbf{X}(s_l,t_{n+1})$, $T(s_l,t_{n+1})$, $\\mathbf{U}^\\mathrm{H}$ and $\\boldsymbol{\\Omega}^\\mathrm{H}$, where $l=0,\\ldots,N_s$ denotes the spatial grid coordinate and $n=0, 1, \\ldots$ the timestep. We found that $N_s=160$ and $200$ time steps per beat were sufficient to yield accurate results. The discrete form of equations\u00a0 and provide $4(N_s+1) +\n6 = 650$ linear equations, the additional $6$ equations arising from the translational and angular velocity of the cell head. The nonlinear correction is then a system of $3(N_s + N_h + N_b)$ linear equations, where $N_h$ and $N_b$ are the number of elements on the head and domain boundary respectively.\n\nTo implement the boundary conditions\u00a0, , the force and moment on the head are *a priori* unknown and need to be determined as part of the coupled problem. The force and moment are decomposed into a linear part, given by the grand resistance matrix associated with rigid body motion in the vicinity of the wall, and an additional subleading correction resulting from the influence of the flagellum. Following nondimensionalisation with the elasticity scalings, the force and moment on the head may then be expressed as, $$\\begin{pmatrix}\\mathbf{F}^\\mathrm{H} \\\\ \\mathbf{M}^\\mathrm{H}\\end{pmatrix} = \\mathrm{Sp}^4\\left(\\frac{\\mu}{\\xi_\\perp}\\right)\\mathcal{R}\\cdot \\begin{pmatrix} \\mathbf{U}^\\mathrm{H} \\\\ \\boldsymbol{\\Omega}^\\mathrm{H} \\end{pmatrix}+\\begin{pmatrix}\\Delta\\mathbf{F}^\\mathrm{H} \\\\ \\Delta\\mathbf{M}^\\mathrm{H}\\end{pmatrix} \\label{eq:forceCorrections}\n\\mbox{,}$$ where $\\Delta\\mathbf{F}^\\mathrm{H}$, $\\Delta\\mathbf{M}^\\mathrm{H}$ are corrections for the effect of the flagellum. The calculations of $\\mathcal{R}$ and the corrections are described in the appendix.\n\nIn summary, each timestep requires a number of iterations to solve the nonlinear problem and each iteration involves the solution of a sparse linear system arising from the finite difference discretisation of the elastohydrodynamic equations. The 'right hand side' terms arising from the nonlocal hydrodynamic correction $\\mathbf{V}$ and the nonlocal corrections to the force and moment balance $\\Delta \\mathbf{F}^\\mathrm{H}$, $\\Delta\\boldsymbol{\\Omega}^\\mathrm{H}$ require the solution of a slender body theory-boundary integral hydrodynamic problem. Calculation of the grand resistance matrix $\\mathcal{R}$ requires the separate solution of a boundary integral problem with multiple right hand sides to determine the force and moment resistances associated with the rigid body modes of the head and the wall interaction. 
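A quick check of the system sizes quoted above; the values used in the second count are taken from the head and boundary discretisations reported elsewhere in the paper, and whether exactly these values enter that count is our assumption.

```python
# Elastohydrodynamic system: 3 position components plus the tension at N_s + 1 nodes,
# plus the 6 rigid-body velocity components of the head.
N_s = 160
print(4 * (N_s + 1) + 6)        # 650, as stated in the text

# Hydrodynamic (collocation) correction: 3 unknowns per flagellar, head and wall element.
N_h, N_b = 32, 500              # 32 head elements; 500 wall elements in a typical run
print(3 * (N_s + N_h + N_b))    # 2076 with these example values
```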
The code is implemented in Fortran 90 (gfortran, GNU Compiler Collection); linear systems are equilibrated and solved by LU factorisation with the LAPACK routines `dgeequ` and `dgesv` respectively, and the boundary integrals over the sperm head are calculated with routines from BEMLIB . A typical run of $200$ beats with $500$ boundary elements required approximately 24 hours walltime on a single core of a 2.2\u00a0GHz Intel Sandy Bridge E5-2660 node.\n\n# Results\n\nThe numerical scheme is applied to predict the trajectory of a sperm-like cell swimming at varying $\mathrm{Sp}$, both in an unbounded fluid and over a 'backstep' (the latter being shown in figure\u00a0a), the limiting case of zero backstep height being referred to as a 'strip'. As in Gad\u00ealha et al.\u00a0, we consider planar waveform actuation, which is appropriate for cells swimming through high viscosity fluids such as cervical mucus . The semi-axes of the ellipsoidal head, modelled with the boundary element method, are $a_x = 0.05L$, $a_y = 0.03L$, $a_z =\n0.04L$, which correspond to $5\times3\times4\,\mu m$ for a flagellum of length $L=50$\u00a0$\mu$m. The swimmer is initially at rest, with a straight flagellum, and a 'soft start' is applied whereby the internal shear moment is initially low and smoothly increases to its maximum, reaching 99% after 5 beats. The sperm number of a human gamete can be estimated using a bending stiffness $E \approx\n5\times10^{-21}$\u00a0Nm$^2$ and a beat frequency of $10$\u2013$20$\u00a0Hz, giving an angular frequency $\omega \approx 100$\u00a0rad$\/$s . Taking a flagellar radius of $0.5$\u00a0$\mu$m and viscosity $\mu \approx 0.14$\u00a0Pa$\cdot$s (similar to mucus analogue ) yields the normal resistance coefficient $\xi_\perp\approx 0.503$ and sperm number $\mathrm{Sp}\approx 15.8$. Therefore, we will consider a range of sperm numbers between $13$ and $17$, fixing the magnitude of the internally generated shear moment $m_0=240$ and wavenumber $k=6\pi$. The resulting waveforms are shown in figure (a). As sperm number increases, beat amplitude is suppressed, as is observed for sperm in high viscosity medium , leading to a reduction in side-to-side yaw. All simulations in infinite fluid, i.e.\u00a0with no nearby boundaries, produced trajectories which were straight overall, once the within-beat yaw was accounted for (data submitted to DRYAD repository\u00a0); flagellar waveforms for $\mathrm{Sp}=13$, $15$ and $17$ are shown in figure\u00a0(b,c).\n\nFigure\u00a0 shows a planar projection of the trajectories $(X(0,t),Y(0,t))$, and the tangent angle $\theta:=\arctan (dY\/dX(s=0))$ (in degrees) of those trajectories, for cells swimming over backsteps of varying height. The derivative $dY\/dX$ is calculated numerically by sampling the trajectory at the temporal midpoint of each beat-cycle and taking centred differences. In figure\u00a0(a, c, e), colour indicates the height of the backstep over which the trajectory was computed, with green denoting $h = 0$ and red denoting $h = 0.5$. Simulations were performed over backsteps of height $h = 0.05,0.1,\dots,0.5$, and are displayed up to the time at which $X(0,t)\geqslant 1$.\n\nThe results in figure\u00a0(a, c, e) suggest that the backstep affects swimmers at different sperm numbers differently, producing a range of scattering angles. However, it is important in these results to factor out the effects of the strip from the backstep.
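As an aside, the physical parameter values quoted above can be verified directly by re-evaluating the formulas from the hydrodynamics section; a minimal sketch:

```python
import numpy as np

L = 50e-6        # flagellar length (m)
b = 0.5e-6       # flagellar radius (m)
q = 0.1 * L      # intermediate length scale, as chosen earlier
mu = 0.14        # dynamic viscosity (Pa s), mucus analogue
E = 5e-21        # bending stiffness (N m^2)
omega = 100.0    # angular beat frequency (rad/s)

xi_perp = 8.0 * np.pi * mu / (1.0 + 2.0 * np.log(2.0 * q / b))
Sp = L * (xi_perp * omega / E) ** 0.25
print(f"xi_perp = {xi_perp:.3f} Pa s")   # ~0.503, as quoted
print(f"Sp = {Sp:.1f}")                  # ~15.8, as quoted
```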
Taking the (lightest) green trajectory, representing a strip, as a baseline comparison, it is evident that for all sperm numbers the hydrodynamic effect of the backstep is to deflect the swimmer downwards relative to a strip trajectory. Figure\u00a0(b, d, f) reveals that this downward deflection is not smooth; rather, there is a sharp bump at $x = 0$ where the head initially passes over the backstep, and a further bump at around $x = 0.3$ where the effect of the step itself becomes subleading relative to boundary interactions between the head and the lower wall.\n\nSimulations were also performed comparing the effect of the backstep to a 'cliff' geometry, with the lower portion of the backstep missing (data submitted\u00a0). After passing the backstep, cells swam straight as though in an infinite fluid, suggesting that the majority of the angular deflection occurs due to interaction with the lower boundary; boundary forces change suddenly over a step jump, and the cell acts as though it were above a higher boundary. Additionally, simulations over a strip at $\mathrm{Sp}\n= 13$ for different starting heights (data submitted\u00a0) showed that attraction to the surface initially increased and then decreased as height above the surface increased, which suggests that hydrodynamic boundary attraction is responsible for the behaviour in figure\u00a0(a, b).\n\nFigure\u00a0 shows the effect of varying sperm number over finer increments for backstep height zero (a, b) and $h=0.2L$ (c, d), with results summarised in figure\u00a0(a). Simulations were performed for $\mathrm{Sp} = 13,13.5,\dots,17$ over both a strip geometry and a backstep of height $h = 0.2L$, so that the sperm cells initially start $0.2L$ above the surface, with this distance increasing to around $0.4L$ after the backstep. In figure\u00a0(a\u2013d), colour is matched to increasing sperm number, so that light green corresponds to $\mathrm{Sp} = 13$ and red to $\mathrm{Sp} = 17$. Figure\u00a0(a, b) shows that, for a sperm swimming over a strip, the boundary repels the swimmer more at this close distance as sperm number is increased. This effect is to be expected because increasing the sperm number increases the relative strength of viscous to elastic forces; thus the effect of the boundary is likely to be enhanced as $\mathrm{Sp}$ increases. The initial dip in figure\u00a0(b) is an artefact of the numerical soft start of our system, as the waveform emerges from a straight initial state.\n\nFigure\u00a0(c, d) shows a larger range of scattering angles than for fixed sperm number over various backstep heights, of the order of $10^\circ$. Furthermore, additional simulations (data submitted\u00a0) showed that this hydrodynamic deflection was not sensitive to the phase of the waveform as it passed over the backstep, in contrast to scattering due to contact forces (R.\u00a0Goldstein, personal communication, 2014). Figure\u00a0(a) shows the effects of changing sperm number, giving the deflection for a strip, a backstep, and their difference. A slight increase in the magnitude of this difference is observed as sperm number is increased, owing to increased hydrodynamic interaction mediated by viscosity.\n\nFigure\u00a0(b) summarises the effect of varying both backstep height and sperm number simultaneously, quantified by the 'final deflection angle' $\theta_{\mathrm{d}}$, i.e.\u00a0the value of $\theta$ for which $X=L$. At $\mathrm{Sp}=13$ deflection is always negative, whereas for $\mathrm{Sp}=15$, $17$ deflection is always positive.
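For completeness, a sketch of how a tangent angle of the kind defined above can be extracted from per-beat trajectory samples; the helper name and the synthetic trajectory are ours for illustration, not simulation output.

```python
import numpy as np


def tangent_angle_deg(X0, Y0):
    """Tangent angle arctan(dY/dX), in degrees, along a sampled trajectory.

    X0, Y0: one (X, Y) sample of the proximal end per beat cycle (taken at the
    temporal midpoint of the beat, as in the text); np.gradient uses centred
    differences in the interior and one-sided differences at the ends.
    """
    dYdX = np.gradient(Y0, X0)
    return np.degrees(np.arctan(dYdX))


# Illustrative use on a synthetic, gently curving trajectory.
X0 = np.linspace(0.0, 1.0, 40)
Y0 = 0.02 * X0 - 0.05 * X0**2
print(tangent_angle_deg(X0, Y0)[:3])
```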
The relationship between $\theta_\mathrm{d}$ and $h$ is non-monotonic at the lower sperm number but monotonic in the higher range. At $\mathrm{Sp}=13$, the deflection angle initially increases in magnitude, then decreases after the maximum at around $h = 0.15L$. This riser height corresponds to a distance of $0.35L$ between the cell and the boundary, which is where boundary attraction is strongest at this sperm number. For $\mathrm{Sp}=15$, $17$ the deflection angle decreases monotonically with backstep height in the range we have considered. This effect likely occurs because at these sperm numbers, the strip causes the cell to pitch away. However, in all cases increasing the backstep height to $0.5L$ results in a plateau.\n\nThe effects of the backstep on the waveform are summarised in figure\u00a0, which shows the waveform shape with and without the boundary, and quantitative measures of the asymmetry of the waveform. Recall that the flagellar actuation is symmetric; waveform asymmetry is produced by increased hydrodynamic drag arising from proximity to the wall, which affects portions of the flagellum closer to the wall more than those further away. Figure\u00a0(a) shows waveforms at sperm number $\mathrm{Sp} = 13$, $17$ in infinite fluid as well as over a strip. In infinite fluid, the waveform is symmetrical for all sperm numbers considered, while the presence of a boundary gives rise to a waveform asymmetry that increases with $\mathrm{Sp}$.\n\n'Asymmetry' is quantified by sampling the flagellar wave every $41$ numerical timesteps (relative to a beat cycle of $200$ timesteps), projecting into the body frame, and calculating the average lateral position relative to the body frame centreline over a fixed period, in this case beats $82$\u2013$90$. This quantity is plotted as a function of arclength in figure\u00a0(b); its distal ($s=1$) value is plotted in figure\u00a0(c).\n\nFigure\u00a0(b) plots asymmetry versus arclength for sperm numbers in the range $13$\u2013$17$, the effect being largest at higher sperm number. The asymmetry at the tip of the flagellum for a strip versus no boundary is shown in figure\u00a0(c) as a function of sperm number.\n\n# Discussion\n\nA numerical method for simulating the swimming of monoflagellate cells over geometric features was presented and applied to model sperm interacting with a microchannel backstep feature. The scheme incorporates nonlocal hydrodynamics with large-amplitude active filament mechanics. We believe this method to be the simplest generalisation of previous work that is capable of taking into account nonlocal hydrodynamic interaction with geometrical features. The linearity of the Stokes flow equations entails that the largest error in our method arises from the LGL slender body theory, which is at worst on the order of the square root of the slenderness ratio. Accuracy of the method of regularised stokeslets is on the order of the regularisation parameter near the boundary, and its square far from the boundary where the swimmer is located. Future work may consider boundary integral modelling of the flagellum also; however, we do not expect that this would qualitatively change swimmer trajectories.\n\nThe interaction between the cell and the lower boundary involves the competing effects of asymmetric hydrodynamic forces leading to waveform asymmetry and boundary repulsion, and the pitching behaviour associated with swimmer\/boundary interaction .
At lower sperm number and at greater distances from the boundary, waveform asymmetry is smaller, and the cell pitches towards the boundary. At higher sperm number and closer distances from the boundary, waveform asymmetry is larger and the cell pitches away. The effect of the backstep is a sudden drop in the lower boundary, which changes the relative importance of these effects; waveform asymmetry is reduced relative to hydrodynamic attraction, and the net result is a deflection towards the lower boundary after the backstep relative to the expected trajectory over a strip (figure\u00a0).\n\nAnalysing sperm scattering over a backstep, we found that hydrodynamic effects may be comparable in magnitude in the relatively high viscosity range considered to the contact interactions found experimentally by Kantsler et al.\u00a0. A transition is predicted from scattering towards the backstep at lower viscosity to scattering away from the backstep at higher viscosity. Qualitatively this behaviour is similar to the temperature-related transition in Kantsler et al.'s observations (with lower temperature corresponding to higher viscosity); the correspondence is not exact however, with Kantsler et al.'s observations being carried out with bull sperm in low viscosity buffer, and with cells exhibiting very close interaction with the boundary, compared with our longer range interactions and sperm number representative of human cells in mucus analogue that we chose to focus on in the present study. Clearly integrating both surface interactions and hydrodynamics will be necessary to develop a comprehensive model, particularly at higher sperm number\/viscosity.\n\nThe role of hydrodynamic interactions in determining surface attraction and more complex effects associated with boundary features continues to receive significant theoretical attention and is stimulating novel mathematical approaches . Viscous interactions of course become increasingly important in high viscosity fluids such as mucus and laboratory analogues. Kantsler et al.\u00a0 noted the need to take both elastic and steric interactions into account; modelling very short length scale or contact interactions, with either glass, epithelium, cumulus, or even ciliated surfaces, and their effect on the flagellar wave, is a topic of importance, though numerical simulation requires taking account of the rapidly varying hydrodynamic force and electrostatic interactions as the swimmer approaches these boundaries. We hope that the numerically implicit method, potentially also combined with adaptive refinement of the boundary element meshes, will enable accurately resolved simulation of sperm-like swimmers in very near surface-contact in future work. Other valuable methods for modelling three-dimensional sperm motility and elastic-fluid interaction include models based exclusively on regularised stokeslets and techniques such as stochastic rotation dynamics .\n\nWhilst we have used our model to examine a swimmer representative of human sperm, the approach is applicable to a much wider range of eukaryotic cells, including the sperm of other species and, with a slight reworking of the head boundary condition, biflagellate organisms such as the green alga *Chlamydomonas*. These species are of particular interest as they have been used as models for flagellar synchronisation and are relevant to energy-producing bioreactors . For these systems, the model may also be extended to include a nonlocal hydrodynamic contribution from other swimmers. 
Larger swimming organisms, such as *C. elegans*, have also been shown to be significantly affected by interactions with a structured microenvironment .\n\nAnother application area is the design and optimisation of biomimetic artificial microswimmers (see for example refs.\u00a0). Because the model includes internal periodic actuation via prescribed bending moments, it might be used to optimise actuation for various purposes such as forward progress, subject to constraints such as fixed mechanical energy. Furthermore, the inclusion of geometrical boundary features and the use of the sperm number allows such optimisation to be tailored to specific environments. The elastohydrodynamic model can additionally be used to solve the inverse problem of estimating internal moments from observed flagellar data, potentially allowing us to examine how nature has optimised swimming in various environments and informing truly biomimetic design.\n\nDespite the linearity of the Stokes flow equations, the interaction of sperm with their microenvironment presents a subtle nonlinear mechanics problem. Sperm scattering depends nonlinearly on the ratio between viscous and elastic forces, with even a simple backstep feature producing attractive or repulsive scattering of cells depending on parameter values. These scattering effects may be valuable in sorting cells in microdevices, in addition to giving insight into the complexity of how sperm interact with their microenvironment. The combination of mechanical models and experiment will provide the best way to understand and exploit these effects for biomedical applications.\n\n# Data accessibility\n\nTrajectory and waveform data supporting the findings in this paper have been submitted to DRYAD; see reference item\u00a0.\n\n# Competing interests\n\nNone.\n\n# Authors' contributions\n\nTDMJ designed the research, implemented algorithms, analysed results and co-wrote the manuscript. HG designed the research and contributed to the writing of the manuscript. DJS designed the research, helped with algorithmic implementation, co-wrote the manuscript and supervised the project.\n\n# Acknowledgements\n\nThe authors thank Dr Eamonn Gaffney (University of Oxford), Prof.\u00a0Ray Goldstein (University of Cambridge), Dr Vasily Kantsler, Dr Petr Denissenko (University of Warwick), Prof.\u00a0John Blake (University of Birmingham) and Dr Jackson Kirkman-Brown (University of Birmingham\/Birmingham Women's NHS Foundation Trust) for continuing valuable discussions.\n\n# Funding\n\nThis work was supported by EPSRC grant EP\/K007637\/1 to DJS, supporting TDMJ. TDMJ is supported by a Royal Commission for the Exhibition of 1851 Research Fellowship. HG acknowledges a Hooke Fellowship held at the University of Oxford. The computations described in this paper were performed using the University of Birmingham's BlueBEAR HPC service, which was purchased through SRIF-3 funds.
See http:\/\/www.bear.bham.ac.uk for more details.\n\n# Appendix: calculation of hydrodynamic terms\n\nFollowing nondimensionalisation, the hydrodynamic model yields the following equation for the dimensionless fluid velocity away from the flagellum, $$\\begin{aligned}\n\\mathrm{Sp}^4\\mathbf{U}(\\mathbf{x}) &= \\frac{\\xi_\\perp}{\\mu}\\left(\\int_0^1\\boldsymbol{\\mathsf{S}}(\\mathbf{x},\\mathbf{X}(s,t))\\cdot\\mathbf{f}_\\mathrm{vis}(s,t) ds + \\iint_{H(t)}\\boldsymbol{\\mathsf{S}}(\\mathbf{x},\\mathbf{Y})\\cdot\\boldsymbol{\\phi}^\\mathrm{H}(\\mathbf{Y},t) dS_{\\mathbf{Y}}\\right.\\nonumber \\\\\n&\\left.+\\iint_W\\boldsymbol{\\mathsf{S}}^\\epsilon(\\mathbf{x},\\mathbf{Y})\\cdot\\boldsymbol{\\phi}^\\mathrm{W}(\\mathbf{Y},t) dS_{\\mathbf{Y}}\\right).\n\\end{aligned}$$ The nonlocal contribution to the velocity $\\mathbf{V}$ on the slender body is similarly given by, $$\\begin{aligned}\n\\mathrm{Sp}^4\\mathbf{V}(\\mathbf{X}(s_0,t)) &= \\frac{\\xi_\\perp}{\\mu}\\left(\\int_{|s-s_0|>q}\\boldsymbol{\\mathsf{S}}(\\mathbf{X}(s_0,t),\\mathbf{X}(s,t))\\cdot\\mathbf{f}_\\mathrm{vis}(s,t) ds \\right. \\nonumber \\\\\n& + \\iint_{H(t)}\\boldsymbol{\\mathsf{S}}(\\mathbf{X}(s_0,t),\\mathbf{Y})\\cdot\\boldsymbol{\\phi}^\\mathrm{H}(\\mathbf{Y},t) dS_{\\mathbf{Y}}\\nonumber \\\\\n&\\left.+\\iint_W\\boldsymbol{\\mathsf{S}}^\\epsilon(\\mathbf{X}(s_0,t),\\mathbf{Y})\\cdot\\boldsymbol{\\phi}^\\mathrm{W}(\\mathbf{Y},t) dS_{\\mathbf{Y}}\\right).\\label{eq:nonloc}\n\\end{aligned}$$\n\nAt each step of the iterative solution to the nonlinear problem, the collocation code solves the integral equation, $$\\mathrm{Sp}^4\\begin{pmatrix}\\mathbf{X}_t\\\\\\mathbf{U}^\\mathrm{H}+\\boldsymbol{\\Omega}^\\mathrm{H}\\wedge(\\mathbf{Y}^\\mathrm{H}-\\mathbf{X}^\\mathrm{c})\\\\0\\end{pmatrix}\n=\n\\begin{pmatrix}\n-(\\boldsymbol{\\mathsf{I}}+(\\gamma-1)\\hat{\\mathbf{s}}\\hat{\\mathbf{s}})\\cdot \\mathbf{f}_{\\mathrm{vis}}+\\mathrm{Sp}^4\\mathbf{V}[\\mathbf{f}_{\\mathrm{vis}},\\boldsymbol{\\phi}^\\mathrm{H},\\boldsymbol{\\phi}^\\mathrm{W}](\\mathbf{X})\n\\\\\n\\mathrm{Sp}^4\\mathbf{U}[\\mathbf{f}_{\\mathrm{vis}},\\boldsymbol{\\phi}^\\mathrm{H},\\boldsymbol{\\phi}^\\mathrm{W}](\\mathbf{Y}^\\mathrm{H})\n\\\\\n\\mathrm{Sp}^4\\mathbf{U}[\\mathbf{f}_{\\mathrm{vis}},\\boldsymbol{\\phi}^\\mathrm{H},\\boldsymbol{\\phi}^\\mathrm{W}](\\mathbf{Y}^\\mathrm{W})\n\\end{pmatrix}\\mbox{,}$$ for the unknown hydrodynamic force per unit length $\\mathbf{f}_{\\mathrm{vis}}$ and unknown stresses $\\boldsymbol{\\phi}^\\mathrm{H}$, $\\boldsymbol{\\phi}^\\mathrm{W}$.\n\nThe collocation code discretises the flagellum with $160$ elements, with the nonlocal contribution to the LGL slender body theory computed by the midpoint rule with constant force per unit length over each element. The force per unit area on the ellipsoidal head of $32$ mesh elements is calculated using routines from BEMLIB with $20$ point Gauss-Legendre quadrature as described in detail in the appendix. The wall boundary is discretised into elements of width $0.075L$, using regularised stokeslets with $\\epsilon = 0.01L$. Integration is performed with repeated Gauss-Legendre quadrature with $4\\times 4$ points per element for the near-singular wall integrals, and a $2\\times 2$ point rule elsewhere.\n\nTo implement the boundary conditions\u00a0, , Gad\u00ealha et al.\u00a0 approximated the force and moment on the head by a grand resistance matrix multiplying the velocity and angular velocity. 
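Returning briefly to the quadrature just described, the following sketch shows a midpoint-rule evaluation of the nonlocal slender-body term alone; the head and wall boundary integrals, and all coupling to the elastic problem, are omitted, and the straight test filament and uniform force density are illustrative choices rather than anything from the study.

```python
import numpy as np


def nonlocal_velocity(X, f_vis, s, q, mu=1.0):
    """Midpoint-rule estimate of -int_{|s - s0| > q} S(X(s0), X(s)) . f_vis(s) ds
    at each element midpoint, with piecewise-constant force per unit length."""
    N = len(s)
    ds = s[1] - s[0]                       # uniform element width assumed
    V = np.zeros_like(X)
    for i in range(N):
        for j in range(N):
            if abs(s[j] - s[i]) <= q:      # excluded neighbourhood: handled by the local term
                continue
            r = X[i] - X[j]
            rn = np.linalg.norm(r)
            S = (np.eye(3) / rn + np.outer(r, r) / rn**3) / (8.0 * np.pi * mu)
            V[i] -= S @ f_vis[j] * ds
    return V


# Straight test filament of unit length, uniform transverse force density.
N, q = 160, 0.1
s = (np.arange(N) + 0.5) / N
X = np.stack([s, np.zeros(N), np.zeros(N)], axis=1)
f = np.tile([0.0, 1.0, 0.0], (N, 1))
print(nonlocal_velocity(X, f, s, q)[N // 2])
```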
In dimensional variables, the grand resistance matrix expresses the force and moment on a moving rigid body as $$\\begin{pmatrix}\\mathbf{F} \\\\ \\mathbf{M}\\end{pmatrix} = \\mathcal{R}\\cdot \\begin{pmatrix} \\mathbf{U} \\\\ \\boldsymbol{\\Omega} \\end{pmatrix} , \\quad \\mbox{where} \\quad \\mathcal{R} = \\begin{pmatrix} \\quad & \\mathcal{R}^\\mathrm{F} & \\\\ & \\mathcal{R}^\\mathrm{M} & \\end{pmatrix}.$$ The blocks $\\mathcal{R}^\\mathrm{F}$ and $\\mathcal{R}^\\mathrm{M}$ are $3\\times 6$ matrices yielding the force and moment terms respectively. For example, a sphere of radius $a$ in the absence of hydrodynamic interactions would have dimensionless grand resistance matrix given by $$\\mathcal{R}^\\mathrm{F}=\\begin{pmatrix} \\quad & -6\\pi \\mu a\\boldsymbol{\\mathsf{I}} & 0 & \\quad \\end{pmatrix}, \\quad \\mathcal{R}^\\mathrm{M}=\\begin{pmatrix}\\quad & 0 & -8\\pi \\mu a^3\\boldsymbol{\\mathsf{I}} & \\quad \\end{pmatrix}.$$ This approach is convenient because the linearity of the relationship means that the head velocity $\\mathbf{U}^\\mathrm{H}$ and angular velocity $\\boldsymbol{\\Omega}^\\mathrm{H}$ can be dealt with in the implicit formulation as unknowns in the linear algebra problem. To generalise to a nonlocal hydrodynamic model taking into account the effect of the flagellum and nearby boundary, the force and moment will be decomposed as consisting of part which is linear in velocity and angular velocity via resistance matrices and a remaining contribution from the flagellum. The matrices $\\mathcal{R}^\\mathrm{F}$ and $\\mathcal{R}^\\mathrm{M}$ are determined via the boundary integral method, taking into account the potentially highly significant effect of the wall feature, but not the subleading effect of the flagellum, which is accounted for as a correction, as described below.\n\nElastic scalings are used to nondimensionalise all forces and moments, i.e.\u00a0$E\/L^2, E\/L$ for $\\mathbf{F}^\\mathrm{H}$ and $\\mathbf{M}^\\mathrm{H}$ respectively, with $E\/L^3$ for force per unit length $\\mathbf{f}_{\\mathrm{vis}}$ and $E\/L^4$ for stress $\\boldsymbol{\\phi}^\\mathrm{H}, \\boldsymbol{\\phi}^\\mathrm{W}$. The additional corrections $\\Delta\\mathbf{F}^\\mathrm{H}$ and $\\Delta\\boldsymbol{M}^\\mathrm{H}$ referred to in equation\u00a0 are determined as part of the iterative process by performing a slender body\/boundary integral calculation of $\\tilde{\\mathbf{f}}_{\\mathrm{vis}}$, $\\tilde{\\boldsymbol{\\phi}}^H$ and $\\tilde{\\boldsymbol{\\phi}}^W$ with the most recent approximation to $\\tilde{\\mathbf{X}}$ available, yielding in dimensionless variables, $$\\tilde{\\mathbf{F}}^\\mathrm{H}=\\iint_{H(t)}\\tilde{\\boldsymbol{\\phi}}^\\mathrm{H}\\,dS, \\quad \\tilde{\\mathbf{M}}^\\mathrm{H}=\\iint_{H(t)}(\\tilde{\\mathbf{Y}}-\\tilde{\\mathbf{X}}^\\mathrm{c})\\wedge\\tilde{\\boldsymbol{\\phi}}^\\mathrm{H}\\,dS_{\\mathbf{Y}} \\mbox{,}$$ where $\\tilde{\\mathbf{X}}^\\mathrm{c}$ is the head centroid. 
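As a concrete instance of the isolated-sphere example just quoted, a short sketch constructing the corresponding 6-by-6 grand resistance matrix; in the study itself the resistance matrix is instead computed numerically by the boundary integral method, including the wall, so this block only reproduces the textbook limiting case given above.

```python
import numpy as np


def sphere_grand_resistance(mu, a):
    """Grand resistance matrix of an isolated sphere of radius a (no wall, no flagellum),
    in the sign convention of the text: (F, M) = R . (U, Omega)."""
    R = np.zeros((6, 6))
    R[:3, :3] = -6.0 * np.pi * mu * a * np.eye(3)      # translation -> drag force
    R[3:, 3:] = -8.0 * np.pi * mu * a**3 * np.eye(3)   # rotation -> drag torque
    return R


# Force and torque on a unit sphere translating along x and spinning about z.
R = sphere_grand_resistance(mu=1.0, a=1.0)
F_M = R @ np.array([1.0, 0.0, 0.0, 0.0, 0.0, 1.0])
print(F_M[:3], F_M[3:])   # (-6*pi, 0, 0) and (0, 0, -8*pi)
```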
Using also the most recent iterates for $\\tilde{\\mathbf{U}}^\\mathrm{H}$ and $\\tilde{\\boldsymbol{\\Omega}}^\\mathrm{H}$, the corrections are then given by, $$\\Delta\\mathbf{F}^\\mathrm{H} = \\tilde{\\mathbf{F}}^\\mathrm{H} - \\mathrm{Sp}^4\\left(\\frac{\\mu}{\\xi_\\perp}\\right)\\mathcal{R}^\\mathrm{F} \\cdot \\begin{pmatrix} \\tilde{\\mathbf{U}}^\\mathrm{H} \\\\ \\tilde{\\boldsymbol{\\Omega}}^\\mathrm{H} \\end{pmatrix} \\mbox{,} \\quad\n\\Delta\\boldsymbol{M}^\\mathrm{H} =\\tilde{\\mathbf{M}}^\\mathrm{H} - \\mathrm{Sp}^4\\left(\\frac{\\mu}{\\xi_\\perp}\\right)\\mathcal{R}^\\mathrm{M} \\cdot \\begin{pmatrix} \\tilde{\\mathbf{U}}^\\mathrm{H} \\\\ \\tilde{\\boldsymbol{\\Omega}}^\\mathrm{H} \\end{pmatrix}.$$ These corrections appear on the right hand side of the linear system.\n\n**Short title for page headings:** *Scattering of sperm by a microchannel feature*\n\n[^1]: Author for correspondence, email: `email@example.com`","meta":{"dup_signals":{"dup_doc_count":16,"dup_dump_count":15,"dup_details":{"curated_sources":1,"2018-39":1,"2018-34":1,"2018-22":1,"2018-09":1,"2018-05":1,"2017-47":1,"2017-39":2,"2017-30":1,"2017-22":1,"2017-09":1,"2017-04":1,"2016-50":1,"2016-40":1,"2018-47":1}},"filename":"out\/1410.6357_extract_scattering_of_sperm_arxiv.tex.md"},"subset":"arxiv"} +{"text":"title: LEVELS OF REALITY AS SOURCE OF QUANTUM INDETERMINACY[^1]\n\nBasarab NICOLESCU\n\nLaboratoire de Physique Nucl\u00e9aire et de Hautes Energies (LPNHE) [^2] \nLPTPE, Tour 12 E3 - 4, Place Jussieu 75252 Paris Cedex 05, France \ne-mail: email@example.com\n\n# Quantum physics and levels of Reality\n\nThe major cultural impact of the quantum physics has certainly raised questions for the contemporary philosophical dogma of the existence of a single level of Reality .\n\nHere the meaning we give to the word \"Reality\" is pragmatic and ontological at the same time.\n\nBy Reality I intend first of all to designate that which resists our experiences, representations, descriptions, images or mathematical formalizations. Quantum physics caused us to discover that abstraction is not simply an intermediary between us and Nature, a tool for describing reality, but rather, one of the constituent parts of Nature. In quantum physics, mathematical formalization is inseparable from experience. It resists in its own way by its simultaneous concern for internal consistency, and the need to integrate experimental data without destroying that self-consistency.\n\nIn so far as Nature participates in the being of the world one must ascribe an ontological dimension to the concept of Reality. Nature is an immense, inexhaustible source of the unknown which justifies the very existence of science. Reality is not only a social construction, the consensus of a collectivity, or an intersubjective agreement. It also has a trans-subjective dimension, to the extent that one simple experimental fact can ruin the most beautiful scientific theory.\n\nBy *level of Reality* I intend to designate an ensemble of systems which are invariant under the action of certain general laws: for example, quantum entities are subordinate to quantum laws, which depart radically from the laws of the macrophysical world. That is to say that two levels of Reality are *different* if, while passing from one to the other, there is a break in the laws and a break in fundamental concepts (like, for example, causality). No one has succeeded in finding a mathematical formalism which permits the rigorous passage from one world to another. 
Semantic glosses, tautological definitions or approximations are unable to replace a rigorous mathematical formalism. The recent decoherence models have nothing precise to say about the passage between the quantum level and the macrophysical level: in fact, the main problem is not decoherence but precisely *coherence*.\n\nThere are even strong mathematical indications that the continuous passage from the quantum world to the macrophysical world would never be possible. But there is nothing catastrophic about this. The *discontinuity* which is manifest in the quantum world is also manifest in the structure of the levels of Reality. That does not prevent the two worlds from co-existing.\n\nThe levels of Reality are radically different from the levels of organization as these have been defined in systemic approaches . Levels of organization do not presuppose a break with fundamental concepts: several levels of organization appear at one and the same level of Reality. The levels of organization correspond to different structurings of the same fundamental laws. For example, Marxist economy and classical physics belong to one and the same level of Reality.\n\nThe emergence of at least two different levels of Reality in the study of natural systems is a major event in the history of knowledge.\n\nThe existence of different levels of Reality has been affirmed by different traditions and civilizations, but these affirmations were founded on religious dogma or on the exploration of the interior universe.\n\nIn our century, in their questioning of the foundations of science, Edmund Husserl and other scholars have discovered the existence of different levels of perception of Reality by the subject-observer. But these thinkers, pioneers in the exploration of a multi-dimensional and multi-referential reality, have been marginalized by academic philosophers and misunderstood by the majority of physicists, enclosed in their respective specializations. The view I am expressing here fully conforms to that of Heisenberg, Pauli and Bohr.\n\nIn fact, Werner Heisenberg came very near, in his philosophical writings, to the concept of "level of Reality". In his famous *Manuscript of the year 1942* (published only in 1984) Heisenberg, who knew Husserl well, introduces the idea of three *regions of reality*, which give access to the concept of "reality" itself: the first region is that of classical physics, the second that of quantum physics, biology and psychic phenomena, and the third that of religious, philosophical and artistic experiences . This classification has a subtle basis: the ever closer connection between the Subject and the Object.\n\nAs we shall see in what follows, the notion of levels of Reality will lead us to a general philosophical understanding of the nature of indeterminacy.
If there were only one region or level of reality, it would be impossible to conceive what a true, irreducible indeterminacy, like the quantum one, could mean.\n\n# The logic of the included middle\n\nKnowledge of the coexistence of the quantum world and the macrophysical world and the development of quantum physics has led, on the level of theory and scientific experiment, to the upheaval of what were formerly considered to be *pairs of mutually exclusive contradictories* (A and non-A): wave *and* corpuscle, continuity *and* discontinuity, separability *and* nonseparability, local causality *and* global causality, symmetry *and* breaking of symmetry, reversibility *and* irreversibility of time, etc.\n\nThe intellectual scandal provoked by quantum mechanics consists in the fact that the pairs of contradictories that it generates are actually mutually contradictory when they are analyzed through the interpretative filter of classical logic. This logic is founded on three axioms:\n\n1. *The axiom of identity*: A is A.\n\n2. *The axiom of non-contradiction*: A is not non-A.\n\n3. *The axiom of the excluded middle*: There exists no third term T which is at the same time A and non-A.\n\nUnder the assumption of the existence of a single level of Reality, the second and third axioms are obviously equivalent.\n\nIf one accepts classical logic, one immediately arrives at the conclusion that the pairs of contradictories advanced by quantum physics are mutually exclusive, because one cannot affirm the validity of a thing and its opposite at the same time: A *and* non-A.\n\nSince the definitive formulation of quantum mechanics around 1930, the founders of the new science have been acutely aware of the problem of formulating a new, "quantum logic." Subsequent to the work of Birkhoff and von Neumann, a veritable flourishing of quantum logics was not long in coming . The aim of these new logics was to resolve the paradoxes which quantum mechanics had created and to attempt, to the extent possible, to arrive at a predictive power stronger than that afforded by classical logic.\n\nMost quantum logics have modified the second axiom of classical logic \u2013 the axiom of non-contradiction \u2013 by introducing non-contradiction with several truth values in place of the binary pair (A, non-A). These multivalent logics, whose status with respect to their predictive power remains controversial, have not taken into account one other possibility: the modification of the third axiom \u2013 the axiom of the excluded middle.\n\nHistory will credit St\u00e9phane Lupasco with having shown that the *logic of the included middle* is a true logic, formalizable and formalized, multivalent (with three values: A, non-A, and T) and non-contradictory . His philosophy, which takes quantum physics as its point of departure, has been marginalized by physicists and philosophers. Curiously, on the other hand, it has had a powerful albeit underground influence among psychologists, sociologists, artists, and historians of religions.
Perhaps the absence of the notion of "levels of Reality" in his philosophy obscured its substance: many persons wrongly believed that Lupasco's logic violated the principle of non-contradiction.\n\nOur understanding of the axiom of the included middle \u2013 *there exists a third term T which is at the same time A and non-A* \u2013 is completely clarified once the notion of "levels of Reality" is introduced.\n\nIn order to obtain a clear image of the meaning of the included middle, we can represent the three terms of the new logic \u2013 A, non-A, and T \u2013 and the dynamics associated with them by a triangle in which one of the vertices is situated at one level of Reality and the two other vertices at another level of Reality. If one remains at a single level of Reality, all manifestation appears as a struggle between two contradictory elements (example: wave A and corpuscle non-A). The third dynamic, that of the T-state, is exercised at another level of Reality, where that which appears to be disunited (wave or corpuscle) is in fact united (quanton), and that which appears contradictory is perceived as non-contradictory.\n\nIt is the projection of T on one and the same level of Reality which produces the appearance of mutually exclusive, antagonistic pairs (A and non-A). A single level of Reality can only create antagonistic oppositions. It is inherently *self-destructive* if it is completely separated from all the other levels of Reality. A third term, let us call it $T_{0}$, which is situated on the same level of Reality as that of the opposites A and non-A, cannot accomplish their reconciliation.\n\nThe T-term is the key to understanding indeterminacy: being situated on a different level of Reality than A and non-A, it necessarily induces an *influence* of its own level of Reality upon its neighbouring and different level of Reality: *the laws of a given level are not self-sufficient to describe the phenomena occurring at the respective level*.\n\nThe entire difference between a triad of the included middle and a Hegelian triad is clarified by consideration of the role of *time*. *In a triad of the included middle the three terms coexist at the same moment in time*. By contrast, each of the three terms of the Hegelian triad succeeds the previous one in time. This is why the Hegelian triad is incapable of accomplishing the reconciliation of opposites, whereas the triad of the included middle is capable of it. In the logic of the included middle the opposites are rather *contradictories*: the tension between contradictories builds a unity which includes and goes beyond the sum of the two terms. The Hegelian triad would never explain the nature of indeterminacy.\n\nOne also sees the great danger of misunderstanding engendered by the all too common confusion between the axiom of the excluded middle and the axiom of non-contradiction . The logic of the included middle is non-contradictory in the sense that the axiom of non-contradiction is thoroughly respected, a condition which enlarges the notions of "true" and "false" in such a way that the rules of logical implication no longer concern two terms (A and non-A) but three terms (A, non-A and T), co-existing at the same moment in time.
This is a formal logic, just like any other formal logic: its rules are derived by means of a relatively simple mathematical formalism.\n\nOne can see why the logic of the included middle is not simply a metaphor, like some kind of arbitrary ornament for classical logic, which would permit adventurous incursions into the domain of complexity. *The logic of the included middle is the privileged logic of complexity*, privileged in the sense that it allows us to cross the different areas of knowledge in a coherent way, by enabling a new kind of simplicity.\n\nThe logic of the included middle does not abolish the logic of the excluded middle: it only constrains its sphere of validity. The logic of the excluded middle is certainly valid for relatively simple situations. By contrast, the logic of the excluded middle is harmful in complex, transdisciplinary cases. For me, the problem of indeterminacy belongs precisely to this class of cases.\n\n# The G\u00f6delian unity of the world\n\nThe transdisciplinary approach sets forth for consideration a multi-dimensional Reality, structured by multiple levels replacing the single level of classical thought \u2013 one-dimensional reality. This proposal is not enough, by itself, to justify a new vision of the world. We must first of all answer many questions in the most rigorous possible way. What is the nature of the theory which can describe the passage from one level of Reality to another? Is there truly a coherence, a unity of the totality of levels of Reality? What is the role of the subject-observer of Reality in the dynamics of the possible unity of all the levels of Reality? Is there a level of Reality which is privileged in relation to all other levels? What is the role of reason in the dynamics of the possible unity of knowledge? What is the predictive power of the new model of Reality in the sphere of reflection and action? Finally, is understanding of the present world possible?\n\nAccording to our model, Reality comprises a certain number of levels \\[1,2\\]. The considerations which follow do not depend on whether this number is finite or infinite. For the sake of clarity, let us suppose that this number is infinite.\n\nTwo adjacent levels are connected by the logic of the included middle in the sense that the T-state present at a certain level is connected to a pair of contradictories (A and non-A) at the immediately adjacent level. The T-state effects the unification of the contradictories A and non-A, but this unification takes place at a level *different* from the one on which A and non-A are situated. The axiom of non-contradiction is thereby respected. Does this fact signify that we can obtain a complete theory, which will be able to account for all known and forthcoming results?\n\nThere is certainly a coherence between different levels of Reality, at least in the natural world. In fact, an immense *self-consistency* \u2013 a cosmic bootstrap \u2013 seems to govern the evolution of the universe, from the infinitely small to the infinitely large, from the infinitely brief to the infinitely long . A flow of information is transmitted in a coherent manner from one level of Reality to another level of Reality in our physical universe.\n\nThe logic of the included middle is capable of describing the coherence between the levels of Reality by an iterative process defined by the following stages: 1.
A pair of contradictories (A, non-A) situated at a certain level of Reality is unified by a T-state situated at a contiguous level of Reality; 2. In turn, this T-state is linked to a pair of contradictories (A', non-A'), situated at its own level; 3. The pair of contradictories (A', non-A') is, in its turn, unified by a T'-state situated at a different level of Reality, immediately contiguous to that where the ternary (A', non-A', T) is found. The iterative process continues indefinitely until all the levels of Reality, known or conceivable, are exhausted.\n\nIn other terms, the action of the logic of the included middle on the different levels of Reality induces an *open*, *G\u00f6delian* structure of the unity of levels of Reality. This structure has considerable consequences for the theory of knowledge because it implies the impossibility of a complete theory, closed in upon itself.\n\nIn effect, in accordance with the axiom of non-contradiction, the T-state realizes the unification of a pair of contradictories (A, non-A) but it is associated, at the same time, with another pair of contradictories (A', non-A'). This signifies that, starting from a certain number of mutually exclusive pairs, one can construct a new theory which eliminates contradictions at a certain level of Reality, but this theory is only temporary because it inevitably leads, under the joint pressure of theory and experience, to the discovery of new pairs of contradictories, situated at a new level of Reality. In turn, this theory will therefore be replaced by still more unified theories as new levels of Reality are discovered. This process will continue indefinitely without ever resulting in a completely unified theory. The axiom of non-contradiction is increasingly strengthened during this process. In this sense, without ever leading to an absolute non-contradiction, we can speak of an *evolution of knowledge* which encompasses all the levels of Reality: knowledge which is forever open. Finer matter penetrates coarser matter, just as quantum matter penetrates macrophysical matter, but the reverse is not true. *Degrees of materiality* induce an orienting arrow for tracing the transmission of information from one level to the other. This orienting arrow is associated with the discovery of more and more general, unifying, and encompassing laws.\n\nThe open structure of the unity of levels of Reality is in accord with one of the most important scientific results of the 20th century concerning arithmetic, the theorem of Kurt G\u00f6del. G\u00f6del's theorem tells us that a sufficiently rich system of axioms inevitably leads to results which are either undecidable or contradictory. The implications of G\u00f6del's theorem have considerable importance for all modern theories of knowledge. First of all, it concerns not only the field of arithmetic but also all mathematics which includes arithmetic. Now, obviously the mathematics which underlies theoretical physics includes arithmetic. This means that any search for a complete physical theory is illusory.\n\nIn fact, the search for an axiomatic system leading to a complete theory (without undecidable or contradictory results) marks at once the apex and the starting point of the decline of classical thought. The axiomatic dream is unraveled by the verdict of the holy of holies of classical thought \u2013 mathematical rigor.\n\nThe theorem that Kurt G\u00f6del demonstrated in 1931 sounded only a faint echo beyond a very limited circle of specialists. 
The difficulty and extreme subtlety of its demonstration explains why this theorem has taken a certain time to be understood within the mathematical community. Today, it has scarcely begun to penetrate the world of physicists. Wolfgang Pauli, one of the founders of quantum mechanics, was one of the first physicists to understand the extreme importance G\u00f6del's theorem has for the construction of physical theories .\n\nThe G\u00f6delian structure of the unity of levels of Reality associated with the logic of the included middle implies that it is impossible to construct a complete theory for describing the passage from one level to the other and, *a fortiori*, for describing the unity of levels of Reality.\n\nIf it does exist, the unity linking all the levels of Reality must necessarily be an open *unity*.\n\nTo be sure, there is a coherence of the unity of levels of Reality, but we must remember that this coherence is *oriented* : there is an arrow associated with all transmission of information from one level to the other. As a consequence of this, if coherence is limited only to the levels of Reality, it is stopped at the \"highest\" level and at the \"lowest\" level. If we wish to posit the idea of a coherence which continues beyond these two limited levels so that there is an open unity, one must conceive the unity of levels of Reality as a unity which is extended by a *zone of non-resistance* to our experiences, representations, descriptions, images and mathematical formalizations. Within our model of Reality, this zone of non-resistance corresponds to the \"veil\" which Bernard d'Espagnat referred to as \"the veil of the real\" . The \"highest\" level and the \"lowest\" level of the unity of levels of Reality are united across a zone of absolute transparence. But these two levels are different; from the point of view of our experiences, representations, descriptions, images, and mathematical formalizations, absolute transparence functions like a veil. In fact, the open unity of the world implies that that which is \"below\" is the same as that which is \"above\". The isomorphism between \"above\" and \"below\" is established by the zone of non-resistance.\n\nQuite simply, the non-resistance of this zone of absolute transparence is due to the limitations of our bodies and of our sense organs, limitations which apply regardless of the instruments of measure used to extend these sense organs. To claim that there is an infinite human knowledge (which excludes any zone of non-resistance), while simultaneously affirming the limitations of our body and our sense organs, seems to us a feat of linguistic sleight of hand. The zone of non-resistance corresponds to the sacred, *that is to say to that which does not submit to any rationalization*.\n\nThe unity of levels of Reality and its complementary zone of non-resistance constitutes *the transdisciplinary Object*.\n\nA new *Principle of Relativity* emerges from the coexistence between complex plurality and open unity : *no one level of Reality constitutes a privileged place from which one is able to understand all the other levels of Reality*. A level of Reality is what it is because all the other levels exist at the same time. This Principle of Relativity is what originates a new perspective on religion, politics, art, education, and social life. 
In the transdisciplinary vision, Reality is not only multi-dimensional, it is also multi-referential.\n\nThe different levels of Reality are accessible to human knowledge thanks to the existence of different *levels of perception*, which are in bi-univocal correspondence with levels of Reality. These levels of perception permit an increasingly general, unifying, encompassing vision of Reality, without ever entirely exhausting it.\n\nAs in the case of levels of Reality, the coherence of levels of perception presupposes a zone of non-resistance to perception.\n\nThe unity of levels of perception and its complementary zone of non-resistance constitutes *the transdisciplinary Subject*.\n\nThe two zones of non-resistance of the transdisciplinary Object and Subject must be *identical* in order that the transdisciplinary Subject can communicate with the transdisciplinary Object. *A flow of consciousness crossing the different levels of perception in a coherent manner must correspond to the flow of information crossing the different levels of Reality in a coherent manner*. The two flows are in a relation of *isomorphism* thanks to the existence of one and the same zone of non-resistance. Knowledge is neither exterior nor interior: it is *at the same time* exterior and interior. The study of the universe and the study of the human being sustain one another. The zone of non-resistance permits the unification of the transdisciplinary Subject and the transdisciplinary Object while preserving their difference.\n\nTransdisciplinarity is the transgression of the duality opposing binary pairs: subject\/object, subjectivity\/objectivity, matter\/consciousness, nature\/divine, simplicity\/complexity, reductionism\/holism, diversity\/unity. This duality is transgressed by the open unity which encompasses both the universe and the human being.\n\nThe transdisciplinary model of Reality has, in particular, some important consequences for the study of *complexity*. Without its contradictory pole of simplicity (or, more precisely, *simplexity*), complexity appears as an ever-increasing *distance* between the human being and Reality which introduces a self-destructive alienation of the human being, who is plunged into the absurdity of destiny. The infinite simplicity of the transdisciplinary Subject corresponds to the infinite complexity of the transdisciplinary Object.\n\nThe Subject\/Object problem was central for the founding fathers of quantum mechanics. Pauli and Heisenberg, like Husserl, Heidegger and Cassirer, refuted the basic axiom of modern metaphysics: the clear-cut distinction between Subject and Object. Our considerations here are inscribed in the same framework.\n\n# The death and the resurrection of Nature\n\nModernity is particularly deadly. It has invented all kinds of \"deaths\" and \"ends\": the death of God, the death of Man, the end of ideologies, the end of history and, today, the end of science.\n\nBut there is a death which is spoken of much less, on account of shame or ignorance: *the death of Nature*. In my view, this death of Nature is the source of all the other deadly concepts which were just invoked. In any case, the very word \"Nature\" has ended up disappearing from the scientific vocabulary. Of course, the \"man in the street\", just like the scientist (in popularized works), still uses this word, but in a confused, sentimental way, reminiscent of magic.\n\nSince the beginning of time we have not stopped modifying our vision of Nature. 
Historians of science are in accord in saying that, despite all appearances to the contrary, there is not only one vision of Nature across time. What can there be in common between the Nature of so-called \"primitive\" peoples, the Nature of the Greeks, the Nature in the time of Galileo, of the Marquis de Sade, of Laplace or of Novalis? The vision of Nature of a given period depends on the imaginary which predominates during that period; in turn, that vision depends on a multiplicity of parameters: the degree of development of science and technology, social organization, art, religion, etc. Once formed, an image of Nature exercises an influence on all areas of knowledge. The passage from one vision to another is not progressive, continuous \u2013 it occurs by means of sharp, radical, discontinuous ruptures. Several contradictory visions can co-exist. The extraordinary diversity of visions of Nature explains why one cannot speak of Nature, but only of a certain nature in accord with the imaginary of a given period.\n\nThe image of Nature has always had a multiform action: it has influenced not only science but also art, religion, and social life. This allows us to explain some strange synchronicities. Here I limit myself to but a single example: the simultaneous appearance of the theory of the end of history and of the end of science just before the beginning of the 3rd millenium. For example, unified theories in physics have as their aim the elaboration of a complete approach, founded on a unique interaction, which can predict everything (hence the name, \"Theory of Everything\"). It is quite obvious that if such a theory were formulated in the future, it would signify the end of fundamental physics, because there would be nothing left to look for. It is interesting to observe that both the idea of the end of history and of the end of science have simultaneously emerged from the \"end of the century\" imaginary.\n\nNotwithstanding the abundant and fascinating diversity of images of Nature one can nevertheless distinguish three main stages: Magic Nature, Nature as Machine, and the Death of Nature. Magical thought views nature as a living organism, endowed with intelligence and consciousness. The fundamental postulate of magical thought is that of universal interdependence: Nature cannot be conceived outside of its relations with us. Everything is sign, trace, signature, symbol. Science, in the modern sense of this word, is superfluous.\n\nAt the other extreme, the mechanist and determinist thought of the 18th and above all the 19th century (which, by the way, still predominates today) conceives Nature not as an organism, but as a machine. It suffices to disassemble this machine piece by piece in order to possess it entirely. The fundamental postulate of mechanistic and determinist thought is that Nature can be known and conquered by scientific methodology, defined in a way which is completely independent of human beings and separate from us.\n\nThe logical outcome of the mechanist and determinist vision is the Death of Nature, the disappearance of the concept of Nature from the scientific field. From the very beginning of the mechanistic vision, Nature as Machine, with or without the image of God as watchmaker, is split up into an ensemble of separate parts. From that moment on, there is no more need for a coherent whole, for a living organism, or even, for a machine which still kept the musty odor of finality. Nature is dead, but complexity remains. 
An astonishing complexity (in fact, often confused with \"complication\") penetrates each and every field of knowledge. But this complexity is perceived as an accident; we ourselves are considered to be an accident of complexity.\n\nThe Death of Nature is incompatible with a coherent interpretation of the results of contemporary science, in spite of the persistence of the neo-reductionist attitude which accords exclusive importance to the fundamental building-blocks of matter and to the four known physical interactions. According to this neo-reductionist attitude, all recourse to Nature is superfluous and devoid of sense. In truth, Nature is dead only for a certain vision of the world \u2013 the classical vision.\n\nThe rigid objectivity of classical thought is only viable in the classical world. The idea of a total separation between an observer and a Reality assumed to be completely independent from that observer brings us to the verge of insurmountable paradoxes. In fact, a far more subtle notion of objectivity characterizes the quantum world: objectivity depends on the level of Reality in question.\n\nSpace-time itself no longer rests on a fixed concept. Our space-time, which proceeds in four dimensions, is not the only conceivable space-time. According to certain physical theories, it appears rather as an approximation, like a part of a space-time all the richer for being the generator of possible phenomena. Supplementary dimensions are not the result of mere intellectual speculation. On the one hand, these dimensions are necessary to ensure the self-consistency of the theory and the elimination of certain undesirable aspects. On the other hand, they do not have a purely formal character \u2013 they have physical consequences at our own scale. For example, according to certain cosmological theories, if the universe had been associated, from the \"beginning\" of the big bang, with a multi-dimensional space-time, the supplementary dimensions would have remained forever hidden, unobservable; rather, their vestiges would be precisely the known physical interactions. By generalizing the example provided by particle physics, it becomes conceivable that certain levels of Reality correspond to a space-time different from that characterizing our own level. Moreover, complexity itself would depend on the nature of space-time as well.\n\nWe can go, as Heisenberg did, a step further and assert that the classical four-dimensional space-time is, in fact, an *anthropomorphic concept*, founded on our sense organs.\n\nAccording to present scientific conceptions, *matter* is far from being identical with substance. In the quantum world, matter is associated with a *substance-energy-information-space-time complexus*.\n\nIt is somewhat mysterious why trajectories played such a central role in the formulation of modern physics. Quantum indeterminacy showed that trajectories are not a fundamental concept. In more recent years, a new discipline has been born from the unexpected encounter between the theory of information and quantum mechanics: the Quantum Theory of Information. This new-born science already poses a crucial question: are *information laws* more general, and therefore deeper, than the equations of motion? Are the central concepts of positions, speeds and trajectories of particles to be abandoned in favour of information laws which, in fact, could be valid not only for physics but also for other fields of knowledge? 
In recent years there have been fabulous experimental advances in the fields of non-separability, entanglement, quantum cryptography and teleportation, in conjunction with the possible advent of quantum computers. This shows that notions like \"levels of Reality\" or the \"included middle\" are ceasing to be mere theoretical speculations: they are entering today into the field of experiments and, tomorrow, into everyday life.\n\nWe can assert that the very notion of *laws of Nature* completely changes its content when compared with that of the classical vision. This situation can be summed up by three theses formulated by the well-known physicist Walter Thirring:\n\n1\\. *The laws of any inferior level are not completely determined by the laws of a superior level*. Thus, notions well anchored in classical physics, like \"fundamental\" and \"accidental,\" must be re-examined. That which is considered to be fundamental on one level can appear to be accidental on a superior level, and that which is considered to be accidental or incomprehensible on a certain level can appear to be fundamental on a superior level.\n\n2\\. *The laws of an inferior level depend more on the circumstances of their emergence than on the laws of a superior level*. The laws of a certain level depend essentially on the local configuration to which these laws refer. There is therefore a kind of local autonomy of the respective levels of Reality; however, certain internal ambiguities concerning the laws of an inferior level of Reality are resolved by taking into account the laws of a superior level. It is the internal consistency of laws which reduces the ambiguity of laws.\n\n3\\. *The hierarchy of laws evolves at the same time as the universe itself*. In other words, the *birth of laws* occurs simultaneously with the evolution of the universe. These laws pre-exist at the \"beginning\" of the universe as potentialities. It is the evolution of the universe which actualizes these laws and their hierarchy. A transdisciplinary model of Nature must integrate all this new knowledge of the emergent characteristics of the physical universe.\n\nThirring's description of the laws of Nature is in perfect agreement with our own considerations about the G\u00f6delian structure of Nature and knowledge. The problem of quantum indeterminacy can now be fully understood as *the influence of the quantum level of Reality on our own macrophysical level of Reality*. Of course, the laws of the macrophysical level depend more, as Thirring writes, on \"the circumstances of their emergence\". From the point of view of the macrophysical level, indeterminacy appears as accidental, incomprehensible, or at most as a rare event. But this reveals, in fact, an internal ambiguity which can be solved only by taking into account the laws of the quantum level. At this last level the indeterminacy is fundamental.\n\nOne can ask whether one cannot logically conceive of a *generalized indeterminacy*, which goes far beyond the problem of the trajectories of particles. Heisenberg already considered the *indeterminacy of language*: natural language cannot express all of its elements with arbitrarily high precision, because the way of expressing acts in an essential manner on what is expressed. 
The indeterminacy of the natural language is just one example of the generalized indeterminacy generated by the G\u00f6delian structure of Nature and knowledge.\n\nIn conclusion, we can distinguish three major aspects of Nature in accordance with the transdisciplinary model of Reality :\n\n\\(1\\) *Objective Nature*, which is connected with the natural properties of the transdisciplinary Object; objective Nature is subject to subjective objectivity. This objectivity is subjective to the extent that the levels of Reality are connected to levels of perception. Nevertheless emphasis here is on objectivity, to the extent to which the methodology employed is that of science.\n\n2\\) *Subjective Nature*, which is connected with the natural properties of the transdisciplinary Subject; subjective Nature is subject to objective subjectivity. This subjectivity is objective to the extent that the levels of perception are connected to levels of Reality. Nevertheless, emphasis here is on subjectivity, to the extent to which the methodology is employed is that of the ancient science of being, which crosses all the traditions and religions of the world.\n\n3\\) *Trans-Nature*, which is connected with a similarity in Nature which exists between the transdisciplinary Object and the transdisciplinary Subject. Trans-Nature concerns the domain of the sacred. It cannot be approached without considering the other two aspects of Nature at the same time.\n\nTransdisciplinary Nature has a ternary structure (objective Nature, subjective Nature, trans-Nature), which defines *living Nature*. This Nature is living because it is there that life is present in all its degrees and because its study demands the integration of *lived experience*. The three aspects of Nature must be considered simultaneously in terms of their inter-relation and their conjunction within all the phenomena of living Nature .\n\nThe study of living Nature asks for a new methodology \u2013 transdisciplinary methodology \u2013 which is different from the methodology of modern science and the methodology of the ancient science of being. It is the *co-evolution* of the human being and of the universe which asks for a new methodology.\n\nAn attempt to elaborate a new *Philosophy of Nature*, a privileged mediator of a dialogue between all the areas of knowledge, is one of the highest priorities of transdisciplinarity.\n\n[^1]: Published in *Determinismo e complessit\u00e0*, Fondazione Nova Spes and Armando Editore, Roma, 2000, pp. 127-158, edited by F. Tito Arecchi.\n\n[^2]: Unit\u00e9 de Recherche des Universit\u00e9s Paris 6 et Paris 7, Associ\u00e9e au CNRS.","meta":{"dup_signals":{"dup_doc_count":64,"dup_dump_count":47,"dup_details":{"curated_sources":2,"2023-14":1,"2021-21":1,"2021-04":1,"2020-34":1,"2020-10":1,"2019-26":1,"2019-04":2,"2018-51":1,"2018-47":1,"2018-30":1,"2018-05":1,"2017-34":2,"2017-30":2,"2017-26":3,"2017-17":4,"2017-04":1,"2016-50":1,"2016-44":1,"2016-40":1,"2016-36":1,"2016-22":1,"2016-18":1,"2016-07":1,"2015-48":1,"2015-40":1,"2015-35":1,"2015-32":1,"2015-22":1,"2015-14":1,"2014-52":1,"2014-42":4,"2014-41":1,"2014-35":1,"2014-23":2,"2014-15":3,"2023-23":1,"2024-18":1,"2024-10":1,"2017-13":3,"2015-18":1,"2015-11":1,"2015-06":1,"2014-10":1,"2013-48":1,"2013-20":1,"2024-22":1}},"filename":"out\/quant-ph0012007_extract_levels.tex.md"},"subset":"arxiv"} +{"text":"author: A. J. Landahl; D. S. Lobser; B. C. A. Morrison; K. M. Rudinger; A. E. Russo; J. W. Van Der Wall; P. 
Maunz\ndate: \n March 4, 2020\ntitle: \u00a0 \n Jaqal\u2122, the Quantum Assembly Language for QSCOUT\n\n# To Learn More\n\nTo learn more about QSCOUT and the Jaqal\u2122 language developed for it, please visit [qscout.sandia.gov](https:\/\/qscout.sandia.gov) or send an e-mail to .\n\n# Introduction\n\nQSCOUT is the Quantum Scientific Computing Open User Testbed, a trapped-ion quantum computer testbed realized at Sandia National Laboratories on behalf of the Department of Energy's Office of Science and its Advanced Scientific Computing Research (ASCR) program. As an open user testbed, QSCOUT provides the following to its users:\n\n- **Transparency**: Full implementation specifications of the underlying native trapped-ion quantum gates.\n\n- **Extensibility**: Pulse definitions can be programmed to generate custom trapped-ion gates.\n\n- **Schedulability**: Users have full control of sequential and parallel execution of quantum gates.\n\n## QSCOUT Hardware 1.0\n\nThe first version (1.0) of the QSCOUT hardware realizes a single register of qubits stored in the hyperfine clock states of trapped ${}^{171}$Yb${}^+$ ions arranged in a one-dimensional chain. Single and multi-qubit gates are realized by tightly focused laser beams that can address individual ions. The native operations available on this hardware include the following:\n\n- Global preparation and measurement of all qubits in the $z$ basis.\n\n- Parallel single-qubit rotations about any axis in the equatorial plane of the Bloch sphere.\n\n- The M\u00f8lmer\u2013S\u00f8rensen two-qubit gate between any pair of qubits, in parallel with no other gates.\n\n- Single-qubit $Z$ gates executed virtually by adjusting the reference clocks of individual qubits.\n\nImportantly, QSCOUT 1.0 does not support measurement of a subset of the qubits. Consequently, it also does not support classical feedback. This is because, for ions in a single chain, the resonance fluorescence measurement process destroys the quantum states of all qubits in the ion chain, so that there are no quantum states onto which feedback can be applied. Future versions of the QSCOUT hardware will support feedback.\n\nQSCOUT 1.0 uses ***Just Another Quantum Assembly Language (Jaqal)*** (described [below](#jaqal-quantum-assembly-language)) to specify quantum programs executed on the testbed. On QSCOUT 1.0, every quantum computation starts with preparation of the quantum state of the entire qubit register in the $z$ basis. Then it executes a sequence of parallel and sequential single and two-qubit gates. After this, it executes a simultaneous measurement of all qubits in the $z$ basis, returning the result as a binary string. This sequence of prepare-all\/do-gates\/measure-all can be repeated multiple times in a Jaqal program, if desired. However, any adaptive program that uses the results of one such sequence to issue a subsequent sequence must be done with metaprogramming, because Jaqal does not currently support feedback. Once the QSCOUT platform supports classical feedback, Jaqal will be extended to support it as well.\n\n# Gate Pulse File\n\nThe laser pulses that implement built-in or custom trapped-ion gates are defined in a ***Gate Pulse File (GPF)***. Eventually, users will be able to write their own GPF files, but that capability will not be available in our initial software release. However, users will be free to specify composite gates by defining them as sub-circuit [macros](#macro-statement). 
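As a brief illustration (ours, not part of the built-in gate set; the macro name `Szz` and its decomposition are offered only as a sketch), a composite ZZ-type interaction could be defined from the native gates listed below by conjugating the `Sxx` gate with $\\pi\/2$ rotations about the $y$ axis, using the macro syntax described later in this document:\n\n \/\/ Hypothetical composite gate: a ZZ-type analogue of Sxx,\n \/\/ built by conjugating Sxx with Sy\/Syd rotations (up to a global phase).\n macro Szz a b {\n Sy a\n Sy b\n Sxx a b\n Syd a\n Syd b\n }\n\nOnce defined, such a macro can be invoked anywhere a native two-qubit gate could appear.\n\n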
Additionally, custom native gates can be added in collaboration with Sandia scientists by specifying the pulse sequences that have to be applied to the trapped ion qubits to realize the gate.\n\nWe have provided a GPF file for the built-in gates on the QSCOUT 1.0 platform. This file is not intended to be modified by users, so we are not specifying its contents here. However, a full specification of the built-in gates will be available to users of the QSCOUT 1.0 platform. This GPF file contains pulse-level gate definitions for the QSCOUT 1.0 built-in gates listed below. All angle arguments in this list are in units of radians, with 40 bits of precision. The chirality of rotations is determined using the right-hand rule.\n\n- `prepare_all` \n Prepares all qubits in the quantum register in the $|0\\rangle$ state in the $z$ basis.\n\n- `R <qubit> <axis-angle> <rotation-angle>` \n Counter-clockwise rotation around an axis in the equatorial plane of the Bloch sphere defined by `<axis-angle>`, measured counter-clockwise from the $x$ axis, by the angle defined by `<rotation-angle>`.\n\n- `Rx <qubit> <rotation-angle>` \n Counter-clockwise rotation around the $x$ axis, by the angle defined by `<rotation-angle>`.\n\n- `Ry <qubit> <rotation-angle>` \n Counter-clockwise rotation around the $y$ axis, by the angle defined by `<rotation-angle>`.\n\n- `Rz <qubit> <rotation-angle>` \n Counter-clockwise rotation around the $z$ axis, by the angle defined by `<rotation-angle>`.\n\n- `Px <qubit>` \n Counter-clockwise rotation around the $x$ axis, by $\\pi$. (Pauli $X$ gate.)\n\n- `Py <qubit>` \n Counter-clockwise rotation around the $y$ axis, by $\\pi$. (Pauli $Y$ gate.)\n\n- `Pz <qubit>` \n Counter-clockwise rotation around the $z$ axis, by $\\pi$. (Pauli $Z$ gate.)\n\n- `Sx <qubit>` \n Counter-clockwise rotation around the $x$ axis, by $\\pi\/2$. ($\\sqrt{X}$ gate.)\n\n- `Sy <qubit>` \n Counter-clockwise rotation around the $y$ axis, by $\\pi\/2$. ($\\sqrt{Y}$ gate.)\n\n- `Sz <qubit>` \n Counter-clockwise rotation around the $z$ axis, by $\\pi\/2$. ($\\sqrt{Z}$ gate.)\n\n- `Sxd <qubit>` \n Clockwise rotation around the $x$ axis, by $\\pi\/2$. ($\\sqrt{X}^\\dagger$ gate.)\n\n- `Syd <qubit>` \n Clockwise rotation around the $y$ axis, by $\\pi\/2$. ($\\sqrt{Y}^\\dagger$ gate.)\n\n- `Szd <qubit>` \n Clockwise rotation around the $z$ axis, by $\\pi\/2$. ($\\sqrt{Z}^\\dagger$ gate.)\n\n- `MS <qubit> <qubit> <axis-angle> <rotation-angle>` \n The general two-qubit M\u00f8lmer\u2013S\u00f8rensen gate. If we let $\\theta$ represent `<rotation-angle>` and $\\varphi$ represent `<axis-angle>`, then the gate is $$\\exp\\left(-i\\left(\\frac{\\theta}{2}\\right)(\\cos \\varphi X + \\sin \\varphi Y)^{\\otimes 2}\\right).$$\n\n- `Sxx <qubit> <qubit>` \n The XX-type two-qubit M\u00f8lmer\u2013S\u00f8rensen gate: $$\\exp\\left(-i\\left(\\frac{\\pi}{4}\\right) X\\otimes X\\right).$$\n\n- `measure_all` \n Measures all qubits of the quantum register in the $z$ basis. After measurement, ions will be outside the qubit space. Therefore, the qubits have to be prepared again before any other gates can be applied.\n\nThe gate pulse definitions also include idle gates with the same duration as the single- and two-qubit gates. These have a prefix of `I_`. For example, an idle gate of the same duration as a `Px` can be obtained by `I_Px <qubit>`. It is important to note that it is not necessary to explicitly insert idle gates on idling qubits in a parallel block. Explicit idle gates are meant to be used for performance testing and evaluation.\n\nThe open nature of the QSCOUT testbed requires a flexible **Quantum Assembly Language (QASM)** that empowers QSCOUT users to extend the set of native gates and fully control the execution of the quantum program on the QSCOUT testbed. 
Due to the proliferation of such languages in this fledgling field, ours is named **Just Another Quantum Assembly Language**, or **Jaqal**.\n\nTo realize our objectives, the Jaqal QASM language fulfills the following requirements:\n\n- Jaqal fully specifies the allocation of qubits within the quantum register, which *cannot* be altered during execution.\n\n- Jaqal requires the scheduling of sequential and parallel gate sequencing to be fully and explicitly specified.\n\n- Jaqal can execute any native (built-in or custom) gate specified in any GPF file it references.\n\nWhile Jaqal is built upon a lower-level pulse definition in GPF files, it is the lowest-level QASM programming language exposed to users in QSCOUT. We anticipate that users will develop their own higher-level programming languages that compile down to Jaqal. We plan to release Jaqal-branded metaprogramming tools after user-driven innovation at this metaprogramming level settles down.\n\n# Jaqal Syntax\n\nA Jaqal file consists of gates and metadata making those gates easier to read and write. The gates that are run on the machine can be deterministically computed by inspection of the source text. This implies that there are no conditional statements at this level. This section will describe the workings of each statement type.\n\nWhitespace is largely unimportant except as a separator between statements and their elements. If it is desirable to put two statements on the same line, a ';' separator may be used. In a parallel block, the pipe ('|') must be used instead of the ';'. Like the semicolon, however, the pipe is unnecessary to delimit statements on different lines. Both Windows and Linux newline styles will be accepted.\n\n## Identifiers\n\nGate names and qubit names have the same character restrictions. Similar to most programming languages, they may contain, but not start with, numerals. They are case sensitive and may contain any non-accented Latin character plus the underscore. Identifiers cannot be any of the keywords of the language.\n\n## Comments\n\nC\/C++ style comments are allowed and treated as whitespace. A comment starting with '\/\/' runs to the end of the current line, while a comment starting with '\/\\*' runs until a '\\*\/' is encountered. These comments do not nest, which is the same behavior as C\/C++.\n\n## Header Statements\n\nA properly formatted Jaqal file comprises a header and a body section. All header statements must precede all body statements. The order of header statements is otherwise arbitrary except that all objects must be defined before their first use.\n\n### Register Statement\n\nA register statement serves to declare the user's intention to use a certain number of qubits, referred to in the file by a given name. If the machine cannot supply this number of qubits then the entire program is rejected immediately.\n\nThe following line declares a register named `q` which holds 7 qubits.\n\n register q[7]\n\n### Map Statement\n\nWhile it is sufficient to refer to qubits by their offset in a single register, it is more convenient to assign names to individual qubits. The map statement effectively provides an alias to a qubit or array of qubits under a different name. The following lines declare the single qubit `q[0]` to have the name `ancilla` and the array `qubits` to be an alias for `q`. Array indices start with 0.\n\n register q[3]\n map ancilla q[0]\n map qubits q\n\nThe map statement will also support Python-style slicing. In this case, the map statement always declares an array alias. 
In the following line we relabel every other qubit to be an ancilla qubit, starting with index 1.\n\n register q[7]\n map ancilla q[1:7:2]\n\nAfter this instruction, `ancilla[0]` corresponds to `q[1]`; `ancilla[1]` and `ancilla[2]` correspond to `q[3]` and `q[5]`, respectively.\n\n### Let Statement\n\nWe allow identifiers to replace integers or floating point numbers for convenience. There are no restrictions on capitalization. An integer defined in this way may be used in any context where an integer literal is valid, and a floating point number may similarly be used in any context where a floating point literal is valid. Note that the values are constant, once defined.\n\nExample:\n\n let total_count 4\n let rotations 1.5\n\n## Body Statements\n\n### Gate Statement\n\nGates are listed one per statement, meaning each is terminated either by a newline or a separator. The first element of the statement is the gate name, followed by the gate's arguments, which are whitespace-separated numbers or qubits. Elements of quantum registers, mapped aliases, and local variables (see section on [macros](#macro-statement)) may be freely interchanged as qubit arguments to each gate. The names of the gates are fixed but determined in the Gate Pulse File, except for macros. The number of arguments (\"arity\") must match the expected number. The following is an example of what a 2-qubit gate may look like.\n\n register q[3]\n map ancilla q[1]\n Sxx q[0] ancilla\n\nThe invocation of a macro is treated as completely equivalent to a gate statement.\n\n### Gate Block\n\nMultiple gates and\/or macro invocations may be combined into a single block. This is similar, but not completely identical, to how C or related languages handle statement blocks. Macro definitions and header statements are not allowed in gate blocks. Additionally, statements such as macro definitions or loops expect a gate block syntactically and are not satisfied with a single gate, unlike C.\n\nTwo different gate blocks exist: sequential and parallel. Sequential gate blocks use the standard C-style '{}' brackets, while parallel blocks use angled '\\<\\>' brackets, similar to C++ templates. This choice was made to avoid conflicting with the '\\[\\]' brackets, which are used for arrays, and to reserve '()' for possible future use. In a sequential block, each statement, macro, or gate block waits for the previous one to finish before executing. In a parallel gate block, all operations are executed at the same time. It is an error to request parallel operations that the machine is incapable of performing; however, it is not syntactically possible to forbid these, as they are determined by hardware constraints which may change with time.\n\n[Looping statements](#loop-statement) are allowed inside sequential blocks, but not inside parallel blocks. Blocks may be arbitrarily nested so long as the hardware can support the resulting sequence of operations. 
Blocks may not be nested directly within other blocks of the same type.\n\nThe following statement declares a parallel block with two gates.\n\n < Sx q[0] | Sy q[1] >\n\nThis does the same but on different lines.\n\n <\n Sx q[0]\n Sy q[1]\n >\n\nHere is a parallel block nested inside a sequential one.\n\n {\n Sxx q[0] q[1]\n < Sx q[0] | Sy q[1] >\n }\n\nAnd sequential blocks may be nested inside parallel blocks.\n\n <\n Sx q[0]\n { Sx q[1] ; Sy q[1] }\n >\n\n### Timing within a parallel block\n\nIf two gates are in a parallel block but have different durations (*e.g.*, two single-qubit gates of different length), the default behavior is to *start* each gate within the parallel block simultaneously. The shorter gate(s) will then be padded with idles until the end of the gate block. For example, the command\n\n <\n Rx q[1] 0.1\n Sx q[2]\n >\n\nresults in the `Rx` gate on `q[1]` with angle 0.1 radians and the `Sx` gate on `q[2]` both starting at the same time; the `Rx` gate will finish first and `q[1]` will idle while the `Sx` gate finishes. Once the Jaqal gate set becomes user-extensible, users may define their own scheduling within parallel blocks (*e.g.*, so that gates all *finish* at the same time instead).\n\n### Macro Statement\n\nA macro can be used to treat a sequence of gates as a single gate. Gates inside a macro can access the same qubit registers and mapped aliases at the global level as all other gates, and additionally have zero or more arguments, which are visible only within the macro. Arguments allow the same macro to be applied to different combinations of physical qubits, much like a function in a classical programming language.\n\nA macro may use other macros that have already been declared. A macro declaration is complete at the *end* of its code block. This implies that recursion is impossible. It also implies that macros can only reference other macros created earlier in the file. Due to the lack of conditional statements, recursion would always create an infinite loop and is therefore never desirable.\n\nA macro is declared using the `macro` keyword, followed by the name of the macro, zero or more arguments, and a code block. Unlike C, a macro must use a code block, even if it only has a single statement.\n\nThe following example declares a macro.\n\n macro foo a b {\n Sx a\n Sxx a q[0]\n Sxx b q[0]\n }\n\nTo simplify parsing, a line break is not allowed before the initial '{', unlike C. However, statements may be placed on the same line following the '{'.\n\n### Loop Statement\n\nA gate block may be executed for a fixed number of repetitions using the loop statement. The loop statement is intentionally restricted to running for a fixed number of iterations. This ensures it is easy to deterministically evaluate the runtime of a program. Consequently, it is impossible to write a program which will not terminate.\n\nThe following loop executes a sequence of statements seven times.\n\n loop 7 {\n Sx q[0]\n Sz q[1]\n Sxx q[0] q[1]\n }\n\nThe same rules apply as in macro definitions: '{' must appear on the same line as `loop`, but other statements may follow on the same line.\n\nLoops may appear in sequential gate blocks, but not in parallel gate blocks.\n\n# Extensibility\n\nAs Jaqal and the QSCOUT project more broadly have extensibility as stated goals, it is important to clarify what is meant by this term. Primarily, Jaqal offers extensibility in the gates that can be performed. This will occur through the gate pulse file and the use of macros to define composite gates that can be used in all contexts a native gate can (see the short example below). 
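The following sketch is ours rather than one of the official examples (the macro name, the angle value, and the repetition count are arbitrary); it shows a user-defined composite gate being used exactly like a built-in gate, here inside a loop between state preparation and measurement:\n\n register q[1]\n\n let delay_angle 0.7853981634 \/\/ illustrative accumulated phase, in radians\n\n macro ramsey target { \/\/ composite gate: pi\/2 pulse, frame rotation, second pi\/2 pulse\n Sx target\n Rz target delay_angle\n Sxd target\n }\n\n loop 200 {\n prepare_all\n ramsey q[0]\n measure_all\n }\n\n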
Jaqal will be incrementally improved as new hardware capabilities come online and real-world use identifies areas for enhancement. The language itself, however, is not intended to have many forms of user-created extensibility as a software developer might envision the term. Features we do not intend to support include, but are not limited to, pragma statements, user-defined syntax, and a foreign function interface (i.e.\u00a0using custom C or Verilog code in a Jaqal file).\n\n# Examples\n\n## Bell state preparation\n\nThis example prepares a Bell state using the classic Hadamard and controlled-X circuit, then measures it in the computational basis. Up to the limits of gate fidelity, the measurements of the two qubits should always match.\n\n macro hadamard target { \/\/ A Hadamard gate can be implemented as\n Sy target \/\/ a pi\/2 rotation around Y\n Px target \/\/ followed by a pi rotation around X.\n }\n\n macro cnot control target { \/\/ CNOT implementation from Maslov (2017)\n Sy control\n Sxx control target\n < Sxd control | Sxd target > \/\/ we can perform these in parallel\n Syd control\n }\n\n register q[2]\n\n prepare_all \/\/ Prepare each qubit in the computational basis.\n hadamard q[0]\n cnot q[0] q[1]\n measure_all \/\/ Measure each qubit and read out the results.\n\nHowever, there's a more efficient way of preparing a Bell state that takes full advantage of the native M\u00f8lmer-S\u00f8rensen interaction of the architecture, rather than using it to replicate a controlled-X gate. The following snippet of code repeats that interaction 1024 times, measuring and resetting the ions after each time. All 1024 measurement results will be reported to the user.\n\n register q[2]\n\n loop 1024 {\n prepare_all\n Sxx q[0] q[1]\n measure_all\n }\n\n## Single-Qubit Gate Set Tomography\n\n register q[1]\n\n \/\/ Fiducials\n macro F0 qubit { I_Sx qubit }\n macro F1 qubit { Sx qubit }\n macro F2 qubit { Sy qubit }\n macro F3 qubit { Sx qubit; Sy qubit }\n macro F4 qubit { Sx qubit; Sx qubit; Sx qubit }\n macro F5 qubit { Sy qubit; Sy qubit; Sy qubit }\n\n \/\/ Germs\n macro G0 qubit { Sx qubit }\n macro G1 qubit { Sy qubit }\n macro G2 qubit { I_Sx qubit }\n macro G3 qubit { Sx qubit; Sy qubit }\n macro G4 qubit { Sx qubit; Sy qubit; I_Sx qubit }\n macro G5 qubit { Sx qubit; I_Sx qubit; Sy qubit }\n macro G6 qubit { Sx qubit; I_Sx qubit; I_Sx qubit }\n macro G7 qubit { Sy qubit; I_Sx qubit; I_Sx qubit }\n macro G8 qubit { Sx qubit; Sx qubit; I_Sx qubit; Sy qubit }\n macro G9 qubit { Sx qubit; Sy qubit; Sy qubit; I_Sx qubit }\n macro G10 qubit { Sx qubit; Sx qubit; Sy qubit; Sx qubit; Sy qubit; Sy qubit }\n\n \/\/ Length 1\n prepare_all\n F0 q[0]\n measure_all\n\n prepare_all\n F1 q[0]\n measure_all\n\n prepare_all\n F2 q[0]\n measure_all\n\n prepare_all\n F3 q[0]\n measure_all\n\n prepare_all\n F4 q[0]\n measure_all\n\n prepare_all\n F5 q[0]\n measure_all\n\n prepare_all\n F1 q[0]; F1 q[0]\n measure_all\n\n prepare_all\n F1 q[0]; F2 q[0]\n measure_all\n\n \/\/ and many more\n \/\/ Repeated germs can be realized with the loop statement\n\n prepare_all\n F1 q[0]\n loop 8 { G1 q[0] }\n F1 q[0]\n measure_all\n\n# Data Output Format\n\nWhen successfully executed, a single Jaqal file will generate a single ASCII text file (Linux line endings) in the following way:\n\n1. Each call of `measure_all` at runtime will add a new line of data to the output file. (If `measure_all` occurs within a `loop` (or nested loops), then multiple lines of data will be written to the output file, one for each call of `measure_all` during execution.)\n\n2. 
Each line of data written to file will be a single bitstring, equal in length to the positive integer passed to `register` at the start of the program.\n\n3. Each bitstring will be written in least-significant bit order (little endian).\n\nFor example, consider the program:\n\n register q[2]\n\n loop 2 {\n prepare_all\n Px q[0]\n measure_all\n }\n\n loop 2 {\n prepare_all\n Px q[1]\n measure_all\n }\n\nAssuming perfect execution, the output file would read as:\n\n 10\n 10\n 01\n 01\n\nWhile this output format will be \"human-readable\", it may nevertheless be unwieldy to work with directly. Therefore, a Python-based parser will be written to aid users in manipulating output data.\n\n# Possible Future Capabilities\n\nJaqal is still under development, and will gain new features as the QSCOUT hardware advances. While the precise feature set of future versions of Jaqal is still undetermined, we discuss some features that may be added, and in some cases identify workarounds for the current lack of those features.\n\n## Subset Measurement\n\nCurrently, the measurement operation of the QSCOUT hardware acts on all ions in the trap, destroying their quantum state and taking them out of the computational subspace. Future versions of the QSCOUT hardware will allow for the isolation and measurement of a subset of qubits with a command of the form `measure_subset ...`. Similarly, a `prepare_subset ...` operation will allow the reuse of measured qubits without destroying the quantum state of the remainder. These would be implemented in a Gate Pulse File, and not require a change to the Jaqal language.\n\n## Measurement Feedback\n\nThe QSCOUT hardware does not currently support using measurement outcomes to conditionally execute future gates. We expect this capability will be added in a future version of the QSCOUT hardware, and Jaqal programs will be able to use that capability once it exists. We have chosen to delay adding the syntax for measurement feedback to Jaqal until that time, in order to allow us the flexibility to choose a syntax that best allows users to take advantage of the actual capabilities of our hardware, once those are known.\n\n## Classical Computation\n\nJaqal does not currently support any form of classical computation. We understand that this is a limitation, and expect future versions of Jaqal to do so. There are two relevant forms of classical computation that we are considering for Jaqal.\n\n### Compile-Time Classical Computation\n\nPerforming classical computations at compile-time, before the program is sent to the quantum computer, can vastly increase the expressiveness of the language. For example, consider the following experiment, *which is not currently legal Jaqal code:*\n\n register q[1]\n\n let pi 3.1415926536\n\n loop 100 {\n prepare_all\n Ry q[0] pi\/32\n measure_all\n prepare_all\n Ry q[0] pi\/16\n measure_all\n prepare_all\n Ry q[0] 3*pi\/32\n measure_all\n prepare_all\n Ry q[0] pi\/8\n measure_all\n }\n\nCurrently, Jaqal does not support inline parameter calculations like the above. 
The recommended workaround is to define additional constants as needed:\n\n register q[1]\n\n let pi_32 0.09817477042\n let pi_16 0.1963495408\n let pi_3_32 0.2945243113\n let pi_8 0.3926990817\n\n loop 100 {\n prepare_all\n Ry q[0] pi_32\n measure_all\n prepare_all\n Ry q[0] pi_16\n measure_all\n prepare_all\n Ry q[0] pi_3_32\n measure_all\n prepare_all\n Ry q[0] pi_8\n measure_all\n }\n\nAnother example of a case where compile-time classical computation could be useful is in macro definition. For example, suppose you wished to define a macro for a controlled-z rotation in terms of a (previously-defined) CNOT macro:\n\n ...\n macro CNOT control target { ... }\n\n macro CRz control target angle {\n Rz target angle\/2\n CNOT control target\n Rz target -angle\/2\n CNOT control target\n }\n ...\n\nAgain, the above example *is not currently legal Jaqal.* We recommend, in such cases, that you manually unroll macros as needed, then define additional constants as above. That is, rather than using the above macro:\n\n ...\n let phi 0.7853981634;\n ...\n CRz q[0] q[1] phi;\n ...\n\nyou should instead call the gates the macro is made up of, substituting the results of the appropriate calculations yourself:\n\n ...\n let phi 0.7853981634;\n let phi_2 0.3926990817;\n let phi_m_2 -0.3926990817;\n ...\n Rz q[1] phi_2; CNOT q[0] q[1]; Rz q[1] phi_m_2; CNOT q[0] q[1];\n ...\n\nWe recognize that this \"manual compilation\" is a significant inconvenience for writing readable and expressive code in Jaqal. We expect to include compile-time classical computation in a relatively early update to Jaqal, likely even before measurement feedback is available. Fortunately, metaprogramming (automated code generation) significantly eases the burden of the lack of classical computation features, and we highly recommend it to users of Jaqal.\n\n### Run-Time Classical Computation\n\nUsers may also wish to do classical computation while a Jaqal program is running, based on the results of measurements. For example, in hybrid variational algorithms, a classical optimizer may use measurement results from one circuit to choose rotation angles used in the next circuit. In error-correction experiments, a decoder may need to compute which gates are necessary to restore a state based on the results of stabilizer measurements. Adaptive tomography protocols may need to perform statistical analyses on measurement results to determine which measurements will give the most information. As can be seen from the above examples, run-time classical computation is useful only when measurement feedback is possible. Accordingly, we will consider this feature after we have added support for measurement feedback. However, use cases like adaptive tomography and variational algorithms can be implemented via metaprogramming techniques. After running a Jaqal file on the QSCOUT hardware, a metaprogram can parse the [measurement results](#data-output-format), then use that information to generate a new Jaqal file to run.\n\n## Randomness\n\nExecuting quantum programs with gates chosen via classical randomness is desirable for a variety of reasons. Applications of randomized quantum programs include hardware benchmarking, error mitigation, and some quantum simulation algorithms. Jaqal does not currently have built-in support for randomization, although it may in the future, likely in combination with support for run-time classical computation. 
Our currently recommended workaround is to pre-compute any randomized elements of the algorithm, automatically generating Jaqal code to execute the random circuit selected. For example, the following program isn't currently possible, as there's no means of generating a random angle in Jaqal directly:\n\n register q[1]\n\n loop 100 {\n prepare_all\n \/\/ Do an X rotation on q[0] by a random angle between 0 and 2*pi.\n measure_all\n }\n\nHowever, the same effect can be obtained by a metaprogram (written in Python, for the sake of example) that generates a Jaqal program:\n\n from random import uniform\n from math import pi\n with open(\"randomness_example.jql\", \"w\") as f:\n f.write(\"register q[1]\\n\\n\")\n for idx in range(100):\n angle = uniform(0.0, 2.0 * pi)\n f.write(\"prepare_all\\n\")\n f.write(\"Rx q[0] %f\\n\" % angle)\n f.write(\"measure_all\\n\\n\")\n\nWhile the generated Jaqal program is much larger than one that could be written in a potential future version of Jaqal that supported randomized execution, the metaprogram that generates it is quite compact.\n\n# Acknowledgements\n\nThis material was funded by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research Quantum Testbed Program.\n\nSandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International, Inc., for DOE's National Nuclear Security Administration under contract DE-NA0003525.","meta":{"dup_signals":{"dup_doc_count":11,"dup_dump_count":2,"dup_details":{"curated_sources":2,"unknown":9}},"filename":"out\/2003.09382_extract_quantum_assembly_spec.tex.md"},"subset":"arxiv"} +{"text":"abstract: The HANDE quantum Monte Carlo project offers accessible stochastic algorithms for general use for scientists in the field of quantum chemistry. HANDE is an ambitious and general high-performance code developed by a geographically-dispersed team with a variety of backgrounds in computational science. In the course of preparing a public, open-source release, we have taken this opportunity to step back and look at what we have done and what we hope to do in the future. We pay particular attention to development processes, the approach taken to train students joining the project, and how a flat hierarchical structure aids communication.\nauthor: J.\u00a0S.\u00a0Spencer; N.\u00a0S.\u00a0Blunt; W.\u00a0A.\u00a0Vigor; Fionn\u00a0D.\u00a0Malone; W.\u00a0M.\u00a0C.\u00a0Foulkes; James\u00a0J.\u00a0Shepherd; A.\u00a0J.\u00a0W.\u00a0Thom\nbibliography: hande_wssspe2.bib\ndate: 2024-09-30\ntitle: Open-source development experiences in scientific software: the HANDE quantum Monte Carlo project\n\n[^1]\n\n[^2]\n\nThe Highly Accurate N-DEterminant (HANDE) quantum Monte Carlo project began life as an experiment by one of us (JSS) to explore the (then recent) development in quantum chemistry: the full configuration interaction quantum Monte Carlo (FCIQMC) method\u00a0. FCIQMC can be viewed simply as a stochastic approach to the power method; it allows the calculation of exact ground state energies of quantum systems with Hilbert spaces orders of magnitude larger than accessible via even state-of-the-art deterministic algorithms. Initially only the Hubbard model was implemented, but HANDE now handles a range of model and chemical systems. 
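To make the power-method picture above concrete, the underlying iteration can be written schematically (in our notation, not taken from this paper) as $$\\mathbf{c}^{(n+1)} = \\left[\\mathbb{1} - \\delta\\tau\\,(\\mathbf{H} - S\\,\\mathbb{1})\\right]\\mathbf{c}^{(n)},$$ which, for a sufficiently small time step $\\delta\\tau$ and a suitable shift $S$, projects an arbitrary starting vector onto the ground state of the Hamiltonian matrix $\\mathbf{H}$; FCIQMC samples this iteration stochastically with a population of signed walkers instead of storing the vector $\\mathbf{c}$ exactly. 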
At the same time HANDE has become an efficient and highly parallel implementation of FCIQMC and related methods, capable of scaling to several thousand cores. We have also provided deeper understanding of the FCIQMC method, extended HANDE to include the canonical implementation of the stochastic coupled cluster approach and developed new methods within the field. The driving-force for this transformation, from a toy code to a professional software package, has been the team of contributors split between three universities working together in a sustainable and robust process. We are very proud of the variety of our developers, who represent several different areas of science and range from undergraduates to professors. Indeed, we have had exceptional success with undergraduate research projects, which is remarkable given that most start with no or little experience in parallel computing and in quantum chemistry\u2014a notable example is the development of a novel Monte Carlo method by two undergraduate students.\n\nThe unexpected and organic growth has provided its challenges. How to transition into a community-owned code from the initial gatekeeper model we stumbled into? How to develop and support new contributors to the project? In some cases we planned ahead; in others we reached a consensus through iterative experimentation. Indeed, we have found flexibility and willingness to adapt to be of vital importance.\n\nIn this contribution we first describe the choices we made in an effort to write a sustainable, portable library, the approach we have settled on for development and the benefits we have subsequently obtained. We then discuss how we have trained students to be successful and valuable members of the development team and our future plans for the HANDE project before offering our conclusions and suggestions to the wider computational science community.\n\n# HANDE OVERVIEW\n\nHANDE is a small, but growing, project with half a dozen active developers at any one time. Most users are also developers but the user community is growing through active collaborations. The code base contains approximately 20000 lines of Fortran 2003, plus a smaller amount of C and several thousand lines of comments and is parallelised using MPI and OpenMP. HANDE is available as a source distribution via the project website and github. The distribution also contains a substantial amount of documentation, including compilation and usage instructions, and tutorials as well as python modules for data analysis. HANDE is developed on Linux, Mac OS X and Windows though, due to the nature of supercomputers, production calculations on high performance computer facilities are universally performed on Linux.\n\n# A DEVELOPMENT MODEL\n\nWe view ourselves as scientists *and* programmers (though our funding agencies might not agree!) and believe both roles are vital. As programmers, a maintainable and efficient code is our main goal. As scientists, we wish to rapidly address the questions posed in our research. These positions are not, however, contradictory: rather we have found the programmers' goal also minimizes delays in making scientific progress once spread over a number of consecutive projects. In other words, poor design and development choices eventually hinder us. Here we detail some of the choices we have made and their consequences. We note that the comments we have to make are surprisingly general; an in-depth knowledge of the algorithm is not necessary to appreciate what we are discussing. 
We are, however, aided by FCIQMC and related methods being simple and composed of only a few distinct data flows. In particular, the memory demands are dominated by the representation of the eigenvector and the computational cost per iteration by the tight loop in which the eigenvector is stochastically evolved.\n\n**Coding conventions**\u2014 We have taken care to maintain consistency in coding conventions throughout. This begins with a common, ordered commenting style; this visual cue helps developers become immediately aware of the existence of code norms and leads to it being easier to maintain wide-spread adoption of the other features below. Apart from making it far easier and more pleasant to read and understand code, such conventions serve as a guide to those with little prior experience programming and help prevent code from being rushed. We ensure that the functionality and inputs and outputs of all procedure interfaces are documented; this can then be extracted using tools such as sphinx and makes comprehension whilst navigating code (e.g.\u00a0using ctags) far faster. We further advocate the use of extensive commenting to provide both an overview of the theory and the choices that lie behind an implementation: indeed, in the more theoretically challenging parts of HANDE, the amount of comments rivals or exceeds the actual amount of code. Such cases can be viewed as an example of literate programming and may include theoretical overviews (which, for research software, are frequently not yet available in the literature), a discussion on implementation choices, benchmarks, examples and so on. These serve both as documentation and as extremely helpful material from which new members of the development team can learn about details which may be inappropriate for traditional papers.\n\n**Pure functions**\u2014 A growing trend in HANDE development, which has been successful, is a move towards the use of pure functions, which (along with other functional programming approaches) have been demonstrated to have compelling advantages. The results of pure functions depend only upon the input argument values and have no side effects on any part of the code outside the function. As such, pure functions cannot depend on any global data. We have found that functions which depend heavily on global data have many subtle interactions and assumptions, such that changing one part of the code can unexpectedly alter other parts. This problem becomes worse as the size of a program grows. In contrast, one can be confident that changes outside a pure function can never alter its results for the same set of inputs. Beyond this, code written in a pure style is more reusable (both within the code and in separate projects) and easier to test. Whilst writing code in a pure style can initially take longer, we are finding that it saves significant time and effort in the long run and makes implementing new functionality far easier. We have utilized this for threaded parallelism and alternate implementations.\n\n**Factorisation**\u2014 Open source software provides a huge advantage to our developers; they are encouraged to extract code which could be reused in other projects to contribute to the community. This approach to factorisation forces developers to plan and separate functionally and logically independent code, improving the quality and sustainability of the code. 
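The pure-function and factorisation points above are closely related: a routine that carries no hidden global state is also the easiest one to lift out into a reusable library and to test in isolation. A minimal sketch of the difference (in Python rather than HANDE's Fortran 2003, purely for brevity; the names are illustrative and are not taken from the HANDE source):\n\n    # Impure: the result silently depends on, and mutates, module-level state,\n    # so a call elsewhere in the program can change what this function returns.\n    damping = 0.1\n\n    def evolve_population_impure(population):\n        global damping\n        damping *= 0.99  # hidden side effect\n        return [(1.0 - damping) * p for p in population]\n\n    # Pure: everything the routine needs arrives through its arguments and\n    # everything it produces leaves through its return value, so it can be\n    # tested, reused and threaded without reference to the rest of the code.\n    def evolve_population_pure(population, damping):\n        return [(1.0 - damping) * p for p in population], 0.99 * damping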
Conversely we benefit from similar efforts in the broader community and can use state-of-art portable libraries to minimise time-to-science and avoid duplication of effort. For example, we use HDF5 for checkpoint files\u00a0, dSFMT for random numbers\u00a0 and the python scientific stack (especially numpy\u00a0, pandas\u00a0 and matplotlib\u00a0) for data analysis. In return, our contributions include Fortran interfaces to libraries\u00a0, a test framework\u00a0 (see below) and a python library for removing serial correlations in Monte Carlo data\u00a0. We find such efforts are a way of broadening impact of our development work far beyond the immediate stochastic quantum chemistry community. Encouragingly, we have also received contributions to these libraries from outside of our team. Making the code publicly accessible via distributed version control (e.g.\u00a0on github) is key to reducing the barrier to entry.\n\nDespite the above, a large number of dependencies is undesirable from a usability viewpoint: requiring the user to manually compile several packages before using our program hinders experimentation and porting to new platforms. We try to overcome this in two ways: small libraries with permissive licenses can be included in the source distribution and non-core features which depend upon larger libraries can be disabled at compile-time.\n\n**Pull requests and code review**\u2014 In the last year we have moved to a system of pull requests based upon the git flow model. In this system, any contributions to HANDE must be made on a branch (using our version control system of choice, git) and a review of the branch performed (by at least one other contributor) before it may be merged into master (see Fig.\u00a01). Code review can easily be performed using (e.g.) github's inline commenting or, our preferred tool, watson. Code review is deliberately light weight and allows for rapid peer feedback about the approach used, problems in the design and consistency in code style. In particular, the process typically includes validation and verification, of the code, documentation and (crucially) any new theoretical work underlying it. We have found that this process greatly reduces bugs and rushed code from ending up in the master, which is designated to be sufficiently stable for production calculations. Already we have seen substantial improvements in the flexibility, sustainability and maintainability of the code. It also gives contributors an understanding of parts of the codebase that they may not otherwise know much about. Even those who do not perform a review in detail gain knowledge of the various projects being worked on. The social impact of this is interesting: we find code review to be an excellent way of flattening the academic hierarchical structure. In particular, we note that the levels of expertise in scientific and computational domains are often not aligned and the more 'junior' members of a research team are often the ones doing the most software development and hence their reviews of contributions from more 'senior' members can be the most enlightening.\n\nOne aspect deserves special consideration: not all development work is evolutionary; some must be revolutionary. This kind of development work is frequently long running and handling both the review and merging (often into a very different codebase after months of parallel development) is painful. 
We have found that regular peer review of intermediate work and occasional rebasing of such branches against the current development version of the code goes a long way to mitigating such issues.\n\n**Regression testing**\u2014 Scientific codes produce quantitative results that, in principle, should be extremely simple to test against when the code changes. When differences happen to indicate a bug, these can be tracked down between a relatively small number of commits using a bisection method. Whilst unit tests are valuable, we have found that regression tests are easier to retrofit to existing code bases and are good at capturing problems in the interfaces between procedures or changes compared to existing answers. This type of regression testing is relatively straight-forward to undertake. Apart from data extraction from output files, regression testing involves a generic set of tasks. One of us (JSS) maintains an open source portable tool for just such a purpose\u00a0, which has attracted use in the wider electronic structure community. Running the tests can be automated (e.g. to check every commit, every pull request, given time intervals) using tools such as jenkins, travis-ci or buildbot, which is currently used in the HANDE project, as is performed by many other projects (e.g.\u00a0). The design of tests themselves is a non-trivial challenge, and should not be underestimated. A test should check a broad sweep of functionality, but when there are many input parameters (and variably sparse matrices) it is impossible to check every combination, though tools such as gcov are invaluable in discovering the fraction of the code covered by a set of tests. HANDE contains over 160 tests which cover over 85% of the code base (excluding external libraries) and increasing this is an ongoing effort. Moreover, because the software is designed for high-performance computing and contains Monte Carlo algorithms, it can be hard to reliably review this functionality, especially for bugs which are only revealed when run on thousands of processors. Where possible, therefore, new conceptual developments are checked against numbers from other codes. A community which supports this kind of data sharing is extremely important for reliable scientific reproducibility.\n\n**Reproducibility**\u2014 Reproducibility of experimental results is one of the most important principles in the scientific community. Numerical experiments should be held to as high standards, but often this is more difficult than it seems as code can change rapidly over time. This is even more problematic for Monte Carlo algorithms where newly introduced features can alter the Markov chain resulting in slightly different numerical answers. Furthermore, complex calculations rely upon an existing set of input and checkpoint files and produce similar numbers of files as output, making data provenance complicated. As a simple measure to overcome this we output the input options and the git commit hash to the main output file and a UUID specific to the calculation in all output files which enables us and any other user to reproduce the results of a particular calculation. We are fans of the IPython Notebook for data analysis as a way of storing the analysis and output together. These notebooks also represent useful training aids.\n\n**Modern Standards**\u2014 Languages continue to evolve and exploiting new developments can be a powerful tool in making code more flexible, portable and maintainable. 
For example, the C interoperability features in Fortran 2003 make it much easier to combine existing code written in either language and so reduce the need to 'reinvent the wheel'. One word of caution: new language features are implemented at different rates across different compilers, which are updated infrequently in some environments. It is important to balance the use of new language features against staying away from the bleeding edge. Regular testing against a variety of common compilers is vital in maintaining the portability of the code.\n\n**Bug fixing**\u2014 Bug fixing in an academic environment is somewhat fraught given the inherently fluctuating development community. Whilst we have found that many bugs are prevented (or rather, discovered at time-of-creation) by code review, inevitably some bugs remain to be discovered at a later time. Whilst debugging is a universally hard problem, especially (as is often the case in academia) when the original student or researcher has moved on, we have found the approaches we discussed above crucial in mitigating it. Good documentation, commenting and tests provide an indication of what the code should do (or at least what its author thought it should do!) and remove one layer of mystery. We have also found code review an excellent strategy to aid this; having multiple developers review and understand a section of the codebase (albeit perhaps not on the same level as its author) aids the spread of knowledge throughout the development team and helps make it more likely that at least one person is capable of fixing the bug relatively quickly. Once a bug is reported, it is triaged and a fix is proposed. Following our standard code review process, the fix is then merged into the stable branch. It is then important to update the test suite so that the bug remains fixed. Deciding who does this work can be problematic, especially in cases where the original author is no longer working on HANDE. Sometimes a core developer tracks the problem down. In other cases we find the open source adage of 'scratching your own itch' useful: the user who wants a bug fixed will (hopefully!) be suitably motivated to also fix it, given support and guidance from the wider development team. We have found that this can be a powerful tool for encouraging users to become developers.\n\n# TRAINING\n\nThe challenges facing someone joining a computational science project are multi-faceted: one must be knowledgeable in broad technical issues and the programming language(s) used, as well as in the theory of the underlying science. However, in practice, applied computer science is often attempted in academia without formal training. This requires that students learn on the job, but students often come highly motivated to learn new skills from day one. Fortunately there are now excellent and affordable courses aimed at improving the technical skills of computational scientists, run by universities, national bodies (e.g.\u00a0ARCHER in the UK) and international groups. We especially praise the impact of Software Carpentry.\n\n**Introduction to HANDE**\u2014 Ideally, the instruction given should be 'check out the code and play around with it', and that should be sufficient; we aim for this to be the case. New developers frequently comment that the strategies mentioned in the previous section greatly help them in coming to grips with the code and in keeping initial motivation high. We note this is a constant battle: additional features, optimisation and poor habits can cause the barrier to entry to creep up over time.
However, we find a mindful approach beneficial. We recognise that initial impressions matter and so aim to make things as smooth as possible. We find that the speed at which new developers learn is helped by the practices described above and by the informal, supportive environment described next.\n\nOur experience is that highly-motivated students willingly stay involved, and enjoy doing so, even after moving on from the group; this sets a good example for incoming students. Informal, nonhierarchical, peer-based management greatly enhances this effect; learning happens organically in an environment where asking questions is easy and group discussion is common.\n\n**Converting users to developers**\u2014 By the very nature of academia, the development community around research software fluctuates. Converting users into developers helps substantially in making a project sustainable, especially in niche fields. In addition to attempting to minimise the barrier to entry, we find a powerful technique is to encourage users to 'scratch their own itch': when a user has a feature request, we try to help them to implement it themselves (even if this takes more time than a core developer doing it themselves). The time investment is typically rewarded surprisingly quickly.\n\n**Coding retreats**\u2014 Engendering a development community and sharing knowledge across a geographically dispersed network is hard. To this end we recently held a residential coding retreat. Those in attendance were encouraged to implement a simple feature of interest (*i.e.*\u00a0one that could be completed in the time available); code review happened on-site. We found this to be a good community-building format. An important feature was to set aside substantial amounts of time for informal presentations and discussions, which provided a forum to discuss ongoing research as well as the codebase.\n\n# DISCUSSION\n\nWe conclude with some examples of where our approach succeeded and where it failed, followed by an outlook on the future.\n\nThe development of a flexible, modular code supported by a training regime for new team members might appear to be a bet which may or may not pay off. Our experiences show that it does pay off; in fact many of the approaches we discussed above were suggested naturally and adopted due to frustration with the inefficiencies of *not* doing them. The impact on our work has been tremendous. For example, in a few months two undergraduate students were able to propose, implement and test a new finite-temperature Monte Carlo approach in electronic structure. This would not have been possible if they had had to start from scratch or from a monolithic, impenetrable codebase. Internal peer review has made our code more robust: review of recent improvements to the coupled cluster Monte Carlo revealed a subtle bias when MPI parallelisation was used. We have also found the community aspect of development to be important and to have some unexpected benefits. Recently several of us realised we were all struggling with a similar limitation in the code base and, as a result, embarked jointly on the (thankless) task of re-engineering some core data structures to provide additional flexibility. It is unlikely this work would have taken place if everyone were instead just focussing on their own research project in isolation (which discourages the kind of improvement\/tidying\/maintenance that benefits everyone), but doing so will actually open up new possibilities for all of us.\n\nIn other instances, we have been less successful.
One project on improving parallel scaling ended up running for almost a year, completely separate from the rest of the development. Combining this with other work was painful: such large sets of changes are hard to review adequately and the resultant merge had lots of conflicts which had to be resolved manually. We should have instead broken this work up into smaller sections rather than aiming for perfection in the first instance: our development model is better suited to continual refinement and incremental steps than large, radical changes. Another example is from legacy work: a seemingly innocuous (largely stylistic) change three years ago introduced a bug in an extreme corner case which, naturally, was eventually triggered. The problematic code dated back to before we systematically performed code reviews. The developer who found the bug was able to spot it quickly in the affected procedure, but tracking it down to that point from some unusual results in production calculations was *much* harder. The last two cases are not where our development approach failed *per se*, but rather where we failed it. Whilst there is always the temptation to follow the 'easy' course in the short term, in our experience this turns out to lead to pain later on\u2014and often more quickly than anticipated!\n\nAs a project such as HANDE grows, there will be an increasing number of challenges in managing both the means of communication among the community as well as the direction of the project itself. To ensure community growth, it is vital that the low barrier of entry be maintained, and one way we are planning to ensure this is to include developer tutorials which provide a step-by-step introduction to both the code *and* our development practices. Requiring novitiates to work through these tutorials has the three-fold goal of indoctrination into the coding and development standards, learning the structures of the project, and keeping the tutorials up-to-date themselves. Often such tutorials are created on an *ad hoc* basis, but such practices are to be encouraged so as to sustain the accessibility to all. Indeed, the creation of tutorials aimed at users and developers would be a good introductory project when coupled with peer review.\n\nWe end with emphasising the benefits of an open source, collaborative approach, which we wholeheartedly endorse to the wider community. A code which is well written and easily understandable makes it easier to spot mistakes, which can then be fixed quickly and results produced with an open source implementation can be reproduced with no ambiguity. This enables scientists to spend more time pursing new ideas and less time resolving problems already solved by other groups, hence reducing the *collective* time to productive science.\n\n**Acknowledgments**\u2014 JSS and WMCF acknowledge the Thomas Young Centre under Grant No.\u00a0TYC-101. WAV is grateful to EPSRC for a studentship, FDM for an Imperial College PhD scholarship, NSB to Trinity College, Cambridge for an External Research Studentship, JJS to the Royal Commission for the Exhibition of 1851 for a Research Fellowship and AJWT to the Royal Society for a University Research Fellowship. 
We acknoledge the Imperial College High Performance Computing Service and ARCHER via a RAP award and via the Materials Chemistry Consortium (Grant No.\u00a0EP\/L000202).\n\n[^1]: , \n\n[^2]: , ","meta":{"dup_signals":{"dup_doc_count":26,"dup_dump_count":20,"dup_details":{"curated_sources":2,"2022-40":2,"2022-21":1,"2021-43":1,"2021-39":1,"2021-04":1,"2020-45":2,"2020-29":1,"2020-05":1,"2019-47":1,"2019-26":1,"2019-18":3,"2019-13":1,"2018-47":1,"2018-39":1,"2018-22":2,"2017-51":1,"2017-30":1,"2023-50":1,"2017-13":1}},"filename":"out\/1407.5407_extract_hande_wssspe2.tex.md"},"subset":"arxiv"} +{"text":"abstract: The band structure of iron-based superconductors gives rise to yet another scenario for the appearance of Dirac fermions. A viewpoint on \"Observation of Dirac cone electronic dispersion in BaFe$_2$As$_2$\": (Richard et.al., PRL 104, 137001 (2010)).\nauthor: M. Zahid Hasan; B. Andrei Bernevig\ndate: 2024-10-02\ntitle: Dirac cone in iron-based superconductors\n\nSuperconductivity above 30 K was realized in the mid 80s with the discovery of cuprate high-temperature superconductors . The monopoly of copper was broken in 2008 with the discovery of iron-based superconductors - of which there are now many families - which have a maximum transition temperature (55 K) comparable to that of single-layer cuprates . Their complex phase diagrams are poorly understood, but we do know that most have a magnetic phase that changes to a superconducting one as the concentration of electrons or holes in the bulk increases.\n\nThe parent magnetic state of iron-based superconductors determines their superconducting transition temperature. In their paper in *Physical Review Letters* , Pierre Richard and collaborators from China, Japan, and the US report band-structure measurements of the parent magnetic state of iron-based superconductors that show linearly dispersed bands in the shape of a cone, which implies that the carriers follow dispersion relations for massless relativistic fermions that obey the Dirac equation. These results confirm an existing theoretical prediction and earlier experimental results on the Dirac band structure and the orbital nature of the magnetic order in the parent superconductor . Richard *et al.* present angle-resolved photoemission (ARPES) studies of high-quality single crystals of BaFe$_2$As$_2$, which has a magnetic transition temperature of 138 K when undoped. They map the Fermi surfaces and find slightly anisotropic band crossings arranged in the shape of a Dirac cone. Dirac fermions are also known to exist in other condensed-matter systems such as in the bulk and surface of topological insulators , in graphene , and cuprate superconductors .\n\nIn iron-arsenide compounds, the parent state is a collinear antiferromagnet, with a commensurate ordering wave vector ($\\pi$, 0) or (0, $\\pi$). This wave vector also appears in the band structure of the normal state of iron-based superconductors as a momentum-space vector (nesting vector) connecting the Fermi surface's hole pockets that surround the $\\Gamma$ point and the electron pockets around the $M$ point in the unfolded Brillouin zone (see Fig. 1). This band structure is a consequence of the multiorbital nature of the system; all of iron's five 3$d$ orbitals are necessary for a full electronic description of the system. 
Of these, the most important for understanding the present experimental results are the *d$_{xz}$, d$_{xy}$, and d$_{yz}$* orbitals.\n\nPlausible explanations for the low-temperature magnetic ordering vector involve either a spin-density wave (SDW) formation tendency consistent with the nesting wave vector or an antiferromagnetic state formation tendency due to next nearest neighbor spin-spin interactions . In the magnetic phase, the Fermi surfaces reorganize, the Brillouin zone folds, and gaps due to electron-hole scattering are seen in the electronic spectrum. SDW formation usually leads to the opening of an electronic gap by destroying the Fermi surface if the Bloch vectors along the nesting vectors can span the full Fermi surface. Strong interactions can also gap the SDW state. In the iron-based compounds, as the present experiment shows, the situation is different: due to the topology of the Fermi surface a coupling between the Bloch wave vectors of the electron and hole pockets sets the SDW-induced electronic gap.\n\nIn accounting for the nature of the SDW state, a full theoretical analysis \\[6\\] based on a five-orbital model shows that the Bloch band of the electron Fermi surface pocket is odd under reflection, while the outer hole pocket of the Fermi surface is even along the SDW ordering axis. The SDW matrix element between them vanishes and gives rise to a nodal (not fully gapped) SDW, which can at most be semimetallic. This gapless SDW magnetic state survives even for strong interactions . In experiments, the authors observe a high-intensity point in the ARPES data on the $\\Gamma$?M symmetry line for the folded band structure of magnetically ordered BaFe$_2$As$_2$. This feature, occurring roughly where the electron band hybridizes with a hole band, disappears above the SDW transition temperature and has a dispersion characteristic of a relativistic Dirac cone. The observed Dirac cone is thus related to the nature of the SDW state in these materials. The apex of the Dirac cone is situated slightly above the Fermi energy, which would give rise to small hole pockets in the ARPES data. Its dispersion has a small degree of anisotropy. Away from the cone the band is gapped and the authors' data provide the variation of the gap function. A fourfold symmetric electron density pattern is observed around the $\\Gamma$ and M points. This property could be intrinsic, such as in a weak-SDW mode, or arise from the superposition of twin domains expected to form under the structural distortion and SDW transitions. The latter scenario was originally proposed in Ref. .\n\nThe occurrence of Dirac fermions in the system implies that the SDW magnetic state is always metallic. The experimental observations, however, cannot distinguish between a weak-coupling SDW and an antiferromagnetic picture in which some more localized bands are also present. The discovery of the Dirac cone shows that theoretical models that neglect orbital symmetries of iron-based superconductors may not explain the important aspects of the physics. More experiments are needed to address several puzzles. The 122 materials are known to have rather strong $z$-axis dispersion and the evolution of the Dirac cone with the $z$-axis momentum needs to be studied. In several materials, the SDW state coexists with the superconducting state. One possibility for the superconducting order parameter is that of the sign changing s$^\\pm$-wave type, which is nodeless. 
If the superconductor is indeed of the nodeless s$^\\pm$ type, does the gapless Dirac fermion still survive in the phase where SDW and superconductivity co-exist? If it does not, then could it be that the superconducting gap opening of the Dirac cone gives rise to a superconductor with topological properties ? The observation of a Dirac fermion in iron-based materials introduces a new system with protected cones, after cuprate superconductors , graphene , and topological insulators . It is likely to inspire new research and unravel the connection between nodal SDW magnetism and the nature of superconductivity.\n\nIron-based superconductors are the latest example of the Dirac physics that has recently energized condensed matter physics. In cuprate superconductors, Dirac nodes (points where the two cones meet) appear in the structure of the superconducting order parameter, caused by the electron-electron interactions in the parent magnetic Mott insulator. In graphene, degenerate Dirac fermions appear in the noninteracting band structure due to the ${C_3}$ symmetry of the lattice, along with inversion and time-reversal symmetry. In topological insulators, spin-polarized Dirac fermions appear in the strong spin-orbit coupled band structure at the edge of the insulating bulk and enjoy extended protection due to time-reversal symmetry . In iron-based superconductors, Dirac fermions appear in the magnetic SDW state band structure as a result of the point-group symmetry of the orbitals making up the Fermi surfaces involved in the nesting process. Even though they enjoy different degrees of protection against disorder and interactions, the Dirac fermions present in all these materials are fascinating examples of how different systems can lead to similar profound low-energy electron behavior.","meta":{"dup_signals":{"dup_doc_count":13,"dup_dump_count":8,"dup_details":{"curated_sources":1,"2015-11":2,"2015-06":2,"2014-10":2,"2013-48":2,"2013-20":1,"2017-13":1,"unknown":2}},"filename":"out\/1103.2946_extract_Phys_view_FeAs_v1.tex.md"},"subset":"arxiv"} +{"text":"abstract: We present a method to semiclassically compute the pair creation rate of bosons and fermions in de Sitter spacetime. The results in the bosonic case agree with the ones in the literature. We find that for the constant electric field the fermionic and bosonic pair creation rate are the same. This analogy of bosons and fermions in the semiclassical limit is known from several flat spacetime examples.\nauthor: Cl\u00e9ment Stahl; Strobel Eckhard\ndate: 1 June 2015\ntitle: Semiclassical fermion pair creation in de Sitter spacetime\n\n# Introduction\n\nQuantum field theory (QFT) in curved spacetime is one way of merging Einstein's theory of gravitation and QFT in the usual Minkowski spacetime within a self consistent framework . One of its central results is the discovery of particle creation in a time-dependent gravitational field . This mechanism is believed to generate the primordial cosmic inhomogeneities that serve as seeds for the observed large-scale structure of the universe. \nA lot of studies of QFT in curved spacetime are investigating de Sitter (dS) spacetime. It has constant scalar curvature and is maximally symmetric for any given dimension, in the same way as Minkowski spacetime. In cosmology, dS space is used as a model for both the early stage of inflation (for reviews see ) and the late stage of acceleration of the expansion (an introductory review is given in ). 
In addition to its relevance for cosmology, dS space may give hints to understand the quantum nature of spacetime . These facts motivate the investigation of physical processes in this spacetime. \nOriginally, particle creation was studied under a strong field background originating from a time dependent vector potential . Since then, it has been explored in a more general set-up, where the effects of the gravitational field and the electric field are both contributing to the creation of pairs. The Schwinger mechanism in curved spacetime, more specifically, in dS space, has recently started to be studied in depth . \nBetter understanding Quantum electrodynamics (QED) in dS spacetime could be insightful for a couple of problems. It is an interesting framework for the study of false vacuum decay or bubble nucleation . Constrains on magnetogenesis scenarios were put in via the backreaction of the created pairs and its induced current. In some models of preheating, it might give clues on the open problem of baryogenesis. Via the AdS-CFT correspondence, it has also been used as a playing field to test the ER=EPR conjecture . Additionally it might help to better understand renormalization schemes in curved space-time and the relations of these schemes to each other. \nThe purpose of this paper is to investigate spin 1\/2 pair production under the influence of an external electric field, in four-dimensional dS spacetime ($\\text{dS}_4$). This problem has already been studied for bosons in two-dimensional dS spacetime ($\\text{dS}_2$) and in $\\text{dS}_4$ . In spin 1\/2 pair production in $\\text{dS}_2$ was investigated but the 4D analog case is much less studied. We propose a derivation of the number of pairs created in the semiclassical limit following the ideas of . We find that there is no difference between bosons and fermions in this limit. \nThe paper is organized as follows: in section we investigate the Schwinger effect in $\\text{dS}_4$. After describing the basic equations, we show how the Dirac equation becomes a coupled second order differential equation in the presence of both electric and gravitational terms. We review the derivation of the bosonic pair creation rate and calculate the fermionic one with the help of a semiclassical expansion for general electric fields in dS spacetime in section . We use this result in section to compute the bosonic and fermionic pair creation rate of a constant electric field in dS spacetime. Finally we draw some conclusions in section .\n\n# Dirac field in dS Space-time\n\nWe consider QED coupled to a Dirac fermion in $\\text{dS}_4$. In order to study Schwinger pair production, we assume that the gravitational and electrical field are background fields and that the fermionic field is dynamic. The action is given by $$\\begin{aligned}\n\\label{action}\nS=\\int \\text{d}^4 \\text{x} \\sqrt{-g} \\mathcal{L}=\\int \\text{d}^4 \\text{x} \\sqrt{-g} \\left[ -\\frac{1}{\\kappa}R +\\frac{i}{2} (\\bar{\\psi} \\gamma^{\\mu} \\nabla_{\\mu} \\psi -\\nabla_{\\mu} \\bar{\\psi}\\gamma^{\\mu}\\psi) -m \\bar{\\psi} \\psi -\\frac{1}{4} F_{\\mu \\nu}F^{\\mu \\nu} \\right],\n\\end{aligned}$$ where the field strength is defined in the usual way $F_{\\mu \\nu}\\equiv \\partial_{\\mu} A_{\\nu}-\\partial_{\\nu} A_{\\mu}$. \nThe $\\text{dS}_4$ spacetime we want to study is described by the metric $$\\text{ds}^2=a(\\eta)^2 (\\text{d}\\eta^2-\\text{d}\\textbf{x}^2),$$ with signature (+,-,-,-). 
Here $\\eta$ is the conformal time which is parametrized by the Hubble factor in the following way $$\\eta= -\\frac{1}{a(\\eta)H},\\hspace{1cm} a(\\eta)^2 H \\equiv \\frac{\\text{d}a(\\eta)}{\\text{d} \\eta }, \\hspace{1cm} (-\\infty<\\eta<0).$$ For the description of spinors the tetrad field is used. It is related to the metric through the relation $$\\begin{aligned}\ng_{\\mu \\nu}=e^a_{\\mu} e^b_{\\nu} \\eta_{ab},\n\\end{aligned}$$ where $\\eta_{ab}$ is the usual Minkowski metric. Throughout this paper, we use Greek indices for spacetime indices ($\\mu,\\nu=\\eta,x,y,z$) and Latin indices for tetrad ones ($a,b=\\eta,x,y,z$). Applying the tetrad formalism to $\\text{dS}_4$, one gets $$\\label{tetrad} e_{\\mu}^a=\n\\begin{pmatrix}\n a(\\eta) & 0 & 0 &0 \\\\\n 0 & a(\\eta) & 0 & 0 \\\\\n 0 & 0 & a(\\eta) & 0 \\\\\n 0 & 0 & 0 & a(\\eta)\n \\end{pmatrix}.$$ The covariant derivative for fermion fields is defined as $$\\begin{aligned}\n \\nabla_{\\mu} \\equiv \\hbar\\partial_{\\mu} +ie A_{\\mu}(x) -\\frac{{\\text{i}}}{4}\\omega^{ab}_\\mu\\sigma_{ab},\n\\end{aligned}$$ with the commutator of the gamma matrices $$\\begin{aligned}\n \\sigma_{ab}&\\equiv\\frac{{\\text{i}}}{2}[\\gamma_a,\\gamma_b]\n\\end{aligned}$$ and the spin connection are defined as $$\\begin{aligned}\n\\begin{split}\n\\omega_{\\mu}^{ab}&\\equiv \\frac{1}{4}\\left[e^{b\\alpha}(x) \\partial_{\\mu}e^a_{\\alpha}(x)-e^{a\\alpha}(x) \\partial_{\\mu}e^b_{\\alpha}(x)+ e^{a \\alpha}(x) \\partial_{\\alpha}e^b_{\\mu}(x)-e^{b\\alpha}(x) \\partial_{\\alpha} e^{a \\mu}(x)\\right.\\\\ & \\hspace{2cm}\\left.+e^{b\\nu}(x)e^{a \\lambda}(x)e_{c \\mu}(x)\\partial_{\\lambda} e^c_{\\nu}(x)-e^{a\\nu}(x)e^{b \\lambda}(x)e_{c \\mu}(x)\\partial_{\\lambda} e^c_{\\nu}(x) \\right]. \\label{eq:spinconnection}\n\\end{split}\n\\end{aligned}$$ The non-zero components of the spin connection () can be shown to be $$\\begin{aligned}\n \\label{connection}\n \\omega_{1}^{01}=\\omega_{2}^{02}=\\omega_{3}^{30}=-\\omega_{1}^{10}=-\\omega_{2}^{20}=\\omega_{3}^{30}=\\frac{a'(\\eta)}{2a(\\eta)},\n \n\\end{aligned}$$ where prime denotes derivative with respect to conformal time $\\eta$. \nThe gamma matrices $\\gamma^{\\mu}$ are related to the gamma matrices in the tangent flat space $\\underline{\\gamma^a}$ viz. $$\\begin{aligned}\n\\underline{\\gamma^a} \\equiv \\gamma_{\\mu} e^a_{\\mu}. \n\\end{aligned}$$ We will work with the Dirac representation of the gamma matrices, *i.e.* $$\\begin{aligned}\n \\gamma^{j}=\\begin{pmatrix}\n 0 &\\sigma^j\\\\\n -\\sigma^j & 0\n \\end{pmatrix},\n&&\n \\gamma^{0}=\\begin{pmatrix}\n I_2 &0\\\\\n 0 & -I_2\n \\end{pmatrix},\n&& \\text{ where }&&\n \\sigma^x=\\begin{pmatrix}\n 0 & 1\\\\\n 1 & 0\n \\end{pmatrix},\n&&\n \\sigma^y=\\begin{pmatrix}\n 0 & -{\\text{i}}\\\\\n {\\text{i}}& 0\n \\end{pmatrix},\n &&\n \\sigma^z=\\begin{pmatrix}\n 1 & 0\\\\\n 0 & -1\n \\end{pmatrix}.\n\\end{aligned}$$ Varying the action with respect to the spinor field gives the Dirac equation $$\\left(i \\gamma^{\\mu} \\nabla_{\\mu} -m \\right) \\psi(\\textbf{x}, \\eta) =0.$$ Using Eqs.\u00a0() and (), this equation becomes $$\\label{eq:diracavantcahnge}\n\\left\\{i \\left(\\gamma^{\\mu} \\hbar\\partial_{\\mu}+\\frac{3}{2} a H \\underline{\\gamma^0} + i e A_{\\mu} \\gamma^{\\mu} \\right) -m \\right\\} \\psi(\\textbf{x}, \\eta)=0.$$ One now considers the auxiliary field $\\Psi(\\textbf{x}, \\eta) = a^{3\/2}(\\eta) \\psi(\\textbf{x}, \\eta)$ which can be thought of as the equivalent of the Mukhanov-Sasaki variable in inflation models . 
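To see why this particular rescaling is the right one, note that (schematically, suppressing the $\\hbar$ bookkeeping restored in the equations below) the product rule gives $$\\partial_{\\eta} \\psi = \\partial_{\\eta} \\left(a^{-3\/2}\\Psi\\right)= a^{-3\/2}\\left(\\partial_{\\eta} \\Psi -\\frac{3}{2}\\frac{a'(\\eta)}{a(\\eta)} \\Psi\\right)= a^{-3\/2}\\left(\\partial_{\\eta} \\Psi -\\frac{3}{2} a(\\eta) H \\Psi\\right),$$ so the term generated by differentiating $a^{-3\/2}$ is exactly of the form needed to absorb the $\\frac{3}{2}aH\\underline{\\gamma^0}$ spin-connection contribution of Eq.\u00a0(), leaving an equation of conformally flat form.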
With this substitution the Dirac equation takes the form $$\\left\\{\\gamma^{\\mu} ( i \\hbar\\partial_{\\mu} - e A_{\\mu}) - m \\right\\} \\Psi(\\textbf{x}, \\eta)=0. \\label{eq:Dirac}$$ We will also decompose this field in momentum modes according to $$\\begin{aligned}\n \\Psi(\\textbf{x},\\eta)\\sim{\\text{e}}^{\\frac{{\\text{i}}}{\\hbar}\\textbf{k}.\\textbf{x}}\\psi_{\\textbf{k}}.\\label{TF}\n\\end{aligned}$$ To solve the Dirac equation, for the purpose of the calculation of the pair creation rate, it is often useful to use the squared version of the Dirac equation because of its similarities to the Klein-Gordon equation, see e.g. . The squared Dirac equation can be found using $$\\begin{aligned}\n \\Psi(\\textbf{x},\\eta)&=\\gamma^{\\mu}\\left[(i \\partial_{\\mu}-e A_{\\mu}(\\eta)) +m a(\\eta)\\right]\\phi(\\textbf{x},\\eta) \\label{eq:squared}\n \n\\end{aligned}$$ with $$\\begin{aligned}\n \\phi_{\\textbf{k}}(\\eta)&=\\begin{pmatrix}\\phi_1(\\eta)\\\\ \\phi_2(\\eta) \\end{pmatrix},&& \\phi_i(\\eta)=\\begin{pmatrix} \\phi_i^+(\\eta)\\\\ \\phi_i^-(\\eta)\\end{pmatrix}.\n \n\\end{aligned}$$ in the Dirac equation. In the previous equation, $\\phi_{\\textbf{k}}(\\eta)$ is the Fourier transform (in the sense of ()) of $\\phi(\\textbf{x},\\eta)$. Here we consider a background vector potential for the electromagnetic sector such that $$A_{\\mu}\\equiv A(\\eta)\\delta^{z}_{\\mu}.$$ For such fields the squared Dirac equation takes the form $$\\begin{aligned}\n&\\left(\\hbar^2 \\partial_{\\eta}^2+\\omega_{\\textbf{k}}(\\eta)^2-i\\hbar ma'(\\eta) \\right)\\phi_1^{\\pm} \\pm i\\hbar e A'(\\eta) \\phi_2^{\\pm}(\\eta)=0, \\\\\n&\\left(\\hbar^2 \\partial_{\\eta}^2+\\omega_{\\textbf{k}}(\\eta)^2+i\\hbar ma'(\\eta) \\right)\\phi_2^{\\pm} \\pm i\\hbar e A'(\\eta) \\phi_1^{\\pm}(\\eta)=0, \\label{to solve}\n\\end{aligned}$$ where the effective pulsation and the kinetical momentum are defined as $$\\begin{aligned}\n& \\omega_{\\textbf{k}}(\\eta)^2\\equiv p_z(\\eta)^2+k_{\\perp}^2+m^2 a(\\eta)^2, \\label{eq:omega}\n& p_z(\\eta)\\equiv k_z+e A(\\eta),\n&& k_\\perp^2\\equiv k_x^2+k_y^2.\n\\end{aligned}$$ This equation can be compared to the equivalent bosonic problem. The equation of motion derived from the Klein-Gordon equation is (see Eq\u00a0(2.13) of ) $$\\label{boson to solve}\n\\left(\\hbar^2 \\partial_{\\eta}^2+\\omega_{\\textbf{k},\\text{boson}}(\\eta)^2 \\right) q_{\\textbf{k}} =0,$$ with $$\\begin{aligned}\n\\omega_{\\textbf{k}. \\text{boson}}(\\eta)^2=\\omega_{\\textbf{k}}(\\eta)^2-\\frac{2}{\\eta^2}. \\label{eq:omegaboson}\n\\end{aligned}$$ The equation of the bosonic problem () can be understood as a harmonic oscillator with a time dependent pulsation. The two other terms in Eq.\u00a0() are new and due to the fermionic nature of the particles considered. On the one hand, the mass term was already derived *e.g*. in where no background electric field was considered. On the other hand, the electric term is present for instance in in flat spacetime. Such that this equation is a generalization of the Dirac equation in curved spacetime with background electric and gravitational field. \nThe squared Dirac equation is also analogous to the Dirac equation for two-component fields in flat spacetime. In a method was used to semiclassically compute the pair creation rate for these fields. It is possible to use the same method for the case studied here. 
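For orientation, it is useful to write the effective pulsations out explicitly for the de Sitter scale factor: inserting $a(\\eta)=-1\/(H\\eta)$ into Eqs.\u00a0() and () gives $$\\omega_{\\textbf{k}}(\\eta)^2=\\left(k_z+eA(\\eta)\\right)^2+k_{\\perp}^2+\\frac{m^2}{H^2\\eta^2}, \\hspace{1cm} \\omega_{\\textbf{k},\\text{boson}}(\\eta)^2=\\left(k_z+eA(\\eta)\\right)^2+k_{\\perp}^2+\\frac{m^2\/H^2-2}{\\eta^2},$$ so gravity enters both problems purely through $1\/\\eta^2$ terms; the fermionic equations lack the $-2\/\\eta^2$ shift of the bosonic pulsation but instead carry the imaginary mass and electric coupling terms of Eq.\u00a0().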
Instead of looking for a solution of the squared Dirac-equation () we will however use the ansatz $$\\begin{aligned}\n\\label{newansatz}\n \\psi_{\\vec{k},\\uparrow}(\\eta)=\\begin{pmatrix}\n -{\\text{i}}k_\\perp \\, \\psi_2^+(\\eta)\\\\\n (k_x+{\\text{i}}k_y)\\, \\psi_2^+(\\eta)\\\\\n -{\\text{i}}k_\\perp \\, \\psi_1^+(\\eta)\\\\\n -(k_x+{\\text{i}}k_y)\\, \\psi_1^+(\\eta)\\\\\n \\end{pmatrix}, &&\n \\psi_{\\vec{k},\\downarrow}(\\eta)=\\begin{pmatrix}\n (k_x-{\\text{i}}k_y)\\, \\psi_2^-(\\eta)\\\\\n {\\text{i}}k_\\perp \\, \\psi_2^-(\\eta)\\\\\n -(k_x-{\\text{i}}k_y) \\, \\psi_1^-(\\eta)\\\\\n {\\text{i}}k_\\perp\\, \\psi_1^-(\\eta)\\\\\n \\end{pmatrix} . \n\\end{aligned}$$ This ansatz can be derived by finding the solution of the squared equation analogous to and then use () to construct a solution for the Dirac equation (). Observe that $\\psi_{\\vec{k},\\uparrow}(\\eta)$ and $\\psi_{\\vec{k},\\downarrow}(\\eta)$ are independent since $$\\begin{aligned}\n \\psi_{\\textbf{k},\\uparrow}(\\eta)^\\dagger\\cdot \\psi_{\\textbf{k},\\downarrow}(\\eta)=0.\n\\end{aligned}$$ Putting () in the Dirac equation () leads to the equations $$\\begin{aligned}\n {\\text{i}}\\hbar\\, \\psi_1^{'\\pm}(\\eta)+ m a(\\eta)\\, \\psi_1^\\pm(\\eta)\\pm (p_z(\\eta)+{\\text{i}}k_\\perp)\\,\\psi_2^\\pm(\\eta)=0,\\label{to solve 1}\\\\\n {\\text{i}}\\hbar\\, \\psi_2^{'\\pm}(\\eta)- m a(\\eta)\\, \\psi_2^\\pm(\\eta)\\pm (p_z(\\eta)-{\\text{i}}k_\\perp)\\,\\psi_1^\\pm(\\eta)=0,\\label{to solve 2}\n\\end{aligned}$$ which we will solve in section .\n\n# Semiclassical number of pairs in dS spacetime\n\nThe semiclassical expansion is considering cases where a vacuum state for the produced particles exits in the asymptotic future. That is true if the background fields are evolving slowly. To quantify the slow varying background more precisely, we introduce a dimensionless slowness parameter $T$ by replacing the scale factor $a(\\eta)$ by a family of functions $a_T(\\eta) \\equiv a(\\eta\/T)$. Doing that, in the limit of infinitely slow varying backgrounds, $T \\rightarrow \\infty$, the derivatives of $a(\\eta)$ will tend to zero. Orders of $T$ are usually called adiabatic orders but it can be noticed that the only place where $T$ is involved is in the derivative $\\partial_{\\eta}$ of the Dirac equation so it is possible to \"formally\" pose $T=1\/\\hbar$ and expand in power of $\\hbar$. \nIn this section, we will first review the calculation of the number of particles in the bosonic case, which can be carried out with flat spacetime techniques and then present the fermionic case. The strategy will be the following:\n\n- Reformulate the equation of motion in terms of an equation for the mode functions $\\alpha(\\eta)$, $\\beta(\\eta)$.\n\n- Perform a multiple integral iteration to compute $|\\beta|^2$.\n\n- Calculate the integrals with a semiclassical saddle point approximation to derive the number of created pairs for each momentum mode $\\mathbf{k}$.\n\n## Equations for the mode functions\n\nIn this section we derive the equations for the mode functions from the Klein-Gordon and Dirac equation respectively. The aim is to construct equations in which $\\alpha'(\\eta)$ depends only on $\\beta(\\eta)$ and vice versa, in order to perform the multiple integral iteration of the next section.\n\n### Bosonic case\n\nTo compute the semiclassical pair creation rate we start from (). The form of these equations is the same as in flat spacetime, the only difference being the specific time dependence of the fields. 
We will shortly review the well know techniques of flat spacetime (see e.g.\u00a0) applied to our field configurations. One can use the ansatz (which is inspired by a WKB expansion) $$\\begin{aligned}\n q_{\\textbf{k}}(\\eta)= &\\frac{ \\alpha(\\eta)}{\\sqrt{\\omega_{\\textbf{k}}(\\eta)}}e^{-\\frac{{\\text{i}}}{2} K_0(\\eta) }+\\frac{\\beta(\\eta)}{\\sqrt{\\omega_{\\textbf{k}}(\\eta)}}e^{\\frac{{\\text{i}}}{2} K_0(\\eta)},\\label{eq:WKB-ansatzboson}\\\\\n q'_{\\textbf{k}}(\\eta)=&-\\frac{{\\text{i}}\\omega_{\\textbf{k}}'(\\eta)}{\\hbar}\\left[\\frac{ \\alpha(\\eta)}{\\sqrt{\\omega_{\\textbf{k}}(\\eta)}}{\\text{e}}^{-\\frac{{\\text{i}}}{2}K_0(\\eta)}-\\frac{\\beta(\\eta)}{\\sqrt{\\omega_{\\textbf{k}}(\\eta)}}{\\text{e}}^{\\frac{{\\text{i}}}{2}K_0(\\eta)}\\right]\\label{eq:WKB-ansatz2boson},\n\\end{aligned}$$ where $$\\begin{aligned}\nK_0(\\eta)=\\frac{2}{\\hbar} \\int_{-\\infty}^{\\eta} \\omega_{\\textbf{k}}(\\tau) d \\tau, \\label{eq:K}\n\\end{aligned}$$ where $\\alpha(\\eta)$, $\\beta(\\eta)$ are the mode functions. \nThe momentum spectrum of the pair creation rate is defined as $$\\begin{aligned}\nn_{\\textbf{k}} \\equiv \\lim_{\\eta\\rightarrow 0} \\left|\\beta(\\eta)\\right|^2, \\label{eq:trans}\n\\end{aligned}$$ with the boundary conditions $$\\begin{aligned}\n \\beta(-\\infty)=0, && \\alpha(-\\infty)=1. \\label{eq:boundarycond}\n\\end{aligned}$$ Using this, together with the ansatz ()-() in the Klein-Gordon equation () we find that the mode functions are connected through coupled differential equations $$\\begin{aligned}\n \\alpha'(\\eta)&=\\frac{\\omega_{\\textbf{k}}'(\\eta)}{2\\omega_{\\textbf{k}}(\\eta)}{\\text{e}}^{ i K_0(\\eta)}\\beta(\\eta),\\label{eq:alphaboson}\\\\\n \\beta'(\\eta)&=\\frac{\\omega_{\\textbf{k}}'(\\eta)}{2\\omega_{\\textbf{k}}(\\eta)}{\\text{e}}^{-i K_0(\\eta)}\\alpha(\\eta)\\label{eq:betaboson}.\n\\end{aligned}$$\n\n### Fermionic case\n\nTo derive analogous equations in the fermionic case we start from the equations () and (). The initial idea of this work comes from the similarities between this equations and Eq.\u00a0(11-12) of . 
For the semiclassical treatment one makes the following ansatz $$\\begin{aligned}\n\\psi^{\\pm}_{1}(\\eta)&=\\frac{C_\\pm}{\\sqrt{\\omega_{\\textbf{k}}(\\eta)}}\\frac{\\sqrt{p(\\eta)}}{\\sqrt{p_z(\\eta)-{\\text{i}}k_\\perp}}\\left(\\alpha^\\pm(\\eta)[\\omega_{\\textbf{k}}(\\eta)-m a(\\eta)]{{\\text{e}}^{-\\frac{{\\text{i}}}{2}K(\\eta)}} +\\beta^\\pm(\\eta)[\\omega_{\\textbf{k}}(\\eta)+m a(\\eta)]{{\\text{e}}^{\\frac{{\\text{i}}}{2}K(\\eta)}}\\right),\\label{eq:ansatz1} \\\\\n \\psi^{\\pm}_{2}(\\eta)&=\\frac{\\mp C_\\pm}{\\sqrt{\\omega_{\\textbf{k}}(\\eta)}}\\frac{\\sqrt{p(\\eta)}}{\\sqrt{p_z(\\eta)+{\\text{i}}k_\\perp}}\\left(\\alpha^\\pm(\\eta)[\\omega_{\\textbf{k}}(\\eta)+m a(\\eta)]{{\\text{e}}^{-\\frac{{\\text{i}}}{2}K(\\eta)}} +\\beta^\\pm(\\eta)[\\omega_{\\textbf{k}}(\\eta)-m a(\\eta)]{{\\text{e}}^{\\frac{{\\text{i}}}{2}K(\\eta)}}\\right), \\label{eq:ansatz2}\n\\end{aligned}$$ with the integrals $$\\begin{aligned}\n K(\\eta)& \\equiv K_0(\\eta)+K_1(\\eta),\\label{eq:K_s}\\\\\n%K_0(\\eta)& \\equiv \\frac{2}{\\hbar}\\int^{\\eta} \\omega_{\\textbf{k}}(\\tau) d\\tau, \\label{eq:Kint}\\\\\n K_1(\\eta)& \\equiv k_\\perp\\int_{-\\infty}^{\\eta} \\frac{m a(\\tau)p_z'(\\tau)}{\\omega_{\\textbf{k}}(\\tau)p(\\tau)^2}d\\tau \\label{eq:K_xy}.\n \n\\end{aligned}$$ Using the ansatz ()-() in () and (), we find that the mode functions are connected through coupled differential equations $$\\begin{aligned}\n \\alpha'^\\pm(\\eta)&=-\\frac{\\omega_{\\textbf{k}}'(\\eta)}{2 \\omega_{\\textbf{k}}(\\eta)}G_{\\alpha}(\\eta) {\\text{e}}^{{\\text{i}}K(\\eta)}\\beta^\\pm(\\eta),\\label{eq:alphaferm}\\\\\n \\beta'^\\pm(\\eta)&=\\frac{\\omega_{\\textbf{k}}'(\\eta)}{2 \\omega_{\\textbf{k}}(\\eta)}G_{\\beta}(\\eta) {\\text{e}}^{-{\\text{i}}K(\\eta)}\\alpha^\\pm(\\eta).\\label{eq:betaferm}\n\\end{aligned}$$ with $$\\begin{aligned}\n& G_{\\alpha}=\\frac{ma(\\eta)}{p(\\eta)}-\\frac{\\omega_{\\textbf{k}}(\\eta)ma'(\\eta)-{\\text{i}}k_\\perp p_z'(\\eta)}{p(\\eta)\\omega_{\\textbf{k}}'(\\eta)}, \\\\\n&G_{\\beta}=\\frac{ma(\\eta)}{p(\\eta)}-\\frac{\\omega_{\\textbf{k}}(\\eta)ma'(\\eta)+{\\text{i}}k_\\perp p_z'(\\eta)}{p(\\eta)\\omega_{\\textbf{k}}'(\\eta)},\n\\end{aligned}$$ representing the fermionic corrections to the analog bosonic case.\n\n## Multiple integral iteration\n\nIn this section we will perform the multiple integral iteration for the fermionic case. Because of the similar form of the equations ()-() and ()-() the results for the scalar case can be derived from the ones obtained below by setting $G_\\alpha(\\eta)=G_\\beta(\\eta)=1$ and $K(\\eta)\\rightarrow K_0(\\eta)$. \nBy iteratively using Eqs.\u00a0() and () and the boundary conditions () one finds $$\\begin{split}\n& \\beta^{\\pm}(0)=\\sum_{m=0}^\\infty \\int_{-\\infty}^\\infty d\\eta_0\\frac{\\omega_k'(\\eta_0)}{2\\omega_k(\\eta_0)}G_{\\beta}(\\eta_0){\\text{e}}^{-iK(\\eta_0)} \\\\\n& \\times \\prod_{n=1}^m \\int_{-\\infty}^{\\eta_{n-1}} d\\tau_n\\frac{\\omega_k'(\\tau_n)}{2\\omega_k(\\tau_n)}G_{\\alpha}(\\tau_n){\\text{e}}^{iK(\\tau_n)} \\int_{-\\infty}^{\\tau_n}d\\eta_n\\frac{\\omega_k'(\\eta_n)}{2\\omega_k(\\eta_n)} G_{\\beta}(\\eta_n){\\text{e}}^{-iK(\\eta_n)}. 
\\\\\\label{eq:multiint}\n \\end{split}$$ As described in these integrals are dominated by the classical turning points given by $$\\begin{aligned}\n\\label{TP}\n \\omega_k(\\eta_p^\\pm)=0.\n\\end{aligned}$$ It can be shown (see [^1]) that for one pair of simple turning points the momentum spectrum of the pair creation rate () in a semiclassical saddlepoint approximation is equal to $$\\begin{aligned}\n n^{\\text{fermion}}_{\\textbf{k}}=\\left|{\\text{e}}^{-iK(\\eta_p^{-})}\\right|^2 \\label{eq:MomentumSpectrumConst}.\n\\end{aligned}$$ For the bosonic case we can find $$\\begin{aligned}\n n_{\\textbf{k}}^{\\text{scalar}}=\\left|{\\text{e}}^{-i K_0(\\eta_p^{-})}\\right|^2 .\n\\end{aligned}$$ by using the substitution discussed above. We thus find that analogous to the case of two-component fields in flat space-time the difference between fermions and bosons is a factor of the form $\\exp(K_1(\\eta_p^-))$.\n\n# Pair creation rate of a constant electric field in dS spacetime\n\nIn this section we derive the pair creation rate for a constant electric field, which is described in $\\text{dS}_4$ by $$A(\\eta)=\\frac{E}{H^2 \\eta}.$$ This is due to the fact that a co-moving observer, with a four-velocity $u^{\\mu}$, would measure an electric field of $$E_{\\mu} = u^{\\nu} F_{\\mu \\nu}= a E\\delta^{z}_{\\mu},$$ which leads to a constant field strength since $E_{\\mu}E^{\\mu}=E^2$. \nFor convenience we introduce $$\\mu^2 = \\frac{e^2 E ^2}{H^4}+ \\frac{m^2}{H^2}= \\lambda^2 + \\gamma^2,$$ where $\\lambda= \\frac{eE}{H^2}$ represent the electric field divided by the Hubble rate as usually in cosmological spacetime and $\\gamma= \\frac{m}{H}$ is the mass term divided by the Hubble rate. These two quantities represent the electric and gravitational contribution respectively. We can now check in which limit the semiclassical approximation holds. It requires the rate of change of the background to be small in the asymptotic future, that is $$\\begin{aligned}\n\\left(\\frac{\\omega'_{\\textbf{k}}(\\eta)}{\\omega_{\\textbf{k}}^2(\\eta)}\\right)^2 \\underset{\\eta \\rightarrow 0}{\\sim} \\mu^{-2} &&\\text{ and }&& \\left(\\frac{\\omega''_{\\textbf{k}}(\\eta)}{\\omega_{\\textbf{k}}^3(\\eta)}\\right) \\underset{\\eta \\rightarrow 0}{\\sim} 2\\mu^{-2}\n\\end{aligned}$$ being small, we see that this is the case when $\\mu \\gg 1$. Comparing to the bosonic case we see that in the limit $\\mu \\gg 1$ the \"-2\" in () is negligible, so that the fermionic and bosonic pulsation are the same. To compute the pair creation rate we first have to compute the integrals $K_0(\\eta)$ and $K_1(\\eta)$ defined in () and () respectively. The value of $\\eta$ at the turning point () is found to be $$\\eta_p^{-}=\\frac{-\\lambda\\frac{k_z}{k}-{\\text{i}}\\sqrt{\\gamma^2+\\lambda^2\\left(1-\\left(\\frac{k_z}{k}\\right)^2\\right)}}{k}.\\label{eq:TP}$$ The imaginary parts of $K_0(\\eta_p^-)$ and $K_1(\\eta_p^-)$ are the only ones contributing to (). One can show that $$\\begin{aligned}\n&{\\text{Im}}[K_0(\\eta_p^-)]=-\\pi\\left(\\mu-\\frac{k_z}{k} \\lambda\\right)\\theta(k_z \\lambda),\\\\\n& {\\text{Im}}[K_1(\\eta_p^-)]=0,\n\\end{aligned}$$ Where $\\theta(x)$ is the Heaviside step function. It was introduced since the real part of the turning point () has to be negative for the turning point to be inside of the closed contour which is needed in the approximation of (). We thus find that only pairs with a momentum in $z$-direction which has the same sign as $\\lambda$ are produced in the semiclassical limit. 
This is what was called pair production in \"screening\" direction or \"downward\" tunneling in . The case of opposite sign, called \"upward\" tunneling is suppressed in the semiclassical limit. \n$K_1(\\eta_p^-)$ was the only difference between bosons and fermions and is not contributing to the number of pairs created. Thus we find that the number of pairs in the semiclassical limit for both bosons and fermions is given by $$\\label{main}\n%\\boxed{\nn_{\\textbf{k}} = \\exp\\left[-2\\pi \\left(\\mu-\\frac{k_z}{k} \\lambda\\right)\\right] \\theta(k_z \\lambda).\n%}$$ We will close this section by performing the flat spacetime limit and making the relation with the bosonic case of explicit. The definition of the pair production rate is $$\\Gamma \\equiv \\frac{1}{(2 \\pi)^3 V} \\int d^3 \\textbf{k} \\,n_{\\textbf{k}},$$ \nwhere $V= a(\\eta)^4 d\\eta$ is the unit four volume of the spacetime. As in , an estimate for the moment when most of the particles are created can be found by analyzing when the adiabaticity is violated. This gives $$\\label{estimate}\n\\eta \\sim -\\frac{\\mu}{k} .$$ Using this the $k$-integral can be changed into a time integral. Going to spherical coordinates the $k_z$ integral can be performed. Putting everything together, one finds $$\\Gamma= \\frac{H^4}{(2\\pi)^3} \\frac{\\mu^3}{|\\lambda|} \\left({\\text{e}}^{2 \\pi |\\lambda|}-1\\right) {\\text{e}}^{-2 \\pi \\mu}.$$ We can compare this with Eq.\u00a0(2.33) of , that we reproduce here $$\\Gamma = \\frac{H^4}{(2 \\pi)^3} \\frac{(|\\mu|^2+1\/4)^{3\/2}}{\\sinh(2 |\\mu| \\pi)} \\left\\{\\frac{H^2}{eE} \\sinh\\left(\\frac{2 \\pi eE}{H^2} \\right) + 2 \\pi e^{-2 |\\mu| \\pi} \\right\\}.$$ We see that in the semiclassical limit $\\mu \\gg 1$ and $\\lambda \\gg 1$ the expressions are equal. As in the bosonic case the physical number density $n$ of produced pairs at the time $\\eta$ is given by $$n=\\frac{1}{a(\\eta)^3} \\int_{-\\infty}^{\\eta} d\\tau a(\\tau)^4 \\Gamma= \\frac{\\Gamma}{3H}.$$ The fact that it is constant shows that the dilution from the expansion of the universe is exactly compensated by the particles created from Schwinger and gravitational particle creation. Hence in the semiclassical limit, the population of fermions is always dominated by the particles created within a Hubble time. The vacuum decay rate is defined for fermions as $$\\Upsilon=\\log(1-|\\beta_{\\textbf{k}}|^2).$$ The limit in which the Hubble parameter is negligible compared to the gravitational and electrical strength corresponds to the limit to flat spacetime. After some calculations which are analogous to the ones in one can find the Minkowski limit by taking $H\\rightarrow0$, which gives $$\\label{M4}\n\\begin{split}\n& \\lim_{H\\rightarrow 0} \\Gamma = \\frac{(eE)^2}{(2 \\pi)^3} \\exp \\left(-\\frac{\\pi m^2}{|eE|} \\right), \\\\\n& \\lim_{H\\rightarrow 0} \\Upsilon =\\sum_{i=1}^{\\infty} \\frac{1}{i^2} \\frac{(eE)^2}{(2 \\pi)^3} \\exp \\left(-\\frac{i\\pi m^2}{|eE|} \\right).\n\\end{split}$$ These are the familiar results for Schwinger pair production in Minkowski spacetime .\n\n# Conclusions\n\nIn this work, we investigated the fermionic pair creation rate by the combination of an electric and a gravitational field in $\\text{dS}_4$. We first presented the basic equations of our setup and derived the corresponding Dirac equation in section . In section , we proposed a semiclassical approximation to compute the number of pairs produced. This approximation is based on a saddle point approximation of the integrals in (). 
To make the comparison to the analog bosonic case easier, we first reviewed its computation within our formalism. Then we presented the computation of the number of fermions created for a constant electric field. This is the main result of this paper and is shown in (). The pair creation rate, in this limit, is the same in the fermionic case as in the bosonic case. This equivalence between fermions and bosons in the semiclassical limit occurs also for one-component fields in flat spacetime when there is only one pair of turning points (see e.g. ). The limit to flat Minkowski spacetime is presented in () and agrees with the usual expression for the Schwinger effect. \nGoing beyond the semiclassical limit is possible but is outside the scope of this paper. To do this one can compute a more general quantity: the fermionic induced current. The current is proportional to the number of pairs created in the semiclassical limit but allows one to explore a regime where the notions of adiabatic vacuum and of particles do not necessarily exist. It is a more precise way of describing the Schwinger effect because its definition does not depend on the time of creation of the pairs and the rough estimate () can be avoided. A calculation of the current induced by an electric field in $\\text{dS}_2$ is performed in . \nThroughout this paper the electric and gravitational fields have been assumed to be external sources. One of the next steps could be to consider backreaction effects of the newly created particles on the external electric and gravitational fields. To do so, on the electromagnetic side the electric current needs to be computed and plugged into the equivalent of the Maxwell equations in curved spacetime; on the gravitational side, the particle density needs to be plugged into the Friedmann equation. Another direction could be to look for cosmological applications, *e.g.* magnetogenesis, baryogenesis, and the relation between dark energy and dark matter in the context of neutrino physics.\n\n# Acknowledgements\n\nThe authors thank Fernanda Gomes De Oliveira, Hendrik Ludwig, Pereira Jonas, Carlos Arg\u00fcelles and She Sheng Xue for fruitful discussions. They are also grateful to all the people who were involved with the organization of the 2nd C\u00e9sar Lattes Meeting in Rio de Janeiro. CS and ES are supported by the Erasmus Mundus Joint Doctorate Program by Grant Numbers 2012-1710 and 2013-1471 from the EACEA of the European Commission, respectively.\n\n[^1]: The detailed intermediate steps can be found in Eqs.\u00a0(32)-(38) of . Observe that in the current paper the integration contour is closed in the lower imaginary half plane because of the opposite convention for the phases in ().","meta":{"dup_signals":{"dup_doc_count":21,"dup_dump_count":20,"dup_details":{"curated_sources":2,"2019-51":1,"2019-43":1,"2019-35":1,"2019-30":1,"2019-22":1,"2019-13":1,"2019-04":1,"2018-47":1,"2018-39":1,"2018-30":1,"2018-22":1,"2018-13":1,"2017-51":1,"2017-43":1,"2017-34":1,"2017-26":1,"2017-17":1,"2020-10":1,"2017-13":1}},"filename":"out\/1507.01401_extract_proceed7.tex.md"},"subset":"arxiv"} +{"text":"author: X. \u00a0Luri ; A.G.A. \u00a0Brown ; L.M. \u00a0Sarro ; F. \u00a0Arenou ; C.A.L. \u00a0Bailer-Jones ; A. \u00a0Castro-Ginard ; J. \u00a0de Bruijne ; T. \u00a0Prusti ; C. \u00a0Babusiaux ; H.E. 
\u00a0Delgado\nbibliography: bibliography.bib\ndate: Received date \/ Accepted date\ntitle: Gaia Data Release 2: Using Gaia parallaxes\n\n# Introduction\n\nThe *Gaia* Data Release 2 (*Gaia\u00a0*\u00a0DR2\u00a0) provides precise positions, proper motions, and parallaxes for an unprecedented number of objects (more than 1.3 billion). Like Hipparcos in its day, the availability of a large amount of new astrometric data, and in particular parallaxes, opens the way to revisit old astrophysical problems and to tackle new ones. In many cases this will involve the inference of astrophysical quantities from *Gaia\u00a0*astrometry, a task that is less trivial than it appears, especially when parallaxes are involved.\n\nThe naive use of the simple approach of inverting the parallax to estimate a distance can provide an acceptable estimate in a limited number of cases, in particular when a precise parallax for an individual object is used. However, one of the important contributions of *Gaia\u00a0*\u00a0DR2\u00a0will be the possibility of working with large samples of objects, all of them with measured parallaxes. In these cases a proper statistical treatment of the parallaxes in order to derive distances, especially (but not only) when the relative uncertainties are large, is mandatory. Otherwise, the effects of the observational errors in the parallaxes can lead to potentially strong biases. More generally, the use of full astrometric data to derive astrophysical parameters should follow a similar approach. A proper statistical treatment of the data, its uncertainties, and correlations is required to take full advantage of the *Gaia\u00a0*results.\n\nThis paper is a complement for the *Gaia* consortium *Gaia\u00a0*\u00a0DR2\u00a0papers. We analyse the problem of the inference of distances (and other astrophysical parameters) from parallaxes. In Sect.\u00a0 we start with a short review of the properties of the *Gaia* astrometric data. Then in Sect.\u00a0 we review several of the most popular approaches to using measured parallaxes in astronomy and highlight their intricacies, pitfalls, and problems. In Sect.\u00a0 we make recommendations on what we think is the appropriate way to use astrometric data. Finally, in Sect.\u00a0 we link to some worked examples, ranging from very basic demonstrations to full Bayesian analysis, available as Python and R notebooks and source code from the tutorial section on the *Gaia* archive [^1].\n\n# *Gaia* astrometric data\n\nThe *Gaia* astrometry, i.e. celestial coordinates, trigonometric parallaxes, and proper motions for more than one billion objects, results from the observations coming from the spacecraft instruments and their subsequent processing by the *Gaia* Data Processing and Analysis Consortium (DPAC). The astrometric processing is detailed in and readers are strongly encouraged to familiarise themselves with the contents of that paper in order to understand the strengths and weaknesses of the published astrometry, and in particular of the parallaxes. The processed data was submitted to extensive validation prior to publication, as detailed in . This paper is also highly recommended in order to gain a proper understanding of how to use and how not to use the astrometric data. As a simple and striking example: a small number of sources with unrealistic very large positive and very large negative parallaxes are present in the data. 
Advice on how to filter these sources from the data analysis is provided in the *Gaia\u00a0*\u00a0DR2\u00a0documentation.\n\n## Uncertainties\n\nThe published parallaxes, and more generally all astrometric parameters, are measured quantities and as such have an associated measurement uncertainty. These uncertainties are published, source per source, and depend mostly on position on the sky as a result of the scanning law and on magnitude. For parallaxes, uncertainties are typically around 0.04\u00a0mas for sources brighter than $\\sim$``{=html}14\u00a0mag, around 0.1\u00a0mas for sources with a $G$ magnitude around 17, and around 0.7\u00a0mas at the faint end, around 20\u00a0mag. The astrometric uncertainties provided in *Gaia\u00a0*\u00a0DR2\u00a0have been derived from the formal errors computed in the astrometric processing. Unlike for *Gaia\u00a0*\u00a0DR1\u00a0, the parallax uncertainties have not been calibrated externally, i.e. they are known, as an ensemble, to be underestimated by $\\sim$``{=html}8\u201312%\u00a0for faint sources ($G \\ga 16$\u00a0mag) outside the Galactic plane and by up to $\\sim$``{=html}30%\u00a0for bright stars ($G \\la 12$\u00a0mag). Based on an assessment of the measured parallaxes of a set of about half a million known quasars, which can be assumed in practice to have zero parallax, the uncertainties are normally distributed with impressive approximation (Fig.\u00a0). However, as is common when taking measurements and especially in such large samples like the *Gaia* catalogue, there are small numbers of outliers, even up to unrealistically high confidence levels (e.g. at the 100$\\sigma$ level).\n\n## Correlations\n\nThe parallaxes for each source published in *Gaia\u00a0*\u00a0DR2\u00a0have not been derived in isolation, but result from a simultaneous five-parameter fit of an astrometric source model to the data. In *Gaia\u00a0*\u00a0DR2\u00a0, only one astrometric source model has been used, that of a single star. This model assumes a uniform, rectilinear space motion relative to the solar system barycentre. The astrometric data in *Gaia\u00a0*\u00a0DR2\u00a0thus comprise five astrometric parameters[^2] with their associated uncertainties, but also ten correlation coefficients between the estimated parameters. It is critical to use the full ($5 \\times 5$) covariance matrix when propagating the uncertainties on subsets and\/or linear combinations of the astrometric parameters.\n\nAs an example, consider the transformation of the measured proper motions $\\mu_{\\alpha*}$ and $\\mu_\\delta$ in equatorial coordinates to equivalent values $\\mu_{l*}$ and $\\mu_b$ in galactic coordinates. Following the notation in , we have $$\\begin{pmatrix} \n \\mu_{l*} \\\\ \\mu_b\n \\end{pmatrix}\n =\n \\begin{pmatrix} \n c & s \\\\\n -s & c\n \\end{pmatrix}\n \\begin{pmatrix} \n \\mu_{\\alpha*} \\\\ \\mu_\\delta\n \\end{pmatrix},$$ where the $2\\times2$ matrix is a rotation matrix that depends on the object's coordinates $\\alpha$ and $\\delta$: $c = c(\\alpha^\\ast, \\delta)$ and $s = s(\\alpha^\\ast, \\delta)$. 
In order to transform the proper-motion errors from the equatorial to the galactic system, we have $$\\begin{aligned}\n \\vec{C}_{l b} &=&\n \\begin{pmatrix} \n\\sigma_{\\mu_{l *}}^2 & \\rho_{\\mu_{l *}}^{\\mu_{b}} \\sigma_{\\mu_{l *}} \\sigma_{\\mu_b}\\\\\n\\rho_{\\mu_{l *}}^{\\mu_{b}} \\sigma_{\\mu_{l *}} \\sigma_{\\mu_b} & \\sigma_{\\mu_b}^2\n \\end{pmatrix}\\\\\n&=&\n \\vec{J}\\vec{C}_{\\alpha \\delta}\\vec{J}^\\prime\\\\\n&=&\n \\begin{pmatrix} \n c & s \\\\\n -s & c\n \\end{pmatrix}\n \\begin{pmatrix}\n\\sigma_{\\mu_{\\alpha *}}^2 & \\rho_{\\mu_{\\alpha *}}^{\\mu_{\\delta}} \\sigma_{\\mu_{\\alpha *}} \\sigma_{\\mu_\\delta}\\\\\n\\rho_{\\mu_{\\alpha *}}^{\\mu_{\\delta}} \\sigma_{\\mu_{\\alpha *}} \\sigma_{\\mu_\\delta} & \\sigma_{\\mu_\\delta}^2\n \\end{pmatrix}\n \\begin{pmatrix} \n c & -s \\\\\n s & c\n \\end{pmatrix},\n\\end{aligned}$$ where the prime denotes matrix transposition, $\\vec{J}$ denotes the Jacobian matrix of the transformation (which for a rotation is the rotation matrix itself), and $\\vec{C}$ denotes the variance-covariance matrix. It immediately follows that $\\sigma_{\\mu_{l *}}$ and $\\sigma_{\\mu_{b}}$ depend on the generally non-zero correlation coefficient $\\rho_{\\mu_{\\alpha *}}^{\\mu_{\\delta}}$ between the equatorial proper-motion measurements. Neglecting this correlation term can give seriously incorrect results. Some further examples of how error propagation should be handled can be found in, for instance, and . In addition to error propagation, the covariance matrix should also be taken into account when estimating model parameters, for example in chi-square fitting, maximum likelihood estimates, Bayesian analysis, etc. For more details, see Volume 1, Section 1.5 of .\n\n## Systematic errors\n\nBoth the design of the spacecraft and the design and implementation of the data processing software and algorithms aim to prevent biases or systematic effects in the astrometry. Systematic errors at low levels nonetheless exist in *Gaia\u00a0*\u00a0DR2\u00a0. Systematic effects are complicated and largely unknown functions of position on the sky, magnitude, and colour. Although systematic effects are not dealt with in the remainder of this paper, it is important for users to be aware of their presence.\n\nThe parallaxes and proper motions in *Gaia\u00a0*\u00a0DR2\u00a0may be affected by systematic errors. Although the precise magnitude and distribution of these errors is unknown, they are believed to be limited, on global scales, to $\\pm$``{=html}0.1\u00a0mas for parallaxes and $\\pm$``{=html}0.1\u00a0mas\u00a0yr$^{-1}$ for proper motions. There is a significant average parallax zero-point shift of about $-30$\u00a0$\\mu$as in the sense *Gaia* minus external data. This shift has *not* been corrected for and is present in the published data. Significant spatial correlations between stars, up to 0.04\u00a0mas in parallax and 0.07\u00a0mas\u00a0yr$^{-1}$ in proper motion, exist on both small ($\\la$``{=html}1$^\\circ$) and intermediate ($\\la$``{=html}20$^\\circ$) angular scales. As a result, **averaging parallaxes over small regions of the sky, for instance in an open cluster, in the Magellanic Clouds, or in the Galactic Centre, will *not* reduce the uncertainty on the mean below the $\\sim$$0.1$\u00a0mas level**.\n\nUnfortunately, there is no simple recipe to account for the systematic errors. 
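Returning to the error propagation discussed above, the transformation of the proper-motion covariance matrix from the equatorial to the galactic system is easy to sketch numerically. The uncertainties, the correlation, and the rotation angle below are made up for illustration; in a real analysis the coefficients c and s follow from the source coordinates:

```python
import numpy as np

# Illustrative (made-up) proper-motion uncertainties (mas/yr) and their correlation
sigma_pmra, sigma_pmdec, rho = 0.28, 0.33, -0.45
C_eq = np.array([[sigma_pmra**2, rho * sigma_pmra * sigma_pmdec],
                 [rho * sigma_pmra * sigma_pmdec, sigma_pmdec**2]])

# Rotation coefficients c and s depend on (alpha, delta); here a fixed angle is used
# purely for illustration.
theta = np.deg2rad(60.0)
J = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

C_gal = J @ C_eq @ J.T                                   # C_lb = J C_ad J'
sigma_pml, sigma_pmb = np.sqrt(np.diag(C_gal))
rho_lb = C_gal[0, 1] / (sigma_pml * sigma_pmb)
print(sigma_pml, sigma_pmb, rho_lb)

# Setting rho = 0 in C_eq and repeating the calculation shows the error made by
# neglecting the correlation between the equatorial proper motions.
```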
The general advice is to proceed with the analysis of the *Gaia\u00a0*\u00a0DR2\u00a0data using the uncertainties reported in the catalogue, ideally while modelling systematic effects as part of the analysis, and to keep the systematics in mind when interpreting the results.\n\n## Completeness\n\nAs argued in the next sections, a correct estimation requires full knowledge of the survey selection function. Conversely, neglecting the selection function can cause severe biases. Derivation of the selection function is far from trivial, yet estimates have been made for *Gaia\u00a0*\u00a0DR1\u00a0(TGAS) by, for instance, and .\n\nThis paper does not intend to define the survey selection function. We merely limit ourselves to mentioning a number of features of the *Gaia\u00a0*\u00a0DR2\u00a0data that should be properly reflected in the selection function. The *Gaia\u00a0*\u00a0DR2\u00a0catalogue is essentially complete between $G \\approx 12$ and $\\sim$17\u00a0mag. Although the completeness at the bright end ($G$ in the range $\\sim$3\u20137\u00a0mag) has improved compared to *Gaia\u00a0*\u00a0DR1\u00a0, a fraction of bright stars in this range is still missing in *Gaia\u00a0*\u00a0DR2\u00a0. Most stars brighter than $\\sim$3\u00a0mag are missing. In addition, about one out of every five high-proper-motion stars ($\\mu \\ga 0.6$\u00a0arcsec\u00a0yr$^{-1}$) is missing. Although the onboard detection threshold at the faint end is equivalent to $G = 20.7$\u00a0mag, onboard magnitude estimation errors allow *Gaia* to see fainter stars, although not at each transit. *Gaia\u00a0*\u00a0DR2\u00a0hence extends well beyond $G=20$\u00a0mag. However, in dense areas on the sky (above $\\sim$$400\\,000$\u00a0stars\u00a0deg$^{-2}$), the effective magnitude limit of the survey can be as bright as $\\sim$18 mag. The somewhat fuzzy faint-end limit depends on object density (and hence celestial position) in combination with the scan-law coverage underlying the 22 months of data of *Gaia\u00a0*\u00a0DR2\u00a0and the filtering on data quality that has been applied prior to publication. This has resulted in some regions on the sky showing artificial source-density fluctuations, for instance reflecting the scan-law pattern. In small, selected regions, gaps are present in the source distribution. These are particularly noticeable near very bright stars. In terms of effective angular resolution, the resolution limit of *Gaia\u00a0*\u00a0DR2\u00a0is $\\sim$0.4\u00a0arcsec.\n\nGiven the properties of *Gaia\u00a0*\u00a0DR2\u00a0summarised above, the interpretation of the data is far from straightforward. This is particularly true when accounting for the incompleteness in any sample drawn from the *Gaia\u00a0*\u00a0Archive. We therefore strongly encourage the users of the data to read the papers and documentation accompanying *Gaia\u00a0*\u00a0DR2\u00a0and to carefully consider the warnings given therein before drawing any conclusions from the data.\n\n# Critical review of the traditional use of parallaxes\n\nWe start this section by briefly describing how parallaxes are measured and how the presence of measurement noise leads to the occurrence of zero and negative observed parallaxes. In the rest of the section we review several of the most popular approaches to using measured parallaxes ($\\ensuremath{\\varpi}$) to estimate distances and other astrophysical parameters. 
In doing so we will attempt to highlight the intricacies, pitfalls, and problems of these 'traditional' approaches.\n\n## Measurement of parallaxes\n\nIn simplified form, astrometric measurements (source positions, proper motions, and parallaxes) are made by repeatedly determining the direction to a source on the sky and modelling the change of direction to the source as a function of time as a combination of its motion through space (as reflected in its proper motion and radial velocity) and the motion of the observing platform (earth, *Gaia\u00a0*, etc.) around the Sun (as reflected in the parallax of the source). As explained in more detail in and , this basic model of the source motion on the sky describes the time-dependent coordinate direction from the observer towards an object outside the solar system as the unit vector $$\\ensuremath{\\boldsymbol{u}}(t) = \\langle \\ensuremath{\\boldsymbol{r}} + (t_\\mathrm{B}-t_\\mathrm{ep})\n (\\ensuremath{\\boldsymbol{p}}\\mu_{\\alpha*} + \\ensuremath{\\boldsymbol{q}}\\mu_\\delta + \\ensuremath{\\boldsymbol{r}}\\mu_r) - \n \\varpi\\ensuremath{\\boldsymbol{b}}_\\mathrm{O}(t)\/A_\\mathrm{u} \\rangle\\,,\n \\label{eq:sourcemodel}$$ where $t$ is the time of observation and $t_\\mathrm{ep}$ is a reference time, both in units of Barycentric Coordinate Time (TCB); $\\boldsymbol{p}$, $\\boldsymbol{q}$, and $\\boldsymbol{r}$ are unit vectors pointing in the direction of increasing right ascension, increasing declination, and towards the position $(\\alpha,\\delta)$ of the source, respectively; $t_\\mathrm{B}$ is the time of observation corrected for the R\u00f8mer delay; $\\ensuremath{\\boldsymbol{b}}_\\mathrm{O}(t)$ is the barycentric position of the observer at the time of observation; $A_\\mathrm{u}$ is the astronomical unit; and $\\langle\\rangle$ denotes normalisation. The components of proper motion along $\\boldsymbol{p}$ and $\\boldsymbol{q}$ are respectively $\\mu_{\\alpha*}=\\mu_\\alpha\\cos\\delta$ and $\\mu_\\delta$, $\\varpi$ is the parallax, and $\\mu_r=v_r\\varpi\/A_\\mathrm{u}$ is the 'radial proper motion' which accounts for the fact that the distance to the star changes as a consequence of its radial motion, which in turn affects the proper motion and parallax. The effect of the radial proper motion is negligibly small in most cases and can be ignored in the present discussion.\n\nThe above source model predicts the well-known helix or wave-like pattern for the apparent motion of a typical source on the sky. A fit of this model to noisy observations can lead to negative parallaxes, as illustrated in Fig.\u00a0. We note how in the source model described in Eq.\u00a0 the parallax appears as the factor $-\\varpi$ in front of the barycentric position of the observer, which means that for each source its parallactic motion on the sky will have a sense which reflects the sense of the motion of the observer around the Sun. In the presence of large measurement noise (comparable to the size of the parallax) it is entirely possible that the parallax value estimated for the source model vanishes or becomes negative. This case can be interpreted as the measurement being consistent with the source going 'the wrong way around' on the sky, as shown in Fig.\u00a0.\n\nThis example is intended to clarify why parallaxes can have non-positive observed values and, more importantly, to convey the message that the parallax is not a direct measurement of the distance to a source. 
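A minimal numerical illustration of this point, assuming a hypothetical true parallax and measurement uncertainty, simply draws observed parallaxes around the true value and counts how many come out negative:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical source: true parallax of 0.5 mas measured with a 1.0 mas uncertainty
parallax_true, sigma = 0.5, 1.0
parallax_obs = rng.normal(parallax_true, sigma, size=100_000)

frac_negative = np.mean(parallax_obs < 0)
print(f"fraction of negative observed parallaxes: {frac_negative:.2f}")   # about 0.31
```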
The distance (or any other quantity depending on distance) has to be *estimated* from the observed parallax (and other relevant information), taking into account the uncertainty in the measurement. A simplified demonstration of how negative parallaxes arise (allowing the reader to reproduce Fig.\u00a0) can be found in the online tutorials accompanying this paper [^3].\n\n## Estimating distance by inverting the parallax\n\nIn the absence of measurement uncertainties, the distance to a star can be obtained from its true parallax through $r=1\/\\ensuremath{\\varpi_{\\rm True}}$, with $\\varpi_{\\rm True}$ indicating the true value of the parallax. Thus, naively we could say that the distance to a star can be obtained by inverting the observed parallax, $\\rho=1\/\\varpi$, where now $\\rho$ is used to indicate the distance derived from the observed value of the parallax. For this discussion the observed parallax is assumed to be free of systematic measurement errors and to be distributed normally around the true parallax\n\n$$p(\\ensuremath{\\varpi}\\mid \\ensuremath{\\varpi_{\\rm True}}) = \\frac{1}{\\ensuremath{\\sigma_\\ensuremath{\\varpi}}\\sqrt{2 \\pi}} \\exp\\left({-\\frac{(\\ensuremath{\\varpi}- \\ensuremath{\\varpi_{\\rm True}})^2}{2\\ensuremath{\\sigma_\\ensuremath{\\varpi}}^2}}\\right)\\,,\n \\label{eq:pdf_truepi}$$\n\nwhere $\\sigma_\\ensuremath{\\varpi}$ indicates the measurement uncertainty on $\\varpi$. Blind use of $1\/\\ensuremath{\\varpi}$ as an estimator of the distance will lead to unphysical results in case the observed parallax is non-positive. Nevertheless, we could still consider the use of the $1\/\\ensuremath{\\varpi}$ distance estimate for positive values, for instance, a sample where most or all of the observed values are positive or, in the limiting case, where there is a single positive parallax value. In this case, it is crucial to be aware of the statistical properties of the estimate $\\rho$. Given a true distance $r=1\/\\ensuremath{\\varpi_{\\rm True}}$, what will be the behaviour of $\\rho$? We can obtain the probability density function (PDF) of $\\rho$ from Eq.\u00a0 as\n\n$$\\begin{aligned}\n p( \\rho \\mid \\ensuremath{\\varpi_{\\rm True}}) & = & p(\\ensuremath{\\varpi}=1\/\\rho \\mid \\ensuremath{\\varpi_{\\rm True}})\\cdot\n \\left|\\frac{d\\ensuremath{\\varpi}}{d\\rho}\\right| \\nonumber \\\\ & = & \\frac{1}{\\rho^2\n \\ensuremath{\\sigma_\\ensuremath{\\varpi}}\\sqrt{2 \\pi}} \\exp\\left({-\\frac{(1\/\\rho -\n \\ensuremath{\\varpi_{\\rm True}})^2}{2\\ensuremath{\\sigma_\\ensuremath{\\varpi}}^2}}\\right) \n \\label{eq:pdf_rho}\n\\end{aligned}$$\n\nIn Fig.\u00a0 we depict $p( \\rho\\mid\\ensuremath{\\varpi_{\\rm True}})$ for two extreme cases of very low and very high relative uncertainty. The shape of $p( \\rho\\mid\\ensuremath{\\varpi_{\\rm True}})$ describes what we can expect when using $\\rho$ as an estimate of the true distance $r$. The distribution of the figure on the left corresponds to a case with a low fractional parallax uncertainty, defined as $f=\\ensuremath{\\sigma_\\ensuremath{\\varpi}}\/\\ensuremath{\\varpi_{\\rm True}}$. It looks unbiased and symmetrical. Thus, using $\\rho=1\/\\ensuremath{\\varpi}$ to estimate the distance in a case like this is relatively safe and would lead to more or less reliable results. However, in spite of its appearance, the figure hides an intrinsic non-Gaussianity that is made evident in the right-hand figure. 
This second plot corresponds to the case of high fractional parallax uncertainty and the distribution shows several features: first, the mode (the most probable value) does not coincide with the true distance value; second, the distribution is strongly asymmetric; and finally, it presents a long tail towards large values of $\\rho$. For more extreme values of $f$ there is a noticeable negative tail to this distribution, corresponding to the negative tail of the observed parallax distribution.\n\nIn view of Fig.\u00a0 it is tempting to apply corrections to the $\\rho$ estimator based on the value of the fractional parallax uncertainty $f$. Unfortunately, in order to do so we would need to know the true value of the parallax and $f$. Using the apparent fractional uncertainty $f_{app}=\\ensuremath{\\sigma_\\ensuremath{\\varpi}}\/\\ensuremath{\\varpi}$ is not feasible since the denominator in $f$ (the true parallax) can be very close to zero, so its distribution has very extended wings and using $f_{app}$ will often result in gross errors.\n\nFurthermore, reporting a $\\rho$ value should always be accompanied by an uncertainty estimate, usually the standard deviation of the estimator, but the standard deviation or the variance is defined in terms of an unknown quantity: $\\ensuremath{\\varpi_{\\rm True}}$. In addition, the long tail shown in the right panel of Fig.\u00a0 makes the estimates of the variance quickly become pathological, as discussed below.\n\nIn order to clarify the previous assertions, we recall the classical concept of bias because it plays a central role in the discussion that develops in this section. In statistics, an estimator is said to be biased if its expected value differs from the true value. In our case, we aim to infer the true value of the parallax $\\ensuremath{\\varpi_{\\rm True}}$ (or, alternatively, related quantities such as the true distance $r$, absolute magnitude, luminosity, or 3D velocity components), and we aim to infer it from the measured parallax. In the *Gaia* case this measured parallax will be affected by quasi-Gaussian uncertainties (see Sect.\u00a0). In this case the expectation value of the observed parallax coincides with the true value:\n\n$$\\mathbb{E}[\\ensuremath{\\varpi}]=\\int \\ensuremath{\\varpi}p(\\ensuremath{\\varpi}|\\ensuremath{\\varpi_{\\rm True}})\\cdot{\\rm\n d}\\ensuremath{\\varpi}=\\int \\ensuremath{\\varpi}\\mathcal{N}(\\ensuremath{\\varpi};\\ensuremath{\\varpi_{\\rm True}},\\ensuremath{\\sigma_\\ensuremath{\\varpi}})\\cdot{\\rm\n d}\\ensuremath{\\varpi}=\\ensuremath{\\varpi_{\\rm True}},$$\n\nwhere $\\mathcal{N}(\\ensuremath{\\varpi};\\ensuremath{\\varpi_{\\rm True}},\\ensuremath{\\sigma_\\ensuremath{\\varpi}})$ represents the Gaussian probability distribution centred at the true parallax and with a standard deviation $\\ensuremath{\\sigma_\\ensuremath{\\varpi}}$. 
Hence, the observed parallax is an unbiased estimator of the true parallax (under the strong hypothesis that there are no systematic biases associated with the survey and that the errors are normally distributed).\n\nNow, in order to assess the bias of $\\rho= 1\/\\ensuremath{\\varpi}$ as an estimator of the true distance we need to calculate its expected value:\n\n$$\\mathbb{E}[\\rho]=\\mathbb{E}[1\/\\ensuremath{\\varpi}]=\\int\n\\frac{1}{\\ensuremath{\\varpi}}\\cdot p(\\ensuremath{\\varpi}|\\ensuremath{\\varpi_{\\rm True}})\\cdot{\\rm\n d}\\ensuremath{\\varpi}=\n\\int\n\\frac{1}{\\ensuremath{\\varpi}}\\cdot\\mathcal{N}(\\ensuremath{\\varpi_{\\rm True}},\\ensuremath{\\sigma_\\ensuremath{\\varpi}})\\cdot{\\rm\n d}\\ensuremath{\\varpi}$$\n\nThis bias was approximated by (see Sect.\u00a0) as a function of the fractional parallax uncertainty $f$ using a series expansion of the term in the integral and several approximations for the asymptotic regimes of small and large values of $f$, and it indeed shows that the distance estimator $1\/\\ensuremath{\\varpi}$ is unbiased for vanishingly small values of $f$, but it rapidly becomes significantly biased for values of $f$ beyond 0.1. But not only is $1\/\\ensuremath{\\varpi}$ a biased estimator of the true distance, it is also a high-variance estimator. The reason for this variance explosion is related to the long tail towards large distances illustrated in the right panel of Figs.\u00a0 and . Relatively large fractional uncertainties inevitably imply noise excursions in the parallax that result in vanishingly small observed parallaxes and disproportionate distances (and hence an inflation of the variance).\n\nThe effects discussed above can be illustrated with the use of simulated data. Figure\u00a0 shows the results of a simulation of objects located between 0.5 and 2kpc where starting from the true distances we have simulated observed parallaxes with a Gaussian uncertainty of $\\ensuremath{\\sigma_\\ensuremath{\\varpi}}=0.3$ mas and then calculated for each object $\\rho = 1\/\\ensuremath{\\varpi}$.\n\nThe figure on the left shows that (by construction) the errors in the observed parallaxes are well behaved and perfectly symmetrical (Gaussian), while in the centre figure the errors in the estimation of distances using $\\rho$ show a strong asymmetry. The characteristics of these residuals depend on the distribution of true distances and uncertainties. This is more evident in the figure on the right, where the true distance $r$ is plotted against $\\rho$; there is a very prominent tail of overestimated distances and the distribution is asymmetrical around the one-to-one line: the more distant the objects, the more marked the asymmetry. These features are very prominent because we have simulated objects so that the relative errors in parallax are large, but they are present (albeit at a smaller scale) even when the relative errors are small.\n\nThe plots in Fig.\u00a0 correspond to a simple simulation with a mild uncertainty $\\ensuremath{\\sigma_\\ensuremath{\\varpi}}=0.3$ mas. Figure\u00a0 shows the same plots for a realistic simulation of the *Gaia\u00a0*\u00a0DR2\u00a0data set. 
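The simple simulation just described is straightforward to reproduce. The sketch below assumes the true distances are drawn uniformly between 0.5 and 2 kpc (the text does not specify the distance distribution) and adds Gaussian parallax errors of 0.3 mas:

```python
import numpy as np

rng = np.random.default_rng(0)

# True distances between 0.5 and 2 kpc (assumed uniform in distance for this sketch)
r_true = rng.uniform(0.5, 2.0, size=100_000)        # kpc
parallax_true = 1.0 / r_true                        # mas (1 kpc corresponds to 1 mas)
parallax_obs = rng.normal(parallax_true, 0.3)       # Gaussian errors with sigma = 0.3 mas

rho = 1.0 / parallax_obs                            # naive distance estimate, kpc
good = parallax_obs > 0                             # 1/parallax is meaningless otherwise

print("mean parallax residual [mas]:", np.mean(parallax_obs - parallax_true))
print("mean distance residual [kpc]:", np.mean(rho[good] - r_true[good]))
print("fraction of non-positive observed parallaxes:", np.mean(~good))
```

The parallax residuals average to zero while the residuals of the naive distance estimate do not, which is the asymmetry visible in the figures described above.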
The realistic simulation is described in Appendix ; in this case the errors in parallax follow a realistic model of the *Gaia\u00a0*\u00a0DR2\u00a0errors, depicted in Fig.\u00a0.\n\nAs a summary, we have seen in previous paragraphs that the naive approach of inverting the observed parallax has significant drawbacks: we are forced to dispose of valuable data (non-positive parallaxes), and as an estimator $\\rho=1\/\\ensuremath{\\varpi}$ is biased and has a very high variance.\n\n## Sample truncation\n\nIn addition to the potential sources of trouble described in the previous sections, the traditional use of samples of parallaxes includes a practice that tends to aggravate these effects: truncation of the used samples.\n\nAs discussed in Sect.\u00a0, negative parallaxes are a natural result of the *Gaia\u00a0*measurement process (and of astrometry in general). Since inverting negative parallaxes leads to physically meaningless negative distances we are tempted to just get rid of these values and form a 'clean' sample. This results in a biased sample, however.\n\nOn the one hand, removing the negative parallaxes biases the distribution of this parameter. Consider for instance the case illustrated in Fig.\u00a0 for the quasars from the AllWISE catalogue. These objects have a near-zero true parallax, and the distribution of their observed values shown in the figure corresponds to this, with a mean of $-10$\u00a0$\\mu$as, close to zero. However, if we remove the negative parallaxes from this sample, deeming them 'unphysical', the mean of the observed values would be significantly positive, about $0.8$ mas. This is completely unrealistic for quasars; in removing the negative parallaxes we have significantly biased the observed parallax set for these objects. With samples of other types of objects with non-zero parallaxes the effect can be smaller, but it will be present.\n\nOn the other hand, by removing the negative parallaxes the contents of the sample are no longer representative of the base population from which it has been extracted, since stars with large parallaxes are over-represented and stars with small parallaxes are under-represented. This can be clearly illustrated using a simulation. We have generated a sample of simulated stars mimicking the contents of the full *Gaia\u00a0*\u00a0DR2\u00a0(see Appendix ) and truncated it by removing the negative parallaxes. In Fig.\u00a0 we can compare the distribution of the true distances of the original (non-truncated) sample and the resulting (truncated) sample; it is clear that after the removal of negative parallaxes we have favoured the stars at short distances (large parallaxes) with respect to the stars at large distances (small parallaxes). The spatial distribution of the sample has thus been altered, and may therefore bias any analysis based on it.\n\nA stronger version of truncation that has traditionally been applied is to remove not only negative parallaxes, but also all the parallaxes with a relative error above a given threshold $k$, selecting $\\frac{\\ensuremath{\\sigma_\\ensuremath{\\varpi}}}{\\ensuremath{\\varpi}} < k$. This truncation further aggravates the biases described above, since it preferentially removes the sources with the smallest observed parallaxes.\n\nA related temptation is to 'correct' the individual parallaxes (or the quantities derived from them) for these biases. For an observed parallax with Gaussian uncertainties, the probability that the observed value is larger than the true one is $p(\\ensuremath{\\varpi}> \\ensuremath{\\varpi_{\\rm True}}|\\ensuremath{\\varpi_{\\rm True}})=0.5$. The same value holds for the probability that $\\ensuremath{\\varpi}< \\ensuremath{\\varpi_{\\rm True}}$ because the Gaussian distribution is symmetrical with respect to the true value of the parallax. 
This is also true for the joint probability of $\\ensuremath{\\varpi}$ and $\\ensuremath{\\varpi_{\\rm True}}$,\n\n$$p(\\ensuremath{\\varpi}> \\ensuremath{\\varpi_{\\rm True}}) = \\iint_\\mathcal{S} p(\\ensuremath{\\varpi},\\ensuremath{\\varpi_{\\rm True}})\\cdot{\\rm d}\\ensuremath{\\varpi}\\cdot{\\rm d}\\ensuremath{\\varpi_{\\rm True}}\n =0.5\n ,$$\n\nwhere $\\mathcal{S}$ is the region of the $(\\ensuremath{\\varpi},\\ensuremath{\\varpi_{\\rm True}})$ plane where $\\ensuremath{\\varpi}> \\ensuremath{\\varpi_{\\rm True}}$.\n\nHowever, the probability distribution of the true parallax given the observed parallax $p(\\ensuremath{\\varpi_{\\rm True}}|\\ensuremath{\\varpi})$ does not fulfil this seemingly desirable property of probability mass equipartition at the $\\ensuremath{\\varpi}=\\ensuremath{\\varpi_{\\rm True}}$ point. We can write the latter probability as\n\n$$p(\\ensuremath{\\varpi_{\\rm True}}|\\ensuremath{\\varpi})=\\frac{p(\\ensuremath{\\varpi},\\ensuremath{\\varpi_{\\rm True}})}{p(\\ensuremath{\\varpi})}=\n \\frac{p(\\ensuremath{\\varpi}|\\ensuremath{\\varpi_{\\rm True}})\\cdot p(\\ensuremath{\\varpi_{\\rm True}})}{p(\\ensuremath{\\varpi})}\n\\label{eq:LKposterior}$$\n\nusing the product rule of probability. In the context of inferring the true parallax from the observed one, Eq.\u00a0 is the well-known Bayes' theorem, where the left-hand side is the posterior probability, $p(\\ensuremath{\\varpi}\\mid\\ensuremath{\\varpi_{\\rm True}})$ is the likelihood, $p(\\ensuremath{\\varpi_{\\rm True}})$ is the prior, and $p(\\ensuremath{\\varpi})$ is the evidence. For most realistic prior distributions $p(\\ensuremath{\\varpi_{\\rm True}})$, neither the median nor the mode or the mean of the posterior in Eq.\u00a0 is at $\\ensuremath{\\varpi}=\\ensuremath{\\varpi_{\\rm True}}$. Let us take for example a uniform volume density of sources out to a certain distance limit. In such a distribution the number of sources in a spherical shell of infinitesimal width at a distance $r$ scales as $r^2$, as does the probability distribution of the distances. Since\n\n$$p(r)\\cdot{\\rm d}r=p(\\ensuremath{\\varpi_{\\rm True}})\\cdot {\\rm d}\\ensuremath{\\varpi_{\\rm True}},$$\n\nthe probability distribution for the true parallax in such a truncated constant volume density scenario is proportional to\n\n$$p(\\ensuremath{\\varpi_{\\rm True}}) \\propto \\ensuremath{\\varpi_{\\rm True}}^{-4}$$\n\nout to the truncation radius. Hence, for Gaussian distributed uncertainties we can write $p(\\ensuremath{\\varpi_{\\rm True}}|\\ensuremath{\\varpi})$ as\n\n$$p(\\ensuremath{\\varpi_{\\rm True}}|\\ensuremath{\\varpi}) \\propto \\frac{1}{\\sigma_{\\varpi}}\\cdot\n\\exp(\\frac{-(\\ensuremath{\\varpi}-\\ensuremath{\\varpi_{\\rm True}})^2}{2\\sigma_\\varpi})\\cdot\\ensuremath{\\varpi_{\\rm True}}^{-4}.\n\\label{eq:LKposterior2}$$\n\nThe joint distribution $p(\\ensuremath{\\varpi},\\ensuremath{\\varpi_{\\rm True}})$ (i.e. the non-normalised posterior, plotted as a 2D function of data $parallax$ and parameter $\\ensuremath{\\varpi_{\\rm True}}$) for this particular case of truncated uniform stellar volume densities is depicted in Fig.\u00a0 together with the conditional distributions for particular values of $\\ensuremath{\\varpi}$ and $\\ensuremath{\\varpi_{\\rm True}}$. 
It shows graphically the symmetry of the probability distribution $p(\\ensuremath{\\varpi}|\\ensuremath{\\varpi_{\\rm True}})$ (with respect to $\\ensuremath{\\varpi_{\\rm True}}$) and the bias and asymmetry of $p(\\ensuremath{\\varpi_{\\rm True}}|\\ensuremath{\\varpi})$.\n\nobtain Eq.\u00a0 in their Sect. ii under the assumption of uniform stellar volume densities and constant fractional parallax uncertainties (constant $f$). They discuss several distributions for different values of the ratio $\\ensuremath{\\sigma_\\ensuremath{\\varpi}}\/\\ensuremath{\\varpi_{\\rm True}}$. In their Sect. iii they use the expected value of the true parallax given by the distribution $p(\\ensuremath{\\varpi_{\\rm True}}|\\ensuremath{\\varpi})$ in Eq.\u00a0 to infer the expected value of the difference between the true absolute magnitude $M_{True}$ and the value obtained with the naive inversion of the observed parallax. The expected value of this absolute magnitude error is derived and tabulated for the distribution $p(\\ensuremath{\\varpi_{\\rm True}}|\\ensuremath{\\varpi})$ as a function of the fractional parallax uncertainty $f$. This so-called Lutz\u2013Kelker correction is often applied to stellar samples that do not fulfil the assumptions under which it was derived because the stellar volume density is far from uniform at scales larger than a few tens of parsecs and the samples to which the correction is applied are never characterised by a unique value of $f$.\n\n## Astrometry-based luminosity\n\nAn obvious way to avoid the problems associated with the naive inversion of observed parallaxes (see Sect.\u00a0) is to remain in the space of parallaxes (as opposed to that of distances) insofar as this is possible. One example of this approach is the astrometry-based luminosity (ABL) method originating from . The ABL method consists in substituting the absolute magnitudes by a proxy that is linearly dependent on the parallax. The original proposal was\n\n$$a_V\\equiv 10^{0.2M_V}=\\ensuremath{\\varpi}10^{\\frac{m_V+5}{5}},$$\n\nand has been recently used to obtain maximum likelihood estimates of the period-luminosity relation coefficients for Cepheids and RR\u00a0Lyrae stars , and to improve the *Gaia* parallax uncertainties using deconvolved colour-magnitude diagrams as prior . The new astrometry-based luminosity depends linearly on the parallax, and thus its uncertainty can be expected to have an approximately Gaussian distribution if the fractional uncertainty of the apparent magnitude is negligible. This is more often the case than for fractional parallax uncertainties and is in general a good approximation.\n\nUnfortunately, the astrometry-based luminosity can only be applied to the study of the luminosity and can do nothing for the analysis of spatial distributions where distances or tangential velocities are inevitable ingredients.\n\n# Recommendations for using astrometric data\n\nIn this section we provide specific advice on the use of astrometric data in astronomical data analysis. Although the focus is on the use of *Gaia* data, many of the recommendations hold for the analysis of astrometric data in general. To illustrate the recommendations we also provide a small number of worked examples, ranging from very basic demonstrations of the issues mentioned in Sect.\u00a0 to full Bayesian analyses. 
Some of these examples are available in the *Gaia* archive tutorial described in Sect.\u00a0.\n\n## Using *Gaia* astrometric data: how to proceed?\n\nThe fundamental quantity sought when measuring a stellar parallax is the distance to the star in question. However, as discussed in the previous sections the quantity of interest has a non-linear relation to the measurement, $r=1\/\\ensuremath{\\varpi_{\\rm True}}$, and is constrained to be positive, while the measured parallax can be zero or even negative. Starting from a measured parallax which is normally distributed about the true parallax, this leads to a probability density for the simple distance estimator $\\rho=1\/\\ensuremath{\\varpi}$ (see Sect.\u00a0) for which the moments are defined in terms of unknown quantities. This means we cannot calculate the variance of the estimator or the size of a possible bias, which renders the estimator useless from the statistical point of view.\n\n**Our first and main recommendation is thus to always treat the derivation of (astro-)physical parameters from astrometric data, in particular when parallaxes are involved, as an inference problem which should preferably be handled with a full Bayesian approach.**\n\n### Bayesian inference of distances from parallaxes\n\nThe Bayesian approach to inference involves estimating a PDF over the quantity of interest given the observables. In this case we want to estimate the distance, $r$, given the parallax, $\\varpi$. A fuller treatment of this problem has been given in , so only a brief summary will be given here. Using Bayes' theorem we can write the posterior as\n\n$$P(\\ensuremath{r}\\ensuremath{\\hspace{0.07em}\\mid\\hspace{0.07em}}\\ensuremath{\\varpi}) \\,=\\, \\frac{1}{Z}P(\\ensuremath{\\varpi}\\ensuremath{\\hspace{0.07em}\\mid\\hspace{0.07em}}\\ensuremath{r})P(\\ensuremath{r}) \\ .\n\\label{eqn:bayesdistance}$$\n\nFormally, everything is also conditioned on the parallax uncertainty, $\\sigma_\\ensuremath{\\varpi}$, and any relevant constraints or assumptions, but symbols for these are omitted for brevity. The quantity $P(\\ensuremath{\\varpi}\\ensuremath{\\hspace{0.07em}\\mid\\hspace{0.07em}}\\ensuremath{r})$ is the likelihood from Eq.\u00a0. The prior, $P(\\ensuremath{r})$, incorporates our assumptions and $Z$ is a normalisation constant.\n\nIn addition to the likelihood, there are two important choices which must be made to estimate a distance: the choice of prior and the choice of estimator. We will first focus on the former, and start discussing the simplest prior: the uniform unbounded prior. With a uniform boundless (and thus improper) prior on distances the posterior is proportional to the likelihood, and if we choose the mode of the posterior as our estimator, then the solution is mathematically equivalent to maximising the likelihood. However, a boundless uniform prior permits negative distances, which are non-physical, so we should at least truncate it to exclude these values.\n\nThe more measurements we have, or the more precise the measurements are, the narrower the likelihood and the lower the impact of the prior should be on the posterior. It turns out, however, that with the unbounded uniform prior the posterior is improper, i.e.\u00a0it is not normalisable. Consequently, the mean and median are not defined. The only point estimator is the mode, i.e.\u00a0$\\ensuremath{\\ensuremath{r}_{\\rm est}}=1\/\\ensuremath{\\varpi}$ (the standard deviation and the other quantiles are likewise undefined), which is rather restrictive. 
Finally, this Bayesian distance estimate defined for an unbounded uniform prior reduces to the maximum likelihood estimate and coincides with the naive inversion discussed in Sect.\u00a0. The posterior is ill-defined for the unbounded uniform prior for parallaxes. This prior describes an unrealistic situation where the observer is placed at the centre of a distribution of sources that is spherically symmetric and the volume density of which decreases sharply with distance.\n\nThe solution to these problems (non-physical distances, improper posterior) is to use a more appropriate prior. The properties of various priors and estimators have been studied by and . The latter makes a detailed study using a Milky Way model for a prior, and also investigates how the estimates change when the *Gaia* photometric measurements are used in addition to the parallax. One of the least informative priors we can use is the exponentially decreasing space density prior:\n\n$$P(\\ensuremath{r}) \\,=\\, \\begin{dcases}\n \\ \\ \\frac{1}{2\\ensuremath{L}^3}\\,\\ensuremath{r}^2e^{-\\ensuremath{r}\/\\ensuremath{L}} & \\:{\\rm if}~~ r >0 \\\\\n \\ \\ 0 & \\:{\\rm otherwise} \\ .\n\\end{dcases}\n\\label{eqn:r2e-2prior}$$\n\nFor distances $\\ensuremath{r}\\ll \\ensuremath{L}$ this corresponds to a constant space density of stars, with the probability dropping exponentially at distances much larger than the mode (which is at $2\\ensuremath{L}$). Examples of the shape of the posterior for parallaxes of different precisions are shown in and .\n\nThe posterior obtained for the prior defined in Eq.\u00a0 is normalised and thus, we have a choice of point estimators (mean, median, or mode). Also, the distribution is asymmetric, and two quantiles (5% and 95%) rather than the standard deviation are recommended to summarise the uncertainty in the point estimate. The median, as a point estimate, is guaranteed to lie between these quantiles. used this prior, as well as a Milky Way prior, to infer distances for the two million TGAS stars in the first *Gaia* data release. The behaviour of the estimates derived from the exponentially decreasing space density prior can be explored using the interactive tool available in the tutorial described in Sect.\u00a0.\n\nIn general, the introduction of reasonable prior probabilities accounts for the Lutz\u2013Kelker bias, although the inevitable mismatch between the true distribution of parallaxes and the prior used will result in less accurate inferences. In any case, the advantage with respect to the methods discussed in Sect.\u00a0 is clear: *i)* we do not need to tabulate corrections for each prior assuming constant $f$; *ii)* we do not need to dispose of non-positive parallaxes; *iii)* we obtain a proper full posterior distribution with well-defined moments and credible intervals; *iv)* even simple priors such as the exponential decreasing volume density will improve our estimates with respect to the unrealistic prior underlying the maximum likelihood solution $\\ensuremath{\\ensuremath{r}_{\\rm est}}=1\/\\ensuremath{\\varpi}$; and finally, *v)* we obtain estimators that degrade gracefully as the data quality degrades, and with credible intervals that grow with the observational uncertainties until they reach the typical scales of the prior when the observations are non-informative. 
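As a concrete illustration of such a posterior, the sketch below evaluates the product of the Gaussian likelihood and the exponentially decreasing space density prior on a distance grid for a single, hypothetical measurement, and extracts the mode, the median, and the 5% and 95% quantiles (the measured parallax, its uncertainty, and the length scale are illustrative values only):

```python
import numpy as np

# Hypothetical measurement and prior length scale
parallax_obs, sigma, L = 2.0, 0.6, 1.35             # mas, mas, kpc

r = np.linspace(1e-3, 20.0, 200_000)                # distance grid in kpc (1 kpc <-> 1 mas)
dr = r[1] - r[0]
log_post = (-0.5 * ((parallax_obs - 1.0 / r) / sigma) ** 2   # Gaussian likelihood
            + 2.0 * np.log(r) - r / L)                       # EDSD prior: r^2 exp(-r/L)
post = np.exp(log_post - log_post.max())
post /= post.sum() * dr                             # normalise on the grid

cdf = np.cumsum(post) * dr
mode = r[np.argmax(post)]
median, q05, q95 = np.interp([0.5, 0.05, 0.95], cdf, r)
print(f"mode {mode:.3f} kpc, median {median:.3f} kpc, 90% interval [{q05:.3f}, {q95:.3f}] kpc")
```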
These advantages come at the expense of an inference that is more computationally demanding in general (as it requires obtaining the posterior and its summary statistics if needed), the need for a thoughtful choice of a prior distribution, and the analysis of the influence of the prior on the inference results.\n\nFigure\u00a0 shows the distribution of means (left), modes (centre), and medians (right) of the posteriors inferred for a simulation of $10^5$ sources drawn from an exponentially decreasing space density distribution. This simulation represents the unlikely case where the prior is a perfect representation of the true distribution.\n\nFrom a Bayesian perspective the full posterior PDF is the final result of our inference if we only use parallax measurements to infer the distance (see below), and further analyses should make use of it as a whole. This is our recommendation in general: avoid expectations and summaries of the posterior. However, it is often useful to compute summary statistics such as the mean (expectation), median, mode, quantiles, etc., to have an approximate idea of the distribution, but we should not use these summaries for further inference, for example to estimate absolute magnitudes, Cartesian velocities, etc. We recommend inferring the full posterior distributions for these derived quantities using the posterior of the true parallax or of the distance, or using the same Bayesian scheme as for the true parallax as explained in Sect.\u00a0. In Fig.\u00a0 we show the values of the mean (left), mode (centre), and median (right) that we would obtain from a set of $10^4$ simulated observations of a star at 100 parsecs with $f=0.2$. We assume a Gaussian distribution of the observations around the true parallax. The posterior distribution is inferred using Eq.\u00a0 and two priors: a uniform volume density of sources truncated at 1 kpc (results in grey) and a uniform density of sources multiplied by an exponential decay of length scale 200 pc as defined in Eq.\u00a0 (in blue). The expectation values of the histograms are shown as dashed lines of the same colour, with the true value (100 pc) shown as a red dashed line. We see in general that *i)* the truncation has the effect of increasing the number of overestimated distances; *ii)* the three estimators are biased towards larger distances (smaller parallaxes), but the expectation of the mode is significantly closer to the true value; and *iii)* the abrupt truncation of the prior results in a spurious peak of modes at the truncation distance as already discussed in .\n\nFigure\u00a0 and Table\u00a0 show a comparison of the absolute value of the empirical bias and standard deviation associated with some distance estimators discussed in this paper as a function of the measured fractional uncertainty in the parallax. We chose the measured value even though it is a very poor and non-robust estimator because, as stated in Sect.\u00a0, we never have access to the true fractional parallax uncertainty. 
This figure shows the results obtained for $10^7$ sources in the *Gaia\u00a0*\u00a0DR2\u00a0 simulation described in Appendix for the maximum likelihood estimate $\\rho=\\frac{1}{\\ensuremath{\\varpi}}$ with and without the Smith\u2013Eichhorn correction, and for the mode estimates based on the posterior distribution for two priors (a uniform distance prior, UD, with maximum distance $r_{lim}=100$ kpc, and an exponentially decreasing space density prior, EDSD, with $L=1.35$ kpc), neither of which matches the true distribution of sources in the simulation. Only the mode of the posteriors is plotted (but not the mean or the median) for the sake of clarity. The conclusions described next are only valid under the conditions of the exercise and are provided as a demonstration of the caveats and problems described in previous sections, not as a recommendation of the mode of the posterior inferred under the EDSD prior as an estimator. At the risk of repeating ourselves, we emphasise the need to adopt priors adapted to the inference problem at hand. Also, the conclusions only hold for the used simulation (where we generate the true distances and hence can calculate the bias and standard deviation) and need not be representative of the true performance for the real *Gaia\u00a0*data set. They can be summarised as follows:\n\n- the mode of the EDSD prior shows the smallest bias and standard deviation in practically the entire range of estimated fractional parallax uncertainties (in particular, everywhere beyond the range of $f_{\\rm app}$ represented in the plot);\n\n- the Smith\u2013Eichhorn estimate shows pathological biases and standard deviations in the vicinity of the supposedly best-quality measurements at $f_{app}=0$. Away from this region, it provides the next less biased estimates (averaged over bins of $f_{app}$) after the mode of the EDSD posterior;\n\n```latex\n\\begin{table*}[hhh]\\caption{Average bias and standard deviation in three regimes of\n $f_{\\rm app}$ for four distance estimators discussed in this\n paper: (from left to right) the mode of the posterior based on the\n exponentially decreasing space density (EDSD) prior; the mode of the\n posterior of the uniform distance (UD) distribution; the maximum\n likelihood estimate corrected according to \\cite{SmithEichhorn}\n abbreviated as SE; and the maximum likelihood (ML) estimate. The\n wider ranges of $f_{\\rm app}$ exclude narrower ranges shown in\n previous rows of the table.} \\centering\n\\label{global}\n\\begin{tabular}{llcccc}\n\\hline \\hline\nSummary & $f_{\\rm app}$ Range & EDSD & UD & SE & ML \\\\\n\\hline\n\\multirow{3}{*}{Bias} &(-1,1) & -0.2 &9.7 & 34.2 &-0.95 \\\\ \n &(-5,5) & -0.3 &10.7 &-0.34 &-1.2 \\\\\n &(-50,50)& -0.3 &16.2 &-0.4 &-3.8 \\\\\n\\hline\n\\multirow{3}{*}{Std. Deviation} &(-1,1) & 0.4 &8.0 &685.8 & 0.5 \\\\\n &(-5,5) & 0.4 &8.4 & 0.5 & 1.95 \\\\\n &(-50,50)& 0.4 &10.6 & 0.4 & 17.1 \\\\\n\\hline\n\\end{tabular}\n\\end{table*}\n```\n\n### Bayesian inference of distances from parallaxes and complementary information\n\nThe methodology recommended in the previous paragraphs is useful when we only have observed parallaxes to infer distances. There are, however, common situations in astronomy where the parallaxes are only one of many observables, and distances are not the final goal of the inference problem but a means to achieve it. In this context, we recommend an extension of the classical Bayesian inference methods described in the previous section. 
These problems are characterised by a set of observables and associated uncertainties (that include but are not restricted to parallaxes) and a series of parameters (the values of which are unknown a priori) with complex interdependence relationships amongst them. Some of these parameters will be the ultimate goal of the inference process. Other parameters do play an important role, but we are not interested in their particular values, and we call them *nuisance* parameters, following the literature. For example, in determining the shape of a stellar association, the individual stellar distances are not relevant by themselves, but only insomuch as we need them to achieve our objective. We show below how we deal with the nuisance parameters. The interested reader can find applications of the methodology described in this section to inferring the coefficients of period-luminosity relations in and . Also, the same methodology (a hierarchical Bayesian model) is applied in where the constraint on the distances comes not from a period-luminosity relation, but from the relatively small dispersion of the absolute magnitudes and colour indices of red clump stars. A last example of this methodology can be found in where the constraint comes from modelling the colour-magnitude diagram.\n\nJust as in the previous section where we aimed at estimating distances from parallaxes alone, the two key elements in this case are the definitions of a likelihood and a prior. The likelihood represents the probability distribution of the observables given the model parameters. Typically, the likelihood is based on a *generative* or *forward* model for the data. Such models predict the data from our assumptions about the physical process that generates the true values (i.e. the distribution of stars in space) and our knowledge of the measurement process (e.g. justifying the assumption of a normal distribution of the observed parallax around its true value). Forward models can be used to generate arbitrarily large synthetic data sets for a given set of the parameters. In this case, however, where we are concerned with several types of measurements that depend on parameters other than the distance, the likelihood term will be in general more complex than in Sect.\u00a0 and may include probabilistic dependencies between the parameters. The term *hierarchical* or *multi-level* model is often used to refer to this kind of model.\n\nLet us illustrate the concept of hierarchical models with a simple extension of the Bayesian model described in Sect.\u00a0, where instead of assuming a fixed value of the prior length scale $L$ in Eq.\u00a0, we make it another parameter of the model and try to infer it. Let us further assume that we have a set of $N$ parallax measurements $\\{\\ensuremath{\\varpi}_k\\}$, one for each of a sample of $N$ stars. In this case, the likelihood can be written as $$\\begin{aligned}\np(\\{\\ensuremath{\\varpi}_k\\}|\\{r_k\\},L) & = & p(\\{\\ensuremath{\\varpi}_k\\}\\mid \\{r_k\\})\\cdot p(\\{r_k\\}\\mid L) \\nonumber \\\\\n & = & p(\\{\\ensuremath{\\varpi}_k\\}\\mid \\{r_k\\})\\cdot \\prod_{k=1}^N p(r_k\\mid L), \n\\end{aligned}$$ where $r_k$ is the true unknown distance to the $k$th star. We note that very often the measured parallaxes are assumed independent, and thus $p(\\{\\ensuremath{\\varpi}_k\\}\\mid \\{r_k\\})$ is written as the product $\\prod_{k=1}^N p(\\ensuremath{\\varpi}_k \\mid r_k)$. 
This is incorrect in general for *Gaia\u00a0* parallaxes because the parallax measurements are not independent. As described in and Sect.\u00a0 of this paper, there are regional correlations amongst them (see Sect.\u00a0), but for the sake of simplicity let us assume the sample of $N$ measurements is spread all over the celestial sphere such that the correlations average out and can be neglected. Hence, we write\n\n$$p(\\{\\ensuremath{\\varpi}_k\\}|\\{r_k\\},L) = \\prod_{k=1}^N p(\\ensuremath{\\varpi}_k \\mid r_k)\\cdot \np(r_k\\mid L).\n\\label{eq:hbm-ex1}$$\n\nUnder the assumption of Gaussian uncertainties, the first term in the product is given by Eq.\u00a0, while the second is given by Eq.\u00a0.\n\nThis likelihood can be represented by a simple directed graph (see Fig.\u00a0) that provides information about the conditional dependencies amongst the parameters. The shaded nodes represent the observations, the open circles represent model parameters, and the small black circles at the origin of the arrows represent model constants. The arrows denote conditional dependence relations, and the plate notation indicates repetition over the measurements $k$.\n\nThe next key element is, as in Sect.\u00a0, the prior. According to Fig.\u00a0, the only parameter that needs a prior specification is the one without a parent node: $L$. The rest of the arcs in the graph are defined in the likelihood term (Eq.\u00a0). If the sample of $N$ stars were representative of the inner Galactic halo for example, we could use a Gaussian prior centred at $\\approx 30$ kpc . Such a hierarchical model can potentially shrink the individual parallax uncertainties by incorporating the constraint on the distribution of distances.\n\nIf we are only interested in the individual distances $r_k$, we can consider $L$ as a nuisance parameter:\n\n$$\\begin{aligned}\np({\\ensuremath{\\varpi_{\\rm True}}}_{;k} \\mid \\{\\ensuremath{\\varpi}_{k}\\}) & = & \\int p({\\ensuremath{\\varpi_{\\rm True}}}_{;k},L \\mid \\{\\ensuremath{\\varpi}_{k}\\})\\cdot {\\rm d}L \\label{eq:marginalisation} \\\\\n & = & \\int p({\\ensuremath{\\varpi_{\\rm True}}}_{;k} \\mid \\{\\ensuremath{\\varpi}_{k}\\},L)\\cdot p(L\\mid\\{\\ensuremath{\\varpi}_{k}\\})\\cdot {\\rm d}L. \\nonumber\n\\end{aligned}$$\n\nThis integral (known as the marginalisation integral) allows us to write the posterior we are interested in without having to fix the value of $L$ to any particular value. Depending on the objective of the inference, we could have alternatively determined the posterior distribution of $L$ by marginalising the individual distances with an $N$-dimensional integral over the $\\{r_k\\}$ parameters.\n\nIn parameter spaces of dimensionality greater than 3\u20134 the computation of the possibly marginalised posteriors and\/or evidence requires efficient sampling schemes like those inspired in Markov chain Monte Carlo (MCMC) methods to avoid large numbers of calculations in regions of parameter space with negligible contributions to the posterior. This adds to the higher computational burden of the Bayesian inference method mentioned in the previous section.\n\nThe previous simple example can be extended to include more levels in the hierarchy and, more importantly, more parameters and measurement types. 
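A minimal numerical sketch of this hierarchical model is given below. It uses simulated data, a flat prior on L, and the same independence assumption as above, and it computes the marginal posterior of the length scale by carrying out the N one-dimensional integrals over the individual distances on a grid (the factorised form of the marginalisation over the individual distances mentioned above):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate N stars from an EDSD distribution with true length scale L_true (kpc) and
# observe their parallaxes (mas) with Gaussian errors; all values are illustrative.
N, L_true, sigma = 200, 1.0, 0.3
r_true = rng.gamma(shape=3.0, scale=L_true, size=N)       # p(r|L) proportional to r^2 exp(-r/L)
parallax_obs = rng.normal(1.0 / r_true, sigma)

r_grid = np.linspace(1e-3, 30.0, 4000)                    # distance grid (kpc)
dr = r_grid[1] - r_grid[0]
L_grid = np.linspace(0.3, 3.0, 300)                       # hyper-parameter grid (kpc)

# Likelihood p(parallax_k | r) for every star on the distance grid
# (constant normalisation factors omitted; they do not depend on L)
like = np.exp(-0.5 * ((parallax_obs[:, None] - 1.0 / r_grid) / sigma) ** 2)

log_post_L = np.empty_like(L_grid)
for i, L in enumerate(L_grid):
    prior_r = r_grid**2 * np.exp(-r_grid / L) / (2.0 * L**3)           # p(r | L)
    # p(parallax_k | L) = integral over r of p(parallax_k | r) p(r | L), done on the grid
    log_post_L[i] = np.sum(np.log((like * prior_r).sum(axis=1) * dr))  # flat prior on L

print(f"posterior mode of L: {L_grid[np.argmax(log_post_L)]:.2f} kpc (true value {L_true})")
```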
Section\u00a0 and develop in greater detail two examples of hierarchical models of direct applicability in the *Gaia\u00a0*context.\n\n## Absolute magnitudes, tangential velocities, and other derived quantities.\n\nThe approaches described in the previous sections can be applied to any quantity we want to estimate using the parallax. For example, if we want to infer the absolute magnitude $M_{\\rm G}$, then given the measured apparent magnitude $G$\u00a0and line-of-sight extinction $A_{\\rm G}$, the true parallax $\\varpi_{\\rm True}$\u00a0is related to $M_{\\rm G}$\u00a0via the conservation of flux\n\n$$5\\log\\ensuremath{\\varpi_{\\rm True}}\\,=\\, \\ensuremath{M_{\\rm G}}+ \\ensuremath{A_{\\rm G}}- \\ensuremath{G}- 5 \\ .\n\\label{eqn:mg}$$\n\nAssuming for simplicity that $G$\u00a0and $A_{\\rm G}$\u00a0are known, Bayes' theorem gives the posterior on $M_{\\rm G}$\u00a0as\n\n$$P(\\ensuremath{M_{\\rm G}}\\ensuremath{\\hspace{0.07em}\\mid\\hspace{0.07em}}\\ensuremath{\\varpi}, \\ensuremath{G}, \\ensuremath{A_{\\rm G}}) \\,=\\ \\frac{1}{Z}P(\\ensuremath{\\varpi}\\ensuremath{\\hspace{0.07em}\\mid\\hspace{0.07em}}\\ensuremath{M_{\\rm G}}, \\ensuremath{A_{\\rm G}}, \\ensuremath{G})P(\\ensuremath{M_{\\rm G}})\n\\label{eqn:bayes_mg}\n,$$\n\nwhere the likelihood is still the usual Gaussian distribution for the parallax (Eq.\u00a0) in which the true parallax is given by Eq.\u00a0. As this expression is non-linear, we again obtain an asymmetric posterior PDF over $M_{\\rm G}$, the exact shape of which also depends on the prior.\n\nThe inference of other quantities can be approached in the same way. In general we need to consider a multi-dimensional likelihood to accommodate the measurement uncertainties (and correlations) in all observed quantities. For instance, the set of parameters ${\\boldsymbol \\theta}=\\{r,v,\\phi\\}$ (distance, tangential speed, and direction measured anticlockwise from north) can be inferred from the *Gaia\u00a0*astrometric measurements $\\mathbf{o}=\\{\\ensuremath{\\varpi},\\mu_{\\alpha^*},\\mu_{\\delta}\\}$ (where $\\mu_{\\alpha^*}$ and $\\mu_{\\delta}$ are the measured proper motions) using the likelihood\n\n$$p(\\mathbf{o}\\mid \\boldsymbol{\\theta})=\n \\mathcal{N}(\\boldsymbol{\\theta},\\Sigma)=\\frac{1}{(2\\pi)^{3\/2}|\\Sigma|^{1\/2}}\n \\exp\\left(-\\frac{1}{2}(\\mathbf{o}-\\mathbf{x})^T \\Sigma^{-1}\n (\\mathbf{o}-\\mathbf{x})\\right),\n\\label{eq:tanvel}$$\n\nwhere $\\mathcal{N}$ denotes the Gaussian distribution, $\\Sigma$ is the full (non-diagonal) covariance matrix provided as part of the *Gaia\u00a0*Data Release, and\n\n$$\\mathbf{x} = \\left(\\frac{1}{r},\\frac{v\\sin(\\phi)}{r},\\frac{v\\cos(\\phi)}{r}\\right)$$\n\nis the vector of model parameters geometrically transformed into the space of observables in noise-free conditions. Equation\u00a0 assumes correlated Gaussian uncertainties in the measurements .\n\nThe posterior distribution can then be obtained by multiplying the likelihood by a suitable prior. The simplest assumption would be a separable prior such that $p(\\boldsymbol\\theta)=p(r)\\cdot\n p(v)\\cdot p(\\phi)$, where $p(v)$ and $p(\\phi)$ should reflect our knowledge about the dynamical properties of the population from where the source or sources were drawn (e.g.\u00a0thin disk, thick disk, bulge, halo). 
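As an illustration, the likelihood above takes only a few lines of code. The sketch below is a minimal version in which the variable names are ours and the conversions needed to bring the parallax, the proper motions, and the tangential velocity onto mutually consistent units are left implicit.\n\n```python\nimport numpy as np\nfrom scipy.stats import multivariate_normal\n\ndef log_likelihood(theta, obs, cov):\n    # theta = (r, v, phi): distance, tangential speed, direction from north;\n    # obs = (parallax, pmra, pmdec); cov is the full covariance matrix of\n    # these three quantities taken from the Gaia catalogue\n    r, v, phi = theta\n    if r <= 0 or v < 0:\n        return -np.inf\n    x = np.array([1.0 \/ r, v * np.sin(phi) \/ r, v * np.cos(phi) \/ r])\n    return multivariate_normal.logpdf(obs, mean=x, cov=cov)\n```\n\nAdding $\\log p(r) + \\log p(v) + \\log p(\\phi)$ to this value gives the log-posterior up to an additive constant, which can then be explored with standard sampling tools. 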
Again, hierarchical models can be used in the analysis of samples in order to infer the population properties (prior hyper-parameters) themselves.\n\nSimilar procedures can be followed to infer kinematic energies or full 3D velocities when the forward model is extended with radial velocity measurements.\n\n## Further recommendations\n\nIn this subsection we provide some further recommendations and guidance in the context of the Bayesian approach outlined above. Although powerful, inference with Bayesian methods usually comes at a large computational cost. Some of the recommendations below can also be seen in the light of taking data analysis approaches that approximate the Bayesian methodology and can be much faster.\n\n#### **Where possible, formulate the problem in the data space**\n\nThe problems caused by the ill-defined uncertainties on quantities derived from parallaxes can be avoided by carrying out the analysis in the data space where the behaviour of the uncertainties is well understood. This means that the quantities to be inferred are treated as parameters in a forward or generative model that is used to predict the data. Some adjustment process then leads to estimates of the parameters. A very simple forward modelling example can be found in who studied the luminosity calibrations of O-stars by predicting the expected Hipparcos<\/span> parallaxes from the assumed luminosity calibration and comparing those to the measured parallaxes. A more complex example can be found in who present a kinematic model for clusters which describes the velocity field of the cluster members and predicts the proper motions, accounting for the astrometric uncertainties and covariances. As shown in previous sections, the Bayesian approach naturally lends itself to (and in fact requires) forward modelling of the data.\n\nForward modelling has the added advantage that it forces us to consider the proper formulation of the questions asked from the astrometric data. This naturally leads to the insight that often the explicit knowledge of the distances to sources is not of interest. For example, in the case an assumed luminosity of the O-stars and their known apparent magnitude is sufficient to predict the observed parallaxes. In more complex analyses the distances to sources can often be treated as nuisance parameters, which in a Bayesian setting can be marginalised out of the posterior.\n\n#### **Use all relevant information**\n\nAlthough the parallax has a direct relation to the distance of a star, it is not the only measurement that contains distance information. The apparent magnitude and the colour of a star carry significant information on the plausible distances at which it can be located as the colour provides information on plausible absolute magnitude values for the star. This is used in two papers in which the information contained in the photometry of stars is combined with *Gaia\u00a0*\u00a0DR1\u00a0 parallax information to arrive at more precise representations of the colour-magnitude diagram than can be obtained through the parallaxes alone. Similarly, proper motions contain distance information which can be used to good effect to derive more precise distances or luminosities for open cluster members . The Bayesian approach naturally allows for the combination of different types of data in the modelling of the problem at hand. It should be emphasised, however, that adding additional data necessitates increasing the model complexity (e.g. 
to include the relation among apparent magnitude, absolute magnitude, extinction, and parallax) which leads to the need to make more assumptions.\n\n#### **Incorporate a proper prior**\n\ndiscussed the simplest case of inferring the distance to a source from a single observed parallax. He showed that if the minimal prior that $r$ should be positive is used, the resulting posterior is not normalisable and hence has no mean, variance, other moments, or percentiles. This behaviour of the posterior is not limited to inference of distance. Examples of other quantities that are non-linearly related to parallax are absolute magnitude ($M\\propto\\log_{10}\\varpi$), tangential velocity ($v_\\mathrm{T}\\propto1\/\\ensuremath{\\varpi}$), kinetic energy, and angular momentum (both proportional to $1\/\\ensuremath{\\varpi}^2$ when determined relative to the observer). In all these cases it can be shown that the posterior for an improper uniform prior is not normalisable and has no moments. The conclusion is that a proper prior on the parameters to be estimated should be included to ensure that the posterior represents a normalised probability distribution. In general, using unconstrained non-informative priors in Bayesian data analysis (such as the one on $r$ above) is bad practice. Inevitably, there is always a mismatch between the prior and the true distribution (if there were not, there would be no need to do the inference). This will unavoidably lead to some biases, although the better the data, the smaller these will be. We cannot expect to do much better than our prior knowledge if we only have access to poor data. The Bayesian approach guarantees a graceful transition of the posterior into the prior as the data quality degrades.\n\nThe above discussion raises the question of what priors to include for a specific inference problem. Simple recipes cannot be given as the prior to be used depends on the problem at hand and the information already available. When deciding on a prior we should keep in mind that some information is always available. Even if a parallax is only available for a single star, we know that it cannot be located at arbitrary distances. It is unlikely to be much closer than 1 pc (although we cannot fully exclude the presence of faint stars closer than Proxima Centauri) and it must be located at a finite distance (we can observe the star). This would suggest a non-informative uniform prior on $r$ with liberal lower and upper bounds (such that the prior is normalised). However, as pointed out in , a uniform distribution in $r$ implies a space density of stars that falls off as $1\/r^2$ from the position of the Sun, which is of course physically implausible. Hence one should assume a reasonable distribution of stars in space and construct the prior on $r$ accordingly. presents a prior derived from a uniform space density with an exponential cut-off which was used in to derive distances to stars for which parallaxes are listed in *Gaia\u00a0*\u00a0DR1\u00a0. This prior should not be used indiscriminately; at the very least we should carefully consider the choice of the scale length $L$ (or leave that as a parameter to be estimated, as described in Sect.\u00a0) and in most cases a more tailored prior based on our broad knowledge of the distribution of a given stellar population in the Milky Way would be better. 
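To make the role of such a prior explicit, consider a single star with observed parallax $\\ensuremath{\\varpi}$ and uncertainty $\\sigma_\\varpi$. With the exponentially decreasing space density prior just mentioned, and in units such that the true parallax equals $1\/r$, the unnormalised posterior is\n\n$$p(r \\mid \\ensuremath{\\varpi}, \\sigma_\\varpi, L) \\propto r^2 \\exp\\left(-\\frac{r}{L}\\right)\\exp\\left(-\\frac{(\\ensuremath{\\varpi}- 1\/r)^2}{2\\sigma_\\varpi^2}\\right), \\qquad r>0,$$\n\nand, under these assumptions, its mode is the relevant root of the cubic $r^3\/L - 2r^2 + (\\ensuremath{\\varpi}\/\\sigma_\\varpi^2)\\,r - 1\/\\sigma_\\varpi^2 = 0$. In contrast to the posterior obtained from an improper uniform prior, this posterior is proper and has finite moments even for zero or negative observed parallaxes. 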
The tutorial cases introduced in the next section contain some more examples of priors on distance and other astrophysical parameters.\n\nThe next two items discuss simplifications to the Bayesian approach that nevertheless need to be justified carefully.\n\n#### **Maximum likelihood modelling**\n\nWe have seen that priors are the bridges that allow us to go from the probability of the observations given the unknown parameters to the desired probability of the parameters given the observations. It is only based on this probability distribution that we can make statements about credible intervals (uncertainties) for the inferred quantities, or select amongst competing models (a topic that is beyond the scope of this paper). However, if making prior-free inferences is preferred, then maximising the likelihood is the only alternative. The kinematic modelling presented in is a non-trivial example of this. A more complex example can be found in . The ML approach, just as the Bayesian framework described above, allows the combination of different types of data, and accounts for selection functions or missing data. We have seen in Sect.\u00a0 that the maximum likelihood estimate of the distance given a single parallax measurement is $\\rho=\\frac{1}{\\ensuremath{\\varpi}}$ and this is a poor estimator of the distance except for subsets of very accurate measurements. In general, the Bayesian and the maximum likelihood estimates coincide in the limit of very small uncertainties or infinite numbers of measurements. In such limits, the maximum likelihood estimate is simpler to obtain, although its computational cost may still be large as the ML method is often equivalent to a complex optimisation problem involving a multi-dimensional function of many parameters.\n\n#### **Selecting the 'best' data**\n\nAnalyses that use parallax data are often restricted to positive parallaxes with relative uncertainties below some limit (typically 20 %). This allows working in a regime where the uncertainties of derived quantities such as distance, tangential velocities, luminosity, etc.,\u00a0are thought to be manageable, so that the analysis can be carried out in the space of astrophysical variables rather than in the data space. Truncation on relative parallax error might be justified in an exploratory phase of the data analysis; however, there are a number of reasons why this approach is almost never advisable. Even at relative uncertainties below $0.2$ the quantities calculated from the parallax can be biased and suffer from a large variance . More importantly, however, the selection of 'good' parallaxes will bias the sample studied to nearby and\/or bright representatives of any stellar population, and the selection may lead to discarding a very large fraction of the potential sample. Hence any inferences based on such data will be severely biased. This is clearly illustrated in Fig.\u00a0 where, even for a less strict truncation of stars with relative uncertainties below 50%, the distribution of distances of the resulting sample is clearly biased with respect to the original sample.\n\n#### **Accounting for data selection and incompleteness**\n\nAlthough the *Gaia* survey is designed such that the only selection of sources is that they are brighter than the survey limit at $G=20.7$, the combination of the onboard detection algorithm, the *Gaia* scanning law, and the filtering of results of insufficient quality prior to a data release, leads to a complex selection function, especially in the early data releases. 
This selection function should be taken into account in any data analysis and this is most naturally done as part of a Bayesian analysis where the selection function is part of the forward model that predicts the data given the model parameters. Precise prescriptions of the selection functions are not foreseen to be part of the data release documentation. Hence, selection function parameters need to be included as part of the parameters inferred by the Bayesian analysis or, if this is not possible, the selection functions have to be borne in mind when interpreting the results.\n\n#### **Covariances in the uncertainties**\n\nAll the uncertainties on the astrometric data quoted in the *Gaia* catalogue are presented as full covariance matrices, where the diagonal elements represent the standard uncertainties on the astrometric parameters, and the off-diagonal elements the covariances or correlations between the uncertainties. This amounts to treating the astrometric data vector as having been drawn from a multivariate normal distribution with a covariance matrix as given in the *Gaia* catalogue. The covariances are most easily handled in the data space as part of the likelihood . If the covariances in the astrometric uncertainties are not accounted for, we can easily be misled, for example, by spurious features in the inferred velocity field of an open cluster .\n\nThe uncertainties are also correlated from one source to the next, especially over small distances on the sky. A study of the star-to-star correlations in the parallax uncertainties in *Gaia\u00a0*\u00a0DR1\u00a0 was done for the Kepler field where independent and precise asteroseismic distances to the stars are available, enabling the authors to derive an expression for the correlation strength and spatial scale. This expression can be used for studies of the Kepler field, but care should be taken when extrapolating to other fields on the sky. The functional form for the star-to-star correlations used by could be introduced as part of the forward model, with the parameters as a good first guess.\n\nFor *Gaia\u00a0*\u00a0DR1\u00a0 the length scale for the star-to-star correlations was estimated to vary from subdegree scales to tens of degrees on the sky, where derived the correlation function over length scales of $\\sim0.2$ to $\\sim10$ degrees. For *Gaia\u00a0*\u00a0DR2\u00a0 estimate that the spatial correlations extend over scales of below 1 degree to 10\u201320 degrees.\n\n#### **Accounting for non-Gaussian and\/or systematic uncertainties**\n\nAlthough the bulk of the sources in the *Gaia* catalogue have normally distributed uncertainties, there is a significant fraction for which the uncertainties exhibit non-Gaussian behaviour (e.g. when uncertainties are over- or underestimated). This can be accounted for in the data analysis by including the uncertainties as part of the forward model or by explicitly modelling an outlier population. Similarly, systematic uncertainties can be included as part of the forward model. For example, include a global parallax zero-point as part of their probabilistic model used to analyse the period-luminosity relation of RR Lyrae stars with *Gaia\u00a0*\u00a0DR1\u00a0 data. An alternative approach to the investigation of systematics in the parallaxes (or distance estimates obtained from\u00a0photometry or spectroscopy, for example) is presented in and is applied to *Gaia* DR1 in . 
In this case we can consider that for samples covering a significant fraction of the sky, any systematic error in the estimated distances to the stars will show up as correlations in their 3D velocity components. The presence of such correlations can be used to make inferences about systematic errors, for example, in the parallaxes.\n\nSystematic uncertainties are more difficult to handle as they may show variations as a function of source brightness or celestial position, they may be correlated across neighbouring sources, and they may not be well understood for a given early *Gaia* data release. In general the information needed to accurately model systematic uncertainties or uncertainty correlations between sources may not be readily available. This information can be obtained from a comparison between *Gaia\u00a0* and other high-precision data or by examining, for example, plots of the parallax or proper motion averaged over sky regions for samples where the true parallax or proper motion values can be assumed to be known, such as zero parallax and proper motion for quasars (see for examples).\n\nTwo special cases should be mentioned: when the sample is well distributed over the sky, we can safely assume that the local systematics vanish and that only the global parallax zero-point need to be subtracted; locally, we may be interested not by the absolute value of the parallaxes, but by the relative ones, in which case the difference between parallaxes and their average removes part of the systematics.\n\nThere is no general recipe for dealing with non-Gaussian uncertainties or correlated systematic uncertainties. The main advice we can give here is to proceed with the analysis of the astrometric data as they are, but to keep in mind the systematics and correlations discussed in Sect.\u00a0 when interpreting the results. Alternatively, the forward model can be extended to include systematic and correlation terms for which parameters are also to be estimated. Such models can be guided by the studies of systematic uncertainties mentioned above.\n\n#### **Testing with simulations**\n\nFinally, we strongly advise that the inference problem at hand should be investigated through simulated data, and that the simulations performed should be as close as possible to the real data (in particular correctly modelling the uncertainties and selection effects). The simulations allow the analysis method to be developed and tested for accuracy. However, the performance of the analysis method should be interpreted strictly in terms of how well the assumed model explains the simulated observed data. That is, we should not fall into the trap of trying to tune the analysis method to get an answer that is as close to the 'truth' as possible. In real problems we can only judge the adequacy of a model and its parameter values by how well they predict the observed data (to within the observational uncertainties, it should be stressed, as we should avoid 'over-fitting' the data).\n\n# Using astrometric data: practical examples\n\nWe introduce here a few worked examples to illustrate the points that were made in the previous section. These examples are available in full detail as online tutorials in the form of source code, accompanied by much more extensive explanation than can be provided here. The source code and corresponding Python and R Notebooks can be found at the following URL: . 
In all cases the reader is strongly encouraged to download the source code and experiment with modifications of the simulated input data and\/or the parameter choices in the inference methods.\n\n## Comparison of distance and distance modulus estimators\n\nThe use of the Bayesian inference with non-informative priors described in Sect.\u00a0 is illustrated and implemented in the following tutorial . The tutorial compares the performance of Bayesian distance estimation methods with the Smith\u2013Eichhorn transformation (Sect.\u00a0) and the naive parallax inversion.\n\nThe tutorial contains a Graphical User Interface that easily visualises and compares the behaviour of all these estimators for a given parallax and uncertainty. For the Bayesian inference, estimations using the mode and the median are provided together with a 90% confidence interval. The tutorial also provides a library, *pyrallaxes*, with the implementation of all these estimators. The library can easily be customised to implement other priors for the Bayesian inference.\n\nAdditionally, an implementation of the Bayesian distance estimator using the *Exponentially Decreasing Space Density* prior introduced in will be available in TopCat () and Stilts () from respectively versions 4.6 and 3.1-3 onwards.\n\n## Inferring the distance to a single source using just the parallax\n\nThe issues surrounding the use of a parallax to infer a distance were explored in and applied to simulated *Gaia\u00a0*data in and to TGAS (*Gaia\u00a0*\u00a0DR1\u00a0) in . A tutorial exploring this is provided at . This can be used to investigate how the posterior depends on the prior and the data. It also includes a simple example of a hierarchical model to avoid specifying the exact length scale of a distance prior.\n\n## Inferring the distance to and size of a cluster using just the parallaxes\n\nIn many applications we are more interested in the average distance to a cluster (or a group of stars) rather than to the individual stars. In this case a basic mistake to be avoided is estimating the individual distances (whatever the method) and then averaging these individual values. A more correct approach is to average the individual parallaxes and then to obtain a distance from this average value. However, a much better solution is to set up a model for the cluster, which would use as parameters the distance to its centre, for example, and some measure of its size, and to infer its parameters. This is explored in the tutorial at . This introduces the overall problem and derives a general solution. Code is implemented for the specific case of a model which assumes a random isotropic distribution of the true stars from the centre of the cluster. This model has two parameters, the cluster distance and the cluster size. The solution uses a small angle approximation to make the problem simpler, although it is easily extended to the case of clusters with a significant angular extent. It is applied to the Pleiades selection from the *Gaia\u00a0*\u00a0DR1\u00a0 main release paper . The tutorial also considers the problem of how to accommodate correlations in the measured parallaxes of different stars. Finally, it also shows the results from a classical and a naive combination of stellar parallaxes to estimate the cluster distance. 
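The difference between the first two approaches is easy to see on simulated data. The following toy example (not the Pleiades selection used in the tutorial; all numbers are invented for illustration) places a cluster at 500 pc and compares the two estimators.\n\n```python\nimport numpy as np\n\nrng = np.random.default_rng(1)\nr_true = rng.normal(500.0, 10.0, size=50)   # member distances in pc\nplx = rng.normal(1.0 \/ r_true, 0.3e-3)      # parallaxes in arcsec, 0.3 mas uncertainties\n\nd_naive = np.mean(1.0 \/ plx)   # mean of individually inverted parallaxes: inherits the 1\/parallax bias\nd_mean = 1.0 \/ np.mean(plx)    # inverse of the mean parallax: bias shrinks as the sample grows\nprint(d_naive, d_mean)\n```\n\nThe model-based solution developed in the tutorial goes one step further: by fitting the cluster distance and size directly to all the parallaxes, and by allowing for correlations among them, it also propagates the measurement uncertainties consistently. 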
The combination of parallaxes and proper motions of individual stars in a cluster into a single solution for the mean parallax and proper motion is treated as an iterative least squares problem in (see their Appendix A for details).\n\n## Inferring the distance and velocity of a source using the parallax and proper motions\n\nThe velocity (speed and direction) of a source in the plane of the sky can be inferred from measurements of its parallax and two proper motions. The uncertainties in all three affect the inferred velocity and its uncertainty. Moreover, as the *Gaia\u00a0*parallaxes and proper motions generally have non-zero correlations, these must also be taken into account. This can be done in a straightforward manner in the Bayesian approach, as is shown in the tutorial at . This sets up a three-parameter model (distance, speed, angle) for a source. Using the three measurements (parallax, two proper motions) in a multivariate Gaussian likelihood, and suitable priors on the parameters, we can compute the trivariate posterior. This is sampled in the posterior using an MCMC algorithm for a set of stars.\n\n## Luminosity calibration\n\nIn this tutorial () the problem of inferring (or calibrating) the mean absolute magnitude of a specific class of stars is treated. The measurements at hand are the parallax and apparent magnitude for each of the stars in the sample and the task is to infer their mean absolute magnitude $\\mu_M$ and the spread $\\sigma_M$ around this mean. This is very similar to the problem that and treated, and a Bayesian approach to solving this problem was presented by (albeit with the use of improper priors, which we again note is bad practice). A more complex version of this problem (accounting for extinction and a contaminating population of stars) and its Bayesian solution was presented in . In this tutorial three important points are illustrated:\n\n- Often the explicit computation of the distances to stars is not of interest. In this example only the mean absolute magnitude of the stars is to be estimated, and the forward modelling approach as part of the Bayesian inference avoids the need to calculate or estimate distances.\n\n- The data for all the stars carry information on the mean absolute magnitude, including the negative parallaxes or parallaxes with large relative errors. This information can naturally be incorporated in a forward modelling approach (in this example as part of a Bayesian inference method), thus avoiding the introduction of truncation biases caused by the selection of stars with 'good' parallaxes.\n\n- If the selection function is known (in this example the survey is magnitude limited), it can and should be included in the forward modelling. This accounts for sample selection biases that would otherwise occur.\n\n## Period-luminosity relation\n\nIn this tutorial () we include a hierarchical model to infer period-luminosity-metallicity relations for classical pulsating stars. The full model can be applied to fundamental mode RR\u00a0Lyrae stars and the abridged version (without the metallicity dependence) is suitable for samples of classical Cepheids. We include the data set for the RR\u00a0Lyrae stars described and used for inference in and . It contains a sample of 200 stars (including fundamental radial pulsators but also *fundamentalised* first overtone pulsators) with measured periods, apparent magnitudes in the $K$-band, metallicities, and parallaxes from the TGAS catalogue. 
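The core of such a model is the forward prediction of each star's parallax from its period and metallicity. The sketch below is a minimal version of that step only, with an assumed linear form of the relation and symbols of our own choosing; the actual parametrisation and the priors are described in the tutorial.\n\n```python\nimport numpy as np\n\ndef predicted_parallax(theta, logP, feh, m_K, A_K=0.0):\n    # theta = (a, b, c): zero point, period slope, and metallicity slope of an\n    # assumed linear relation M_K = a + b * logP + c * [Fe\/H]\n    a, b, c = theta\n    M_K = a + b * logP + c * feh\n    # flux conservation (cf. the relation for M_G above); parallax in arcsec\n    return 10.0 ** (0.2 * (M_K + A_K - m_K - 5.0))\n\ndef log_likelihood(theta, logP, feh, m_K, plx, sigma_plx):\n    # Gaussian likelihood of the observed (possibly negative) parallaxes\n    resid = plx - predicted_parallax(theta, logP, feh, m_K)\n    return np.sum(-0.5 * (resid \/ sigma_plx) ** 2)\n```\n\nPriors on $(a, b, c)$ and on any further parameters then complete the hierarchical model, and dropping the metallicity term gives the abridged version suitable for classical Cepheids. 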
In the tutorial, we describe the hierarchical model and discuss potential biases in the data set. Finally, we analyse the sensitivity of the results to different choices of the priors and related parameters.\n\n# Conclusions\n\n*Gaia\u00a0*data releases will provide a huge increase of astrometric data available for the scientific community. More than a billion parallaxes and proper motions allow new openings into many astronomical topics. In most cases astronomers are exploiting the *Gaia\u00a0*catalogues to obtain physical quantities such as distance and velocity. Although it is easy to extract from the *Gaia\u00a0*data, it is well known that direct inversion of parallax will lead to biases, which become more and more significant the larger the relative parallax uncertainty. While *Gaia\u00a0*will provide high-quality astrometric measurements, hundreds of millions of stars have precisions which require proper statistical treatment in order to avoid biased conclusions. The aim of this paper is to guide the users of *Gaia\u00a0*data to handle astrometric data correctly.\n\nIn this study we summarise methods used to avoid biases when converting astrometric data into physical quantities. Starting from simple, non-recommended, sample truncation to more complex methods, the biases associated with the methods are presented. The basic recommendation is to treat derivation of physical quantities from astrometric measurements as an inference problem, which should be preferably handled with Bayesian approach. The recommended methods are described in Sect.\u00a0 with a summary in Sect.\u00a0. To aid the users further, Sect.\u00a0 contains practical examples with links to Python and R code.\n\n*Gaia\u00a0*will provide fundamental data for many fields of astronomy. Further data releases will provide more data, and more precise data. Nevertheless, for full use of the potential it will always be necessary to pay careful attention to the statistical treatment of parallaxes and proper motions. The purpose of this paper is to help astronomers find the correct approach.\n\n[^1]: \n\n[^2]: For a subset of the data, only two parameters (right ascension $\\alpha$ and declination $\\delta$) could be determined.\n\n[^3]: ","meta":{"dup_signals":{"dup_doc_count":12,"dup_dump_count":3,"dup_details":{"curated_sources":1,"2024-22":1,"unknown":10}},"filename":"out\/1804.09376_extract_32964_Arxiv.tex.md"},"subset":"arxiv"} +{"text":"abstract: We consider the problem of optimally executing an order involving multiple crypto-assets, sometimes called tokens, on a network of multiple constant function market makers (CFMMs). When we ignore the fixed cost associated with executing an order on a CFMM, this optimal routing problem can be cast as a convex optimization problem, which is computationally tractable. When we include the fixed costs, the optimal routing problem is a mixed-integer convex problem, which can be solved using (sometimes slow) global optimization methods, or approximately solved using various heuristics based on convex optimization. 
The optimal routing problem includes as a special case the problem of identifying an arbitrage present in a network of CFMMs, or certifying that none exists.\nauthor: Guillermo Angeris \n`email@example.com`; Tarun Chitra \n`firstname.lastname@example.com`; Alex Evans \n`email@example.com`; Stephen Boyd \n`firstname.lastname@example.com`\nbibliography: citations.bib\ndate: December 2021\ntitle: Optimal Routing for Constant Function Market Makers\n\n# Introduction\n\nDecentralized exchanges (DEXs) are a popular application of public blockchains that allow users to trade assets without the need for a trusted intermediary to facilitate the exchange. DEXs are typically implemented as *constant function market makers* (CFMMs)\u00a0. In CFMMs, liquidity providers contribute reserves of assets. Users can then trade against these reserves by tendering baskets of assets in exchange for other baskets. CFMMs use a simple rule for accepting trades: a trade is only valid if the value of a given function at the post-trade reserves (with a small adjustment to account for fees collected) is equal to the value at the pre-trade reserves. This function is called the *trading function* and gives CFMMs their name. A common example of a trading function is the constant product popularized by Uniswap\u00a0, wherein a trade is only accepted if it preserves the product of the reserve amounts.\n\nCFMMs have quickly become one of the most popular applications of public blockchains, facilitating several billion dollars of trading volume per day. As DEXs have grown in popularity, so has the number of CFMMs and assets offered, creating complexity for traders who simply want to maximize their utility for trading one basket of assets for another. As a result, several \"DEX aggregators\" have emerged to route orders across multiple CFMMs on behalf of users. These aggregators currently execute several billion dollars per month across all DEXs on Ethereum\u00a0. At the same time, CFMM platforms such as Uniswap offer software for routing orders across the subset of CFMMs they support\u00a0.\n\nWhile the properties of individual CFMMs have been studied extensively (see, e.g., ), it is more common for users to want to access liquidity on multiple CFMMs to minimize their total trading costs. In this paper, we study the optimal routing problem for CFMMs. We consider a user who can trade with multiple CFMMs in order to exchange one basket of assets for another and ask how one should perform such trades optimally. We show that, in the absence of fixed costs, the optimal routing problem can be formulated as a convex optimization problem, which is efficiently solvable. As an (important) sub-case, we demonstrate how to use this problem to identify arbitrage opportunities on a set of CFMMs. Our framework encompasses the routing problems considered in prior work\u00a0 and offers solutions in the more general case where users seek to trade any basket of assets for any other basket across any set of CFMMs whose trading functions are concave and not necessarily differentiable. When including transaction costs, we show that the optimal routing problem is a mixed-integer convex problem, which permits (potentially computationally intensive) global solutions as well as approximate solutions using heuristics based on convex optimization.\n\n#### Outline.\n\nWe describe the optimal routing problem in\u00a0\u00a7, ignoring the fixed transaction costs. 
In\u00a0\u00a7 we give the optimality conditions for the optimal routing problem, and give conditions under which the optimal action is to not trade at all. We give some examples of the optimal routing problem in\u00a0\u00a7, including as a special case the detection of arbitrage in the network. We give a simple numerical example in\u00a0\u00a7. In\u00a0\u00a7 we show how to add fixed transaction costs to the optimal routing problem, and briefly describe some exact and approximate solution approaches.\n\n# Optimal routing problem\n\n#### Network of CFMMs.\n\nWe consider a set of $m$ CFMMs, denoted $i=1, \\ldots, m$, each of which trades multiple tokens from among a universe of $n$ tokens, labeled $j=1 \\ldots, n$. We let $n_i$ denote the number of tokens that CFMM $i$ trades, with $2 \\leq n_i \\leq n$. We can think of the $m$ CFMMs as the vertices of a network or hypergraph, with $n$ hyper-edges, each corresponding to one of the $n$ assets, adjacent to those CFMMs that trade it. Alternatively we can represent this as a bipartite graph, with one group of $m$ vertices the CFMMs, and the other group of $n$ vertices the tokens, with an edge between a CFMM and a token if the CFMM trades the token.\n\nA simple example is illustrated as a bipartite graph in figure\u00a0. In this network there are $m=5$ CFMMs, which trade subsets of $n=3$ tokens. CFMM\u00a01 trades all three tokens, so $n_1=3$; the remaining 4 CFMMs each trade pairs of tokens, so $n_2= \\cdots = n_5=2$.\n\n#### Global and local token indices.\n\nWe use multiple indices to label the tokens. The *global* index uses the token labels $1, \\ldots, n$. CFMM $i$, which trades a subset of $n_i$ of the $n$ tokens, has its *local* token index, $j=1, \\ldots, n_i$. To link the global and local indexes, we introduce matrices $A_i \\in\n\\reals^{n \\times n_i}$, $i=1\\ldots, m$, where $(A_i)_{jk} = 1$ if token $k$ in the CFMM $i$'s local index corresponds to global token index $j$, while $(A_i)_{jk} =0$ otherwise.\n\nAs an example, consider the simple network shown in figure\u00a0. CFMM\u00a01 trades all three tokens, with its local indexing identical to the global indexing, so $A_1$ is the $3\\times 3$ identity matrix. As a more interesting example, consider CFMM\u00a04, which trades two tokens, labeled 1 and 2 in the local indexing, but 1 and 3 in the global indexing, so $$A_4 = \\begin{bmatrix}\n 1 & 0\\\\\n 0 & 0\\\\\n 0 & 1\\\\\n \\end{bmatrix}.$$\n\n#### CFMM tendered and received baskets.\n\nFor CFMM $i$ we denote the tendered and received baskets as $\\Delta_i,~\\Lambda_i \\in \\reals_+^{n_i}$. These are quantities of tokens (in the local indexing) that we give to, and receive from, CFMM $i$ in a proposed trade.\n\n#### CFMM trading semantics.\n\nThe proposed trade $(\\Delta_i,\\Lambda_i)$ is valid and accepted by CFMM $i$ provided $$\\varphi_i(R_i+\\gamma \\Delta_i - \\Lambda_i) = \\varphi_i(R_i),$$ where $\\varphi: \\reals^{n_i}_+ \\to \\reals$ is the trading function, $R_i \\in \\reals_+^{n_i}$ are the current reserves, and $\\gamma_i \\in (0,1]$ is the trading fee for CFMM $i$. We will assume that the trading functions $\\varphi_i$ are concave and increasing functions. 
(For much more detail, see .)\n\nExamples of trading functions are the sum function, with $$\\varphi_i(R)= R_1+ \\cdots + R_{n_i},$$ the product or geometric mean function $$\\varphi_i(R)= (R_1 \\cdots R_{n_i})^{1\/n_i},$$ and a generalization, the weighted geometric mean function $$\\varphi_i(R)= R_1^{w_1} \\cdots R_{n_i}^{w_{n_i}},$$ with weights $w>0$, $\\ones^Tw=1$, where $\\ones$ is the vector with all components one.\n\n#### Network trade vector.\n\nThe CFMM tendered and received baskets $\\Delta_i$ and $\\Lambda_i$ can be mapped to the global indices as the $n$-vectors $A_i \\Delta_i$ and $A_i \\Lambda_i$, which give the numbers of tokens tendered to and received from CFMM $i$, using the global indices for the tokens. Summing the difference of received and tendered tokens over all CFMMs we obtain the *network trade vector* $$\\Psi = \\sum_{i=1}^m A_i (\\Lambda_i - \\Delta_i).$$ This is an $n$-vector, which gives the total net number of tokens received from the CFMMs in the network. We interpret $\\Psi$ as the net network trade vector. If $\\Psi_k \\geq 0$, it means that we do not need to tender any of token $k$ to the network. This does not mean that we do not trade token $k$; it only means that in our trading with the CFMMs, we receive at least as much of token $k$ as we tender.\n\n#### Network trade utility.\n\nWe introduce a utility function $U:\\reals^n \\to \\reals \\cup \\{-\\infty\\}$ that gives the utility of a trade $\\Psi$ to a trader as $U(\\Psi)$. We will assume that $U$ is concave and increasing. Infinite values of $U$ are used to impose constraints; we consider a proposed trade with $U(\\Psi) = -\\infty$ as unacceptable. As an important example, consider the constraint $\\Psi + h \\geq 0$, where $h \\in \\reals_+^n$ is the vector of a user's current holdings of tokens. This constraint specifies that the post-trade holdings $\\Psi+ h$ should be nonnegative, i.e., we cannot enter into a trade that requires more of any token than we currently have on hand. To express this in the utility function, we define $U(z)$ to be $-\\infty$ when $z+h \\not\\geq 0$. (This modified utility is also concave and increasing.)\n\nThere are many possible choices for $U$. Perhaps the simplest is the linear utility function $U(z) = \\pi^Tz$, with $\\pi \\in \\reals^n_{++}$, where we interpret $\\pi_i$ as the trader's internal or private value or price of token $i$. Several other choices are discussed in\u00a0.\n\n#### Optimal routing problem.\n\nWe wish to find a set of valid trades that maximizes the trader's utility. This optimal routing problem can be expressed as $$\\begin{aligned}\n & \\text{maximize} && U(\\Psi)\\\\\n & \\text{subject to} && {\\textstyle \\Psi =\\sum_{i=1}^m A_i(\\Lambda_i - \\Delta_i)}\\\\\n &&& \\varphi_i(R_i + \\gamma_i \\Delta_i - \\Lambda_i) = \\varphi_i(R_i), \\quad i=1, \\dots, m\\\\\n &&& \\Delta_i \\ge 0, \\quad \\Lambda_i \\ge 0, \\quad i=1, \\dots, m.\n\\end{aligned}$$\n\nThe variables here are $\\Psi \\in \\reals^n$, $\\Lambda_i \\in \\reals_+^{n_i}$, $\\Delta_i \\in \\reals^{n_i}_+$, $i=1, \\ldots, m$. The data are the utility function $U$, the global-local matrices $A_i$, and those associated with the CFMMs: the trading functions $\\varphi_i$, the trading fees $\\gamma_i$, and the reserve amounts $R_i \\in \\reals^{n_i}_+$.\n\n#### Convex optimal routing problem.\n\nUnless the trading functions are affine, the optimal routing problem is not a convex optimization problem. We can, however, form an equivalent problem that is convex. To do this we replace the equality constraints with inequality constraints: $$\\begin{aligned}\n & \\text{maximize} && U(\\Psi)\\\\\n & \\text{subject to} && {\\textstyle \\Psi =\\sum_{i=1}^m A_i(\\Lambda_i - \\Delta_i)}\\\\\n &&& \\varphi_i(R_i + \\gamma_i \\Delta_i - \\Lambda_i) \\ge \\varphi_i(R_i), \\quad i=1, \\dots, m\\\\\n &&& \\Delta_i \\ge 0, \\quad \\Lambda_i \\ge 0, \\quad i=1, \\dots, m.\n\\end{aligned}$$\n\nThis problem is evidently convex\u00a0 and can be readily solved.\n\nWe will show that any solution of is also a solution of . This is the same as showing that for any solution of , the inequality constraints hold with equality. 
Suppose that $\\Delta_i^\\star$ and $\\Lambda_i^\\star$ are feasible for , but $\\varphi_k(R_k + \\gamma_k \\Delta^\\star_k - \\Lambda^\\star_k) > \\varphi_k(R_k)$ for some $k$. This means we can find $\\tilde \\Lambda_k > \\Lambda^\\star_k$ which satisfies $\\varphi_k(R_k + \\gamma_k \\Delta^\\star_k - \\tilde \\Lambda_k) \\geq \\varphi_k(R_k)$. The associated trade vector $\\tilde \\Psi = \\Psi^\\star + A_k (\\tilde \\Lambda_k - \\Lambda^\\star_k)$ satisfies $\\tilde \\Psi \\geq \\Psi^\\star$, $\\tilde \\Psi \\neq \\Psi^\\star$, i.e., at least one component of $\\tilde \\Psi$ is larger than the corresponding component of $\\Psi^\\star$. It follows that $U(\\tilde \\Psi) > U(\\Psi^\\star)$, so $\\Psi^\\star$ is not optimal and therefore cannot be a solution. We conclude that any solution of satisfies the inequality constraints with equality, and so is optimal for .\n\nA similar statement holds in the case when $U$ is only nondecreasing, and not (strictly) increasing. In this case we can say that there is a solution of that is optimal for . One simple method to find such a solution is to solve with objective $U(\\Psi)+\\epsilon \\ones^T\\Psi$, where $\\epsilon$ is small and positive. The objective for this problem is increasing. In principle we can let $\\epsilon$ go to zero to recover a solution of the original problem; in practice choosing a single small value of $\\epsilon$ works.\n\n#### Consequences of convexity.\n\nSince the problem is convex, it can be reliably and quickly solved, even for large problem instances\u00a0. Domain-specific languages for convex optimization such as CVXPY\u00a0 or JuMP can be used to specify the optimal routing problem in just a few lines of code; solvers such as ECOS\u00a0, SCS\u00a0, or Mosek\u00a0 can be used to solve the problem.\n\n#### Implementation.\n\nThe solution to\u00a0 provides the optimal values $(\\Delta_i,\\Lambda_i)$ that one must trade with each CFMM $i$. Note that we do not explicitly consider the ideal execution of these trades, as these will depend on the semantics of the underlying blockchain that the user is interacting with. For example, users may wish to execute trades in a particular sequence, starting with an initial portfolio of assets and updating their asset composition after each transaction until the final basket is obtained. Users may alternatively use flashloans\u00a0 to atomically perform all trades in a single transaction, netting out all trades before repaying the loan at the end of the transaction.\n\n# Optimality conditions\n\nAssuming that $U$ is differentiable (which it need not be), the optimality conditions of problem\u00a0 are feasibility, and the dual conditions $$\\nabla U(\\Psi) = \\nu,$$ and $$\\gamma_i\\lambda_i \\nabla\\varphi_i(R_i + \\gamma_i\\Delta_i - \\Lambda_i) \\le A_i^T \\nu \\le \\lambda_i\\nabla\\varphi_i(R_i + \\gamma_i \\Delta_i - \\Lambda_i), \\quad i=1, \\ldots, m,$$ where $\\nu \\in \\reals^n$, $\\lambda_i \\in \\reals_+$, $i=1,\\ldots, m$ are the Lagrange multipliers. (We derive these conditions in appendix\u00a0.)\n\nThe first condition has a very simple interpretation: $\\nu$ is the vector of marginal utilities of the tokens. The second set of conditions\u00a0 has a simple interpretation that is very similar to the one given in\u00a0 for a single CFMM. The term $\\nabla \\varphi_i(R_i + \\gamma_i\\Delta_i - \\Lambda_i)$, which we will write as $P_i \\in \\reals^{n_i}_+$, can be interpreted as the unscaled prices of the tokens that CFMM $i$ trades\u00a0, in the local indexing. Plugging into\u00a0, we get $$\\gamma_i\\lambda_iP_i \\le A_i^T\\nabla U(\\Psi) \\le \\lambda_iP_i, \\quad i=1, \\ldots, m.$$ 
The middle term is the vector of marginal utilities of the tokens that CFMM $i$ trades, in the local indexing. These marginal utilities must lie between the prices $P_i$ discounted by $\\gamma_i$ and the undiscounted prices, both scaled by $\\lambda_i$.\n\nWe can also recognize the conditions\u00a0 as those for the problem of finding an optimal trade for CFMM $i$ alone, with the linear utility $U_i(z) = \\pi_i^T z$, where $\\pi_i = A_i^T \\nabla U(\\Psi)$. (See\u00a0.) This is very appealing: it states that the optimal trades for the network of CFMMs are also optimal for each CFMM separately, when they use a linear utility, with prices equal to the marginal utility of the overall trade.\n\n#### Non-differentiable utility.\n\nWhen $U$ is not differentiable, the optimality conditions are the same, but in we substitute a supergradient $g \\in -\\partial (-U)(\\Psi)$ for the gradient $\\nabla U(\\Psi)$, where $\\partial$ denotes the subdifferential. When $U$ is not differentiable at $\\Psi$, there are multiple such $g$'s, which we can consider to be multiple marginal utilities. The optimality condition is that hold, with $\\nabla U(\\Psi)$ replaced with $g$, for any $g \\in -\\partial (-U)(\\Psi)$.\n\n#### No-trade condition.\n\nFrom the optimality conditions we can derive conditions under which the optimal trades are zero, i.e., we should not trade. We will assume that $U(0) > -\\infty$, i.e., $\\Psi=0$ is feasible; if this is not the case, then evidently $\\Psi=0$ is not optimal. The zero trade $$\\Psi=0, \\qquad \\Delta_i = \\Lambda_i = 0, \\quad i=1, \\ldots, m,$$ is feasible for , so the optimality condition is $$\\gamma_i\\lambda_i P_i \\le A_i^T\\nabla U(0) \\le \\lambda_iP_i, \\quad i=1, \\ldots, m,$$ where $P_i = \\nabla \\varphi_i(R_i)$ is the unscaled price of CFMM $i$ at the current reserves $R_i$. This condition is a generalization of the no-trade condition given in\u00a0 for one CFMM, to the case where there are multiple CFMMs. When $U$ is not differentiable, we replace $\\nabla U(0)$ with a supergradient $g \\in -\\partial (-U)(0)$.\n\n# Examples\n\n#### Linear utility.\n\nConsider the linear utility $U(z)=\\pi^Tz$, with $\\pi >0$. In this case the optimal routing problem is separable across the CFMMs, since the objective is a sum of functions of $\\Lambda_i-\\Delta_i$, and constraints are also only on $(\\Lambda_i,\\Delta_i)$. It follows that we can solve the optimal routing problem by solving $m$ single CFMM problems independently, using linear utilities with prices given by $\\pi$.\n\n#### Liquidating a basket of tokens.\n\nSuppose we start with an initial holding (basket) of tokens $h^\\text{init} \\in \\reals_+^n$ and wish to convert them all to token $k$. We use the utility function $$U(z) = \\left\\{ \\begin{array}{ll} z_k & h^\\text{init} + z \\geq 0\\\\\n-\\infty & \\mbox{otherwise.} \\end{array}\\right.$$ (This utility is nondecreasing but not increasing.)\n\nA special case of this problem is converting one token into another, i.e., when $$h^\\mathrm{init} = t e_j,$$ where $t \\ge 0$ and $e_j \\in \\reals^n$ is the $j$th unit vector. In this special case, we can write the optimal value of the optimization problem, which we will call $u(t)$, as a function of $t$. It is not hard to show that $u$ is nonnegative and increasing. This function is also concave as it is the partial maximization of a concave function (over all variables except $t$). We show an example instance of this problem, along with an associated function $u$, in\u00a0\u00a7.\n\n#### Purchasing a basket of tokens.\n\nThis is the opposite of liquidating a basket of tokens. 
Here too we start with initial token holdings $h^\\text{init} \\in \\reals_+^n$, and end up with the holdings $h^\\text{init} + \\Psi$. Let $h^\\text{des} \\in \\reals_+^n$ be our target basket; we wish to end up with the largest possible multiple of this basket. Let $\\mathcal K \\subseteq \\{1, \\ldots, n\\}$ denote the set of indices for which $h^\\text{des}_i>0$, i.e., the indices associated with tokens in the desired basket. We seek to maximize the value of $\\alpha$ for which $h^\\text{init} + \\Psi \\geq \\alpha h^\\text{des}$. To do this we use the utility function $$U(z) = \\left\\{ \\begin{array}{ll} \\min_{i \\in \\mathcal K} (h_i^\\text{init}+z_i)\/\nh_i^\\text{des} & h^\\text{init} + z \\geq 0\\\\\n-\\infty & \\mbox{otherwise.} \\end{array}\\right.$$\n\n#### Arbitrage detection.\n\nAn arbitrage is a collection of valid CFMM trades with $\\Psi \\geq 0$ and $\\Psi \\neq 0$, i.e., a set of trades for the CFMMs in which we tender no tokens, but receive a positive amount of at least one token. The optimal routing problem can be used to find an arbitrage, or certify that no arbitrage exists.\n\nConsider any $U$ that is increasing, with domain $\\{\\Psi \\mid U(\\Psi)>-\\infty \\} = \\reals_+^n$. Evidently there is an arbitrage if and only if there is a nonzero solution of the routing problem, which is the same as $U(\\Psi^\\star) > U(0)$, where $\\Psi^\\star$ is optimal. So by solving this optimal routing problem, we can find an arbitrage, if one exists.\n\n#### No-arbitrage condition.\n\nUsing the version of for nondifferentiable $U$, we can derive conditions under which there is no arbitrage. We consider the specific utility function $$U(\\Psi) = \\left\\{ \\begin{array}{ll} \\ones^T \\Psi & \\Psi \\geq 0\\\\\n-\\infty & \\mbox{otherwise},\n\\end{array}\\right.$$ where $\\ones$ denotes the vector with all entries one. This utility is the total number of tokens received, when they are nonnegative. Its supergradient at $0$ is $$-\\partial (-U)(0) = [1,\\infty)^n = \\{ g\\mid g \\geq \\ones \\}.$$ The condition becomes: there exists $g \\geq \\ones$, and $\\lambda_i \\geq 0$, for which $$\\gamma_i\\lambda_i P_i \\le A_i^T g \\le \\lambda_iP_i, \\quad i=1, \\ldots, m.$$ By absorbing a scaling of $g$ into the $\\lambda_i$, we can say that $g>0$ is enough.\n\nThis makes sense: it states that there is no arbitrage if we can assign a set of positive prices (given by $g$) to the tokens, for which no CFMM would trade. In the (unrealistic) case when $\\gamma_i=1$, i.e., there is no trading cost, the no-arbitrage condition is that there exists a global set of prices for the tokens, $g$, consistent with the local prices of tokens given by $\\lambda_iP_i$.\n\n# Numerical example\n\nThe Python code for the numerical example we present here is available at\n\n`https:\/\/github.com\/angeris\/cfmm-routing-code`.\n\nThe optimization problems are formulated and solved using CVXPY . A listing of the core of the code is given in appendix\u00a0.\n\n#### Network.\n\nWe consider the network of 5 CFMMs and 3 tokens shown in figure\u00a0. 
The trading functions, fee parameters, and reserves are listed in table\u00a0.\n\n| **CFMM** | **Trading function $\\varphi_i$** | **Fee parameter $\\gamma_i$** | **Reserves** $R_i$ |\n|:---|:---|:---|:---|\n| 1 | Geometric mean, $w=(3, 2, 1)$ | 0.98 | (3, .2, 1) |\n| 2 | Product | 0.99 | (10, 1) |\n| 3 | Product | 0.96 | (1, 10) |\n| 4 | Product | 0.97 | (20, 50) |\n| 5 | Sum | 0.99 | (10, 10) |\n\nCFMM attributes.\n\n#### Problem and utility.\n\nWe wish to trade an amount $t\\geq 0$ of token\u00a01 for the maximum possible amount of token\u00a03. This is a special case of the problem of liquidating a basket of tokens, as described in\u00a0\u00a7, with initial holdings $h^\\text{init}=t e_1$. The utility function is $U(z) = z_3$, provided $z+h^\\text{init} \\geq 0$, and $U(z) = -\\infty$ otherwise. We let $u(t)$ denote the maximum amount of token\u00a03 we can obtain from the network when we tender token\u00a01 in the amount $t$.\n\n#### Results.\n\nWe solve the optimal routing problem for many values of $t$, from $t=0$ to $t=50$. The amount of token $3$ we obtain is shown in figure\u00a0. We see that $u(0) > 0$, which means there is an arbitrage in this network; indeed, there is a set of trades that requires giving zero net tokens to the network, but we receive an amount around 7 of token\u00a03.\n\nThe associated optimal trades are shown in figure\u00a0. We can see many interesting phenomena here. At $t=0$ we see the arbitrage trades; indeed, these are the arbitrage trades that yield the largest amount of token\u00a03. Several asset flows reverse sign as $t$ varies. For example, for $t<11$, we receive token\u00a01 from CFMM\u00a01, whereas for $t>11$, we tender token\u00a01 to CFMM\u00a01. We can also see that the sparsity pattern of the optimal trades changes with $t$.\n\nWe illustrate the changing signs and sparsity of the optimal trades in figure\u00a0, which shows the optimal trades for $t=0$, $t=20$, and $t=50$. We plot whether each token is tendered to or received from each CFMM using color-coded edges. A red edge connecting a token to a CFMM means that the CFMM is receiving this token, while a blue edge denotes that the CFMM is tendering this token. A dashed edge denotes that the CFMM neither tenders nor receives this token.\n\nEven in this very simple example, the optimal trades are not obvious and involve trading with and among multiple CFMMs. For larger networks, the optimal trades are even less obvious.\n\n# Fixed transaction costs\n\nOur optimal routing problem includes the trading costs built into CFMMs, via the parameters $\\gamma_i$. But it does not include the small fixed cost associated with any trade. In this section we explore how these fixed transaction costs can be incorporated into the optimal routing problem.\n\nWe let $q_i \\in \\reals_+$ denote the fixed cost of executing a trade with CFMM $i$, denominated in some numeraire. We pay this whenever we trade, i.e., $\\Lambda_i - \\Delta_i \\neq 0$. We introduce a new set of Boolean variables into the problem, $\\eta \\in \\{0, 1\\}^m$, with $\\eta_i = 1$ if a nonzero trade is made with CFMM $i$, and $\\eta_i = 0$ otherwise, so the total fixed transaction cost is $q^T\\eta$. We assume that there is a known maximum size of a tendered basket with CFMM $i$, which we will denote $\\Delta^\\mathrm{max}_i \\in \\reals_+^{n_i}$. 
We can then express the problem of maximizing the utility minus the fixed transaction cost as $$\\label{eq:micp}\n\\begin{aligned}\n & \\text{maximize} && U(\\Psi) - q^T\\eta\\\\\n & \\text{subject to} && {\\textstyle \\Psi =\\sum_{i=1}^m A_i(\\Lambda_i - \\Delta_i)}\\\\\n &&& \\varphi_i(R_i + \\gamma_i \\Delta_i - \\Lambda_i) \\ge \\varphi_i(R_i), \\quad i=1, \\dots, m\\\\\n &&& 0 \\le \\Delta_i \\le \\eta_i\\Delta^\\mathrm{max}_i, \\quad \\Lambda_i \\ge 0, \\quad i=1, \\dots, m,\\\\\n &&& \\eta \\in\\{0, 1\\}^m,\n\\end{aligned}$$ where the variables are $\\Psi$, $\\Lambda_i$, $\\Delta_i$, $i=1, \\ldots, m$, and $\\eta \\in \\{0,1\\}^m$.\n\nThe optimal routing problem with fixed costs is a mixed-integer convex program (MICP). It can be solved exactly, possibly with great computational effort, using global optimization methods, with MICP solvers such as Mosek\u00a0 or Gurobi\u00a0. When $m$ is small (say, under ten or so), it can be practical to solve it by brute force, by solving the convex problem we obtain for each of the $2^m$ feasible values of $\\eta$.\n\n#### Approximate solution methods.\n\nMany approximate methods have the speed of convex optimization, and (often) produce good approximate solutions. For example, we can solve the relaxation of obtained by replacing the constraint on $\\eta$ with $\\eta \\in[0,1]^m$ (which gives a convex optimization problem). After that we set a threshold $t\\in (0,1)$ for the relaxed optimal values $\\eta^\\text{rel}$, and take $\\eta_i = 1$ when $\\eta^\\text{rel}_i \\geq t$ and $\\eta_i = 0$ when $\\eta^\\text{rel}_i < t$.\n\n```python\n# CFMM trading constraints, plus the requirement that post-trade holdings stay nonnegative\ncons = [\n    # Weighted geometric mean pool\n    cp.geo_mean(new_reserves[0], p=np.array([3, 2, 1])) >= cp.geo_mean(reserves[0], p=np.array([3, 2, 1])),\n\n    # Uniswap v2 pools\n    cp.geo_mean(new_reserves[1]) >= cp.geo_mean(reserves[1]),\n    cp.geo_mean(new_reserves[2]) >= cp.geo_mean(reserves[2]),\n    cp.geo_mean(new_reserves[3]) >= cp.geo_mean(reserves[3]),\n\n    # Constant sum pool\n    cp.sum(new_reserves[4]) >= cp.sum(reserves[4]),\n    new_reserves[4] >= 0,\n\n    # Allow all assets at hand to be traded\n    psi + current_assets >= 0\n]\n\n# Set up and solve problem\nprob = cp.Problem(obj, cons)\nprob.solve()\n\nprint(f\"amount of asset 3 received: {psi[2].value}\")\n```","meta":{"dup_signals":{"dup_doc_count":15,"dup_dump_count":6,"dup_details":{"curated_sources":1,"2024-26":2,"2024-22":1,"2024-18":1,"2024-30":1,"unknown":9}},"filename":"out\/2204.05238_extract_cfmm-routing.tex.md"},"subset":"arxiv"} +{"text":"abstract: Artificial autonomous agents and robots interacting in complex environments are required to continually acquire and fine-tune knowledge over sustained periods of time. The ability to learn from continuous streams of information is referred to as lifelong learning and represents a long-standing challenge for neural network models due to catastrophic forgetting in which novel sensory experience interferes with existing representations and leads to abrupt decreases in the performance on previously acquired knowledge. Computational models of lifelong learning typically alleviate catastrophic forgetting in experimental scenarios with given datasets of static images and limited complexity, thereby differing significantly from the conditions artificial agents are exposed to. In more natural settings, sequential information may become progressively available over time and access to previous experience may be restricted. Therefore, specialized neural network mechanisms are required that adapt to novel sequential experience while preventing disruptive interference with existing representations. 
In this paper, we propose a dual-memory self-organizing architecture for lifelong learning scenarios. The architecture comprises two growing recurrent networks with the complementary tasks of learning object instances (episodic memory) and categories (semantic memory). Both growing networks can expand in response to novel sensory experience: the episodic memory learns fine-grained spatiotemporal representations of object instances in an unsupervised fashion while the semantic memory uses task-relevant signals to regulate structural plasticity levels and develop more compact representations from episodic experience. For the consolidation of knowledge in the absence of external sensory input, the episodic memory periodically replays trajectories of neural reactivations. We evaluate the proposed model on the CORe50 benchmark dataset for continuous object recognition, showing that we significantly outperform current methods of lifelong learning in three different incremental learning scenarios.\nauthor: German I. Parisi$^1$, Jun Tani$^2$, Cornelius Weber$^1$, Stefan Wermter$^1$ \n$^1$``{=html}Knowledge Technology, Department of Informatics, Universit\u00e4t Hamburg, Germany \n$^2$``{=html}Cognitive Neurorobotics Research Unit, Okinawa Institute of Science and Technology, Japan \ntitle: Lifelong Learning of Spatiotemporal Representations with Dual-Memory Recurrent Self-Organization\n\n# Introduction\n\nArtificial autonomous agents and robots interacting in dynamic environments are required to continually acquire and fine-tune their knowledge over time (Thrun and Mitchell, 1995; Parisi et al., 2018a). The ability to progressively learn over a sustained time span by accommodating novel knowledge while retaining previously learned experiences is referred to as continual or lifelong learning. In contrast to state-of-the-art deep learning models that typically rely on the full training set being available at once (see LeCun et al., 2015 for a review), lifelong learning systems must account for situations in which the training data become incrementally available over time. Effective models of lifelong learning are crucial in real-world conditions where an autonomous agent cannot be provided with all the necessary prior knowledge to interact with the environment and the direct access to previous experience is restricted (Thrun and Mitchell, 1995). Importantly, there may be no distinction between training and test phases, which requires the system to concurrently learn and timely trigger behavioral responses (Cangelosi and Schlesinger, 2015; Tani, 2016). Lifelong machine learning represents a long-standing challenge due to catastrophic forgetting or interference, i.e., training a model with a new task leads to an abrupt decrease in the performance on previously learned tasks (McCloskey and Cohen, 1989). To overcome catastrophic forgetting, computational models must adapt their existing representations on the basis of novel sensory experience while preventing disruptive interference with previously learned representations. The extent to which a system must be flexible for learning novel knowledge and stable for preventing the disruption of consolidated knowledge is known as the stability-plasticity dilemma, which has been extensively studied for both computational and biological systems (e.g., Grossberg, 1980, 2007; Mermillod et al., 2013; Ditzler et al., 2015). 
Neurophysiological evidence suggests distributed mechanisms of structural plasticity that promote lifelong memory formation, consolidation, and retrieval in multiple brain areas (Power and Schlaggar, 2016; Zenke et al., 2017a). Such mechanisms support the development of the human cognitive system on the basis of sensorimotor experiences over sustained time spans (Lewkowicz, 2014). Crucially, the brain must constantly perform two complementary tasks: (i) recollecting separate episodic events (specifics), and (ii) learning the statistical structure from the episodic events (generalities). The complementary learning systems (CLS) theory (McClelland et al., 1995; Kumaran et al., 2016) holds that these two interdependent operations are mediated by the interplay of the mammalian hippocampus and neocortex, providing the means for episodic memory (specific experience) and semantic memory (general structured knowledge). Accordingly, the hippocampal system exhibits quick learning of sparse representations from episodic experience which will, in turn, be transferred and integrated into the neocortical system characterized by a slower learning rate with more compact representations of statistical regularities.\n\nRe-training a (deep) neural architecture from scratch in response to novel sensory input can require extensive computational effort. Furthermore, storing all the previously encountered data in lifelong learning scenarios has the general drawback of large memory requirements. Instead, Robins (1995) proposed pseudo-rehearsal (or intrinsic replay) in which previous memories are revisited without the need of explicitly storing data samples. Pseudo-samples are drawn from a probabilistic or generative model and replayed to the system for memory consolidation. From a biological perspective, the direct access to past experiences is limited or restricted. Therefore, the replay of hippocampal representations in the absence of external sensory input plays a crucial role in memory encoding (Carr et al., 2011; Kumaran et al., 2016). Memory replay is argued to occur through the reactivation of neural patterns during both sleep and awake states (e.g., free recall; Gelbard-Sagiv et al., 2008). Hippocampal replay provides the means for the gradual integration of knowledge into neocortical structures through the reactivation of recently acquired knowledge interleaved with the exposure to ongoing episodic experience (McClelland et al., 1995). Consequently, the periodic replay of previously encountered samples can alleviate catastrophic forgetting during incremental learning tasks, especially when the number of training samples for the different classes is unbalanced or when a sample is encountered only once (Robins, 1995).\n\nA number of computational approaches have drawn inspiration from the learning principles observed in biological systems. Machine learning models addressing lifelong learning can be divided into approaches that regulate intrinsic levels of plasticity to protect consolidated knowledge, that dynamically allocate neural resources in response to novel experience, or that use complementary dual-memory systems with memory replay (see section 2). However, most of these methods are designed to address supervised learning on image datasets of very limited complexity such as MNIST (LeCun et al., 1998) and CIFAR-10 (Krizhevsky, 2009) while not scaling up to incremental learning tasks with larger-scale datasets of natural images and videos (Kemker et al., 2018; Parisi et al., 2018a). 
Crucially, such models do not take into account the temporal structure of the input which plays an important role in more realistic learning conditions, e.g., an autonomous agent learning from the interaction with the environment. Therefore, in contrast to approaches in which static images are learned and recognized in isolation, we focus on lifelong learning tasks where sequential data with meaningful temporal relations become progressively available over time.\n\nIn this paper, we propose a growing dual-memory (GDM) architecture for the lifelong learning of spatiotemporal representations from videos, performing continuous object recognition at an instance level (episodic knowledge) and at a category level (semantic knowledge). The architecture comprises two recurrent self-organizing memories that dynamically adapt the number of neurons and synapses: the episodic memory learns representations of sensory experience in an unsupervised fashion through input-driven plasticity, whereas the semantic memory develops more compact representations of statistical regularities embedded in episodic experience. For this purpose, the semantic memory receives neural activation trajectories from the episodic memory and uses task-relevant signals (annotated labels) to modulate levels of neurogenesis and neural update. Internally generated neural activity patterns in the episodic memory are periodically replayed to both memories in the absence of sensory input, thereby mitigating catastrophic forgetting during incremental learning. We conduct a series of experiments with the recently published Continuous Object Recognition (CORe50) benchmark dataset (Lomonaco and Maltoni, 2017). The dataset comprises 50 objects within 10 categories with image sequences captured under different conditions and containing multiple views of the same objects (indoors and outdoors, varying background, object pose, and degree of occlusion). We show that our model scales up to learning novel object instances and categories and that it outperforms current lifelong learning approaches in three different incremental learning scenarios.\n\n# Related Work\n\nThe CLS theory (McClelland et al., 1995) provides the basis for computational frameworks that aim to generalize across experiences while retaining specific memories in a lifelong fashion. Early computational attempts include French (1997) who developed a dual-memory framework using pseudorehearsal (Robins, 1995) to transfer memories, i.e., the training samples are not explicitly kept in memory but drawn from a probabilistic model. However, there is no empirical evidence showing that this or similar contemporaneous approaches (see O'Reilly and Norman, 2002 for a review) scale up to large-scale image and video benchmark datasets. More recently, Gepperth and Karaoguz (2015) proposed two approaches for incremental learning using a modified self-organizing map (SOM) and a SOM extended with a short-term memory (STM). We refer to these two approaches as GeppNet and GeppNet+STM, respectively. In GeppNet, task-relevant feedback from a regression layer is used to select whether learning in the self-organizing hidden layer takes place. In GeppNet+STM, the STM is used to store novel knowledge which is occasionally played back to the GeppNet layer during sleep phases interleaved with training phases. This latter approach yields better performance and faster convergence in incremental learning tasks with the MNIST dataset. 
However, the STM has a limited capacity, thus learning new knowledge can overwrite old knowledge. In both cases, the learning process is divided into the initialization and the actual incremental learning phase. Furthermore, GeppNet and GeppNet+STM require storing the entire training dataset during training. Kemker and Kanan (2018) proposed the FearNet model for incremental class learning inspired by studies of memory recall and consolidation in the mammalian brain during fear conditioning (Kitamura et al., 2017). FearNet uses a hippocampal network capable of immediately recalling new examples, a PFC network for long-term memories, and a third neural network inspired by the basolateral amygdala for determining whether the system should use the PFC or hippocampal network for a particular example. FearNet consolidates information from its hippocampal network to its PFC network during sleep phases. Kamra et al. (2018) presented a similar dual-memory framework for lifelong learning that uses a variational autoencoder as a generative model for pseudo-rehearsal. Their framework generates a short-term memory module for each new task. However, prior to consolidation, predictions are made using an oracle, i.e., they know which module contains the associated memory.\n\nDifferent methods have been proposed that are based on regularization techniques to impose constraints on the update of the neural weights. This is inspired by neuroscience findings suggesting that consolidated knowledge can be protected from interference via changing levels of synaptic plasticity (Benna and Fusi, 2016) and is typically modeled in terms of adding regularization terms that penalize changes in the mapping function of a neural network. For instance, Li and Hoiem (2016) proposed a convolutional neural network (CNN) architecture in which the network that predicts the previously learned tasks is enforced to be similar to the network that also predicts the current task by using knowledge distillation, i.e., the transferring of knowledge from a large, highly regularized model to a smaller model. This approach, known as learning without forgetting (LwF), has the drawbacks of highly depending on the relevance of the tasks and that the training time for one task linearly increases with the number of old tasks. Kirkpatrick et al. (2017) proposed elastic weight consolidation (EWC) which adds a penalty term to the loss function and constrains the weight parameters that are relevant to retain previously learned tasks. However, this approach requires a diagonal weighting over the parameters of the learned tasks which is proportional to the diagonal of the Fisher information metric, with synaptic importance being computed offline and limiting its computational application to low-dimensional output spaces. Zenke et al. (2017b) proposed to alleviate catastrophic forgetting by allowing individual synapses to estimate their importance for solving a learned task. Similar to Kirkpatrick et al. (2017), this approach penalizes changes to the most relevant synapses so that new tasks can be learned with minimal interference. In this case, the synaptic importance is computed in an online fashion over the learning trajectory in the parameter space.\n\nIn general, regularization approaches comprise additional loss terms for protecting consolidated knowledge which, with a limited amount of neural resources, leads to a trade-off on the performance of old and novel tasks. Other approaches expand the neural architecture to accommodate novel knowledge. 
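As a concrete illustration of the quadratic penalties used by EWC and related methods, the following is a minimal NumPy sketch; the diagonal importance estimate `fisher_diag`, the stored parameters `theta_star`, and the weighting `lam` are placeholders for whatever a particular method computes, so this is a sketch of the general idea rather than any specific published implementation.

```python
import numpy as np

def importance_penalty(theta, theta_star, fisher_diag, lam=1.0):
    """Quadratic importance-weighted penalty: (lam / 2) * sum_k F_k * (theta_k - theta*_k)^2.

    fisher_diag holds per-parameter importance weights (a diagonal Fisher estimate in EWC,
    an online path-integral estimate in Zenke et al.'s variant); theta_star holds the
    parameters consolidated after the previously learned task.
    """
    return 0.5 * lam * np.sum(fisher_diag * (theta - theta_star) ** 2)

def total_loss(new_task_loss, theta, theta_star, fisher_diag, lam=1.0):
    # Large F_k values make it costly to move parameter k, which is how interference
    # with previously learned tasks is reduced when training on the new task.
    return new_task_loss + importance_penalty(theta, theta_star, fisher_diag, lam)
```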
Rusu et al. (2016) proposed to block any changes to the network trained on previous knowledge and expand the architecture by allocating novel sub-networks with a fixed capacity to be trained with the new information. This prevents catastrophic forgetting but leads the complexity of the architecture to grow with the number of learned tasks. Draelos et al. (2017) trained an autoencoder incrementally using the reconstruction error to show whether the older digits were retained. Their model added new neural units to the autoencoder to facilitate the addition of new MNIST digits. Rebuffi et al. (2017) proposed the iCaRL approach which stores example data points that are used along with new data to dynamically adapt the weights of a feature extractor. By combining new and old data, they prevent catastrophic forgetting but at the expense of a higher memory footprint.\n\nThe approaches described above are designed for the classification of static images, often exposing the learning algorithm to training samples in a random order. Conversely, in more natural settings, we make use of the spatiotemporal structure of the input. In previous research (Parisi et al., 2017), we showed that the lifelong learning of action sequences can be achieved in terms of prediction-driven neural dynamics with internal representations emerging in a hierarchy of recurrent self-organizing networks. The networks can dynamically allocate neural resources and update connectivity patterns according to competitive Hebbian learning by computing the input based on its similarity with existing knowledge and minimizing interference by creating new neurons whenever they are required. This approach has shown competitive results with batch learning methods on action benchmark datasets. However, the neural growth and update are driven by the minimization of the bottomup reconstruction error and, thus, without taking into account top-down, task-relevant signals that can regulate the plasticitystability balance. Furthermore, the model cannot learn in the absence of external sensory input, which leads to a non-negligible degree of disruptive interference during incremental learning tasks.\n\n# Proposed Method\n\nThe proposed architecture with growing dual-memory learning (GDM) comprises a deep convolutional feature extractor and two hierarchically arranged recurrent self-organizing networks (Figure\u00a01). Both recurrent networks are extended versions of the Gamma-GWR model (Parisi et al., 2017) that dynamically create new neurons and connections in response to novel sequential input. The growing episodic memory (G-EM) learns from sensory experience in an unsupervised fashion, i.e., levels of structural plasticity are regulated by the ability of the network to predict the spatiotemporal patterns given as input. Instead, the growing semantic memory (G-SM) receives neural activation trajectories from G-EM and uses task-relevant signals (input annotations) to modulate levels of neurogenesis and neural update, thereby developing more compact representations of statistical regularities embedded in episodic experience. Therefore, G-EM and G-SM mitigate catastrophic forgetting through self-organizing learning dynamics with structural plasticity, increasing information storage capacity in response to novel input.\n\nThe architecture classifies image sequences at an instance level (episodic experience) and a category level (semantic knowledge). 
Thus, each input sample carries two labels which are used for the classification task at the different levels of the network hierarchy. For the consolidation of knowledge over time in the absence of sensory input, internally generated neural activity patterns in G-EM are periodically replayed to both memories, thereby mitigating catastrophic forgetting during incremental learning tasks. For this purpose, G-EM is equipped with synapses that learn statistically significant neural activity in the temporal domain. As a result, sequence-selective neural activation trajectories can be generated and replayed after each learning episode without explicitly storing sequential input.\n\n## Gamma-GWR\n\nThe Gamma-GWR model\u00a0(Parisi et al., 2017) is a recurrent extension of the Grow-When-Required (GWR) self-organizing network (Marsland et al., 2002) that embeds a Gamma memory (Principe et al., 1994) for representing short-term temporal relations. The Gamma-GWR can dynamically grow or shrink in response to the sensory input distribution. New neurons will be created to better represent the input and connections (synapses) between neurons will develop according to competitive Hebbian learning, i.e. neurons that activate simultaneously will be connected to each other. The Gamma-GWR learns the spatiotemporal structure of the input through the integration of temporal context into the computation of the self-organizing network dynamics.\n\nThe network is composed of a dynamic set of neurons, $A$, with each neuron consisting of a weight vector $\\textbf{w}_j$ and a number $K$ of context descriptors $\\textbf{c}_{j,k}$\u00a0($\\textbf{w}_j,\\textbf{c}_{j,k}\\in\\mathbb{R}^n$). Given the input $\\textbf{x}(t)\\in\\mathbb{R}^n$, the index of the best-matching unit (BMU), $b$, is computed as: $$\\label{eq:GetB}\nb = \\arg\\min_{j\\in A}(d_j),$$ $$\\label{eq:BMU}\nd_j = \\alpha_0 \\Vert \\textbf{x}(t) - \\textbf{w}_j \\Vert^2 + \\sum_{k=1}^{K}\\ \\alpha_k \\Vert \\textbf{C}_k(t)-\\textbf{c}_{j,k}\\Vert^2,$$ $$\\label{eq:MergeStep}\n\\textbf{C}_{k}(t) = \\beta \\cdot \\textbf{w}_b^{t-1}+(1-\\beta) \\cdot \\textbf{c}_{b,k-1}^{t-1},$$ where $\\Vert \\cdot \\Vert^2$ denotes the Euclidean distance, $\\alpha_i$ and $\\beta$ are constant factors that regulate the influence of the temporal context, $\\textbf{w}_b^{t-1}$ is the weight vector of the BMU at $t-1$, and $\\textbf{C}_{k}\\in\\mathbb{R}^n$ is the global context of the network with $\\textbf{C}_{k}(t_0)=0$.\n\nThe activity of the network, $a(t)$, is defined in relation to the distance between the input and its BMU (Equation\u00a0) as follows: $$\\label{eq:Activity}\na(t)=\\exp(-d_b),$$ thus yielding the highest activation value of $1$ when the network can perfectly predict the input sequence\u00a0(i.e.\u00a0$d_b=0$). Furthermore, each neuron is equipped with a habituation counter $h_j \\in [0,1]$ expressing how frequently it has fired based on a simplified model of how the efficacy of a habituating synapse reduces over time (Stanley, 1976). Newly created neurons start with $h_j=1$. Then, the habituation counter of the BMU, $b$, and its neighboring neurons, $n$, iteratively decrease towards 0. The habituation rule (Marsland et al., 2002) for a neuron $i$ is given by: $$\\label{eq:FiringCounter}\n\\Delta h_i=\\tau_i \\cdot \\kappa \\cdot (1-h_i)-\\tau_i,$$ with $i\\in\\{b,n\\}$ and where $\\tau_i$ and $\\kappa$ are constants that control the monotonically decreasing behavior. 
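A minimal NumPy sketch of the best-matching unit computation, the network activity, and the habituation update defined above may help make the notation concrete; the array layouts and parameter handling are assumptions of this sketch, which omits the weight-update and neuron-insertion steps described next.

```python
import numpy as np

def find_bmu(x, W, C_net, C_neurons, alpha):
    """Best-matching unit for input x.

    W:          (N, n) neuron weight vectors w_j
    C_neurons:  (N, K, n) per-neuron context descriptors c_{j,k}
    C_net:      (K, n) global context C_k(t) of the network
    alpha:      (K+1,) factors alpha_0 ... alpha_K weighting the input and context terms
    """
    d = alpha[0] * np.sum((W - x) ** 2, axis=1)
    for k in range(C_net.shape[0]):
        d += alpha[k + 1] * np.sum((C_neurons[:, k, :] - C_net[k]) ** 2, axis=1)
    b = int(np.argmin(d))
    activity = np.exp(-d[b])        # a(t) = exp(-d_b); equals 1 for a perfect prediction
    return b, d, activity

def habituate(h, tau, kappa):
    """One habituation step: h <- h + tau * kappa * (1 - h) - tau.

    Applied with a larger tau for the BMU than for its neighbours, so that h_b
    decreases faster than h_n.
    """
    return h + tau * kappa * (1.0 - h) - tau
```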
Typically, $h_b$ is decreased faster than $h_n$ with $\\tau_b>\\tau_n$.\n\nThe network is initialized with two neurons and, at each learning iteration, a new neuron is created whenever the activity of the network, $a(t)$, in response to the input $\\textbf{x}(t)$ is smaller than a given insertion threshold $a_T$. Furthermore, $h_b$ must be smaller than a habituation threshold $h_T$ in order for the insertion condition to hold, thereby fostering the training of existing neurons before new ones are added. The new neuron is created halfway between the BMU and the input. The training of the neurons is carried out by adapting the BMU $b$ and the neurons $n$ to which the $b$ is connected: $$\\label{eq:UpdateRateW}\n\\Delta \\textbf{w}_i = \\epsilon_i \\cdot h_i \\cdot (\\textbf{x}(t) - \\textbf{w}_i),$$ $$\\label{eq:UpdateRateC}\n\\Delta \\textbf{c}_{i, k} = \\epsilon_i \\cdot h_i \\cdot (\\textbf{C}_k(t) - \\textbf{c}_{i, k}),$$ with $i\\in\\{b,n\\}$ and where $\\epsilon_i$ is a constant learning rate ($\\epsilon_n<\\epsilon_b$). Furthermore, the habituation counters of the BMU and the neighboring neurons are updated according to Equation\u00a0. Connections between neurons are updated on the basis of neural co-activation, i.e. when two neurons fire together (BMU and second-BMU), a connection between them is created if it does not yet exist. Each connection has an age that increases at each learning iteration. The age of the connection between the BMU and the second-BMU is reset to $0$, whereas the other ages are increased by a value of $1$. The connections with an age greater than a given threshold can be removed, and neurons without connections can be deleted.\n\nFor the purpose of classification, an associative matrix $H(j,l)$ stores the frequency-based distribution of sample labels during the learning phase so that each neuron $j$ stores the number of times that an input with label $l$ had $j$ as its BMU. Thus, the predicted label $\\xi_j$ for a neuron $j$ can be computed as: $$\\label{eq:WinnerLabel}\n\\xi_j = \\arg\\max_{l \\in L} H(j,l),$$ where $l$ is an arbitrary label. Therefore, the unsupervised Gamma-GWR can be used for classification without requiring the number of label classes to be predefined.\n\n## Episodic Memory\n\nThe learning process of growing episodic memory G-EM is unsupervised, thereby creating new neurons or updating existing ones to minimize the discrepancy between the sequential input and its neural representation. In this way, episodic memories can be acquired and fine-tuned iteratively through sensory experience. This is functionally consistent with hippocampal representations, e.g. in the dentate gyrus, which are responsible for pattern separation through the orthogonalization of incoming inputs supporting the auto-associative storage and retrieval of item-specific information from individual episodes\u00a0(Yassa and Stark, 2011; Neuneubel and Knierim, 2014).\n\nGiven an input image frame, the extracted image feature vector (see section\u00a04.1) is given as input to G-EM which recursively integrates the temporal context into the self-organizing neural dynamics. The spatial resolution of G-EM neurons can be tuned through the insertion threshold, $a_T$, with a greater $a_T$ leading to more fine-grained representations since new neurons will be created whenever $a(t)onthisday.com<\/a> and Wikipedia) or people who had important personal events on that day (e.g. date of birth, date of death, graduation day). 
Moreover, the social context can be mined and used to drive the automated design process by including for instance trending topics of Twitter or headlines of today's newspapers. Early examples of such data-driven process have been explored for example by ANGELINA which used titles and articles from The Guardian website and connected them with relevant visuals and the appropriate mood. It is expected that a more maximalist data-driven design process would strengthen the feeling of contemporaneity by including more data sources (i.e.\u00a0more data to transform) or stronger gameplay implications (i.e.\u00a0broader transformations and functional impact).\n\nContemporaneity can make games generated on a specific day appealing to people who wish to get a \"feel\" for current issues but not necessarily dig deeply. On the other hand, the plethora of games (30 games per month alone) and the fact that each game is relevant to that day only could make each game itself less relevant. Contemporaneity and the fleeting nature of daily events could be emphasized if each game was playable only during the day that it was produced, deleting all its files when the next game is generated. This would enhance the perceived value of each game, similarly to *permadeath* in rogue-like games as it enhances nostalgia and the feeling of loss when a favorite gameworld is forever lost.\n\nAny maximalist game could satisfy a contemporaneity goal, but such games can be more amenable to data transformation. For example, data could be transformed to more closely fit the theme of the day, e.g.\u00a0query only female NPCs on International Women's Day. Contemporaneous data can be functional (to more strongly raise awareness of issues) but can also easily be decorative, e.g. giving a snowy appearance to locations during the Christmas holidays.\n\n## Personalization\n\nWhen game content is generated from data, it is possible to highlight certain bits of information. When the game takes player input as part of the data selection process, it personalizes their experience. If player information is available in the form of interests, important personal dates such as birthdays, or even social networks, the potential data sources that can be selected to form the game can be narrowed down. Presenting game content which is personally relevant (e.g. adventures with NPCs based on people living before Christ for an archeology student), or contextually relevant (such as solving the murder of an NPC born on the player's birthday) could contribute to a more engaging experience. It might also be possible to tailor the game's source repositories based on such personal interests. There are numerous online wikis, most of which follow a common format; therefore a user can implicitly (via personal interests) or explicitly (by providing a direct URL) switch search queries of a data-driven maximalist game to a specific wiki of choice.\n\n## Opinion & Critique\n\nOften designers want to make a statement through their games. For instance, Game-o-matic creates games from manually defined associations (as *micro-rhetorics*). *September 12th: A Toy World* (Newsgaming 2003) makes a political statement about the futility of America's War on Terror. Open data could similarly be used in a game to critique some aspect of culture by adding a weight of relevance and realism. For instance, a game such as *September 12th* could use the real map or skyline of Baghdad, or data on daily deaths in Iraq, to instantiate the challenge of the game. 
Similarly, if designers wish to critique the unprofessional use of social media in the White House, one could use real tweets to form dialog lines rather than generating them as in DATA Agent .\n\n## Entertainment\n\nOstensibly, all games have entertainment as a (primary or secondary) purpose. This includes maximalist games, even if they have an additional purpose as listed in this paper. It is meaningful therefore to investigate what data-driven maximalist design has to offer to the entertainment dimension of any such game. Since maximalism \u2014as we define it\u2014 does not necessarily apply to the mechanics of a game, a more relevant dimension is the end-user aesthetic that such games facilitate, following the mechanics-dynamics-aesthetics framework of (). Data-driven maximalist games primarily enhance the aesthetic of *discovery*, similarly to data exploration via such a game, and *expression* if it can be personalized to a user based on provided input such as birthday, hometown or interests. In many ways, data-driven games can enhance the aesthetic of *fantasy* by using and transforming real-world information. DATA agent, for example, describes an alternate history setting where a famous historical figure has been murdered (often by colleagues). The fantasy aesthetic is further enhanced by having a player take the role of a detective traveling through time and space to interrogate suspects. Other possible aesthetics that can be enhanced through data are *sensation* if the data comes from sources of high quality video, audio, or visuals (e.g. paintings of the National Gallery of London), or *fellowship* if the data comes from other users (e.g. anonymous users' trending tweets or social media postings of the player's friends). Evidently, games geared primarily towards entertainment can be fairly flexible in terms of data transformation, and can adapt the data to the intended game mechanics and game flow. While data can act as a decoration in such games (if intended to enhance the sensation aesthetic), in general games intended primarily for entertainment are fairly focused in the mechanics and feedback loops, and thus data would primarily be transformed into functional elements.\n\n## Human Computation\n\nPresenting hand-picked results from a vast database in an engaging, playful way is not only relevant for humans to consume. The human-computer interaction loop can be closed if human users provide feedback on the quality of the data itself. This human feedback can be used internally by the game, adapting its criteria in order to avoid unwanted data repositories, queries, associations or transformations made to the data. For instance, a future DATA agent version could re-compute the set of suspects for the next games (removing one or more suspects from the pool of possible suspects) if a player provides negative feedback explicitly (e.g. via a 'report' button) or implicitly (e.g. by being unable to solve the mystery). More ambitiously, the positive or negative feedback of players engaging with the playable \u2014transformed\u2014 data can be fed back to the source repositories which instantiated the game. This can allow for instance misinformation found in Wikipedia to be flagged, alerting moderators that either a human error (e.g. a wrong date or a misquote) or malformed data (e.g. unreadable titles) exists and must be corrected. 
Whether these corrections should be made by an expert human curator, or directly based on player interactions with the game could be a direction for future research.\n\n# Issues with Data-Driven Game Design\n\nAccomplishing *good* data-driven maximalist game design is a challenge. While the previous sections presented ways of doing so, there are still many implementation- or game-specific details which affect the design process. Beyond the core challenge of a good game design, there are several peripheral challenges to the design task itself which however spring from the practice of data-driven design. We elaborate on those peripheral challenges here.\n\n## Legal & Ethical Issues\n\nAny software which relies on external data that it cannot control may be prone to legal or ethical violations. Privacy of personal information may be a concern for a game generated from the social media profile of a user, especially if that game can then be played by a broader set of people. Using results from Google Images may lead to direct infringements of copyrights; using results from models built from text mining, on the other hand, may or may not result in such copyright infringements depending on whether the model returns actual copyrighted material. The issue of copyright becomes more complex when the data is transformed: relevant to data mining, a judge has ruled for fair use for Google Books as \"Google Books is also transformative in the sense that it has transformed book text into data for purposes of substantive research, including data mining and text mining in new areas\" . One can only assume that transformations of data into game content, depending on the fidelity to the original data and the purpose (e.g. data exploration and education), would make for a clearer case of fair use.\n\nGame content built on fair use or open data combined into an interactive experience may lead to unexpected issues. This is especially true in cases where the player has sufficient agency to interpret or act upon content of high fidelity with the original data in an open-ended fashion: consider, for example, a violent shooter game where opponents' visual depictions (3D models or faces) are those of Hollywood celebrities. Even in Data Adventures, where player interaction is fairly \"curated\", a generated game featured solving the murder of Justin Bieber . Apart from the fictional narrative of a popular celebrity's death, the game identifies another celebrity as the murderer: both of these decisions may cause concern to highly visible people (be they depicted murdered, murderers, or suspects). A disclaimer that the game characters are fictional can only alleviate that much of the ethical responsibility of game designers for such data-driven games.\n\n## Misinformation & Bias\n\nConnected to the concerns of misrepresenting contemporary or historical celebrities are the inherent issues of error in the source data. Before data is transformed into game content, open repositories that can be edited by anyone can be saturated by personal opinion and perhaps deliberate misinformation. As noted previously, not all data provided by different stakeholders in the information age are factual; this may be more pronounced in certain repositories than others. Beyond deliberate misinformation, an inherent bias is also present even in \"objective\" data. For example, algorithms for Google query results or image results are based on machine learned models that may favor stereotypes (based on what most people think of a topic). 
Even though WikiMystery uses what we arguably consider \"objective\" repositories such as Wikipedia, the 8 most popular locations in 100 generated games were in North America , pointing to a bias of the articles or the DBpedia entries chosen to be digitized. Other cases where misinformation may arise is when different content is combined inaccurately: examples from the Data Adventures series include cases where an image search for a character named Margaret Thatcher resulted in an image of Aung San Suu Kyi . When data-driven design uses social network data such as trending topics on Twitter, then the potential for sensitive or provocative topics to be paired with inappropriate content or combined in an insensitive way becomes a real possibility. If data-driven maximalist games are intended towards critique or opinion, the misinformation or misappropriation could be deliberately inserted by a designer (by pairing different repositories) or accidentally introduce a message that runs contrary to the intended one.\n\n# Outlook\n\nMaximalist game design encourages creation through reuse and combination. If one imagines its most powerful form, it would likely involve taking any mixture of information, pouring it into any game content cast, and reveling in its results. It would provide a freedom to interact with any data in the best, most personalized way possible.\n\nCurrent PCG techniques allow for unlimited playability for a large variety of games. However, they can lack a level of contemporaneity and relevance that could be provided by open data. Additionally, research has suggested that concepts can be effectively learned through gameplay . Using games as a method of interacting with open data may create a novel way for learning about the data in a fun way. Rather than use Wikipedia to learn about specific people and places for the first time, players could play games where they can talk to these people and visit these places.\n\nOpen data is available to all, to create as well as consume. Sometimes the data is inaccurate. The idea of visualizing this information in any form can provide means to \"debug\" the original data, in a more engaging way than just browsing Wikipedia or poring through a massive database.\n\n# Conclusion\n\nThis paper discussed an approach to game design inspired by the notion of maximalism in the arts. It encourages the reuse and combination of heterogeneous data sources in the creative design process. Maximalist game design embraces the generation of game content using different data sources, re-mixing them in order to achieve something new.\n\nWe drew from our experience with the Data Adventures series to propose a mapping of the maximalist game design space along two dimensions, *data transformation versus data fidelity* and *functionality versus decoration*. The former focuses on the extent that the data is transformed from its original form, while the latter refers to the actual role of the data in the game. Additionally, we described how maximalist game design can serve different purposes in the design process and which tradeoffs emerge from each purpose. Finally, we highlight issues and ethical concerns that may arise from and in maximalist games.\n\n# Acknowledgements\n\nGabriella Barros acknowledges financial support from CAPES and Science Without Borders program, BEX 1372713-3. 
Antonios Liapis has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 693150.\n\n[^1]: http:\/\/2pages.net\/wikirace.php","meta":{"dup_signals":{"dup_doc_count":11},"filename":"out\/1805.12475_extract_maximalism.tex.md"},"subset":"arxiv"} +{"text":"author: Toshiyuki Fukushige$^*$ \nPiet Hut$^+$ \nJunichiro Makino$^*$ \n$*$ Department of Systems Sciences, College of Arts and Sciences, \nUniversity of Tokyo \n$+$Institute for Advanced Study\ntitle: **High-Performance Special-Purpose Computers in Science**\n\npt\n\nThe next decade will be an exciting time for computational physicists. After 50 years of being forced to use standardized commercial equipment, it will finally become relatively straightforward to adapt one's computing tools to one's own needs. The breakthrough that opens this new era is the now wide-spread availability of *programmable chips* that allow virtually every computational scientist to design his or her own special-purpose computer.\n\n## Towards Real Numerical Laboratories\n\nUnlike real laboratories, numerical laboratories have been constructed almost exclusively from commercial products, which gave little flexibility. Starting in the late seventies, after the first microchips became available, there have been some exceptions. However, only the bravest souls dared to design their own equipment (Bakker and Bruin 1988).\n\nIn those days, speeding up the most compute-intensive few lines of FORTRAN code, in a large-scale simulation project, required building a bulky piece of electronic hardware consisting of tens of large circuit boards. By the late eighties, things looked a lot better already, since it had become possible to integrate such circuits into a single custom chip. However, the barrier, real or perceived, against building your own hardware was still substantial.\n\nBy now, after another ten years, the barrier has almost disappeared. With programmable chips, the question is not so much whether to make one's hands dirty building a machine, but rather how to program: whether to program the existing central processing unit of a commercial machine, or whether to program a more generic set of distributed processing units. In both cases, the trick is to find the best map between the scientific problem and the layout of the computational hardware. What favors the programmable option over the use of standard CPUs are the facts that: 1) supercomputers, optimized for scientific calculations, are rapidly disappearing, leaving us with less-than-optimal vanilla-flavored computers; 2) generic CPU chips are now becoming so complex that the overwhelming fraction of silicon real estate is dedicated to the electronic equivalent of bureaucracy rather than raw computing.\n\nOf course, there is still one major drawback to the use of programmable chips in computational science: habit. It takes a while for scientists to switch their approach to a problem, even when more efficient methods have become available. Therefore, in order to prepare for the future, it is useful to look back at the past, to see what has already been accomplished during the last ten years, using special-purpose computers. Anything that has been done in this area, relying on specially designed chips, can in principle be done now with programmable chips, at only a fraction of the effort involved. 
Let us focus on a specific case.\n\n## A Case Study: The GRAPE Project\n\nOur GRAPE project, started 10 years ago, is one example in which computational physicists developed special-purpose computers successfully. Here, success means that the developed machine made it possible to solve problems which were impossible to solve on general-purpose computers.\n\nOne of these projects has resulted in the GRAPE (short for GRAvity PipE) family of special-purpose hardware, designed and built by a small group of astrophysicists at the University of Tokyo (Makino and Taiji 1998). Like a graphics accelerator speeding up graphics calculations on a workstation, without changing the software running on that workstation, the GRAPE acts as a Newtonian force accelerator, in the form of an attached piece of hardware. In a large-scale gravitational N-body calculation, almost all instructions of the corresponding computer program are thus performed on a standard work station, while only the gravitational force calculations, in innermost loop, are replaced by a function call to the special-purpose hardware.\n\nThe GRAPE-4, which was completed in 1995 for the total budget of 240 M JYE (around 2 M dollars), offered a peak speed of 1.08 Tflops. On practical problems, a significant fraction of this speed can be actually used. For example, the Grape-4 developers have won the Gordon Bell prize for high-performance computing for two years in a row. In 1995, the prize was awarded to Junichiro Makino and Makoto Taiji for a sustained speed of 112 Gflops, achieved using one-sixth of the full machine on a 128k particle simulation of the evolution of a double black-hole system in the core of a galaxy. The 1996 prize was awarded to Toshiyuki Fukushige and Junichiro Makino for a 332 Gflops simulation of the formation of a cold dark matter halo around a galaxy, modeled using 768k particles on three-quarters of the full machine. The first general-purpose computer to offer a similar level of the performance is the 9000-processor ASCI Red machine, with a price tag around 50 M dollars, completed in late 1997.\n\nIn addition, more than 40 copies of small (5-30 Gflops) GRAPE-3 and GRAPE-4 versions are now being used in a major astrophysical institutes in many different countries.\n\nIn a year from now, the GRAPE-6 will become available, at a speed that will be at least 100 times faster than that of the GRAPE-4. In addition, single board versions will become available (the 'GRAPE-6 junior'), that can be purchased by individual astrophysicists, to run at a speed of 500 Gflops, coupled to a normal workstation as a front-end. Such a single board will thus provide a speed-up of well over a factor $1,000$ for a price comparable to that of the workstation.\n\nWhat is the main reason behind the success of the GRAPE? From a technological point of view, it is not overly difficult to design a special-purpose computer with a cost-performance better than that of commercially available general-purpose computers. The reason is simply that an ever diminishing fraction of the available transistors in the present-day microprocessors are actually used in arithmetic operations. In contrast, in GRAPE systems essentially all available transistors in a processor chip are used to implement arithmetic units.\n\nThis main reason behind the optimal usage of silicon real estate in the GRAPE is that the data flow in a GRAPE chip is fixed, while most of the transistors in present-day microprocessors are utilize to provide flexible data flow. 
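To make concrete what is being "fixed" in silicon, the sketch below shows the innermost force loop of a direct-summation N-body code in Python; in a GRAPE-accelerated code the host integrator replaces this loop with a single call to the attached board (the `grape_compute_accel` name is purely illustrative), while everything else stays on the workstation. Units with G = 1 and a softening length `eps` are assumed.

```python
import numpy as np

def accel_software(pos, mass, eps=1.0e-4):
    """Direct-summation Newtonian accelerations, O(N^2) pairwise interactions.

    This is the fixed, repetitive computation that a GRAPE pipeline evaluates in
    hardware; pos is an (N, 3) array of positions, mass an (N,) array of masses.
    """
    acc = np.zeros_like(pos)
    for i in range(len(mass)):
        dr = pos - pos[i]                          # displacement vectors to all particles
        r2 = np.sum(dr * dr, axis=1) + eps ** 2    # softened squared distances
        inv_r3 = r2 ** -1.5
        inv_r3[i] = 0.0                            # exclude the self-interaction
        acc[i] = np.sum((mass * inv_r3)[:, None] * dr, axis=0)
    return acc

# With an attached accelerator the call would instead look like (hypothetical API):
# acc = grape_compute_accel(pos, mass)
```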
In other words, the key trick to outperform general-purpose machines is to build a hardware accelerator for a specialized function, and not to build a programmable computer. To design a programmable computer is a difficult task, and you have a very little chance (unless you are a Seymour Cray) to outwit competent designers in large companies. However, designing a specialized piece of hardware for a single function is not something computer architects do. So there is essentially no competition.\n\n## Having it all\n\nThe interesting shift, alluded to in our opening statement, is that it is no longer necessary to choose between programming a computer and building a special-purpose computer. With the availability of increasingly efficient programmable chips, one can *program* an off-the-shelf chip, such as a FPGA (Field-Programmable Gate Array) chip, to *emulate a special-purpose chip*. It is like having your cake and eating it: you can emulate a fixed date flow. Before you program the chip, it is far more flexible than a standard CPU, in the sense that you are not bound by a given instruction set, providing all instructions yourself, from the bottom up. And after having programmed the chip, it has turned into a data flow machine, without any need to decode additional information on the fly.\n\nOf course, there is a drawback to any new development. With current FPGAs, flexibility comes with a cost: more than 90% of the silicon resource is used to provide programmability. Even so, custom computing machines based on FPGA have become a viable alternative for both general-purpose computers and specialized hardwares, given the fact that standard CPUs tend to have lower and lower efficiency as well, while their complexity keeps increasing.\n\nWe have developed a small system with two FPGA chips to evaluate the potential of FPGA technology. The current FPGAs turned out to be large enough to house a complete GRAPE-3 chip (110K transistors). The chips available in next year will be large enough to house a GRAPE-4 chip and the effective performance would exceed 1 Gflops per chip.\n\n## Outlook\n\nTo summarize, FPGAs offer the possibility of combining the flexibility of conventional programmable computer and the high throughput of special-purpose hardware. To play the devil's advocate, one might argue that FPGAs could combine the difficulty of design of special-purpose hardware and the low efficiency of the programmable computer. To be honest, at present there still is that danger. To continue the devil's advocate argument: implementing a function onto an FPGA is analogous to programming a universal Turing machine, in the sense that it offers the maximum flexibility at the lowest level. Clearly, a more sensible design methodology is necessary. On the bright side, rapid advances are being made in the development of higher level tools for implementing algorithms on FPGAs. And the more the parallelization bottleneck, using general-purpose computers, will be felt, the larger the incentive will become to switch to a use of programmable chips.\n\nThus FPGAs are not a universal solution for all problems in computational physics. A full-custom chip offers clear advantages when its high initial development cost can be amortized by mass production. General-purpose computers are still better in developing sophisticated algorithms, and experimenting with them. 
But the use of FPGAs can be expected to increase rapidly, in computational science, for a wide range of problems, that are too complex to 'put in stone' in the form of a special-purpose chip, but not too complex to program onto a FPGA. In that way, problems which cannot be solved in a practical time on programmable computers, do not have to be shelved until commercial computers catch up and deliver the required speed. As the example of the GRAPE has shown us, even a small group of computational scientists can solve their particular computational problems years ahead of (the commercial) schedule.\n\n## References\n\nBakker A. F. and Bruin C. (1988) Design and implementation of the Delft molecular-dynamics processor. In Alder B. J. (ed) Special Purpose Computers, pages 183\u2013232. (Academic Press, San Diego)\n\nMakino J. and Taiji M. (1998) Special Purpose Computers for Scientific Simulations \u2013 The GRAPE systems. (John Wiley and Sons, Chichester)","meta":{"dup_signals":{"dup_doc_count":14,"dup_dump_count":3,"dup_details":{"curated_sources":2,"2017-13":1,"unknown":11}},"filename":"out\/astro-ph9811419_extract_CiSElatex.tex.md"},"subset":"arxiv"} +{"text":"author: Marc Mosko \nPARC \nfirstname.lastname@example.com; Ignacio Solis[^1] \nLinkedIn \nfirstname.lastname@example.com; Christopher A. Wood \nUniversity of California Irvine \nfirstname.lastname@example.com\nbibliography: references.bib\ndate: 2024-10-01 \n Version: 1.0\ntitle: **Content-Centric Networking**\n .\n .\n ------------------------------------------------------------------------\n .\n \n Architectural Overview and Protocol Description\n\n# Overview\n\nCCNx is a request and response protocol to fetch chunks of data using a name. The integrity of each chunk may be directly asserted through a digital signature or message authentication code (MAC), or, alternatively, indirectly via hash chains. Chunks may also carry weaker message integrity checks (MICs) or no integrity protection mechanism at all. Because provenance information is carried with each chunk (or larger indirectly protected block), we no longer need to rely on host identities, such as those derived from TLS certificates, to ascertain the chunk legitimacy. Data integrity is therefore a core feature of CCNx; it does not rely on the data transmission channel. There are several options for data confidentiality, discussed later.\n\nAs a request and response protocol, CCNx may be carried over many different transports. In use today are Ethernet, TCP, UDP, 802.15.4, GTP, GRE, DTLS, TLS, and others. While the specific wire format of CCNx may vary to some extent based on transport, the core principles and behaviors of CCNx outlined in this document should remain fixed.\n\nCCNx uses hierarchical names to identify bytes of payload. The Name combines a routable prefix with an arbitrary application-dependent suffix assigned by the publisher to a piece of content. The result is a \"named payload\". This is different from other systems that use only self-certifying names, where the payload name is intrinsically derivable from the payload or its realization in a network object (e.g., a SHA-256 hash of the payload or network object). In human- readable form, we represent names as a \"ccnx:\" scheme URI , though the canonical encoding should be octet strings. In this respect, we speak of a name being made up of hierarchical path segments, which is the URI terminology.\n\nThis document only defines the general properties of CCNx names. 
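As a small illustration of the name model just described, the sketch below represents a name as a list of opaque octet-string segments and renders it in the human-readable "ccnx:" URI form; the segment values are made up, and the normative representation remains the octet-string encoding rather than the URI.

```python
from urllib.parse import quote

def name_to_uri(segments):
    """Render a CCNx name (a list of opaque octet-string path segments) as a ccnx: URI.

    The URI form is only a human-readable convenience; matching and forwarding operate
    on the raw octet strings, segment by segment.
    """
    return "ccnx:/" + "/".join(quote(seg, safe="") for seg in segments)

name = [b"example", b"weather", b"temperature"]   # illustrative segments
print(name_to_uri(name))                           # ccnx:/example/weather/temperature
```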
In some isolated environments, CCNx users may be able to use any name they choose and either inject that name (or prefix) into a routing protocol or use other information foraging techniques. In the Internet environment, there will be policies around the formats of names and assignments of names to publishers, though those are not specified here.\n\nThe key concept of CCNx is that a subjective name is (cryptographically) bound to a fixed payload. These (publisher- generated) bindings can therefore be (cryptographically) verified. For example, a publisher could compute a cryptographic hash over the name and payload, sign the hash, and deliver the tuple Name, Payload, Validation. Consumers of this data can check the binding integrity by re-computing the same cryptographic hash and verifying the digital signature in Validation. Additional information would be included as needed by specific validation mechanisms. Therefore, we divide Validation in to a ValidationAlgorithm and a ValidationPayload. The ValidationAlgorithm has information about the crypto suite and parameters. In particular, the ValidationAlgorithm usually has a field called KeyId which identifies the public key used by the validation, when applicable. The ValidationPayload is the output of the validation algorithm, such as a CRC value, an HMAC output, or an RSA signature.\n\nIn addition to the essential Name, Payload, and Validation sections, a CCNx user may need to include some other signaling information. This could include a hint about the type of Payload (e.g., application data, a cryptographic key, etc.) or cache control directives, etc. We will call this extra signaling information ExtraFields.\n\nA named payload is thus the nested tuple $$((\\mathsf{Name}, \\mathsf{ExtraFields}, \\mathsf{Payload}, \\mathsf{ValidationAlgorithm}), \\mathsf{ValidationPayload}),$$ where all fields in the inner tuple are covered by the value in the validation payload.\n\nCCNx specifies a network protocol around Interests (request messages) and Content Objects (response messages) to move named payloads. An Interest includes the Name \u2013 which identifies the desired response \u2013 and two optional limiting restrictions. The first restriction on the KeyId to limit responses to those signed with a ValidationAlgorithm KeyId field equal to the restriction. The second is the ContentObjectHash restriction, which limits the response to one where the cryptographic hash of the entire named payload is equal to the restriction.\n\nThe hierarchy of a CCNx Name is used for routing via the longest matching prefix in a Forwarder. The longest matching prefix is computed name segment by name segment in the hierarchical path name, where each name segment must be exactly equal to match. There is no requirement that the prefix be globally routable. Within a deployment any local routing may be used, even one that only uses a single flat (non-hierarchical) name segment.\n\nAnother concept of CCNx is that there should be flow balance between Interest messages and Content Object messages. At the network level, an Interest traveling along a single path should elicit no more than one Content Object response. If some node sends the Interest along more than one path, that node should consolidate the responses such that only one Content Object flows back towards the requester. 
If an Interest is sent broadcast or multicast on a multiple-access media, the sender should be prepared for multiple responses unless some other media-dependent mechanism like gossip suppression or leader election is used.\n\nAs an Interest travels the forward path following the Forwarding Information Base (FIB), it establishes state at each forwarder such that a Content Object response can trace its way back to the original requester(s) without the requester needing to include a routable return address. We use the notional Pending Interest Table (PIT) as a method to store state that facilitates the return of a Content Object. The PIT table is not mandated by the specification.\n\nThe notional PIT table stores the last hop of an Interest plus its Name and optional restrictions. This is the data required to match a Content Object to an Interest (see Section ). When a Content Object arrives, it must be matched against the PIT to determine which entries it satisfies. For each such entry, at most one copy of the Content Object is sent to each listed last hop in the PIT entries.\n\nIf multiple Interests with the same Name, KeyIdRestriction, ContentObjectHashRestriction tuple arrive at a node before a Content Object matching the first Interest comes back, they are grouped in the same PIT entry and their last hops aggregated (see Section 2.4.2). Thus, one Content Object might satisfy multiple pending Interests in a PIT.\n\nIn CCNx, higher-layer protocols often become so-called \"name-based protocols\" because they operate on the CCNx Name. For example, a versioning protocol might append additional name segments to convey state about the version of payload. A content discovery protocol might append certain protocol-specific name segments to a prefix to discover content under that prefix. Many such protocols may exist and apply their own rules to Names. They may be layered with each protocol encapsulating (to the left) a higher layer's Name prefix.\n\nThis document also describes a control message called an InterestReturn. A network element may return an Interest message to a previous hop if there is an error processing the Interest. The returned Interest may be further processed at the previous hop or returned towards the Interest origin. When a node returns an Interest it indicates that the previous hop should not expect a response from that node for the Interest, i.e., there is no PIT entry left at the returning node for a Content Object to follow.\n\nThere are multiple ways to describe larger objects in CCNx. Some options may use the namespace while others may use a structure such as a Manifest. This document does not address these options at this time.\n\nThe remainder of this document describes a named payload as well as the Interest and Content Object network protocol behavior in detail.\n\n# Protocol\n\nCCNx is a request and response protocol. A request is called an Interest and a response is called a ContentObject. CCNx also uses a 1-hop control message called InterestReturn. These are, as a group, called CCNx Messages.\n\n## Message Grammar\n\nThe CCNx message ABNF grammar is show in Figure 1. The grammar does not include any encoding delimiters, such as TLVs. Specific wire encodings are given in a separate document. If a Validation section exists, the Validation Algorithm covers from the Body (BodyName or BodyOptName) through the end of the ValidationAlg section. 
The InterestLifetime, CacheTime, and Return Code fields exist outside of the validation envelope and may be modified.\n\nThe various fields \u2013 in alphabetical order \u2013 are defined as:\n\n- AbsTime: Absolute times are conveyed as the 64-bit UTC time in milliseconds since the epoch (standard POSIX time).\n\n- CacheTime: The absolute time after which the publisher believes there is low value in caching the content object. This is a recommendation to caches (see Section 4).\n\n- ConObjField: These are optional fields that may appear in a Content Object.\n\n- ConObjHash: The value of the Content Object Hash, which is the SHA256-32 over the message from the beginning of the body to the end of the message. Note that this coverage area is different from the ValidationAlg. This value SHOULD NOT be trusted across domains (see Section 5).\n\n- ExpiryTime: An absolute time after which the content object should be considered expired (see Section 4).\n\n- HopLimit: Interest messages may loop if there are loops in the forwarding plane. To eventually terminate loops, each Interest carries a HopLimit that is decremented at each hop; once it reaches zero, the Interest is no longer forwarded. See Section 2.4.\n\n- InterestField: These are optional fields that may appear in an Interest message.\n\n- KeyIdRestr: The KeyId Restriction. A Content Object must have a KeyId with the same value as the restriction.\n\n- ObjHashRestr: The Content Object Hash Restriction. A content object must hash to the same value as the restriction using the same HashType. The ObjHashRestr MUST use SHA256-32.\n\n- KeyId: An identifier for the key used in the ValidationAlg. For public key systems, this should be the SHA-256 hash of the public key. For symmetric key systems, it should be an identifier agreed upon by the parties.\n\n- KeyLink: A Link (see Section 6) that names how to retrieve the key used to verify the ValidationPayload. A message SHOULD NOT have both a KeyLink and a PublicKey.\n\n- Lifetime: The approximate time during which a requester is willing to wait for a response, usually measured in seconds. It is not strongly related to the network round trip time, though it must necessarily be larger.\n\n- Name: A name is made up of a non-empty first segment followed by zero or more additional segments, which may be of 0 length. Path segments are opaque octet strings, and are thus case-sensitive if encoding UTF-8. An Interest MUST have a Name. A ContentObject MAY have a Name (see Section 9). The segments of a name are said to be complete if they uniquely identify a single Content Object. A name is exact if its segments are complete. An Interest carrying a full name is one which specifies an exact name and the ObjHashRestr of the corresponding Content Object.\n\n- Payload: The message's data, as defined by PayloadType.\n\n- PayloadType: The format of the Payload. If missing, assume DataType. DataType means the payload is opaque application bytes. KeyType means the payload is a DER-encoded public key. LinkType means it is one or more Links (see Section 6).\n\n- PublicKey: Some applications may wish to embed the public key used to verify the signature within the message itself. The PublicKey is DER-encoded.
A message SHOULD NOT have both a KeyLink and a PublicKey.\n\n- RelTime: A relative time, measured in milliseconds.\n\n- ReturnCode: States the reason an Interest message is being returned to the previous hop (see Section 10.2).\n\n- SigTime: The absolute time (UTC milliseconds) when the signature was generated.\n\n- Hash: Hash values carried in a Message carry a HashType identifying the algorithm used to generate the hash, followed by the hash value. This form is to allow hash agility. Some fields may mandate a specific HashType.\n\n## Consumer Behavior\n\nTo request a piece of content for a given $$(\\mathsf{Name}, [\\mathsf{KeyIdRestr}], [\\mathsf{ObjHashRestr}])$$ tuple, a consumer creates an Interest message with those values. It MAY add a validation section, typically only a CRC32C. A consumer MAY put a Payload field in an Interest to send additional data to the producer beyond what is in the Name. The Name is used for routing and may be remembered at each hop in the notional PIT table to facilitate returning a content object; storing large amounts of state in the Name could lead to high memory requirements. Because the Payload is not considered when forwarding an Interest or matching a Content Object to an Interest, a consumer SHOULD put an Interest Payload ID (see Section 3.2) as part of the name to allow a forwarder to match Interests to content objects and avoid aggregating Interests with different payloads. Similarly, if a consumer uses a MAC or a signature, it SHOULD also include a unique segment as part of the name to prevent the Interest from being aggregated with other Interests or satisfied by a Content Object that has no relation to the validation.\n\nThe consumer SHOULD specify an InterestLifetime, which is the length of time the consumer is willing to wait for a response. The InterestLifetime is an application-scale time, not a network round trip time (see Section 2.4.2). If not present, the InterestLifetime will use a default value (TO_INTERESTLIFETIME).\n\nThe consumer SHOULD set the Interest HopLimit to a reasonable value or use the default 255. If the consumer knows the distance to the producer via routing, it SHOULD use that value.\n\nA consumer hands off the Interest to its first forwarder, which will then forward the Interest over the network to a publisher (or replica) that may satisfy it based on the name (see Section 2.4).\n\nInterest messages are unreliable. A consumer SHOULD run a transport protocol that will retry the Interest if it goes unanswered, up to the InterestLifetime. No transport protocol is specified in this document.\n\nThe network MAY send to the consumer an InterestReturn message that indicates the network cannot fulfill the Interest. The ReturnCode specifies the reason for the failure, such as no route or congestion. Depending on the ReturnCode, the consumer MAY retry the Interest or MAY return an error to the requesting application.\n\nIf the content was found and returned by the first forwarder, the consumer will receive a ContentObject. The consumer SHOULD:\n\n- Ensure the content object is properly formatted.\n\n- Verify that the returned Name matches a pending request. If the request also had KeyIdRestr and ObjHashRestr, it should also validate those properties.\n\n- If the content object is signed, it SHOULD cryptographically verify the signature.
If it does not have the corresponding key, it SHOULD fetch the key, such as from a key resolution service or via the KeyLink.\n\n- If the signature has a SigTime, the consumer MAY use that in considering if the signature is valid. For example, if the consumer is asking for dynamically generated content, it should expect the SigTime to not be before the time the Interest was generated.\n\n- If the content object is signed, it should assert the trustworthiness of the signing key to the namespace. Such an assertion is beyond the scope of this document, though one may use traditional PKI methods, a trusted key resolution service, or methods like schematized trust .\n\n- It MAY cache the content object for future use, up to the ExpiryTime if present.\n\n- A consumer MAY accept a content object off the wire that is expired. It may happen that a packet expires while in flight, and there is no requirement that forwarders drop expired packets in flight. The only requirement is that content stores, caches, or producers MUST NOT respond with an expired content object.\n\n## Publisher Behavior\n\nThis document does not specify the method by which names populate a Forwarding Information Base (FIB) table at forwarders (see Section 2.4). A publisher is either configured with one or more name prefixes under which it may create content, or it chooses its name prefixes and informs the routing layer to advertise those prefixes.\n\nWhen a publisher receives an Interest, it SHOULD:\n\n- Verify that the Interest is part of the publishers namespace(s).\n\n- If the Interest has a Validation section, verify the ValidationPayload. Usually an Interest will only have a CRC32C unless the publisher application specifically accommodates other validations. The publisher MAY choose to drop Interests that carry a Validation section if the publisher application does not expect those signatures as this could be a form of computational denial of service. If the signature requires a key that the publisher does not have, it is NOT RECOMMENDED that the publisher fetch the key over the network, unless it is part of the application's expected behavior.\n\n- Retrieve or generate the requested content object and return it to the Interest's previous hop. If the requested content cannot be returned, the publisher SHOULD reply with an InterestReturn or a content object with application payload that says the content is not available; this content object should have a short ExpiryTime in the future.\n\n## Forwarder Behavior\n\nA forwarder routes Interest messages based on a Forwarding Information Base (FIB), returns Content Objects that match Interests to the Interest's previous hop, and processes InterestReturn control messages. It may also keep a cache of Content Objects in the notional Content Store table. These functions are shown in Figure . These and other external behaviors are described in the remainder of this section.\n\nIn this document, we will use two processing pipelines, one for Interests and one for Content Objects. Interest processing is made up of checking for duplicate Interests in the PIT (see Section 2.4.2), checking for a cached Content Object in the Content Store (see Section 2.4.3), and forwarding an Interest via the FIB. Content Store processing is made up of checking for matching Interests in the PIT and forwarding to those previous hops.\n\n### Interest HopLimit\n\nInterest looping is not prevented in CCNx. 
An Interest traversing loops is eventually discarded using the hop-limit field of the Interest, which is decremented at each hop traversed by the Interest. Every Interest MUST carry a HopLimit.\n\nWhen an Interest is received from another forwarder, the HopLimit MUST be positive. A forwarder MUST decrement the HopLimit of an Interest by at least 1 before it is forwarded. If the HopLimit equals 0, the Interest MUST NOT be forwarded to another forwarder; it MAY be sent to a publisher application or serviced from a local Content Store.\n\n### Interest Aggregation\n\nInterest aggregation occurs when a forwarder receives an Interest message that could be satisfied by another Interest message already forwarded by the node, so the forwarder suppresses the new Interest; it only records the additional previous hop so a Content Object sent in response to the first Interest will satisfy both Interests.\n\nCCNx uses an interest aggregation rule that assumes the InterestLifetime is akin to a subscription time and is not a network round trip time. Some previous aggregation rules assumed the lifetime was a round trip time, but this leads to problems: an Interest may expire before a response arrives if the RTT is estimated too short, or the rule may interfere with an ARQ scheme that wants to re-transmit an Interest when a prior Interest over-estimated the RTT.\n\nA forwarder MAY implement an Interest aggregation scheme. If it does not, then it will forward all Interest messages. This does not imply that multiple, possibly identical, Content Objects will come back. A forwarder MUST still satisfy all pending Interests, so one Content Object could satisfy multiple similar Interests, even if the forwarder did not suppress duplicate Interest messages.\n\nA RECOMMENDED Interest aggregation scheme is:\n\n- Two Interests are considered 'similar' if they have the same Name, KeyIdRestr, and ObjHashRestr.\n\n- Let the notional value InterestExpiry (a local value at the forwarder) be equal to the receive time plus the InterestLifetime (or a platform-dependent default value if not present).\n\n- An Interest record (PIT entry) is considered invalid if its InterestExpiry time is in the past.\n\n- The first reception of an Interest MUST be forwarded.\n\n- A second or later reception of an Interest similar to a valid pending Interest from the same previous hop MUST be forwarded. We consider these retransmission requests.\n\n- A second or later reception of an Interest similar to a valid pending Interest from a new previous hop MAY be aggregated (not forwarded).\n\n- Aggregating an Interest MUST extend the InterestExpiry time of the Interest record. An implementation MAY keep a single InterestExpiry time for all previous hops or MAY keep the InterestExpiry time per previous hop. In the first case, the forwarder might send a ContentObject down a path that is no longer waiting for it, in which case the previous hop (next hop of the Content Object) would drop it.\n\n### Content Store Behavior\n\nThe ContentStore is a special cache that sits on the fast path of a CCNx forwarder. It is an optional component. It serves to repair lost packets and handle flash requests for popular content. It could be pre-populated or use opportunistic caching. Because the Content Store could serve to amplify an attack via cache poisoning, there are special rules about how a Content Store behaves.\n\n1. A forwarder MAY implement a ContentStore. If it does, the Content Store matches a Content Object to an Interest via the normal matching rules (see Section 9).\n\n2.
If an Interest has a KeyIdRestr, then the ContentStore MUST NOT reply unless it knows the signature on the matching ContentObject is correct. It may do this by external knowledge (i.e., in a managed system pre-populating the cache) or by having the public key and cryptographically verifying the signature. If the public key is provided in the ContentObject itself (i.e., in the PublicKey field) or in the Interest, the ContentStore MUST verify that the public key's SHA-256 hash is equal to the KeyId and that it verifies the signature. A ContentStore MAY verify the digital signature of a Content Object before it is cached, but it is not required to do so. A ContentStore SHOULD NOT fetch keys over the network. If it cannot or has not yet verified the signature, it should treat the Interest as a cache miss.\n\n3. If an Interest has an ObjHashRestr, then the ContentStore MUST NOT reply unless it knows the matching ContentObject has the correct hash. If it cannot verify the hash, then it should treat the Interest as a cache miss.\n\n4. It MUST obey the Cache Control directives (see Section 4).\n\n### Interest Pipeline\n\n1. Perform the HopLimit check (see Section 2.4.1).\n\n2. Determine if the Interest can be aggregated, as per Section 2.4.2. If it can be, aggregate and do not forward the Interest.\n\n3. If forwarding the Interest, check for a hit in the Content Store, as per Section 2.4.3. If a matching Content Object is found, return it to the Interest's previous hop. This injects the Content Object into the Content Object pipeline, as per Section 2.4.5.\n\n4. Lookup the Interest in the FIB. Longest prefix match (LPM) is performed name segment by name segment (not byte or bit). It SHOULD exclude the Interest's previous hop. If a match is found, forward the Interest. If no match is found or the forwarder chooses not to forward due to a local condition (e.g., congestion), it SHOULD send an InterestReturn message, as per Section 10.\n\n### Content Object Pipeline\n\n1. It is RECOMMENDED that a forwarder that receives a content object check that the ContentObject came from an expected previous hop. An expected previous hop is one pointed to by the FIB or one recorded in the PIT as having had a matching Interest sent that way.\n\n2. A Content Object MUST be matched to all pending Interests that satisfy the matching rules (see Section 9). Each satisfied pending Interest MUST then be removed from the set of pending Interests.\n\n3. A forwarder SHOULD NOT send more than one copy of the received Content Object to the same Interest previous hop. It may happen, for example, that two Interests ask for the same Content Object in different ways (e.g., by name and by name and KeyId) and that they both come from the same previous hop. It is normal to send the same content object multiple times on the same interface, such as Ethernet, if it is going to different previous hops.\n\n4. A Content Object SHOULD only be put in the Content Store if it satisfied an Interest (and passed rule \\#1 above). This is to reduce the chances of cache poisoning.\n\n# Names\n\nA CCNx name is a composition of name segments. Each name segment carries a label identifying the purpose of the name segment, and a value. For example, some name segments are general names and some serve specific purposes, such as carrying version information or the sequencing of many chunks of a large object into smaller, signed Content Objects.\n\nThere are three different types of names in CCNx: prefix, exact, and full names.
A prefix name is simply a name that does not uniquely identify a single Content Object, but rather a namespace or prefix of an existing Content Object name. An exact name is one which uniquely identifies the name of a Content Object. A full name is one which is exact and is accompanied by an explicit or implicit ConObjHash. The ConObjHash is explicit in an Interest and implicit in a Content Object.\n\nThe name segment labels specified in this document are given in the table below. Name Segment is a general name segment, typically occurring in the routable prefix and user-specified content name. Other segment types are for functional name components that imply a specific purpose.\n\nA forwarding table entry may contain name segments of any type. Routing protocol policy and local system policy may limit what goes into forwarding entries, but there is no restriction at the core level. An Interest routing protocol, for example, may only allow binary name segments. A load balancer or compute cluster may route through additional component types, depending on their services.\n\n| **Name** | **Description** |\n|:---|:---|\n| Name Segment | A generic name segment that includes arbitrary octets. |\n| Interest Payload ID | An octet string that identifies the payload carried in an Interest. As an example, the Payload ID might be a hash of the Interest Payload. This provides a way to differentiate between Interests based on the Payload solely through a Name Segment without having to include all the extra bytes of the payload itself. |\n| Application Components | An application-specific payload in a name segment. An application may apply its own semantics to these components. A good practice is to identify the application in a Name segment prior to the application component segments. |\n\nCCNx Name Segment Types\n\nAt the lowest level, a Forwarder does not need to understand the semantics of name segments; it need only identify name segment boundaries and be able to compare two name segments (both label and value) for equality. The Forwarder matches paths segment-by-segment against its forwarding table to determine a next hop.\n\n## Name Examples\n\nThis section uses a URI representation of CCNx names. Each component of a name has a type and value. Examples of this encoding are in Table .\n\n| **Name** | **Description** |\n|:---|:---|\n| ccnx:\/ | A 0-length name, corresponds to a default route. |\n| ccnx:\/NAME= | A name with 1 segment of 0 length, distinct from ccnx:\/. |\n| ccnx:\/NAME=foo\/APP:0=bar | A 2-segment name, where the first segment is of type NAME and the second segment is of type APP:0. |\n\nCCNx Name Examples\n\n## Interest Payload ID\n\nAn Interest may also have a Payload which carries state about the Interest but is not used to match a Content Object. If an Interest contains a payload, the Interest name should contain an Interest Payload ID (IPID). The IPID allows a PIT table entry to correctly multiplex Content Objects in response to a specific Interest with a specific payload ID. The IPID could be derived from a hash of the payload or could be a GUID or a nonce. An optional Metadata field defines the IPID field so other systems could verify the IPID, such as when it is derived from a hash of the payload. No system is required to verify the IPID.\n\n# Cache Control\n\nCCNx supports two fields that affect cache control. These determine how a cache or Content Store handles a Content Object. 
They are not used in the fast path, but only to determine if a ContentObject can be injected onto the fast path in response to an Interest.\n\nThe ExpiryTime is a field that exists within the signature envelope of a Validation Algorithm. It is the UTC time in milliseconds after which the ContentObject is considered expired and MUST no longer be used to respond to an Interest from a cache. Stale content MAY be flushed from the cache.\n\nThe Recommended Cache Time (RCT) is a field that exists outside the signature envelope. It is the UTC time in milliseconds after which the publisher considers the Content Object to be of low value to cache. A cache SHOULD discard it after the RCT, though it MAY keep it and still respond with it. A cache MAY also discard the content object before the RCT time; there is no contractual obligation to remember anything.\n\nThis formulation allows a producer to create a Content Object with a long ExpiryTime but short RCT and keep re-publishing the same, signed, Content Object over and over again by extending the RCT. This allows a form of \"phone home\" where the publisher wants to periodically see that the content is being used.\n\n# Restrictions\n\n## Content Object Hash\n\nCCNx allows an Interest to restrict a response to a specific hash. The hash covers the Content Object message body and the validation sections, if present. Thus, if a Content Object is signed, its hash includes that signature value. The hash does not include the fixed or hop-by-hop headers of a Content Object. Because it is part of the matching rules (see Section 9), the hash is used at every hop.\n\nThere are two options for matching the content object hash restriction in an Interest. First, a forwarder could compute for itself the hash value and compare it to the restriction. This is an expensive operation. The second option is for a border device to compute the hash once and place the value in a header (ConObjHash) that is carried through the network. The second option, of course, removes any security properties from matching the hash, so it SHOULD only be used within a trusted domain. The header SHOULD be removed when crossing a trust boundary.\n\n## Key ID Restriction\n\nIn addition to content restrictions, CCNx allows an Interest to also restrict a response to a content object which can be authenticated using a specific public key. This is done by specifying the identity of the verifying public key in a header (KeyIdRestr) that is carried through the network. An Interest with a KeyIdRestr only matches a Content Object if the latter carries a public key whose identity matches the KeyIdRestr value. An Interest may carry both a content object hash restriction and a key ID restriction. The former simply subsumes the latter since, by design, the public key in a matching Content Object would be included in the hash computation input.\n\n# Link\n\nA Link is the tuple $$(\\mathsf{Name}, [\\mathsf{KeyIdRestr}], [\\mathsf{ContentObjectHashRestr}]).$$ The information in a Link comprises the fields of an Interest which would retrieve the Link target. A Content Object with PayloadType = \"Link\" is an object whose payload is one or more Links. This tuple may be used as a KeyLink to identify a specific object with the certificate-wrapped key. It is RECOMMENDED to include at least one of KeyIdRestr or ContentObjectHashRestr.
If neither restriction is present, then any Content Object with a matching name from any publisher could be returned.\n\n# Hashes\n\nSeveral protocol fields use cryptographic hash functions, which must be secure against attack and collisions. Because these hash functions change over time, with better ones appearing and old ones falling victim to attacks, it is important that a CCNx protocol implementation support hash agility.\n\nIn this document, we suggest certain hashes (e.g., SHA-256), but a specific implementation may use what it deems best. The normative CCNx Messages specification should be taken as the definition of acceptable hash functions and uses.\n\n# Validation\n\nThe Validator consists of a ValidationAlgorithm that specifies how to verify the message and a ValidationPayload containing the validation output, e.g., the digital signature or MAC. The ValidationAlgorithm section defines the type of algorithm to use and includes any necessary additional information. The validation is calculated from the beginning of the CCNx Message through the end of the ValidationAlgorithm section. The ValidationPayload is the integrity value bytes, such as a MAC or signature.\n\nSome Validators contain a KeyId, identifying the publisher authenticating the Content Object. If an Interest carries a KeyIdRestriction, then that KeyIdRestriction MUST exactly match the Content Object's KeyId.\n\nValidation Algorithms fall into three categories: MICs, MACs, and Signatures. Validators using MIC algorithms do not need to provide any additional information; they may be computed and verified based only on the algorithm (e.g., CRC32C). MAC validators require the use of a KeyId identifying the secret key used by the authenticator. Because MACs are usually used between two parties that have already exchanged secret keys via a key exchange protocol, the KeyId may be any agreed-upon value to identify which key is used. Signature validators use public key cryptographic algorithms such as RSA, DSA, ECDSA. The KeyId field in the ValidationAlgorithm identifies the public key used to verify the signature. A signature may optionally include a KeyLocator, as described above, to bundle a Key or Certificate or KeyLink. MAC and Signature validators may also include a SignatureTime, as described above.\n\nA PublicKeyLocator KeyLink points to a Content Object with a DER- encoded X509 certificate in the payload. In this case, the target KeyId must equal the first object's KeyId. The target KeyLocator must include the public key corresponding to the KeyId. That key must validate the target Signature. The payload is an X.509 certificate whose public key must match the target KeyLocator's key. It must be issued by a trusted authority, preferably specifying the valid namespace of the key in the distinguished name.\n\n# Interest to Content Matching\n\nA Content Object satisfies an Interest if and only if (a) the Content Object name, if present, exactly matches the Interest name, and (b) the ValidationAlgorithm KeyId of the Content Object exactly equals the Interest KeyIdRestriction, if present, and (c) the computed ContentObjectHash exactly equals the Interest ContentObjectHashRestriction, if present.\n\nThe matching rules are given by this predicate, which if it evaluates true means the ContentObject matches the Interest. $N_i$ = Name in Interest (may not be empty), $K_i$ = KeyIdRestriction in the interest (may be empty), $H_i$ = ContentObjectHashRestriction in Interest (may be empty). 
Likewise, $N_o$, $K_o$, $H_o$ are those properties in the ContentObject, where $N_o$ and $K_o$ may be empty; $H_o$ always exists.\n\nAs a special case, if the ContentObjectHashRestriction in the Interest specifies an unsupported hash algorithm, then no ContentObject can match the Interest so the system should drop the Interest and MAY send an InterestReturn to the previous hop. In this case, the predicate below will never get executed because the Interest is never forwarded. If the system is using the optional behavior of having a different system calculate the hash for it, then the system may assume all hash functions are supported and leave it to the other system to accept or reject the Interest.\n\n$$(\\neg N_o \\lor (N_i=N_o)) \\land (\\neg K_i \\lor (K_i=K_o)) \\land (\\neg H_i \\lor (H_i=H_o)) \\land (\\exists N_o \\lor \\exists H_i)$$\n\nAs one can see, there are two types of attributes one can match. The first term depends on the existence of the attribute in the ContentObject while the next two terms depend on the existence of the attribute in the Interest. The last term is the \"Nameless Object\" restriction which states that if a Content Object does not have a Name, then it must match the Interest on at least the Hash restriction.\n\nIf a Content Object does not carry the ContentObjectHash as an expressed field, it must be calculated in network to match against. It is sufficient within an autonomous system to calculate a ContentObjectHash at a border router and carry it via trusted means within the autonomous system. If a Content Object ValidationAlgorithm does not have a KeyId then the Content Object cannot match an Interest with a KeyIdRestriction.\n\n# Interest Return\n\nThis section describes the process whereby a network element may return an Interest message to a previous hop if there is an error processing the Interest. The returned Interest may be further processed at the previous hop or returned towards the Interest origin. When a node returns an Interest it indicates that the previous hop should not expect a response from that node for the Interest \u2013 i.e., there is no PIT entry left at the returning node.\n\nThe returned message maintains compatibility with the existing TLV packet format (a fixed header, optional hop-by-hop headers, and the CCNx message body). The returned Interest packet is modified in only two ways:\n\n- The PacketType is set to InterestReturn to indicate a Feedback message.\n\n- The ReturnCode is set to the appropriate value to signal the reason for the return\n\nThe specific encodings of the Interest Return are specified in .\n\nA Forwarder is not required to send any Interest Return messages.\n\nA Forwarder is not required to process any received Interest Return message. If a Forwarder does not process Interest Return messages, it SHOULD silently drop them.\n\nThe Interest Return message does not apply to a Content Object or any other message type.\n\nAn Interest Return message is a 1-hop message between peers. It is not propagated multiple hops via the FIB. An intermediate node that receives an InterestReturn may take corrective actions or may propagate its own InterestReturn to previous hops as indicated in the reverse path of a PIT entry.\n\n## Message Format\n\nThe Interest Return message looks exactly like the original Interest message with the exception of the two modifications mentioned above. The PacketType is set to indicate the message is an InterestReturn and the reserved byte in the Interest header is used as a Return Code. 
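Referring back to the matching rules of the previous section, the predicate can be transcribed almost literally into code. The sketch below is illustrative only: the message classes and attribute names are hypothetical, and the Content Object hash is assumed to be available already.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Interest:
    name: str                        # N_i, always present
    key_id_restr: Optional[bytes]    # K_i, may be absent
    obj_hash_restr: Optional[bytes]  # H_i, may be absent

@dataclass
class ContentObject:
    name: Optional[str]              # N_o, may be absent ("nameless object")
    key_id: Optional[bytes]          # K_o, may be absent
    obj_hash: bytes                  # H_o, always exists (computed if not carried)

def matches(i: Interest, o: ContentObject) -> bool:
    name_ok = o.name is None or i.name == o.name
    keyid_ok = i.key_id_restr is None or i.key_id_restr == o.key_id
    hash_ok = i.obj_hash_restr is None or i.obj_hash_restr == o.obj_hash
    # "Nameless Object" rule: a Content Object without a Name must match on the hash.
    nameless_rule = o.name is not None or i.obj_hash_restr is not None
    return name_ok and keyid_ok and hash_ok and nameless_rule
```

Note that a KeyIdRestriction against a Content Object with no KeyId evaluates to no match, as required by the text.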
The numeric values for the PacketType and ReturnCodes are in .\n\n## ReturnCode Types\n\nThis section defines the InterestReturn ReturnCode introduced in this RFC. The numeric values used in the packet are defined in .\n\n| **Name** | **Description** |\n|:---|:---|\n| No Route | The returning Forwarder has no route to the Interest name. |\n| HopLimit Exceeded | The HopLimit has decremented to 0 and the Interest still needs to be forwarded off the system. |\n| Interest MTU too large | The Interest's MTU does not conform to the required minimum and would require fragmentation. |\n| No Resources | The node does not have the resources to process the Interest. |\n| Path error | There was a transmission error when forwarding the Interest along a route (a transient error). |\n| Prohibited | An administrative setting prohibits processing this Interest. |\n| Congestion | The Interest was dropped due to congestion (a transient error). |\n| Unsupported Content Object Hash Algorithm | The Interest was dropped because it requested a Content Object Hash Restriction using a hash algorithm that cannot be computed. |\n| Malformed Interest | The Interest was dropped because it did not correctly parse. |\n\n## Interest Return Protocol\n\nThis section describes the Forwarder behavior for the various Reason codes for Interest Return. A Forwarder is not required to generate any of the codes, but if it does, it MUST conform to this specification.\n\nIf a Forwarder receives an Interest Return, it SHOULD take these standard corrective actions. A forwarder is allowed to ignore Interest Return messages, in which case its PIT entry would go through normal timeout processes.\n\n- Verify that the Interest Return came from a next-hop to which it actually sent the Interest.\n\n- If a PIT entry for the corresponding Interest does not exist, the Forwarder should ignore the Interest Return.\n\n- If a PIT entry for the corresponding Interest does exist, the Forwarder MAY do one of the following:\n\n - Try a different forwarding path, if one exists, and discard the Interest Return, or\n\n - Clear the PIT state and send an Interest Return along the reverse path.\n\nIf a forwarder tries alternate routes, it MUST ensure that it does not use the same path multiple times. For example, it could keep track of which next hops it has tried and not re-use them.\n\nIf a forwarder tries an alternate route, it may receive a second InterestReturn, possibly of a different type than the first InterestReturn. For example, node A sends an Interest to node B, which sends a No Route return. Node A then tries node C, which sends a Prohibited. Node A should choose what it thinks is the appropriate code to send back to its previous hop.\n\nIf a forwarder tries an alternate route, it should decrement the Interest Lifetime to account for the time spent thus far processing the Interest.\n\n### No Route\n\nIf a Forwarder receives an Interest for which it has no route, or for which the only route is back towards the system that sent the Interest, the Forwarder SHOULD generate a \"No Route\" Interest Return message.\n\nHow a forwarder manages the FIB table when it receives a No Route message is implementation dependent. In general, receiving a No Route Interest Return should not cause a forwarder to remove a route. The dynamic routing protocol that installed the route should correct the route or the administrator who created a static route should correct the configuration.
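Returning to the standard corrective actions listed above, the following sketch shows how a forwarder might process a received Interest Return. The PIT and FIB objects and their method names are hypothetical; only the decision logic follows the text.

```python
def handle_interest_return(ret, from_hop, pit, fib):
    """Process a 1-hop InterestReturn per the standard corrective actions (sketch)."""
    entry = pit.lookup(ret.name, ret.key_id_restr, ret.obj_hash_restr)
    # Ignore returns for unknown Interests or from hops we never sent the Interest to.
    if entry is None or from_hop not in entry.next_hops_tried:
        return
    untried = [h for h in fib.next_hops(ret.name)
               if h not in entry.next_hops_tried and h not in entry.previous_hops]
    if untried:
        # Try an alternate path; never reuse a hop already tried,
        # and account for the time already spent on this Interest.
        hop = untried[0]
        entry.next_hops_tried.add(hop)
        entry.lifetime -= entry.elapsed()
        hop.send_interest(entry.interest)
    else:
        # No alternative: clear the PIT state and propagate the return upstream.
        pit.remove(entry)
        for prev in entry.previous_hops:
            prev.send_interest_return(ret)
```

With this generic handling in mind, the per-code guidance continues below.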
A forwarder could suppress using that next hop for some period of time.\n\n### HopLimit Exceeded\n\nA Forwarder MAY choose to send HopLimit Exceeded messages when it receives an Interest that must be forwarded off system and the HopLimit is 0.\n\n### Interest MTU Too Large\n\nIf a Forwarder receives an Interest whose MTU exceeds the prescribed minimum, it MAY send an \"Interest MTU Too Large\" message, or it may silently discard the Interest.\n\nIf a Forwarder receives an \"Interest MTU Too Large\" is SHOULD NOT try alternate paths. It SHOULD propagate the Interest Return to its previous hops.\n\n### No Resources\n\nIf a Forwarder receives an Interest and it cannot process the Interest due to lack of resources, it MAY send an InterestReturn. A lack of resources could be the PIT table is too large, or some other capacity limit.\n\n### Path Error\n\nIf a forwarder detects an error forwarding an Interest, such as over a reliable link, it MAY send a Path Error Interest Return indicating that it was not able to send or repair a forwarding error.\n\n### Prohibited\n\nA forwarder may have administrative policies, such as access control lists, that prohibit receiving or forwarding an Interest. If a forwarder discards an Interest due to a policy, it MAY send a Prohibited InterestReturn to the previous hop. For example, if there is an ACL that says \/parc\/private can only come from interface e0, but the Forwarder receives one from e1, the Forwarder must have a way to return the Interest with an explanation.\n\n### Congestion\n\nIf a forwarder discards an Interest due to congestion, it MAY send a Congestion InterestReturn to the previous hop.\n\n### Unsupported Content Object Hash Algorithm\n\nIf a Content Object Hash Restriction specifies a hash algorithm the forwarder cannot verify, the Interest should not be accepted and the forwarder MAY send an InterestReturn to the previous hop.\n\n### Malformed Interest\n\nIf a forwarder detects a structural or syntactical error in an Interest, it SHOULD drop the interest and MAY send an InterestReturn to the previous hop. This does not imply that any router must validate the entire structure of an Interest.\n\n[^1]: Work done while at PARC.","meta":{"dup_signals":{"dup_doc_count":17,"dup_dump_count":14,"dup_details":{"curated_sources":1,"2020-50":3,"2019-47":1,"2019-30":1,"2019-26":1,"2019-18":1,"2019-09":1,"2018-51":1,"2018-22":1,"2018-05":1,"2017-51":1,"2017-47":1,"2017-43":1,"2017-39":2}},"filename":"out\/1706.07165_extract_ccn.tex.md"},"subset":"arxiv"} +{"text":"abstract: #### Background:\n .\n Gene expression in a cell entails random reaction events occurring over disparate time scales. Thus, molecular noise that often results in phenotypic and population-dynamic consequences sets a fundamental limit to biochemical signaling. While there have been numerous studies correlating the architecture of cellular reaction networks with noise tolerance, only a limited effort has been made to understand the dynamic role of protein-protein interactions.\n .\n #### Results:\n .\n We have developed a fully stochastic model for the positive feedback control of a single gene, as well as a pair of genes (toggle switch), integrating quantitative results from previous *in vivo* and *in vitro* studies. In particular, we explicitly account for the fast binding-unbinding kinetics among proteins, RNA polymerases, and the promoter\/operator sequences of DNA. 
We find that the overall noise-level is reduced and the frequency content of the noise is dramatically shifted to the physiologically irrelevant high-frequency regime in the presence of protein dimerization. This is independent of the choice of monomer or dimer as transcription factor and persists throughout the multiple model topologies considered. For the toggle switch, we additionally find that the presence of a protein dimer, either homodimer or heterodimer, may significantly reduce its random switching rate. Hence, the dimer promotes the robust function of bistable switches by preventing the uninduced (induced) state from randomly being induced (uninduced).\n .\n #### Conclusions:\n .\n The specific binding between regulatory proteins provides a buffer that may prevent the propagation of fluctuations in genetic activity. The capacity of the buffer is a non-monotonic function of association-dissociation rates. Since the protein oligomerization *per se* does not require extra protein components to be expressed, it provides a basis for the rapid control of intrinsic or extrinsic noise. The stabilization of regulatory circuits and epigenetic memory in general has direct implications for organism fitness. Our results also suggest possible avenues for the design of synthetic gene circuits with tunable robustness for a wide range of engineering purposes.\naddress: Microbial Systems Biology Group, Biosciences and Biotechnology Division, Lawrence Livermore National Laboratory, 7000 East Avenue Livermore, CA 94550, USA\nauthor: Cheol-Min Ghim and Eivind Almaas\nbibliography: dimer.bib\ntitle: Genetic noise control via protein oligomerization\n\n# Background\n\nRecent experiments on isogenic populations of microbes with single-cell resolution\u00a0 have demonstrated that stochastic fluctuations, or noise, can override genetic and environmental determinism. In fact, the presence of noise may significantly affect the fitness of an organism\u00a0. The traditional approach for modeling the process of molecular synthesis and degradation inside a cell is by deterministic rate equations, where the continuous change of arbitrarily small fractions of molecules is controlled instantaneously and is frequently represented through sigmoidal dose-response relations. However, the rate-equation approaches cannot explain the observed phenotypic variability in an isogenic population in stable environments. In particular, when molecules involved in feedback control exist in low copy numbers, noise may give rise to significant cell-to-cell variation as many regulatory events are triggered by molecules with very low copy numbers $\\lesssim100$\u00a0. A well-known example is the regulation of inorganic trace elements\u00a0, such as iron, copper, and zinc. While these trace elements are essential for the activity of multiple enzymes, their presence may quickly turn cytotoxic unless their concentrations are carefully controlled.\n\nAlthough the presence of phenotypic variation due to stochastic fluctuations need not be detrimental for a population of cells\u00a0, elaborate regulatory mechanisms have evolved to attenuate noise\u00a0. Several systems-biology studies have recently focused on a select set of gene-regulatory circuits, in particular those with feedback control.
Feedback control circuits have been identified as important for multiple species and proven responsible for noise reduction and increased functional stability in many housekeeping genes through negative autoregulation\u00a0, long cascades of ultrasensitive signaling\u00a0, bacterial chemotaxis\u00a0, and the circadian clock\u00a0. Additionally, recent studies on iron homeostasis\u00a0 in *E. coli* highlight the noise-reducing capability mediated by small RNAs.\n\nHere, we study reversible protein-protein binding as a novel source for genetic noise control. In particular, we have quantitatively analyzed the effects of protein oligomerization on noise in positive autoregulatory circuits as well as a simple toggle-switch\u00a0. The all-or-none threshold behavior of positive-feedback circuits typically improves robustness against \"leaky\" switching. However, due to their functional purposes, gene circuits involved in developmental processes or stress responses that often accompany genome-wide changes in gene expression are intrinsically noisier than negative feedback circuits.\n\nIt is frequently observed that transcription factors exist in oligomeric form\u00a0, and protein oligomerization is an important subset of protein-protein interactions, constituting a recurring theme in enzymatic proteins as well as regulatory proteins. Well studied examples include the $\\lambda$-phage repressor, ${\\lambda}$CI (dimer), the TrpR (dimer), LacR (tetramer), and Lrp (hexadecamer or octamer). While many of the RNA-binding proteins dimerize exclusively in the cytosol, the LexA repressor\u00a0, the leucine-zipper activator\u00a0, and the Arc repressor\u00a0 have been shown to form an oligomer either in the cytosol (\"dimer path\") or on the DNA by sequential binding (\"monomer path\"). Previously, the efficacy of monomer and dimer transcription-regulation paths to reduce noise was separately studied for a negative-feedback autoregulatory circuit\u00a0. In contrast, we have focused on oligomerization in positive-feedback autoregulatory circuits, as well as genetic toggle switches based on the mutual repression of genes. We find that cytosolic transcription-factor oligomerization acts as a significant buffer for abundance-fluctuations in the monomer, overall reducing noise in the circuit. Additionally, the noise-power spectral density is shifted from the low- to the high-frequency regime. In the toggle switch, cytosolic oligomerization may significantly stabilize the functional state of the circuit. This is especially evident for heterodimerization.\n\nYet another interesting case of ligand-binding-mediated receptor oligomerization has been reported\u00a0, where the formation of various structures of oligomers may act to buffer the intracellular signaling against noise. Although our modeling and analysis is based on prokaryotic cells, we expect our main findings to be organism independent since protein oligomers, especially homodimers, is such a common occurrence across the species\u00a0, with homodimers comprising 12.6% of the high-fidelity human proteome\u00a0.\n\n# Results and Discussion\n\n## Dimerization breaks long-time noise correlations in autogenous circuit\n\nTo evaluate the dynamic effects of protein-protein binding in positive-autoregulation gene circuits, we construct several alternative models of positive autogenous circuits. 
Each model emphasizes a different combination of possible feedback mechanisms, and the network topologies considered can be grouped into the two classes of monomer-only (MO) and dimer-allowed (DA) circuits, according to the availability of a protein-dimer state (color coding in Fig.\u00a01). We further group the DA circuits into three variations, DA1 through DA3, depending on which form of the protein is the functional transcription factor (TF) and where the dimerization occurs. For DA1, we only allow the dimer to bind with the DNA-operator sequence (dimeric transcription factor, DTF), while for DA2 dimerization occurs through sequential binding of monomers on the DNA. In DA3, the protein-DNA binding kinetics is the same as in the MO circuit, hence monomeric transcription factor (MTF), with the addition of a cytosolic protein dimer state. While we will only present results for DA1 in this paper, there is no significant difference for DA2 and DA3 \\[Additional file 1\\].\n\nNote that the feedback loop is not explicit in Fig.\u00a01 but implicitly included through the dependence of RNAp-promoter binding equilibrium on the binding status of the TF-operator pair. The sign (positive or negative) and strength of the feedback control is determined by the relative magnitude of the dissociation constants between RNAp and DNA which is either free or TF-bound. For instance, topology DA1 has positive feedback control if $K_{30}= k_{30}\/q_{30} >\nK_{32} = k_{32}\/q_{32}$, and $K_{30}$ corresponds to the level of constitutive transcription (transcription initiation in the absence of bound transcription factor). For each topology, we study the dependence of noise characteristics on the kinetic rates by varying the dimer lifetime, binding affinity, and the individual association\/dissociation rates (see Table\u00a0 and Fig.\u00a01). While we only discuss positive feedback control of the autogenous circuit in this paper, we have obtained corresponding results for negative feedback control \\[Additional file 1\\].\n\nFig.\u00a02 shows a sample of ten representative time courses for the protein abundance. The effect of stochastic fluctuations is marked in the MO circuit. However, in all the DA circuits where the protein may form a cytosolic dimer we observe a significantly reduced level of noise in the monomer abundance. The suppression of fluctuations persists throughout the range of kinetic parameters that (so far) is known to be physiologically relevant (see Table\u00a01).\n\nCalculating the steady-state distribution for the monomer and dimer abundances (Fig.\u00a03) we observe a clear trend that the monomer Fano factor (variance-to-mean ratio) is reduced as the binding equilibrium is shifted towards the dimer. This trend is conserved for all the investigated DA topologies (see *Supplementary Information*). As long as dimerization is allowed in the cytosol, the fast-binding equilibrium absorbs long-time fluctuations stemming from bursty synthesis or decay of the monomer. When a random fluctuation brings about a sudden change in the monomer copy number, dimerization provides a buffering pool that absorbs the sudden change. Otherwise, random bursts in the monomer abundance will propagate to the transcriptional activity of the promoter, leading to erratic control of protein expression. It should be emphasized that this has nothing to do with the sign of regulation and is in agreement with the observations of Ref.\u00a0 for negative autoregulation. 
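As a reminder of the statistic used in this comparison, the Fano factor is the variance-to-mean ratio of the copy-number samples, with a value of 1 corresponding to Poissonian fluctuations. A minimal sketch, assuming a simulated trajectory is already available as an array of counts (NumPy only):

```python
import numpy as np

def fano_factor(counts):
    """Variance-to-mean ratio of molecule copy numbers (Fano factor)."""
    counts = np.asarray(counts, dtype=float)
    return counts.var(ddof=1) / counts.mean()

# Example with a synthetic Poisson trajectory: the Fano factor is close to 1.
rng = np.random.default_rng(0)
print(fano_factor(rng.poisson(lam=50, size=10_000)))
```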
Surprisingly, the magnitude of noise reduction in the positive autoregulatory circuit is nearly the same as that for negative autoregulation which is typically considered a highly stable construct \\[Additional file 1\\].\n\nA heuristic explanation can be found from Jacobian analysis of a deterministic dynamical system, which is justified for small perturbations around a steady state. When a random fluctuation shifts the monomer copy number away from its steady-state value, the decay toward the steady state can be described by the system Jacobian. The disparity in the magnitude of the (negative) eigenvalues of the Jacobian matrix for the MO versus the DA circuits signifies that the perturbed state is buffered by fast settlement of the monomer-dimer equilibrium. This buffering occurs before random fluctuation can accumulate, possibly with catastrophic physiological effects, explaining the coarse long-time patterns observed in the MO model in contrast with the DA circuits (Fig.\u00a02).\n\n## Frequency-selective whitening of Brownian noise\n\nThe dimerization process itself generates stochastic fluctuations on a short time scale. However, since this time scale is essentially separated from that of monomer synthesis and decay (orders of magnitude faster), dimerization effectively mitigates monomer-level fluctuations. The frequency content of the fluctuations is best studied by an analysis of the power spectral density (PSD), which is defined as the Fourier transform of the autocorrelation function\u00a0, originally introduced for signal processing. Fig.\u00a04 shows the noise power spectra of DA1, and the distinction between the MO circuit and the DA topology is immediately evident. In particular, we note the following two features. (i) A power-law decay with increasing frequency and (ii) a horizontal plateau for the DA circuits. The power-law feature is explained by the \"random walk\" nature of protein synthesis and decay: The power-law exponent is approximately 2, which is reminiscent of Brownian motion (a Wiener process) in the limit of large molecular copy numbers. Compared to other commonly observed signals, such as white (uncorrelated) noise or $1\/f$ noise, protein synthesis\/decay has a longer correlation time. If the autocorrelation function of a time course is characterized by a single exponential decay, as is the case for Brownian noise, the PSD is given by a Lorentzian profile, and thus, well approximated by an inverse-square law in the low-frequency regime. We do not observe a saturation value for the MO circuit, and it is likely not in the frequency window of physiological interest. This may especially be the case for circuits where the correlation times are long.\n\nThe noise reduction is in the physiologically relevant low-frequency regime, and in Fig.\u00a04 we have indicated the typical values for a cell cycle and mRNA lifetime. Although stochastic fluctuations impose a fundamental limit in cellular information processing, multiple noise sources may affect cellular physiology non-additively. For a living cell, fluctuations are especially relevant when their correlation time is comparable to, or longer than, the cell cycle. At the same time, short-time scale fluctuations (relative to the cell cycle) are more easily attenuated or do not propagate\u00a0. Additionally, the observed flat region in the PSD of the DA circuits implies that as far as mid-range frequency fluctuations are concerned, we can safely approximate them as a white noise. 
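For readers wishing to reproduce this kind of spectral analysis, the PSD of a copy-number trace can be estimated with a standard periodogram. The sketch below uses NumPy and SciPy and assumes the stochastic trajectory has already been resampled onto a uniform time grid with spacing dt (in seconds); that resampling step is our assumption and is not part of the model description.

```python
import numpy as np
from scipy.signal import periodogram

def noise_psd(counts, dt):
    """Estimate the noise power spectral density of a uniformly sampled trace."""
    counts = np.asarray(counts, dtype=float)
    fluct = counts - counts.mean()          # remove the steady-state mean
    freqs, psd = periodogram(fluct, fs=1.0 / dt)
    return freqs, psd

# Brownian-like noise shows an approximately 1/f^2 decay at low frequency,
# while a flat (white) plateau appears where correlations are short-lived.
```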
This insight may shed light on the reliability of approximation schemes for effective stochastic dynamics in protein-only models.\n\n## Increased lifetime of dimer plays an important role\n\nThe virtue of the cytosolic dimer state is also directly related to the extended lifetime of proteins when in a complex. Except for the degradation tagging for active proteolysis, a much slower turnover of protein oligomers is the norm. This is partly explained by the common observation that monomers have largely unfolded structures, which are prone to be targets of proteolysis\u00a0. It has also been pointed out that the prolonged lifetime of the oligomeric form is a critical factor for enhancing the feasible parameter ranges of gene circuits\u00a0. As seen from Fig.\u00a03 (also Table\u00a02), the fold change of the noise reduction, while still significant, is not as strong for the (hypothetical) case of the dimer lifetime being the same as that of the monomer ($\\gamma_2\/\\gamma_1=1\/2$). However, the low-frequency power spectra still exhibit almost an order-of-magnitude smaller noise power than in the MO circuit with the same rate parameters (Fig.\u00a04). Hence, the noise reduction capability holds as long as the dimer lifetime is kept sufficiently long compared with the monomer-dimer transition.\n\n## Effects of homo-dimerization in genetic toggle switch\n\nThe exceptionally stable lysogeny of the phage $\\lambda$, for which the spontaneous loss rate is $\\lesssim10^{-7}$ per cell per generation\u00a0, has motivated the synthesis of a genetic toggle switch\u00a0. A toggle switch is constructed from a pair of genes, which we will denote as genes $A$ and $B$, that transcriptionally repress each other's expression. This mutual negative regulation can be considered an effective positive feedback loop and provides the basis for the multiple steady states. The existence of multistability, in turn, may be exploited as a device for epigenetic memory or for decision making\u00a0.\n\nAs the general attributes of positive feedback with cooperativity suggest, a genetic toggle switch responds to external cues in an ultrasensitive way: When the strength of a signal approaches a threshold value, the gene expression state can be flipped by a small change in the signal. For example, the concentration of protein $A$ ($B$) may rapidly switch from high to low and vice versa. However, previous studies of a synthetic toggle switch have shown that the noise-induced state switching is a rare event\u00a0. In the ensuing analysis, we aim to delineate the origin of this exceptional stability.\n\nIn a simple model, the monomer-only (MO) toggle, regulatory proteins only exist in monomeric form. Although an external signal is not explicitly included, random fluctuations in the abundance of the circuit's molecular components will occasionally flip the toggle-state for the two protein species. Drawing on the results from our analysis of positive autoregulatory gene circuits, we hypothesize that dimerization in the regulatory proteins of the toggle switch will serve to stabilize its performance against noise. We allow the protein products of each gene to form a homodimer, being either $AA$ or $BB$, which is similar to the cI-cro system in phage $\\lambda$\u00a0.
The dissociation constant for the dimers is defined as $K_1=q_1\/k_1$, where $k_1$ is the rate of two monomers forming a complex, and $q_1$ the rate of the complex breaking up into its two constituents.\n\nWe evaluate the effect of the fast protein binding-unbinding dynamics on the toggle switch performance by using either (i) the monomers or (ii) the homodimers as the functional form of the repressor. Fig.\u00a05 shows, for selected values of the dissociation constant $K_1$, representative time series of the protein monomer (left) and dimer (right) abundances for the case of (a) monomeric or (b) dimeric transcription factors, respectively. A careful analysis of the phase space (in presence of noise) for our chosen set of parameters confirms that the studied toggle-switch systems are in the bistable region\u00a0.\n\nWhen monomer is the functional form of the repressor molecule (Fig.\u00a05(a)) and $K_1$ is large (limit of low dimer affinity), the protein populations are dominated by monomers. Hence, the circuit effectively behaves as an MO toggle. As $K_1$ decreases, we see that the level of random switching is suppressed: Analogous to the autogenous circuit, the dimer pool stabilizes the protein monomer population. However, the noise suppression is not monotonic with increasing dimer binding affinity. Indeed, for very large binding affinities (small $K_1$), the number of random switching events is increased since the monomer is only available in low copy numbers. Consequently in this limit, it becomes more likely that a small fluctuation in the monomer abundance can cause a dramatic change in the overall gene expression profile. The noise-stabilizing effect of dimerization is also reflected in the corresponding PSDs \\[Additional file 1\\]. For instance, we observe a marked suppression of low-frequency fluctuations in the monomer abundance with increasing $K_1$.\n\nIn Fig.\u00a05(b) we show corresponding sample time series for the case of a dimeric repressor, all other properties being the same as in (a). While the overall trends are similar, we do note the following difference. Contrary to the monomeric repressor case, there are very few toggle events in the strong binding limit: Since the signaling molecules (dimers) of the dominant gene (the \"on\"-gene) tend to exist in large copy numbers, a significant fluctuation is needed to flip the state of the toggle switch. In the case of monomeric repression, the signaling molecule exists in low abundance in this limit. Thus, the dominant protein species in the dimeric-repressor system is able to maintain much better control over the state of the toggle switch.\n\nIn Fig.\u00a06, we show the distribution $(N_A-N_B)$, the difference in molecule abundance for the two protein species in the case of monomeric (left) and dimeric (right) transcription factor. The asymmetry with respect to the zero axis is caused by our choice of initial conditions (protein species $A$ in high concentration and species $B$ in low concentration), as well as the finite length of the time series. For monomeric transcription, the presence of dimers with moderate binding affinity sharpens the monomer abundance distribution while accentuating its bimodal character. This is in agreement with the qualitative observation from Fig.\u00a05 on switching stability. 
For dimeric transcription, we clearly observe that the symmetry of the system is broken for small values of $K_1$, indicating that the state of the toggle switch is extremely stable, and hence, likely determined by the choice of the initial conditions.\n\nTo systematically quantify our observations on the interplay between dimer-binding affinity and the functional stability of the toggle switch, we generated long time series ($\\approx3\\cdot10^7$ sec) to measure the average spontaneous switching rate. In Fig.\u00a07, we show the average toggle frequency relative to that of the MO toggle for the binding affinities $K_1\/\\textrm{nM}=\\{2, 20, 100,1000\\}$, and the average MO switching rate is $7.5\\times10^{-6}$\/hour. As expected, we find that intermediate values of $K_1$ are able to stabilize the toggle switch. Fig.\u00a07 also highlights the increased stability of the toggle switch for a dimeric versus monomeric transcription factor, the dimeric switching rates always being lower and approaching zero for strong dimer binding.\n\n## Heterodimerization in genetic toggle switch\n\nWe have also considered the case of heterodimerization in the toggle switch, since the noise- and functional stabilization of the switch may be directly affected by the composition and source of the dimers. Note that, the gene-regulation activity is conferred by the two monomer proteins $A$ and $B$ and not the heterodimer $AB$. However, we find that the presence of (inactive) heterodimers gives rise to very similar noise-stabilizing effects as that of homodimers (Fig.\u00a07). In fact, the existence of heterodimer state allows the dominant protein species to effectively suppress the (active) monomers of the minority species. Thus the heterodimer circuit shows dramatically enhanced functional stability as compared to the case of homodimeric repressors, not sharing the discussed vulnerability of MO circuit to intrinsic noise. Although, to our knowledge, this is a purely hypothetical toggle-switch design, it provides a general strategy for noise control in synthetic gene circuits, along with previously proposed approach of overlapping upstream regulatory domains\u00a0.\n\n# Conclusions\n\nCells have evolved distinct strategies to combat the fundamental limits imposed by intrinsic and environmental fluctuations. We investigated the role of protein oligomerization on noise originating from the random occurrence of reaction events and the discrete nature of molecules. Recent efforts to correlate network structure with functional aspects may provide valuable insights into approaches for network-level noise control\u00a0. While negative feedback is one of the most abundantly observed patterns to achieve the goal of stability, it begs the question of how cells reliably change the expression of genes from one state to another. The ultrasensitive response circuit, exemplified by the ubiquitous signal transduction cascades in eukaryotic cells, has been proposed as an answer to this question\u00a0.\n\nIn addition to the combinatorial expansion of functional specificity, we argue that the availability of oligomeric states contributes to the attenuation of stochastic fluctuations in protein abundance. In positive autoregulatory gene circuits, where the abundance of an expressed protein controls its own synthesis rate, dimerization provides a buffer serving to mitigate random fluctuations associated with the bursty transcription-translation process. 
We find that short-time binding-unbinding dynamics reduce the overall noise level by converting potentially pathological low-frequency noise to physiologically unimportant, and easily attenuated, high-frequency noise\u00a0.\n\nNoise-induced switching generally signals a defect in cellular information processing. Untimely exit from latency in the lambda-phage system directly implies, as the immediate consequence to viruses, increased chance of being targeted by a host immune system. In the case of a bacterium, the expression of a specific set of sugar uptake genes when the sugar is absent from the external medium is a considerable waste of cellular resources. For example, *lac* operon of *E. coli* can be considered to have the circuitry of mutual antagonism between the gene and lactose uptake-catabolic genes\u00a0. A difference lies in the non-transcriptional deactivation of the allosteric transcription factor LacI. LacY, lactose permease, indirectly regulates LacI by increasing lactose uptake, which in turn catalytically deactivates LacI. Likewise, many pili operons of Gram-negative bacteria are also known to utilize heritable expression states, which are of crucial role in pathogenesis\u00a0.\n\nWe expect that the random flipping of gene expression states in the examples of positive-feedback-based genetic switches may very well be closely coupled with the fitness of an organism. Phenomenological models relating the fitness of an organism to random phenotypic switching in fluctuating environments have provided important insights into the role of noise\u00a0, but still many questions remain unanswered.\n\nApplying these insights to the design of a synthetic gene switch demonstrates the potential use of affinity-manipulation for synthetic biology, where the construction of genetic circuits with tunable noise-resistance is of central importance. In particular, our analysis highlights the potential utility of heterodimerization to stabilize ultrasensitive switches against random fluctuations. In practice, small ligand molecules may be employed to regulate and tune the binding affinity of regulatory proteins, being either monomers or dimers. Our results further suggest that the structure of the protein-interaction network\u00a0 may provide important insights on methods for genome-level noise control in synthetic and natural systems.\n\n# Methods\n\n## Model construction\n\nTo evaluate the general role of protein oligomerization in a broad functional context, we studied the two most common motifs found in genetic regulatory circuits: positive autoregulation and the bistable switch. The reaction scheme studied is summarized in Fig.\u00a01, where the binding\/unbinding reactions between RNAp and promoter or between TF and operator are made explicit. Each distinct binding status of DNA is associated with a unique transcription initiation rate, and then the overall rate of mRNA synthesis is a weighted average of the initiation rates for distinct binding status, where the weights are given by the relative abundance of each configuration at equilibrium, determined by the calculation of binding energy\u00a0. Note that, neither binding equilibrium nor empirical Hill-type cooperativity is assumed *ad hoc*. In particular, we split the lumped transcription process into two separate events, (i) isomerization of closed RNAp-promoter complex to its open form and (ii) transcription elongation followed by termination. 
This is to reflect the availability of the free promoter while the transcription machinery proceeds along the coding sequence of a gene as soon as the promoter region is cleared of the RNAp holoenzyme. Otherwise, the promoter would be inaccessible during a whole transcription event, altering the random mRNA synthesis dynamics.\n\nTo realize the genetic toggle switch in a stochastic setting, we keep track of the microscopic origin of cooperativity that gives rise to bistability. Among various strategies, we employ multiple operator sites which have the same binding affinity for the repressor. The resultant circuitry is, in essence, two autogenous circuits, $A$ and $B$, which are connected through the active form of their expressed proteins (the active form being either monomer or dimer). The connection is implemented by allowing the active form of proteins $A$ ($B$) to bind the operator sites of gene $B$ ($A$). In order to make the interaction between the two genes repressive, unlike positive autogenous circuits, $K_{31}$ and $K_{32}$ in Fig.\u00a01 are now greater than $K_{30}$, making the proteins transcriptional repressors. For reasons of analytical simplicity, we have studied the symmetric toggle switch, where the reaction descriptions of each component follow those of the autogenous circuit. Again, the quantitative characteristics of macromolecular binding-unbinding are chosen based on the phage lambda-*E. coli* system. The only exception is related to the multiple operator sites, where the second repressor binds an operator site with higher binding affinity when the first site is already occupied by the repressor protein\u00a0. We introduce three different dimerization schemes: (i) homodimerization with monomeric repressor, (ii) homodimerization with dimeric repressor, and (iii) heterodimerization with monomeric repressor. By solving for the stationary states of the deterministic rate equations, we could identify the bistability region in parameter space to which all the model systems under consideration belong.\n\n## Stochastic simulation\n\nWhile the deterministic rate equation approach or Langevin dynamics explicitly gives the time-evolution of molecular concentration in the form of ordinary differential equations, the chemical master equation (CME) describes the evolution of a molecular number state as a continuous-time jump Markov process. To generate the statistically correct trajectories dictated by the CME, we used the Gillespie direct\u00a0 and Next Reaction (Gibson-Bruck)\u00a0 algorithms, both based on the exact chemical master equation. The Dizzy package\u00a0 was used as the core engine of the simulations. To ensure that calculations were undertaken in a steady state, we solved the deterministic set of equations for steady state using every combination of parameters investigated. We employed these deterministic steady-state solutions as initial conditions for the stochastic simulations. For each model system, we generated $10^5$ ensemble runs with identical initial conditions and used the instantaneous protein copy number at a fixed time point $t=5000$ sec. To achieve high-quality power spectra in the low- and high-frequency limits, we ran time courses ($\\sim 10^5$ sec) with higher sampling frequency (20 measurement points per sec).\n\nTo calculate the average switching rate, we generated time series of minimum length $3\\cdot10^7$ sec (approximately corresponding to 1 year). 
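All such trajectories are produced by a stochastic simulation algorithm of this type. As a concrete illustration, a direct-method update for the isolated synthesis-dimerization-decay subsystem might look like the sketch below; the rate values are placeholders chosen only for illustration and this is not the actual Dizzy model specification used for the results reported here.

```python
import numpy as np

rng = np.random.default_rng(0)

# illustrative rate constants (s^-1); the study itself uses the values of Table 1
b, k1, q1, g1, g2 = 0.05, 0.001, 0.1, 7e-4, 7e-5

def propensities(n1, n2):
    # reactions: synthesis of P1, dimerization 2P1 -> P2, dissociation P2 -> 2P1,
    #            monomer decay P1 -> 0, dimer decay P2 -> 0
    return np.array([b,
                     k1 * n1 * (n1 - 1) / 2.0,
                     q1 * n2,
                     g1 * n1,
                     g2 * n2])

# copy-number changes (dP1, dP2) for each reaction, in the same order as above
stoich = np.array([[+1, 0], [-2, +1], [+2, -1], [-1, 0], [0, -1]])

def gillespie_direct(n1, n2, t_end):
    t, traj = 0.0, [(0.0, n1, n2)]
    while t < t_end:
        a = propensities(n1, n2)
        a0 = a.sum()
        if a0 == 0.0:
            break
        t += rng.exponential(1.0 / a0)        # exponentially distributed waiting time
        r = rng.choice(len(a), p=a / a0)      # reaction chosen proportionally to its propensity
        n1, n2 = n1 + stoich[r, 0], n2 + stoich[r, 1]
        traj.append((t, n1, n2))
    return traj

print("final state:", gillespie_direct(n1=50, n2=25, t_end=5000.0)[-1])
```

The full model extends the same idea, with the reaction list and the stoichiometry enlarged to cover all species and reactions of Fig. 1.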
We identify a state change in the toggle switch by monitoring the ratio of the monomer and dimer abundance for the two protein species. In order to avoid counting short-time fluctuations that do not correspond to a prolonged change of the toggle state, we applied a sliding-window average to the time series, using a window size of $1000$ sec.\n\n# Authors' contributions\n\nCMG and EA designed the study. CMG performed the computations. CMG and EA analyzed the results and wrote the paper. Both authors have read and approved the final version of the paper.\n\n# Acknowledgments\n\nThe authors thank Dr. Navid for thoughtful discussion and suggestions. This work was performed under the auspices of the U. S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and funded by the Laboratory Directed Research and Development Program (project 06-ERD-061) at LLNL.\n\nTables\n\n**Table 1 - Probability rates for the positive autogenous circuit.** \n\n| Category | Symbol | \u00a0\u00a0\u00a0\u00a0\u00a0Reaction | \u00a0\u00a0Value (s$^{-1}$) | \u00a0\u00a0\u00a0Ref. |\n|:--:|:--:|:---|:---|:---|\n| Protein dimerization | $k_1$ | P$_1$ $+$ P$_1 \\rightarrow$ P$_2$ | 0.001-0.1 | |\n| | $q_1$ | P$_2 \\rightarrow$ P$_1$ $+$ P$_1$ | 0.1-1 | |\n| TF-operator interaction | $k_{20}$ | P$_2$ $+$ D00 $\\rightarrow$ D20 | 0.012 | |\n| | $q_{20}$ | D20 $\\rightarrow$ P$_2$ $+$ D00 | 0.9 | |\n| | $k_{21}$ | P$_1$ $+$ D00 $\\rightarrow$ D10 | 0.038 | |\n| | $q_{21}$ | D10 $\\rightarrow$ P$_1$ $+$ D00 | 0.3 | |\n| | $k_{22}$ | P$_1$ $+$ D10 $\\rightarrow$ D20 | 0.011 | |\n| | $q_{22}$ | D20 $\\rightarrow$ P$_1$ $+$ D10 | 0.9 | |\n| RNAp-promoter interaction | $k_{30}$ | R $+$ D00 $\\rightarrow$ D01 | 0.038 | |\n| | $q_{30}$ | D01 $\\rightarrow$ R $+$ D00 | 0.3 | |\n| | $k_{31}$ | R $+$ D10 $\\rightarrow$ D11 | 0.038$^\\dagger$, 0.38$^\\ddagger$ | |\n| | $q_{31}$ | D11 $\\rightarrow$ R $+$ D10 | 0.3$^\\dagger$, 0.03$^\\ddagger$ | |\n| | $k_{32}$ | R $+$ D20 $\\rightarrow$ D21 | 0.38$^{*\\dagger}$ | |\n| | $q_{32}$ | D21 $\\rightarrow$ R $+$ D20 | 0.03$^{*\\dagger}$ | |\n| Isomerization | $v$ | Dx1 $\\rightarrow$ C $+$ Dx0 | 0.0078 | |\n| Transcription-translation elongation & decay | $\\alpha$ | C $\\rightarrow$ M $+$ R | 0.03 | |\n| | $\\beta$ | M $\\rightarrow$ P$_1$ $+$ M | 0.044 | |\n| | $\\gamma_0$ | M $\\rightarrow \\varnothing$ | 0.0039 | |\n| | $\\gamma_1$ | P$_1 \\rightarrow \\varnothing$ | 7$\\times 10^{-4}$ | |\n| | $\\gamma_2$ | P$_2 \\rightarrow \\varnothing$ | 0.7-3.5$\\times 10^{-4}$ | |\n\nKinetic rates for the positive autogenous circuit. Experimentally available rates are all taken from the phage lambda-*E. coli* system. The values with superscript correspond to the circuit topologies DA1 (\\*), DA2 ($\\dagger$), and DA3 ($\\ddagger$) in Fig.\u00a01.\n\n**Table 2 - Relative Fano factors of protein abundance distributions** \n\n| $K_1$ (nM) | $\\gamma_2=\\gamma_1\/10$ | | $\\gamma_2=\\gamma_1\/2$ | |\n|:----------:|:----------------------:|:-----:|:---------------------:|:-----:|\n| | monomer | dimer | monomer | dimer |\n| 1 | 0.127 | 0.809 | 0.132 | 0.679 |\n| 20 | 0.209 | 0.936 | 0.230 | 0.716 |\n| 500 | 0.866 | 0.478 | 0.826 | 0.426 |\n\nThe Fano factor of the protein abundance distribution for the autogenous circuits (topology DA1), relative to that of the monomer-only (MO) circuit ($8.729$).\n\nFigures\n\n**Figure 1. 
Schematic of model autoregulation gene circuit.** \nThe DNA binding status is indicated by Dxy, where x corresponds to the operator region (empty=0, monomer=1, dimer=2), and y to the promoter region (empty=0, RNA polymerase bound=1). C represents the open complex of DNA-RNAp holoenzyme with the promoter sequence just cleared of RNAp and is subject to transcription elongation. Finally, M, P1 and P2 correspond to mRNA, protein monomer, and dimer, respectively. The network topologies can be grouped into two classes, monomer-only (MO) or dimer-allowed (DA) circuits. We have studied DA1 (red lines), which only allows the dimer to bind with the DNA-operator sequence, DA2 (green) with sequential binding of monomers on the DNA, and DA3 (blue), which shares protein-DNA binding kinetics with MO while allowing dimerization in the cytosol. Note that for topology DA2, we have chosen $K_{31}=K_{30}$ (see text for details) We have assumed cells to be in the exponential growth phase and the number of RNAp (R) constant.\n\n**Figure 2. Ten independent time courses of the abundance of protein monomers in the (positive) autoregulatory circuit.** \nThe availability of a cytosolic dimer state (red, using circuit topology DA1) significantly reduces the copy-number fluctuations of the monomer compared to the monomer-only (MO) circuit (blue). All corresponding MO and DA1 parameters have the same values. In the ensuing simulations initial conditions are chosen to be the steady state solution of the corresponding deterministic rate equation so that the transient behavior should be minimized.\n\n**Figure 3. Stationary state distribution of monomer (black) and dimer (orange) protein abundance in the positive autogenous circuits.** \nThe left (right) column corresponds to a ratio of the dimer and monomer decay rates of $\\gamma_2\/\\gamma_1=1\/10$ ($\\gamma_2\/\\gamma_1=1\/2$). The molecular copy numbers are collected at a fixed time interval ($5\\cdot10^3$ sec) after the steady state has been reached. Here $K_1\\equiv q_1\/k_1$ is the dissociation constant of the protein dimer. As the binding equilibrium is shifted towards the dimer state (decreasing $K_1$), the noise level is monotonically reduced (see Table 2). Note that the prolonged protein lifetime due to the complex formation (left column) affects the noise level.\n\n**Figure 4. Power spectral density (PSD) of fluctuations in protein abundance.** \nThe PSD of the MO circuit clearly displays a power-law behavior. All other model systems with an available cytosolic protein dimer state (DA1 shown here) develop a plateau in the mid-frequency region regardless of the model details (see *Supplementary Information*). As the dimer binding affinity increases, the noise level is further reduced. We have included the MO result in the dimer panel (right) for reference. Datasets with solid (empty) symbols correspond to $\\gamma_2\/\\gamma_1=1\/10$ ($\\gamma_2\/\\gamma_1=1\/2$).\n\n**Figure 5. Sample time series of monomer and dimer copy numbers in genetic toggle switch.** \n(a) MTF circuit, where monomer is the functional form of the repressor. (b) DTF circuit, where dimer is the functional form of the repressor. The left (right) column shows the number of the two monomer molecules $A$ and $B$ (dimers $AA$ and $BB$), and the initial state is always with species $A$ (red) in high abundance. Note that the switching frequency depends on the binding affinity of protein dimer.\n\n**Figure 6. 
Distribution of monomer abundance differences between protein species $A$ and $B$.** \nThe asymmetry with respect to the zero axis is due to the choice of initial state (species $A$ high) and the finite time span of simulations.\n\n**Figure 7. Random switching rates of genetic toggle switches.** \nOrdinate is the ratio of the random switching rates of various toggle switches to that of the monomer-only (MO) circuit, $7.5\\times 10^{-6}$\/hour. MTF, monomeric transcription factor; DTF, dimeric transcription factor; Het-MTF, monomeric transcription factor with deactivated heterodimer state.\n\nAdditional Files\n\n**Additional file 1. Supplementary results for the positive autoregulatory circuits with various topology.** \nProtein abundance distribution and power spectral density of autogenous DA2 and DA3 circuits are presented.","meta":{"dup_signals":{"dup_doc_count":12},"filename":"out\/0812.0841_extract_BMC_qbio.tex.md"},"subset":"arxiv"} +{"text":"abstract: The Internet not only has changed the dynamics of our collective attention, but also through the transactional log of online activities, provides us with the opportunity to study attention dynamics at scale. In this paper, we particularly study attention to aircraft incidents and accidents using Wikipedia transactional data in two different language editions, English and Spanish. We study both the editorial activities on and the viewership of the articles about airline crashes. We analyse how the level of attention is influenced by different parameters such as number of deaths, airline region, and event locale and date. We find evidence that the attention given by Wikipedia editors to pre-Wikipedia aircraft incidents and accidents depends on the region of the airline for both English and Spanish editions. North American airline companies receive more prompt coverage in English Wikipedia. We also observe that the attention given by Wikipedia visitors is influenced by the airline region but only for events with high number of deaths. Finally we show that the rate and time span of the decay of attention is independent of the number of deaths and a fast decay within about a week seems to be universal. We discuss the implications of these findings in the context of attention bias.\naddress: Oxford Internet Institute, University of Oxford, U.K\nauthor: Ruth Garc\u00eda-Gavilanes, Milena Tsvetkova and Taha Yasseri\nbibliography: main.bib\nsubject: behaviour, complexity, human-computer interaction\ntitle: Dynamics and Biases of Online Attention: The Case of Aircraft Crashes\n\n# Introduction\n\nThe Internet has drastically changed the flow of information in our society. Online technologies enable us to have direct access to much of the world's established knowledge through services such as Wikipedia and to informal user-generated content through social media. There is no theoretical limit to the information bandwidth on the Internet but human attention has its own limits. Public attention to emerging topics decays over time or suffers the so called memory buoyancy from users, which is a metaphor of information objects sinking down in the digital memory with decreasing importance and usage, increasing their distance to the user .\n\nNowadays, the online footprints of users have rendered the level of attention given to new and past events and its decay an observable phenomenon. 
The digital nature of Internet-based technologies enables us to analyse the variances of attention at a scale and with an accuracy that have not been feasible in relation to other communication technologies. Researchers have used logs generated by online users' activities such as tweets, search queries, and web navigation paths to cover a wide range of topics on attention. For example, Lehmann\u00a0*et al.*\u00a0 characterize attention by analysing the time-series of tweets with popular tags from a data set of 130 million tweets from 6.1 million users and found four clusters based on dynamics, semantics, and information spread. Yeung\u00a0*et al.*\u00a0 focus on how events are remembered for specific years by looking at temporal expressions in the text of 2.4 million articles in English from Google news archive; they find more references to more recent events. Other studies have concentrated on attention decay. Wu and Huberman discover a very short time span of collective attention with regard to news items on the digg.com linksharing website. Simkin and Roychowdhury\u00a0 study blogs and news from more than 100 websites and find that decay in accessibility is due to aspects of visibility such as link positioning and attractiveness. Researchers have also linked online attention to more practical matters, from predicting election outcomes and detecting memory patterns in human activities\u00a0, all the way to analysing trading behaviour in financial markets or the appropriate time when to publish news to gain more attention\u00a0.\n\nWhile several aspects of online attention increase and decay have been fairly well investigated, much less is known about how geography, event impact, and differences across populations with different languages affect attention. Thus, the question whether online technologies have improved or worsen the fairness and equality with which news are released to the public, influencing their attention, is still open. The question is particularly important to investigate with regard to high impact events such as the terrorist attacks in Paris and Beirut in November 2015. It was reported that only 11% of the top media outlets covered the Beirut attacks in the first 24 hours in comparison to 51% for Paris. Furthermore, user attention for the Beirut bombings within the first hour was only 5% of what Paris achieved within the same time period in spite of the Paris attacks starting almost 15 hours after Beirut. What determines what is covered by the media and when? What determines the level of public attention to new events? Does the decay of public attention varies depending on the event? In this paper, we answer these questions at scale by analysing editorial and traffic information on a set of articles in two different language editions of Wikipedia. We study how events are covered, what aspects determine attention to them, how attention decays, and whether there are differences between languages. Focusing on depth rather than breadth, we limit our analyses to one specific type of event\u2014aircraft incidents and accidents\u2014and to the two most popular Wikipedia language editions by number of active users\u2014English and Spanish.\n\nWikipedia is a unique resource to study collective attention. Written and edited by volunteers from all around the world, it has become the number one source of online information in many languages, with close to 40 Million articles in around 300 language editions (and counting) and with open access to logs and metadata. 
There is a high correlation between search volume on Google and visits to the Wikipedia articles related to the search keywords . This indicates that Wikipedia traffic data is a reliable reflection of web users' behaviour in general. The high response rate and pace of coverage in Wikipedia in relation to breaking news is another feature that makes Wikipedia a good research platform to address questions related to collective attention. For instance, researchers have analysed Wikipedia edit records to identify and model the most controversial topics in different languages , to study the European food culture , and to highlight entanglement of cultures by ranking historical figures . Wikipedia traffic data has also been used to predict movie box office revenues\u00a0, stock market moves , electoral popularity\u00a0, and influenza outbreaks\u00a0.\n\nTo answer our research questions, we develop an automatic system to extract editorial and traffic information on the Wikipedia articles about aircraft incidents and accidents and factual information about the events. By comparing the English and the Spanish Wikipedia, we contribute to this research field in the following ways:\n\n- We study the coverage of the events in Wikipedia and its dynamics over time considering the airline region, the event locale, and the number of deaths.\n\n- We analyse the role of the airline region and number of deaths on the viewership data to Wikipedia articles.\n\n- We model attention decay over time.\n\nWe present the results from our study in the next section, after which we continue with discussion and conclude with implications. Details for our data collection and analysis strategy can be found in the last section, Section .\n\n# Results\n\nFigure\u00a0 shows a map of all the aircraft incidents and accidents from English Wikipedia coloured according to the airline region, which is where the airline company for the flight is located, and sized according to the number of deaths caused by the event. For simplicity, we divide the Americas into two regions: North America and Latin America. Latin America includes all countries or territories in the Americas where Romance languages are spoken as first language (in this case, Spanish, Portuguese, and French) and all Caribbean islands, while North America includes the rest (i.e., mostly United States and Canada). Furthermore, all headquarters in the EuroAsia region are labeled as Asia (e.g., Russia and Turkey). We observe that the locales of the events overlap most of the time with the airline regions.\n\nOur results are divided in three sections: the first part deals with the editorial coverage of the events, the second with the immediate collective attention quantified by viewership statistics, and the third with the modelling of attention decay.\n\n## Editorial Coverage\n\nTable compares the number of aircraft accidents and incidents covered in English and Spanish Wikipedias with cases reported by the Aviation Safety Network (ASN)[^1] in different continents. While ASN provides data from 1945, excluding military accidents, corporate jets, and hijackings, our dataset includes these cases and dates back to the year 1897. There are 1,081 articles in English Wikipedia that do not have a Spanish equivalent and most of them are about events that happened in North America (265), Asia (261), and Europe (252). 
On the other hand, there are 71 articles in Spanish Wikipedia with no English equivalent and most of them are about events that happened in Latin America (39).\n\n```latex\n\\begin{table*}[t]\\scriptsize\n\\centering\n\\begin{tabular}{llrllrllrl}\n \n \\multicolumn{1}{c}{} & \\multicolumn{6}{c}{\\textbf{ Wikipedia}}&\\multicolumn{3}{c}{\\textbf{ ASN}}\\\\ \\cline{2-7}\n\\multicolumn{1}{c}{} & \\multicolumn{3}{c}{\\textbf{ English}} & \\multicolumn{3}{c}{\\textbf{ Spanish}} & \\multicolumn{3}{c}{} \\\\\n& Events & \\multicolumn{2}{c}{Deaths} & Events & \\multicolumn{2}{c}{Deaths} & Events & \\multicolumn{2}{c}{Deaths} \\\\\nContinent & & avg & total & & avg & total & & avg & total \\\\ \n \\hline\nAfrica & 0.08 & 49 & 5,967 & 0.07 & 58 & 1,981 & 0.10 & 20 & 8,108 \\\\ \n Asia & 0.24 & 50 & 17,987 & 0.22 & 61 & 6,618 & 0.17 & 27 & 19,351 \\\\ \n Australia & 0.03 & 21 & 873 & 0.01 & 52 & 260 & 0.03 & 12 & 1,448 \\\\ \n Europe & 0.22 & 36 & 11,818 & 0.17 & 59 & 4,963 & 0.24 & 23 & 23,423 \\\\ \n L. America & 0.08 & 47 & 5,789 & 0.24 & 40 & 4,695 & 0.19 & 16 & 12,942 \\\\ \n N. America & 0.23 & 27 & 9,052 & 0.16 & 45 & 3,517 & 0.23 & 13 & 12,958 \\\\ \n Others & 0.12 & 45 & 8,353 & 0.13 & 80 & 4,941 & 0.02 & 32 & 2,712 \\\\ \n Total & 1,496 & 40 & 59,839 & 488 & 55 & 26,975 & 4,223 & 19 & 80,942 \\\\ \n \\hline\n\\end{tabular}\n\\caption{Breakdown by region of the number of aircraft incidents and accidents covered in Wikipedia compared to the data available at The Aviation Safety Network (ASN) website.\nThe column \\emph{Events} is the ratio with regard to the row \\emph{Total}.}\\label{table_groundtruth}\n\\end{table*}\n```\n\nWith regard to the number of deaths, the lowest average numbers correspond to Australia, North America, and Europe respectively for English Wikipedia, whereas Latin America and North America have the lowest average number of deaths for Spanish Wikipedia. This is because some low impact events (many with 0 deaths) that occurred in Australia, North America, and Europe are only included in English Wikipedia and some low impact events in Latin America are only considered notable in Spanish Wikipedia. With regard to the articles in English that do not have a Spanish equivalent, the average number of deaths is 39 and for those that do not have an English equivalent the average is 12. These numbers indicate that the articles in Spanish without an English equivalent are low impact events concentrated in Latin America.\n\nWe also investigate the time lag between the occurrence of the event and the creation of the corresponding Wikipedia article. Our dataset contains articles about events that happened before and after Wikipedia was launched (see Fig. in the Appendix). Post-Wikipedia events (399 for English and 224 for Spanish) are shown on the upper row panels of Figure\u00a0, where the horizontal and vertical axes show the time of the occurrence of the event and the creation of the corresponding Wikipedia page respectively. The convergence of the data points towards the diagonal line indicates that the community of Wikipedia editors reacts increasingly fast to this kind of events. English Wikipedia has been faster at covering events since the diagonal trend starts earlier. A possible explanation is the larger number of users in English Wikipedia compared with the Spanish version.\n\nThe lower row panels of Figure\u00a0 show the coverage of the pre-Wikipedia events. 
The colour of the curve corresponds to the airline's region and the x-axis shows the year of the Wikipedia page creation. For English Wikipedia (1,078 cases) a quicker coverage of North American events is evident. African, Australian, and South American events exhibit sharp increases as the addition of these articles was concentrated in specific periods. On the other hand, Spanish Wikipedia (264 cases) shows a slightly faster coverage for events related to European companies with sharp jumps for African and Australian companies (there are only 34 and 5 cases respectively). Most importantly, however, not only did English Wikipedia cover more pre-Wikipedia events, but it also did it faster. Again, this can be explained considering the larger size of the editorial community of English Wikipedia.\n\n## Immediate attention\n\nNow we turn to the viewership data. To capture the immediate attention to an event right after its occurrence, we choose the articles that were created up to 3 days after the event and extract the maximum number of views within 7 days after the page was created (see Figure\u00a0 for an example). We discuss the choice of 7 days in section .\n\nA baseline hypothesis would be that the larger the number of deaths the event caused, the more attention it attracts. However, this is not always the case; attention is driven by other factors such as media coverage, location, people involved, etc. This is reflected in Figure . The plot shows the normalized maximum daily views versus the number of deaths in log scale for the English and Spanish Wikipedias.\n\nIn English Wikipedia, we have identified two regimes: low-impact events ($< 40$ deaths), where there is no correlation between impact and attention, and high-impact events ($\\geqslant 40$ deaths), where the maximum number of daily page views increases proportionally to the event impact with $r=0.71$, $p<0.001$. To separate these two regimes, we used visual inspection to accommodate the largest empty square on the lower-right region of the diagram. Despite the high correlation in this regime, impact does not always reflect attention: the plot shows two African outliers with less attention than expected from the overall trend. In Spanish Wikipedia, the separation of the two phases at around 70 deaths is less evident but still exists. The correlation in the high impact regime is $r=0.67$, $p<0.005$. Also note that in the high impact regime, the level of attention increases almost quadratically with the number of deaths. However, we hesitate to fit a function here due to the small number of data points.\n\nTo analyse the importance of the airline's region and number of deaths on the level of attention, we use linear regression models. We have removed the two outlier events from the English sample shown in Figure . We then model all the data points using a simple linear model considering the number of deaths as the only parameter (see Table\u00a0). In the English case, the number of deaths alone can only explain around 22% of the variation in the level of the immediate attention. If we add the airline region as a categorical variable using Africa as the reference category, we increase the explanatory power to 28%. Here, we observe that events related to North American companies attract more views than those related to companies from other regions ($\\beta_{1}=1.67$). 
On the other hand, Latin American companies play the same role in Spanish Wikipedia ($\\beta_{1}=1.68$).\n\n```latex\n\\begin{table*}[t]\\centering\n\\begin{tabular}{lrrrrrrrr}\n\\multicolumn{1}{c}{}&\\multicolumn{8}{c}{ \\textbf{ All events}}\\\\\\cline{2-9}\n\\multicolumn{1}{c}{}&\\multicolumn{4}{c}{ \\textbf{ English (n = 204)}}&\\multicolumn{4}{c}{ \\textbf{Spanish (n = 80)}} \\\\\n&\\multicolumn{1}{c}{$\\beta_{1}$} && \\multicolumn{1}{c}{$\\beta_{2}$}& &\\multicolumn{1}{c}{$\\beta_{1}$} && \\multicolumn{1}{c}{$\\beta_{2}$}& \\\\\\hline\n \\hline\nIntercept & -12.18 & *** & -13.19 & *** & -13.89 & *** & -15.24 & *** \\\\ \n Deaths & 0.61 & *** & 0.69 & *** & 0.41 & ** & 0.53 & *** \\\\ \n Asia & & & 0.79 & * & & & 0.7 & \\\\ \n Australia & & & 0.22 & & & & 0.99 & \\\\ \n Europe & & & 1.42 & ** & & & 1.21 & \\\\ \n Latin America & & & 0.23 & & & & 1.68 & * \\\\ \n North America & & & 1.67 & *** & & & 0.96 & \\\\ \n Adj. $R^2$ & 0.22 & *** & 0.28 & *** & 0.11 & ** & 0.12 & * \\\\ \n\n\\hline\n \\multicolumn{1}{c}{}&\\multicolumn{8}{c}{ \\textbf{ Low-impact}}\\\\\\cline{2-9}\n\\multicolumn{1}{c}{}&\\multicolumn{4}{c}{ \\textbf{ English (n = 166)}}&\\multicolumn{4}{c}{ \\textbf{Spanish (n = 60)}} \\\\\n&\\multicolumn{1}{c}{$\\beta_{1}$} && \\multicolumn{1}{c}{$\\beta_{2}$}& &\\multicolumn{1}{c}{$\\beta_{1}$} && \\multicolumn{1}{c}{$\\beta_{2}$}& \\\\\\hline\nIntercept & -11.44 & *** & -12.27 & *** & -13.3 & *** & -15.87 & *** \\\\ \n Deaths & 0.04 & & 0.14 & & 0.1 & & 0.14 & \\\\ \n Asia & & & 0.47 & & & & 2.42 & \\\\ \n Australia & & & 0.07 & & & & 2.95 & \\\\ \n Europe & & & 0.99 & * & & & 2.1 & \\\\ \n Latin America & & & 0.46 & & & & 3.18 & * \\\\ \n North America & & & 1.39 & ** & & & 2.3 & \\\\ \n Adj. $R^2$ & -0.01 & & 0.04 & & -0.01 & & 0.02 & \\\\ \n \\hline\n\\multicolumn{1}{c}{}&\\multicolumn{8}{c}{ \\textbf{ High-impact}}\\\\\\cline{2-9}\n\\multicolumn{1}{c}{}&\\multicolumn{4}{c}{ \\textbf{ English (n = 38)}}&\\multicolumn{4}{c}{ \\textbf{Spanish (n = 20)}} \\\\\n&\\multicolumn{1}{c}{$\\beta_{1}$} && \\multicolumn{1}{c}{$\\beta_{2}$}& &\\multicolumn{1}{c}{$\\beta_{1}$} && \\multicolumn{1}{c}{$\\beta_{2}$}& \\\\\\hline\n Intercept & -12.61 & *** & -12.95 & *** & -18.03 & *** & -18.73 & *** \\\\ \n Deaths & 0.97 & *** & 0.92 & *** & 1.33 & ** & 1.45 & ** \\\\ \n Asia & & & 0.49 & & & & -0.22 & \\\\ \n Australia & & & & & & & & \\\\ \n Europe & & & 1.01 & * & & & 0.88 & \\\\ \n Latin America & & & -0.21 & & & & & \\\\ \n North America & & & 1.72 & * & & & & \\\\ \n Adj. $R^2$ & 0.38 & *** & 0.48 & *** & 0.28 & ** & 0.50 & ** \\\\ \n \\hline\n\\end{tabular}\n\\caption{Results from regression analyses with logarithm of the maximum number of page views as dependent variable. The column for $\\beta{1}$ corresponds to a model that only considers the number of deaths (log-transformed) as the independent variable, whereas $\\beta{2}$ reports a model which considers log(deaths) and the airline region as independent variables. Significance codes: *** $<0.001$, ** $<0.01$, * $<0.05$.} \n\n \\label{table_regression}\n\\end{table*}\n```\n\nIf we split the data points into high- and low-impact events and recalculate the linear model separately for each regime, we see that the addition of the airline region in cases with high number of deaths increases the explanatory power of the regression. In both language editions, the proportion variance explained increases considerably. 
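The categorical regressions summarised above can be sketched roughly as follows; this is a minimal, hypothetical example using the `statsmodels` formula interface, and the toy data frame, the variable names and the log(deaths + 1) transform are our own assumptions rather than the authors' actual code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# one row per event: maximum normalized daily views, death toll, airline region
df = pd.DataFrame({
    "max_views": [2.1e-6, 8.3e-7, 5.5e-6, 1.2e-7, 9.0e-6, 3.1e-7, 4.4e-7, 2.6e-6],
    "deaths":    [83, 12, 228, 4, 520, 47, 9, 150],
    "region":    ["North America", "Africa", "Europe", "Latin America",
                  "Asia", "North America", "Africa", "Europe"],
})
df["log_views"]  = np.log(df["max_views"])
df["log_deaths"] = np.log(df["deaths"] + 1)   # +1 guards against zero-death events

# model 1: impact only
m1 = smf.ols("log_views ~ log_deaths", data=df).fit()

# model 2: impact plus airline region, with Africa as the reference category
m2 = smf.ols(
    "log_views ~ log_deaths + C(region, Treatment(reference='Africa'))",
    data=df,
).fit()

print(m1.rsquared_adj, m2.rsquared_adj)
print(m2.params)
```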
By contrast, the explanatory power we obtain for the low-impact events is negligibly small.\n\nBased on the results of the categorical regression analysis including the location of the operating companies, one can estimate the relative level of attention paid to pairs of events from different regions on average. These ratios are reported in Table\u00a0. For instance, controlling for the number of deaths, a North American event triggers about 50 times more attention among English Wikipedia readers compared to an African event. This ratio for North American versus European is about two. In Spanish Wikipedia, however, a Latin American event triggers about 50 times more attention than an African and 5 times more than a North American event.\n\n| | **English Wikipedia** | | | | | |\n|:---|:--:|:--:|:--:|:--:|:--:|:--:|\n| | Africa | Australia | Latin America | Asia | Europe | North America |\n| Africa | 1 | 2 | 2 | 6 | 26 | 47 |\n| Australia | | 1 | 1 | 4 | 16 | 28 |\n| Latin America | | | 1 | 4 | 16 | 28 |\n| Asia | | | | 1 | 4 | 8 |\n| Europe | | | | | 1 | 2 |\n| North America | | | | | | 1 |\n| | | | | | | |\n| | **Spanish Wikipedia** | | | | | |\n| | Africa | Asia | North America | Australia | Europe | Latin America |\n| Africa | 1 | 5 | 10 | 10 | 16 | 48 |\n| Asia | | 1 | 2 | 2 | 3 | 10 |\n| North America | | | 1 | 1 | 2 | 5 |\n| Australia | | | | 1 | 2 | 5 |\n| Europe | | | | | 1 | 3 |\n| Latin America | | | | | | 1 |\n\nDeath equivalence ratios based on the viewership data from English and Spanish Wikipedias. The matrix is calculated according to the coefficients reported on the upper part of Table . For 6 different airline continents, the matrix shows the ratio of triggered attention, controlling for the number of deaths. For example, the attention given to events caused by a North American airline in English Wikipedia is on average 2 and 47 times larger than to the events caused by European and African companies respectively. In Spanish Wikipedia, the level of attention given to events related to Latin America is 3 times larger than for European events, 5 times larger than for North American events, and 10 times larger than for Asian events.\n\n## Modeling attention decay\n\nNow we focus on attention decay by analysing the viewership time-series after the event. After the initial boost in viewership, which in 73% of the cases happens in less than 5 days after the date of the page creation, an exponential decay follows (see Figure\u00a0 for an example). This phenomenon is due both to the decay of novelty and to limitations in the human capacity to pay attention to older items in competition with newer ones\u00a0.\n\nTo model the attention decay, we use a segmented regression model with two break points to fit the normalized daily page-view counts in logarithmic scale (see Section for details). Figure\u00a0 shows a typical example of the time series of the viewership of an article and the fit of the segmented regression model.\n\nThe distributions of fit parameters are reported in Table\u00a0. These distributions confirm the assumptions that we make in developing our segmented regression model with two break points as well as similarities between the two language editions that we study. For instance, in both cases the half-life of the attention in the first phase and the detected position of the first break point show similar patterns.\n\nIn Figure\u00a0, we show the distribution of the location of the first break point in larger scale (the parameter denoted $t_1$ in the sketch below). 
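For concreteness, the two-break-point fit can be sketched as follows. This is a schematic reimplementation built on a generic least-squares optimizer; the analysis in the paper relies on the R package *segmented* (with its iterative break-point updates and bootstrap restarting), so the function below, the synthetic data and the bounds are illustrative assumptions rather than the actual fitting code.

```python
import numpy as np
from scipy.optimize import curve_fit

def three_segments(t, t1, t2, y0, s1, s2, s3):
    """Continuous piecewise-linear model for log(views) with break points t1 < t2."""
    return np.where(
        t <= t1, y0 + s1 * t,
        np.where(t <= t2,
                 y0 + s1 * t1 + s2 * (t - t1),
                 y0 + s1 * t1 + s2 * (t2 - t1) + s3 * (t - t2)))

rng = np.random.default_rng(1)
t = np.arange(50, dtype=float)                       # days after the viewership peak
y = three_segments(t, 6, 25, 0.0, -0.45, -0.05, -0.005) + 0.05 * rng.normal(size=t.size)

p0 = [5.0, 20.0, 0.0, -0.5, -0.05, -0.01]            # starting break points and slopes
bounds = ([1, 10, -5, -5, -5, -5], [10, 45, 5, 0, 0, 0])
popt, _ = curve_fit(three_segments, t, y, p0=p0, bounds=bounds)

t1, t2, s1 = popt[0], popt[1], popt[3]
half_life = np.log(2) / abs(s1)                      # half-life implied by the first slope
print(f"break points at {t1:.1f} and {t2:.1f} days; first-phase half-life {half_life:.1f} days")
```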
This parameter indicates the time span of the initial attention paid to the event. The first break point is localized around 3-10 days for both English and Spanish Wikipedia.\n\nIn Figure\u00a0 we consider other parameters that the best fit of the model assigns to each event. We observe that there is no significant correlation between the position and the value of attention at the first break point and the number of deaths, meaning that the rate of decay in attention and the first attention phase time span are independent of the impact of the event (upper and middle rows). However, in the lower row of the same figure we show that the relation between the level of attention at the second break point, which can be interpreted as the level of the long-lasting attention, and the immediate attention in the initial phase, is similar to what is observed in Figure\u00a0, i.e., for low impact events, the long lasting attention is independent of the initial attention, whereas for high impact events, the initial attention is a good predictor of the long term attention to the event.\n\n# Discussion and Conclusion\n\nWe studied online attention to aircraft incidents and accidents using editorial and viewership data for the English and Spanish editions of Wikipedia. Overall, we found certain universal patterns.\n\nWe found some differences in event coverage between the two languages but often, they can be attributed to the same underlying biases. For example, attention on English Wikipedia is more focused on events concerning North American and European airlines while attention on Spanish Wikipedia gives priority to Latin American airlines. English Wikipedia tends to cover more events in North America, while Spanish Wikipedia tends to cover more events in Latin America.\n\nOur findings suggest that crashes of flights operated by North American companies, which mostly happened also in North America, receive higher publishing priority in English Wikipedia regardless of the impact, while accidents from other locales, especially older accidents, are published later and have to be more impactful to receive the same level of editorial attention. Similar editorial biases in different contexts have been studied and reported before . Although one can argue that English Wikipedia is mostly edited and used by North American users, previous research has shown that only about half of the editorial activity on English Wikipedia originates from North America and English should be considered as the *lingua franca* of Wikipedia . Also note that the difference that we see within each Wikipedia language edition is consistent regardless of the language of the study and hence the origin of viewers.\n\nThese biases in Wikipedia can be driven by the biases in mainstream media . Previous research has shown that a considerable dominance of references to Western media exists in Wikipedia and therefore, events of less importance for the Western media are more sparsely covered in Wikipedia. In the case of aircraft crashes, for example, in 1981, 10 people died in the controversial flight *FAB 001* belonging to the Ecuadorian Air Force. It is a controversial flight because the former president of Ecuador Jaime Rold\u00f3s was among the victims and the cause of the crash is still a mystery. 
Although there are articles in several languages in Wikipedia covering the biography of Jaime Rold\u00f3s and the type of airplane used in the crash, there is no article equivalent to the specific flight that caused his death and thus this case is missing in our dataset. The same happens for the flight that killed the former president of the Philipines Ram\u00f3n Magsaysay or the Iraqi former president Abdul Salam Arif, among others.\n\nIn both languages, we observed two attention regimes for events \u2013 low-impact regime, where the level of maximum attention is independent of the number of deaths and high-impact regime, where the airline region and the impact of the event significantly influence attention. In addition, focusing on the immediate attention to the event, we found that the time span and rate of the exponential decay (the slope of the fit to the first segment exemplified in the semi-log diagram of Figure\u00a0) is independent of the impact of the event and the language of the article. The short span of attention that we observed (on the order of a few days) is in accordance with previous findings by other researchers .\n\nOur study needs further generalization to include other type of events, such as natural disasters, political, and cultural events. Moreover, our analysis has been limited to the English and Spanish editions of Wikipedia. Although these two are among the largest Wikipedia language editions, we might see variations in results studying attention patterns in different language editions.\n\n# Materials and Methods\n\n## Data collection\n\nWe collect data from Wikipedia using two main sources: the MediaWiki API and Wikidata. Wikidata[^2] is a Wikipedia partner project that aims to extract facts included in Wikipedia articles and fix inconsistencies across different editions\u00a0. Although content in Wikidata is still somewhat limited, the availability of such structured information makes it easier for researchers to obtain data from a set of Wikipedia articles in a systematic way.\n\nTo complete the data missing from Wikidata, we automatically crawl Wikipedia infoboxes[^3] and collect features of events (see below).\n\nWe first focus on a set of articles classified as aircraft accidents or incidents in English Wikipedia, belonging to the categories *Aviation accidents and incidents by country* and *Aviation accidents and incidents by year*, and their subcategories, which cover all airline accidents and incidents in different countries and throughout history available in Wikipedia. In total we obtain 1606 articles from which 1496 are specifically about aircraft crashes or incidents (we discard articles of biographies, airport attacks, etc). From the 1496 articles, we obtain the following: date of the event, number of deaths, coordinates of the event, and airline region.\n\nWe extract all editorial information for the articles in the sample using the MediaWiki API. We extract the date when the article was created and alternative names for the article. We use the latter to merge all traffic statistics to the main title. Next, we extract all available articles in the same categories considered in English Wikipedia from Spanish and follow the same procedure to extract the features of the articles in the Spanish edition. 
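This crawling step can be sketched as follows; it is a minimal example built directly on the public MediaWiki API, and the chosen category name, the lack of pagination and of rate limiting, and the helper names are simplifications for illustration rather than the actual pipeline used for this study.

```python
import requests

API = "https://en.wikipedia.org/w/api.php"

def category_members(category, limit=500):
    """Article titles in a Wikipedia category (first batch of results only)."""
    params = {
        "action": "query", "format": "json", "list": "categorymembers",
        "cmtitle": f"Category:{category}", "cmlimit": limit, "cmnamespace": 0,
    }
    data = requests.get(API, params=params, timeout=30).json()
    return [m["title"] for m in data["query"]["categorymembers"]]

def creation_date(title):
    """Timestamp of the first revision, taken as the article creation date."""
    params = {
        "action": "query", "format": "json", "prop": "revisions",
        "titles": title, "rvlimit": 1, "rvdir": "newer", "rvprop": "timestamp",
    }
    pages = requests.get(API, params=params, timeout=30).json()["query"]["pages"]
    return next(iter(pages.values()))["revisions"][0]["timestamp"]

for title in category_members("Aviation accidents and incidents in 1981")[:3]:
    print(title, creation_date(title))
```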
In total, we obtain 525 articles in Spanish Wikipedia from which 488 are about aircraft incidents or accidents.\n\nFinally, we extract the daily traffic to the articles in English and Spanish from the Wikipedia pageview dumps[^4] through a third party interface.[^5]\n\n## Data analysis\n\nTo control for the changes in the overall popularity of Wikipedia, we normalize the viewership counts by the overall monthly traffic to Wikipedia.[^6] To numerically model attention dynamics, we apply segmented regression analysis to viewership data during 50 days after the first pick due to the occurrence of the event. We use segmented regression as implemented in the R package \"segmented\". [^7] Segmented regression models are models where the relationship between the response and one or more explanatory variables are piecewise linear, represented by two or more straight lines connected at values called breakpoints\u00a0. To find those breakpoints, the algorithm first fits a generic linear model then fits the piecewise regression through an iterative procedure that uses starting break point values given by us at the beginning. In our specific case, three piecewise regressions are fit in each iteration and the two break point values are updated accordingly as to minimize the gap\u00a0$\\gamma$\u00a0between the segments. The model converges when the gap between the segments is minimized. We refer the reader to the paper by Muggeo\u00a0 for a detailed explanation. Additionally, the package description explains that bootstrap restarting is used to make the algorithm less sensitive to starting values.\n\nAlthough alternative approaches could be undertaken to model nonlinear relationships, for instance via splines, the main appeal of the segmented model lies in its simplicity and the interpretability of the parameters.\n\nWe have chosen two break points (three segments) for the analysis but our main results are robust against changing this number (see Figure in the Appendix). This choice is informed by previous research that identifies three phases in the evolution of collective reactions to events: communicative interaction, floating gap, and cultural memory (stabilization phase)\u00a0.\n\nWe find that most of the events are fitted well, with high adjusted $R^2$ (average 0.84 for English and 0.80 for Spanish). However, in some cases, this model is not able to capture the overall dynamics, mostly due to secondary shocks driven by new triggering factors that are too close to the event, e.g., the discovery of the corresponding airplane black box or other related newsworthy events.\n\n\n\n# Data Accessibility\n\nThe datasets supporting this article have been uploaded to DRYAD system and is available via https:\/\/www.doi.org\/10.5061\/dryad.34mn3.\n\n# Competing interests\n\nThe authors declare no competing interests.\n\n# Authors' contributions\n\nRG-G collected and analysed the data, participated in the design of the study, and drafted the manuscript; MT participated in the design of the study and helped draft the manuscript; TY conceived, designed, and coordinated the study, and helped draft the manuscript. 
All authors gave final approval for publication.\n\n# Funding\n\nThis research is part of the project *Collective Memory in the Digital Age: Understanding Forgetting on the Internet* funded by Google.\n\n1.5em\n\n# Appendix\n\n[^1]: \n\n[^2]: Using .\n\n[^3]: Using .\n\n[^4]: \n\n[^5]: \n\n[^6]: The data are obtained from \n\n[^7]: We use the R package *segmented*: ","meta":{"dup_signals":{"dup_doc_count":24,"dup_dump_count":20,"dup_details":{"curated_sources":2,"2018-43":1,"2018-34":2,"2018-26":1,"2018-17":1,"2018-09":1,"2017-51":1,"2017-47":1,"2017-39":1,"2017-34":1,"2017-30":2,"2017-26":1,"2017-22":1,"2017-17":1,"2017-09":1,"2017-04":1,"2016-50":1,"2016-44":2,"2018-47":1,"2017-13":1}},"filename":"out\/1606.08829_extract_main.tex.md"},"subset":"arxiv"} +{"text":"abstract: The steady growth of digitized historical information is continuously stimulating new different approaches to the fields of Digital Humanities and Computational Social Science<\/span>. In this work we use Natural Language Processing techniques to retrieve large amounts of historical information from Wikipedia. In particular, the pages of a set of historically notable individuals<\/span> are processed to catch the locations and the date of people's movements. This information is then structured in a geographical network of mobility patterns<\/span>.\n .\n We analyze the mobility of historically notable individuals<\/span> from different perspectives to better understand the role of migrations and international collaborations in the context of innovation and cultural development. In this work, we first present some general characteristics of the dataset from a social and geographical perspective. Then, we build a spatial network of cities, and we model and quantify the tendency to explore of a set of people that can be considered as historically and culturally notable<\/span>. In this framework, we show that by using a multilevel radiation model for human mobility, we are able to catch important features of migration's behavior. Results show that the choice of the target migration place for historically and<\/span> culturally relevant people is limited to a small number of locations and that it depends on the discipline a notable is interested in and on the number of opportunities she\/he can find there<\/span>.\naddress: , , , ; , , ,\nauthor: ; ; \nbibliography: bmc_article.bib\ntitle: Following the footsteps of giants: Modeling the mobility of historically notable individuals using Wikipedia\n\n# Introduction\n\nEver since the first villages were built by primitive people, humankind has moved from one community to another, in search of better life conditions or new opportunities . These conditions can be represented by different factors, such as the opportunity of a job, better living standards, or the distance from the home country . The set of all these factors is difficult to define a-priori.\n\nFor example, according to the dominant neo-classical theory, people tend to make choices in order to maximize their income or level of well-being . Thus, the search for better economic conditions is one of the most important factors in the decision of moving from one location to a more attractive one. However, there are other factors that could play an important role in the decision-making process of a specific group of people. The attractiveness of an opportunity can also depend on cultural and linguistic barriers, and on the presence of particular communities at the destination . 
Hence, in order to define a complete migration framework it is important to consider all the linked aspects that intervene in modifying the attractiveness of a location (e.g. economic, environmental, cultural, and political aspects) . As a consequence, we believe that, in the same way as economic conditions play a central role for those seeking employment, it is very relevant to investigate whether there are other factors playing a role for specific kinds of migration, for example the migration of notable people and intellectuals in the course of history .\n\nFor the general problem of human migration, different mathematical models have been built, in both descriptive and predictive frameworks. However, these models have never been applied to the specific scenario of modeling the historical movements and migrations of intellectual figures. Instead, mobility and migrations historically played, and still play, an important role in the process of cultural evolution, introducing seeds of change in different places around the world . Thus, understanding the patterns that historically notable individuals followed during their lives and how these affected cultural evolution and human history is an intriguing and still open research question . This approach introduces new challenges because of the specificity of the problem and the relatively small number of people that had an impact on cultural evolution and human history. New perspectives were opened by a recent approach proposed by Schich *et al.* . By analyzing the birth and death locations of historically notable individuals, they captured, from a network perspective, the key characteristics of their exploratory behavior.\n\nIn our paper, we propose a way of using Natural Language Processing (NLP) and Network Science techniques to further characterize and model the mobility of historically and culturally notable individuals and to investigate the factors playing a role in their migration patterns.\n\nIn the last decade, the NLP community has developed technologies for extracting information from unstructured texts, thus enabling their application also to interdisciplinary research areas. Understanding and modeling historical migration phenomena require specific historical data. To this end, we propose to use NLP techniques to process the digital biographies contained in Wikipedia and to extract migratory events from its encyclopedic information. In particular, a subset of the biographies available in the English version of Wikipedia is used as the raw data source. From this source, for each notable person we search for her\/his footsteps hidden in her\/his Wikipedia page and collect the following information: (i) the place and date of birth, (ii) the place and date of death, and (iii) the place and date of the various in-life migrations (e.g. moving from one city to a different one). This results in a more complete set of data with respect to , enriching the global picture of the mobility of notable individuals with a finer temporal granularity, while enabling us, to the best of our knowledge, to model this process from a historical point of view for the first time.\n\nAs proposed in , we use the term *culture* to focus our attention on the set of notable contributions to the development of human history in its broadest sense: from poetry to sports, from music to physics and mathematics. 
In particular, we model the mobility dynamics of notable people, namely those people whose cultural production is known at a global level. To this end, we introduce a modification to the *radiation model* for human mobility . The assumption behind the *radiation model* resides in using the city size (i.e. the city population) as a proxy of the number of job opportunities. In our paper, we modify the *radiation model* to take into account, in addition to the role played by the city size, also the attractive role played by the different disciplines and by the number of notable people as proxies of cultural opportunities.\n\nThen, we compare the predictive performances of our *cultural-based radiation model* and of the state-of-the art population-based radiation model on three main aspects of the migration processes of notable people: (i) the radius of gyration of each notable person, (ii) the number of different cities the notable people lived in during their lives, and (iii) the distances between the source of a migration and its destination (jump lengths). Interestingly, our results show that the radius of gyration and the jump lengths are best modeled considering three different factors: (i) the population of the city, as a proxy for the economical wealth and job opportunities, (ii) the number of notable individuals<\/span> that spent some time of their lives in a given city, as a proxy for the role played by the city as a cultural attractor, and (iii) the specific discipline an historically and culturally notable person<\/span> is working in, as a proxy both for the interests a city has in investing on a specific cultural area and for the tendency that people, interested or working on that discipline, have to follow notable figures from the same domain.\n\nOur results pave the way for further investigations on the historical role played by places and cities (e.g. ancient Athens, Renaissance Florence, Song Dynasty Hangzhou, Vienna of 1900, Silicon Valley, etc.) in becoming cultural attractors for historically and culturally notable individuals<\/span> and, thus, making them grow into flourishing places for novel artistic and literary movements, for new scientific and philosophical theories, for social, technological and political innovations .\n\n# Materials and Methods\n\n## Data\n\nIn this section, we present and<\/span> discuss how we build a dataset containing biographical information on thousands of historically relevant people in the context of cultural production and innovation. Together with information about their field of influence, we also extract information about their mobility patterns.\n\nWe start from the set of notable people identified by the Pantheon project . This project collected the biographies of $11,341$ historically and<\/span> culturally relevant people that lived from $3500$ B.C. to $2010$ A.D. More precisely, the dataset was built by extracting information from the Wikipedia biography and info-boxes. 
Here, a person is defined as *notable* if the corresponding Wikipedia page is translated in 25 or more different languages, since the focus is on global historical and<\/span> cultural contributions.\n\nFor each notable person, several features were annotated and manually verified, among which we consider the following ones:\n\n- birth place (geo-localized), state, and date;\n\n- occupation, work area, and discipline.\n\n| Work Area | Discipline | Percentage (%) |\n|:-----------------:|:----------------------:|---------------:|\n| Film and Theatre | Arts | 10.35 |\n| Music | Arts | 8.81 |\n| Fine Arts | Arts | 5.61 |\n| Design | Arts | 1.68 |\n| Natural Sciences | Science and Technology | 13.74 |\n| Social Sciences | Science and Technology | 3.92 |\n| Medicine | Science and Technology | 2.25 |\n| Math | Science and Technology | 2.04 |\n| Language | Humanities | 17.39 |\n| Philosophy | Humanities | 3.51 |\n| Government | Institutions | 15.90 |\n| Military | Institutions | 3.81 |\n| Activism | Public Figure | 1.23 |\n| Individual Sports | Sports | 1.28 |\n| Business | Business and Law | 1.07 |\n\nComposition of the selected subset of notable people, organized by the different work areas and disciplines present in the Pantheon dataset . In particular, 15 work areas are presented, sorted by discipline.\n\nWhile the process of selection of notable people in Yu *et al.* resulted in $11,341$ biographies selected from more than one million biographies available in Wikipedia, we further restrict our analyses on the notable people that were actively working and producing cultural outcomes in a specific time-window, the first fifty years of the 20th century. The focus on this specific time-window is justified by two combined needs. The first is to have a sufficiently short time span so that the properties of the global mobility do not change. The second is to have the highest possible number of notable people with complete and precise migratory information, i.e. with birth, in-life, and death locations. The first requirement reduces the width of the time-window, while the second one requires to consider relatively recent years. We therefore consider active the notable people that were at least 20 years old by the end of the considered time-window to ensure that their historical and<\/span> cultural contribution was made during this time span, and that no bias in the type of migratory event (birth, in-life, and death) is introduced.\n\nBy applying this filtering procedure based on the time-window of interest, we reduce the number of Wikipedia biographies to be processed to $2,407$. We report in Table the distribution of work areas and disciplines present in this subset of Wikipedia biographies, which will be further processed as described in the next Section.\n\n### Extracting migration footsteps\n\nFor the purposes of our study, we are interested in identifying the different locations that were visited by the selected set of notable people during their lives and the year of their visits, in order to build a trajectory (made of multiple footsteps) for each notable person. This kind of information is not present in the Pantheon dataset and we therefore need to extract it automatically. The approach we adopt follows the one recently proposed with Ramble-On , a text processing pipeline dealing with two main tasks: (i) the identification of predicates of migration and their arguments (i.e. 
the subject of the migration frame) in Wikipedia biography pages, and (ii) the recognition and classification of dates, places, and mentions. We focus our attention on migration processes because they are more likely to describe a motion action that resulted in a long time permanence in a new location. As a consequence, if the permanence in a specific location is long, it is more likely that the notable person had the time to provide his\/her cultural contribution there. While there have been recent attempts to automatically extract people's trajectories and an associated time period from Wikipedia biographical pages , these rely on shallow NLP approaches based on the presence of keywords and geo-links in the pages. Furthermore, trajectories are coupled with time spans and not time points like in our study, leading to a lower granularity of the extracted information. In our case, the use of semantic parsing associated with a selection of predicates describing possible trajectories enables a very precise analysis of the resulting data.\n\nTo identify the predicates related to migration events, the Ramble-On application[^1] calls PIKES , a suite for NLP that extracts information from English plain texts, which in turn automatically assigns to each predicate a semantic frame based on the FrameNet classification . Also the arguments attached to each predicate are automatically labeled with semantic roles, relying on the frame-semantic parser Semafor . To better distinguish migration predicates, we again follow the approach proposed in , thus removing $16$ motion frames (e.g. *Escaping*, *Getting\\_underway*, *Touring*) out of $45$ because of the high number of false positives found during the identification. Hence, $29$ motion frames (e.g. *Arriving*, *Being\\_employed*, *Transfer*, *Travel*) were used for the identification of notable people's migration actions described in their biographies.\n\nOnce a migration frame was identified in a sentence extracted from a Wikipedia biography page, three elements are required to be present in order to extract a migration trajectory: (i) the time\/date of the motion, (ii) the traveler, and (iii) the destination. Again, the Ramble-On application selects only sentences satisfying these constraints, where the date, the notable person (or a reference to him\/her) and the destination have been identified. With this approach, precision is favoured over recall, requiring that these three elements are explicitly mentioned in the same sentence.\n\nIn Table we show as an example the snippet of a sentence identified as a movement. In this case the predicate \"moved\" is assigned to the frame \"Motion\" and it is selected as a migration frame. Then, the different arguments of the sentence are identified and labelled depending on their role in the sentence (e.g. 
the time, the traveler and the destination\/place).\n\n```latex\n\\begin{table*}[!t]\\caption{Example of identification and classification of a sentence using Semafor and FrameNet.}\n \\label{tab2}\n\\centering\n \\begin{tabular}{ccccc}\n \\textbf{Snippet} & \\textbf{Predicate} & \\textbf{Frame} & \\textbf{Place} & \\textbf{Time}\\\\\n \\toprule\n \"Paul moved to Chicago in 1934, &&&&\\\\ \n where he continued to & moved & Motion & Chicago & in 1934 \\\\\n perform on radio.\"&&&&\\\\\n \\bottomrule\n \\end{tabular}\n\\end{table*}\n```\n\nOnce the movements have been identified and the information about the date and the location has been extracted, Ramble-On geo-locates each word related to the identified destinations using OpenStreetMap *Nominatim*[^2], a search engine for geo-referenced OpenStreetMap locations. Destinations that lack coordinates are discarded from the movements' list since they may be erroneously annotated as destinations. In addition, place and date of death are retrieved for each biography by Ramble-On using *DBpedia*[^3], where structured data about each notable person are stored.\n\nIn Table we present the information retrieved by processing the same biography as in Table . Together with the birth information retrieved from Pantheon, other important information such as date and place of death are extracted from *DBpedia* as discussed above. The *Movement 1* element, $M_1$, is the first additional trajectory retrieved by running the Ramble-On tool on the biography page. Merging the information retrieved about the inventor and musician Les Paul (see Table ), we identify two jumps, i.e. two migratory events: (i) the first one from the birth location to *Chicago*, and (ii) the second one from *Chicago* to the death location.\n\n| | Birth | Movement 1 | Death |\n|:---------------|:---------|:-----------|:-------------|\n| date | 19150000 | 19340000 | 20091231 |\n| place | Waukesha | Chicago | White Plains |\n| latitude | 43.0117 | 41.8369 | 41.0400 |\n| longitude | -88.2317 | -87.6847 | -73.7786 |\n| predicate | null | moved | null |\n| resource | dbpedia | FrameNet | dbpedia |\n| place frame | null | @Goal | null |\n| resource frame | Birth | Motion | Death |\n\nResults obtained by processing Les Paul's biography using the Ramble-On pipeline.\n\nWe refer to for a detailed discussion on the performance of the extraction process. In brief, Ramble-On has a precision of $0.86$ in correctly identifying migration frames. As mentioned before, however, the strategy adopted to identify trajectories may penalize recall, failing to extract movements whose date or destination are mentioned implicitly or in two different sentences. In order to estimate the number of visited locations (birth and death locations included) that we are not able to capture in our study, we manually annotated the trajectories in the biographies of $50$ notable people, randomly sampled with stratification over the number of locations found. In this way, we estimated that the recall of the Ramble-On approach is equal to $0.59$.\n\n### Dataset composition\n\nThe processing of the $2,407$ biographies results in a set of $7,240$ locations connected with notable persons' trajectories. Among these, we consider the $4,028$ movements taking place in the 1900-1950 time-window. Each movement with the associated date and destination was then manually checked by comparing the extracted information with the source Wikipedia sentence, and corrected if necessary.
Also the coordinates associated with the extracted locations by *Nominatim* were manually checked, since the geographical information associated with trajectories is at the core of our migration model and possible errors must be minimized. These are then collapsed to the nearest *great city*, where we adopted as a definition of *great cities* the list proposed in . In their work, Reba *et al.* collected also precious historical demographic data for most of these cities, that we used to test our baseline for the migration model. More specifically, geo-localized locations are merged based on a Voronoi tessellation of the Earth. In this framework each cell is built from a list of cities for which historical population data was available . The space is built using the great-circle approximation to associate each identified location with the corresponding Voronoi cell. The distribution of the clustering process is reported in Fig. SM7 of the Supplementary Materials (SM). In Fig. we present two examples of trajectories for Albert Einstein, the famous physicist, and Maria Montessori, the renowned physician and educator. The arcs connects different locations where these two notable figures spent a part of their lives. The blue coloured side of an arc indicates the origin of the migration while the red one its destination. For example, the extraction well captures Einstein's movements from Zurich to Berlin and from Berlin to US. His first movement from Ulm, his home town, is missing since it happened before the beginning of 20th century. We also notice that his short period as visiting professor to Caltech is detected by Ramble-On. Similarly, Maria Montessori's experiences around Europe (i.e. Barcelona, Amsterdam, Vienna, Rome) are correctly identified. In contrast, we stress that, due to the lack of population data, her trips to Sri-Lanka are collapsed to cities in India. This shows how the collapsing process might impact the actual migration distribution.<\/span>\n\nThe merging step results in a set of $629$ different cities visited by our notable people during their lifetime. Figure .A shows the members of our set of notable individuals listed by discipline (as labelled by ); while Figure .B shows the distribution of different visited cities for the top two discipline communities, namely \"Arts\" and \"Science and Technology\". The colored dots represent the data while the lines the geometrical fit to the data. Both the distributions can be described using a geometrical distribution with parameter $p$, representing the probability of successfully settling in a city, $p\\sim\\frac{1}{2}$.\n\n## Cultural network and migration modeling\n\nIn our framework, we assume that a culturally<\/span> notable person living in a place for a certain period of time contributed in some way to make such place a cultural attractor for other people interested in cultural innovation and development. At the same time, when a culturally notable person was moving from one place to another, s\/he linked the cultural relations s\/he had and the work s\/he did in the first place with the relations and work in the second place. In this way, each movement creates a cultural connection between two different places around the world. Depending on the number of notable people moving from one place to another, we can add a weight to the links of this cultural network. 
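To make these two steps concrete, the collapsing of raw coordinates onto the nearest great city (the spherical Voronoi assignment just described) and the construction of the weighted, directed cultural network can be sketched as follows. This is a minimal illustration with hypothetical variable and function names, not the code released with the paper.

```python
import math
from collections import Counter

def haversine_km(p, q):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    (lat1, lon1), (lat2, lon2) = p, q
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi, dlmb = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def collapse(point, great_cities):
    """Assign a geo-located point to the closest city with historical population data,
    i.e. to its cell in a spherical Voronoi tessellation built on those cities."""
    return min(great_cities, key=lambda c: haversine_km(point, great_cities[c]))

def build_cultural_network(trajectories, great_cities):
    """Weighted, directed edge list: (origin city, destination city) -> number of migrations.

    trajectories: one time-ordered list of (lat, lon) points per notable person.
    great_cities: dict mapping city name -> (lat, lon).
    """
    edges = Counter()
    for trajectory in trajectories:
        cities = [collapse(p, great_cities) for p in trajectory]
        for origin, destination in zip(cities, cities[1:]):
            # consecutive stops collapsed onto the same cell contribute to that city's self-link
            edges[(origin, destination)] += 1
    return edges
```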
Thus, the nodes correspond to the different cities visited by our set of notable people, while the edges are cultural links, built by interpersonal relations together with the cultural contamination a person brings with herself\/himself while migrating from one place to another, weighted by the number of occurrences.\n\nIn Figure we show a representation of this weighted directed network, where the weight is the percentage of notable people migrating from a location to their destination. In the left panel and right panel we show a sector of the network considering different time frames. On the left, we restrict our focus on a 3-year time-window centering the map to highlight the connections between Eastern Europe and Asia (Union of Soviet Socialist Republics in particular) during the October revolution in 1917. It is interesting to notice that the structure of the network nicely catches the known phenomenon of Siberian exile of aristocratic families and important political personalities such as representatives of the previous tsarists' power and relevant persons not aligned with the current regime. In this example, we are capturing the movements of people that were forced to migrate to specific locations. Hence, our notion of cultural attractor both includes locations to which people moved voluntarily and locations to which culturally notable people were forced to move. In particular, the number of forced movements (i.e. people sent to concentration camps and imprisoned people) is 36 out of 3474 identified movements. Top right panel shows the strong migratory flux of intellectuals from Europe to the US during the Second World War.\n\nAs previously said, we are interested in modeling the mobility of culturally relevant figures and in investigating the factors playing a role in their migration patterns. To this end, we modify the radiation model to take into account the role played by the city size (i.e. proxy for job opportunities) as well as the ones played by the different disciplines and by the number of notable people (i.e. proxies for cultural opportunities). The radiation model describes the mobility of people seeking job opportunities in terms of job openings per number of inhabitants. The model is developed in the framework of network theory since it treats cities as nodes of a completely connected weighted network. Specifically, the radiation model describes the human mobility behavior at long distances, e.g. at the country or global scale, better than other often used models (e.g. gravity model) . It is also important to notice that it undershoots the real flows. Moreover, its performances are dependent on the structure of the system even though it directly accounts for variations of the population between the source and the destination of a migration, i.e. the less population you have between two cities the more probable is to migrate from one to the other.\n\nSimini *et al.* show that, using this formulation of the problem, the flow of people between cities only depends on the population of the two cities (namely $m_{i}$ and $n_{i}$), and the population living in the circle of radius $r_{ij}$ is equal to the distance between the two cities (namely $s_{ij}$). The relation can be summarized in a simple and parameter-free equation. 
We report here the formula for the probability to move from city $i$ to city $j$: $$P_{ij} = P_i\\frac{n_j}{(m_i+s_{ij})(m_i+n_j+s_{ij})},\n\\label{eq:radiation}$$ where $P_i$ is the normalization coefficient for city $i$ that ensures that $P_{ij}$ is the probability of moving from $i$ to every city ($i$ included): $P_{i} = \\sum_{j\\in N}\\frac{(m_i+s_{ij})(m_i+n_j+s_{ij})}{n_j}$, where $N$ is the set of all nodes present in the network. In our work we make use of these concepts to model in a similar way the mobility of culturally relevant people. In particular, inspired by multidimensional network theory and it recent applications in modeling human mobility , we propose a multilevel approach to cultural mobility. In this framework, every<\/span> cultural discipline works as a separate system described by a cultural radiation model. Formally, a level is a fully connected weighted and directed network in which the nodes are the cities visited by all the notables of a specific discipline and the links represent the probability of migrating from a city to a different one. Each node has also a link pointing to itself, representing the probability of remaining in the same city instead of moving to a different place. The different levels, $l\\in L$ where $L$ is the set of all disciplines, do not interact with each other but their contribution to the overall migratory exploration sums up. Each level contributes to the global migration model with a factor proportional to the share of notable people the discipline has, $NS$. Thus, the probability of this multilevel migratory network can be described by $$P_{ij} = \\sum_{\\forall l\\in L}NS_lP_{i_l}\\frac{n_{j_l}}{(m_{i_l}+s_{ij_l})(m_{i_l}+n_{j_l}+s_{ij_l})},$$ where $NS_l$ is the notable share of discipline $l\\in L$, $m_{i_l}$ and $n_{j_l}$ are the population of locations $i$ and $j$ respectively in the discipline level $l$, and $s_{ij_l}$ is equivalent to $s_{ij}$ for the specific level $l$. In a similar way, the generalized $P_{i}$ normalizes $P_{ij}$ to a probability following the idea of equation (): $$P_{i_l} = \\sum_{j\\in N}\\frac{(m_{i_l}+s_{ij_l})(m_{i_l}+n_{j_l}+s_{ij_l})}{n_{j_l} }$$\n\nStarting from these two equations, we propose different implementations of the radiation model. In particular, we stress that the radiation model introduces the concept of attractiveness of a city based on the number of job opportunities that a city can provide. The assumption that this number is directly proportional to the number of people living in a city directly connects the concept of attractiveness with the population size of a city. Here, with similar assumptions we propose different formulations of the model based on different possible ways of modeling cultural attractiveness. In particular, we use, as possible alternatives to the standard formulation of the model, the number of notable people that visited a city and its combination with the population of the city. In the case of notable people that visited a city, we count this quantity considering all the visits during the whole time-window. As a consequence, we model cultural dynamics of individuals within this period of observation considering the effects of the notable people distribution as a constant feature of our model, as we do for general population, which is not updated after every step of the dynamics. This is equivalent to assume that a single step of the dynamics has a latency larger than the size of the considered time-window in affecting the importance of cultural attractors. 
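For concreteness, these transition probabilities can be sketched in a few lines of code. The following is a hypothetical, simplified implementation (not the authors' code): the normalization is implemented by dividing each row by its sum, so that the outgoing probabilities, self-link included, add up to one, which is the role the text assigns to $P_i$ and $P_{i_l}$; attractiveness values are assumed to be strictly positive on every level.

```python
import numpy as np

def radiation_probs(attr, dist):
    """Single-level radiation transition matrix.

    attr : 1-D array of per-city attractiveness (population, notable counts, or a combination).
    dist : 2-D array of pairwise great-circle distances between the cities.
    Returns P[i, j], the probability of moving from city i to city j (self-links included).
    """
    n = len(attr)
    P = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            # s_ij: attractiveness inside the circle of radius r_ij around i,
            # excluding the source i and the destination j themselves
            inside = dist[i] <= dist[i, j]
            s_ij = attr[inside].sum() - attr[i] - attr[j] if j != i else 0.0
            P[i, j] = attr[j] / ((attr[i] + s_ij) * (attr[i] + attr[j] + s_ij))
        P[i] /= P[i].sum()  # normalize so the row is a probability distribution
    return P

def multilevel_radiation_probs(levels, shares, dist):
    """Multilevel (per-discipline) model: level-specific models combined with weights NS_l.

    levels : list of per-city attractiveness arrays, one per discipline level l.
    shares : list of notable shares NS_l, summing to one.
    """
    return sum(ns * radiation_probs(attr_l, dist) for ns, attr_l in zip(shares, levels))
```

Passing the city populations, the per-city counts of notable people, or a combination of the two as the attractiveness arrays yields the different model variants compared below.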
Moreover, counting notable people in this way also relies on the simplifying assumption that they all equally contributed to the importance of a city from a cultural perspective. More realistic modeling will require a *relevance score* based on the historical relevance of the notable people.\n\nWe test each of these three possible definitions, namely the standard one based on population size, the one based only on the number of notable people, and the combination of the two, using both a single level formulation of the radiation model and a multilevel formulation. Using this probabilistic model, our aim is to understand if the radiation model abstraction can be used to describe the level of exploration of the historically and culturally relevant figures (namely, the radius of gyration of each notable people, the number of different cities visited, and the distance distribution of the migration jumps) and which formulation better captures these properties. In the next Section, we discuss first the general information that can be obtained by analyzing the system in terms of network theory metrics and then the comparison between the different formulations proposed.\n\n# Results\n\n## Properties of the migration network\n\nOne of the most interesting characteristics of cultural migration patterns is the tendency of notable figures to explore different cities. To study this property, we can define $S(t)$ as the number of different cities and $N(t)$ as the number of notable people's birth locations, the number of their death locations, and the number of their jumps during the selected time-window, as displayed in Figure .A. by the curves for *Birth*, *Death* and *In-life* respectively. The growth of $S(t)$ is modeled as a function of $N(t)$ using a Heap's law $S(t)=N(t)^{\\alpha}$. Our result is consistent with the estimate of the parameter $\\alpha$ for the *Birth* curve obtained by Schich *et al.* in . A similar result is obtained also for the *Death* and *In-life* curves representing the growth of the location for which the exponent, $\\alpha = 0.85$, suggests a tendency to migrate to a smaller number of cities with respect to the number of different cities where notable individuals were born. This finding may be interpreted as a general and global tendency of notable figures to migrate to a more culturally renowned subset of cities with respect to all the possible available locations.\n\nFocusing on the migration jumps that notable people made during their life, we can study the most central cities both from a global and discipline-based perspective. Here, we use the Page-Rank centrality to measure the importance in terms of the number of incoming links that point to a city and the relative importance of the cities from which these links are coming. In Figure .B, we measure Page-Rank centrality for two different time-windows, namely 1900-1925 and 1926-1950, to show the structural changes of the network during the first half of the 20th century. It is interesting to notice how the development of the film industry in Los Angeles attracted several figures to the city. It is also worth noticing how, due to the Second World War (WW2), Berlin loses positions in the ranking of the *more central* cities. 
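As a concrete illustration, the per-window rankings can be obtained by restricting the weighted, directed migration network to the jumps observed in a given time-window and running PageRank on it. The sketch below uses the NetworkX library and hypothetical variable names; it is not the authors' tooling.

```python
import networkx as nx

def pagerank_for_window(jumps, start_year, end_year):
    """PageRank centrality of cities in the migration sub-network of a time-window.

    jumps: iterable of (origin_city, destination_city, year) migration events.
    """
    G = nx.DiGraph()
    for origin, destination, year in jumps:
        if start_year <= year <= end_year:
            # accumulate the number of migrations as the edge weight
            if G.has_edge(origin, destination):
                G[origin][destination]["weight"] += 1
            else:
                G.add_edge(origin, destination, weight=1)
    return nx.pagerank(G, weight="weight")

# Hypothetical usage for the two windows compared in the figure:
# ranking_early = pagerank_for_window(jumps, 1900, 1925)
# ranking_late = pagerank_for_window(jumps, 1926, 1950)
```

Restricting `jumps` to the movements of a single discipline gives the discipline-level rankings discussed next.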
In Figure SM4, we highlight the specific effect of WW2, evaluating Page-Rank centrality before and after the rise of the Nazism regime in Germany, showing the overall loss in cultural centrality for most of the European cities.\n\nWe also evaluated Page-Rank centrality for the sub-network built only by considering the migration jumps of the four top disciplines in those years. Figure shows how notable people from different disciplines migrated to different cities, suggesting that the cultural centrality of a city depends on its cultural characteristics, e.g. Los Angeles and the film industry. This, indeed, results in Los Angeles being a central node for Arts (in particular for *actors*) and Sports, but a more peripheral one for Institutions, having their more central nodes in capital cities such as London, Paris and Moscow. In Fig. SM3 we performed a discipline Page-Rank analysis for two time windows as in Fig. , showing that an important change is present also at the discipline level. An example is given by the dramatic change (i.e. a decrease) in the centrality of Berlin for the scientific community before and after 1933.<\/span>\n\n## Cultural attractiveness in a multilevel radiation model\n\nWith the results obtained so far we aim at modeling notable figures' migratory patterns to better understand what is the process driving the choice of the location where to migrate. The radiation model proposed in finds the motivating factor of mobility for those seeking a job in the number of opportunities a city can provide. Following this idea and using the results obtained by Simini *et al.* , we propose a similar approach to understand if such a model can catch the main factors of cultural mobility. We assume that the number of opportunities that are available in the selected time-window is directly proportional to the number of notable figures that lived in a city during the same time interval. In particular, we explore the following different configurations:\n\n- cultural opportunities are uniformly distributed among cities;\n\n- cultural opportunities are proportional to the population of a city;\n\n- cultural opportunities of a city are directly proportional to the number of notable figures that lived in that city;\n\n- cultural opportunities are directly proportional both to the population of a city and to the number of notable figures that lived in that city.\n\nIn addition, we also want to check whether cultural opportunities depend on the discipline a notable individual is part of. We study all these possibilities using the formulation proposed in Section .\n\nTo find which model better describes the historical mobility of the first half of 20th century, we simulate mobility using a set of walkers that can move following a radiation model over the cultural network, based on different equations depending on the model we are simulating. Thus, depending on the selected configuration, we are using respectively (i) a random walker dynamics' model over the network, (ii) a standard (population-related) radiation model, and (iii) an implementation of the radiation model that specifically considers the cultural opportunities as a subset of job opportunities not necessarily driven by the same factors. 
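A minimal sketch of this simulation loop is given below (hypothetical code, reusing the transition matrices sketched earlier); the choices of the starting city and of the number of jumps per walker follow the description in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_notables(P, population, n_notables=2000, p_geometric=0.5):
    """Simulate synthetic notables moving over the city network.

    P          : row-stochastic transition matrix from one of the model variants
                 (for the random-walker baseline, use uniform rows instead).
    population : 1-D array of city populations, used to draw the starting city.
    Returns one list of visited city indices per simulated notable.
    """
    start_probs = population / population.sum()
    trajectories = []
    for _ in range(n_notables):
        city = rng.choice(len(population), p=start_probs)
        n_jumps = rng.geometric(p_geometric)  # number of jumps, geometric with p ~ 0.5
        trajectory = [city]
        for _ in range(n_jumps):
            city = rng.choice(len(population), p=P[city])
            trajectory.append(city)
        trajectories.append(trajectory)
    return trajectories
```

From the returned trajectories, the three observables used in the comparison (radius of gyration, number of distinct cities visited, and jump lengths) follow directly.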
The starting location of each simulated notable individual is chosen based on a random choice weighted by the population size of each city.\n\nTo compare the models we analyze their impact on predicting (i) the number of different cities visited, as a proxy of the availability to explore different and new destinations, (ii) the radius of gyration of each notable figure simulated, and (iii) the distribution of the length of the migration jumps. We report in Table the results obtained for five representative models: (i) the random walker model, (ii) the notable-based jump probability on a single-level structure, (iii) the notable-based jump probability on a multilevel structure, (iv) the population-based jump probability on a multilevel structure, and (v) the mixed population-notable-based jump probability on a multilevel structure. Results for the other models are reported in the SM, specifically in Tables SM1, Table SM2, and Table SM3.\n\n```latex\n\\begin{table*}[!t]\\caption{Models' performances. Results obtained for the five models on predicting the number of destinations, the radius of gyration, and the distribution of the length of the migration jumps. The metrics used are the adjusted-$R^2$, the Pearson correlation coefficient, $\\rho$, between models and data, the Kullback-Leibler distance (K-L dist), and the first Wasserstein distance (Wasserstein dist).}\n \\centering\n \\resizebox{\\textwidth}{!}{\n \\begin{tabular}{rcccc}\n \\textbf{Model} & $\\mathbf{adj-R^2}$ & \\textbf{Pearson }$\\mathbf{\\rho}$ & \\textbf{K-L dist} & \\textbf{Wass. dist}\\\\\n \\toprule\n \\textbf{\\textit{Radius of gyration}}&&&&\\\\\n \\textbf{pop-notable-multilevel}&$0.2414\\pm0.0027$&$0.962^{***}$&$\\mathbf{0.00554\\pm0.00004}$&$\\mathbf{0.000100\\pm 2e-7}$\\\\\n pop-multilevel&$-0.2004\\pm0.0034$&$0.953^{***}$&$0.00655\\pm0.00005$&$0.000125\\pm1.e-7$\\\\\n notable-multilevel&$-0.6849\\pm0.0041$&$0.947^{***}$&$0.00836\\pm0.00005$&$0.000139\\pm1e-7$\\\\\n notable-singlelevel&$-1.0249\\pm0.0048$&$0.923^{***}$&$0.01006\\pm0.00006$&$0.000143\\pm1e-7$\\\\\n random-singlelevel&$-2.2673\\pm0.0054$&$0.886^{***}$&$0.01559\\pm0.00009$&$0.000173\\pm1e-7$\\\\\n \\midrule\n \\textbf{\\textit{Different destinations}}&&&\\\\\n \\textbf{pop-notable-multilevel}&$0.9547\\pm0.0004$&$0.978^{***}$&$0.0649\\pm 0.002$&$\\mathbf{0.0150\\pm0.0001}$\\\\\n pop-multilevel&$0.9612\\pm0.0004$&$0.981^{***}$&$\\mathbf{0.0561\\pm0.001}$&$0.0154\\pm0.0001$\\\\\n notable-multilevel&$0.9619\\pm0.0003$&$0.982^{***}$&$0.0570\\pm0.001$&$0.0155\\pm0.0001$\\\\\n notable-singlelevel&$0.9624\\pm0.0003$&$0.982^{***}$&$0.0623\\pm0.002$&$0.0159\\pm0.0001$\\\\\n random-singlelevel&$0.9606\\pm0.0004$&$0.982^{***}$&$0.0724\\pm0.002$&$0.0163\\pm0.0001$\\\\\n \\midrule\n \\textbf{\\textit{Length of migration jumps}}&\\\\\n \\textbf{pop-notable-multilevel}&$0.5104\\pm0.0019$&$0.982^{***}$&$\\mathbf{0.00533\\pm0.00005}$&$\\mathbf{0.000080\\pm1e-7}$\\\\\n pop-multilevel&$0.2249\\pm0.0023$&$0.974^{***}$&$0.00686\\pm0.00005$&$0.000099\\pm1e-7$\\\\\n notable-multilevel&$-0.0640\\pm0.0029$&$0.967^{***}$&$0.00795\\pm0.00005$&$0.000109\\pm1e-7$\\\\\n notable-singlelevel&$-0.2192\\pm0.0029$&$0.962^{***}$&$0.00790\\pm0.00006$&$0.000112\\pm1e-7$\\\\\n random-singlelevel&$-0.8313\\pm0.0034$&$0.947^{***}$&$0.01265\\pm0.00006$&$0.000131\\pm1e-7$\\\\\n \\bottomrule\n \\end{tabular}}\n \\label{tab4}\n \\end{table*}\n```\n\nAmong the different possibilities tested, we found that a modeling approach considering cultural attractors as a product of both job-opportunities and 
cultural interests, by means of the population number and the effective number of notable people who migrated there in the time-window under investigation, better captures key features of notables' mobility. Moreover, we stress that the model treating different disciplines as different dynamics outperforms the single-level models in terms of Kullback-Leibler divergence and first Wasserstein distance .\n\nThese quantities are measured after simulating the mobility of $2,000$ notable people, whose number of migration jumps was randomly sampled from a geometric distribution with parameter $p\\sim0.5$. We repeated the simulation 500 times to estimate the stability and the standard error of the different metrics. Figure .A shows the distribution of the mean radius of gyration of the $500$ simulations against the data (blue-stepped line). Figure .B shows the distribution of the number of different cities visited by a notable person during her\/his lifetime, while the distribution presented in Figure .C shows the probability of jumping to a destination that is at a specific distance from the origin. These distributions are highly dependent on the geographical distance. Short-distance migrations are largely favoured, while smaller secondary peaks appear at $\\sim 2500$ km and $\\sim 5500$ km, capturing the overseas migrations (across the Atlantic Ocean) mainly from Europe to the US. We stress that the effect of slightly underestimating the long-distance trips, which also affects the radius-of-gyration distribution, has been shown to be a structural feature of the radiation model under irregular geographical configurations such as those imposed by oceans .\n\n# Discussion and Conclusions\n\nA complex question has been posed in , i.e. whether it is possible to describe the dynamical properties of the cultural migration phenomenon. Starting from their idea of using network theory to tackle the problem, we take some further steps towards understanding this issue. First of all, we use NLP tools to capture a more detailed representation of the lives of historically notable people that can be considered cultural developers or important actors in the evolutionary process of culture. Our approach gathers information not only from the birth and death events but also from in-life migratory events, enabling us to study cultural migration processes in more detail and to include in our model the years in which a person is professionally more active. Indeed, there is a difference between the birth location of a person and the locations s\/he migrates to or where s\/he decides to spend her\/his last years. The birth location is not determined by a decision of the individual, while the in-life migrations and the death places are more likely to be chosen following some precise interests and motivations. Using our data we are able to capture this difference and quantify the level of exploration of notable people during these phases of life.\n\nMoreover, we focus our attention on understanding the main features that drive this kind of mobility.
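Before turning to these features, we note for completeness that the distributional comparisons used above can be reproduced with standard tools. The sketch below (a hypothetical helper, assuming NumPy and SciPy) bins an observed and a simulated sample on a common grid and evaluates the discrete Kullback-Leibler divergence and the first Wasserstein distance.

```python
import numpy as np
from scipy.stats import entropy, wasserstein_distance

def compare_distributions(observed, simulated, bins=50):
    """K-L divergence and first Wasserstein distance between two samples."""
    # common binning so the two histograms are directly comparable
    edges = np.histogram_bin_edges(np.concatenate([observed, simulated]), bins=bins)
    p, _ = np.histogram(observed, bins=edges, density=True)
    q, _ = np.histogram(simulated, bins=edges, density=True)
    eps = 1e-12  # regularize empty bins
    kl = entropy(p + eps, q + eps)  # discrete Kullback-Leibler divergence
    wass = wasserstein_distance(observed, simulated)
    return kl, wass
```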
Our results provide evidence that the mobility of historically and culturally notable individuals is best described by simultaneously<\/span> considering three different factors: (i) the population of a city, as a proxy of economic wealth and generic job opportunities; (ii) the number of culturally notable people that spent some time of their lives there, as a proxy of the attractive role played by this city as a cultural hub and of the proneness of this city to invest in culture; and (iii) the discipline a culturally notable person is working in, as a proxy both of the interest a city has in investing on a specific cultural area and the tendency of people, interested or working on a given discipline, to follow notable ones from the same discipline. The solution proposed in our work represent a functional integration, in a quantitative theoretical model, of these components.<\/span>\n\nIt is also worth highlighting some limitations of our work. First of all, we rely on Wikipedia as data source, which shows a clear bias towards the Western culture and male figures. This limitation is even more relevant since we focus only on pages written in English<\/span>. Then, we consider a specific time-window in the cultural history, i.e. the first half of the 20th century. So, our results may be dependent on the time chosen (for example, we may observe different behaviors during wider time-windows) and on the small available dataset of historically and<\/span> culturally notable personalities. Besides, while our data on trajectories were extracted automatically but manually revised, we estimate a recall of our information extraction pipeline of $0.59$, as pointed out in Section . This implies that some migration destinations that are mentioned in the Wikipedia biographies should be added (e.g. by improving the extraction performances of Ramble-On) in order to make our set of data even richer. A richer dataset will also help in stabilizing, constructing and precisely characterizing the structure of the network. However, we also stress that the extension to the radiation model proposed here only uses the visited locations and not the migration timelines of notables. Thus, the present recall level in extracting complete timelines does not directly affect the dynamical structure of our model. Thanks to these considerations, it is also interesting to discuss the specific limitations of the mobility model proposed. In particular, while the fit of the number of trips might depend on the recall limitations discussed above and thus affect the number of different visited locations, this does not explain the systematical underestimation of the probability of *high-distance jumps* and *high-radius of gyration*. We expect the geographical constraints (as discussed in Section ) and the specific time-window we selected (e.g. WW2 forced to migrate many notable people whose choice was biased towards US) to be two determinants of these discrepancies.\n\nOverall, our results open interesting possibilities on further investigating the historical role played by places and cities in attracting culturally relevant figures as well as on better analyzing the level of contribution of each of the factors identified by our approach (i.e. 
city's population, number of intellectuals living in the city, and strength of a specific cultural discipline in the city).Similarly, changing the perspective, it will become possible to quantify the impact of cultural communities on local well-being, helping our understanding on how individuals from similar or different disciplines combine and collaborate to seed the vital growth of cities' economies.<\/span>\n\n```latex\n\\begin{backmatter}\n \\section*{Availability of data and material}\n The data used in this work are available at:\n \\textcolor{black}{\\url{https:\/\/doi.org\/10.7910\/DVN\/PJS21L} or\\newline}\n \\url{https:\/\/figshare.com\/articles\/Following_the_footsteps_of_giants\/7352987}. \\newline\n The code used to extract the destinations from the Wikipedia biographies is publicly released at \\url{https:\/\/github.com\/dhfbk\/rambleon}.\n\n \\section*{Competing interests}\n The authors declare no competing financial or non-financial interests.\n \n \\section*{Author's contributions}\n All authors conceptualized the project. L.L acquired and cleaned the data, performed the investigation, the statistical analyses and drafted the original manuscript. All authors contributed revising the manuscript and gave final approval for publication.\n \n \\section*{Funding}\n No funding supported our research.\n \n \\section*{Abbreviations}\n NLP: Natural Language Processing; WW2: World War 2; US: United States; SM: Supplementary Material.\n\n %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n %% The Bibliography %%\n %% %%\n %% Bmc_mathpys.bst will be used to %%\n %% create a .BBL file for submission. %%\n %% After submission of the .TEX file, %%\n %% you will be prompted to submit your .BBL file. %%\n %% %%\n %% %%\n %% Note that the displayed Bibliography will not %%\n %% necessarily be rendered by Latex exactly as specified %%\n %% in the online Instructions for Authors. %%\n %% %%\n %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n \n % if your bibliography is in bibtex format, use those commands:\n \\bibliographystyle{bmc-mathphys} % Style BST file (bmc-mathphys, vancouver, spbasic).\n \\bibliography{bmc_article} % Bibliography file (usually '*.bib' )\n % for author-year bibliography (bmc-mathphys or spbasic)\n % a) write to bib file (bmc-mathphys only)\n % @settings{label, options=\"nameyear\"}\n % b) uncomment next line\n %\\nocite{label}\n \n % or include bibliography directly:\n % \\begin{thebibliography}\n % \\bibitem{b1}\n % \\end{thebibliography}\n \n %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n %% %%\n %% Figures %%\n %% %%\n %% NB: this is for captions and %%\n %% Titles. All graphics must be %%\n %% submitted separately and NOT %%\n %% included in the Tex document %%\n %% %%\n %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n \n %%\n %% Do not use \\listoffigures as most will included as separate files\n% \\clearpage \n\n% \\section*{Figures}\n\n% \\begin{figure}[h]\n% \\centering\n% % COMMENT THIS LATER\n% % \\includegraphics[width=.99\\textwidth,keepaspectratio]{graphs\/Figure1.pdf}\n% \\caption{Two examples of the trajectories obtained after the extraction. Panel A on the left shows the travels made by Albert Einstein while panel B those made by Maria Montessori.}\n% \\label{fig1}\n% \\end{figure}\n\n% \\begin{figure}[ht]\n% \\centering\n% % COMMENT THIS LATER\n% % \\includegraphics[width=0.99\\textwidth,keepaspectratio]{graphs\/Figure2.pdf}\n% \\caption{\\textbf{A}: Size of disciplinary communities present in the dataset. 
From the barplot it is evident the difference in the number of notable individuals between arts, science\\&technology, humanities, institutions and public figures, sports, business\\&law, exploration. \\textbf{B}: Distribution of the number of trajectories per person in the set of notable people. The lines represent the geometric fit to the distributions of the two most numerous disciplines, i.e. \\textit{Arts} and \\textit{Science and Technology}. Both distributions can be described in terms of a geometric distribution with parameter $p\\sim \\frac{1}{2}$.}\n% \\label{fig2}\n% \\end{figure}\n\n% \\begin{figure}[ht]\n% \\centering\n% % COMMENT THIS LATER\n% % \\includegraphics[width=0.99\\textwidth,keepaspectratio]{graphs\/Figure3.pdf}\n% \\caption{A representation of the spatial directed network encoding the mobility information of the chosen set of notable people during the first half of the 20th century. We use the blue hexagon to indicate the source point and the red to indicate the destination of the migratory jump. In this figure, we present the network structure considering the migratory events during two different time snapshots. Figure \\ref{fig3}.a on the left shows the escaping and exiling from Saint Petersburg of the aristocratic families during the years of the red revolution. Similarly, Figure \\ref{fig3}.b on the left shows the migration flows from Europe to North America during World War 2, due to the fascist regimes and persecutions.}\n% \\label{fig3}\n% \\end{figure}\n\n% \\begin{figure}[ht]\n% \\centering\n% % COMMENT THIS LATER\n% % \\includegraphics[width=0.99\\textwidth,keepaspectratio]{graphs\/Figure4.pdf}\n% \\caption{\\textbf{A}: Number of different visited cities as a function of the notable birth\/death\/in-life visited locations. In the case of in-life migrations the number of visited locations does not necessarily corresponds to the number of notable figures in the dataset, but it represents the different cities visited in time as a function of the number of trips made by the notable figures. \\textbf{B}: Page-rank centrality for the global mobility network evaluated in two different time-windows. In blue we report the centrality values for the 1900--1925 window, while in red the centrality in the window 1926--1950.}\n% \\label{fig4}\n% \\end{figure}\n\n% \\begin{figure}[ht]\n% \\centering\n% % \\includegraphics[width=\\linewidth]{graphs\/Figure5.pdf}\n% \\caption{Page rank centrality per discipline for the five most visited cities. 
This figures shows the different importance of cities in terms of discipline specific attractiveness.}\n% \\label{fig5}\n% \\end{figure}\n\n% \\begin{figure}[ht]\n% \\centering\n% % COMMENT THIS LATER\n% % \\includegraphics[width=0.9\\linewidth]{graphs\/Figure6.pdf}\n% \\caption{\\textbf{A}: The distribution of the radius of gyration per notable person; \\textbf{B}: The distributions of the data and the five different models for the number of different cities visited per notable person; \\textbf{C}: The distribution of the lengths of the all the jumps simulated for the different model (lines) and for the data (stepped density histogram).}\n% \\label{fig6}\n% \\end{figure}\n\n\n% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n% %% %%\n% %% Tables %%\n% %% %%\n% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n \n% %% Use of \\listoftables is discouraged.\n% %%\n% \\clearpage\n\n% \\section*{Tables}\n\n% \\begin{table}[h]\n% \\centering\n% \\caption{Composition of the selected subset of notable people, organized by the different work areas and disciplines present in the Pantheon dataset \\cite{pantheon_yu}. In particular, 15 work areas are presented, sorted by discipline.}\n% \\vspace{0.5cm}\n% \\begin{tabular}{ccr}\n% \\toprule\n% Work Area & Discipline & Percentage (\\%)\\\\\n% \\midrule\n% Film and Theatre & Arts & 10.35\\\\\n% Music & Arts & 8.81\\\\\n% Fine Arts & Arts & 5.61\\\\\n% Design & Arts & 1.68\\\\\n% \\midrule\n% Natural Sciences & Science and Technology & 13.74\\\\\n% Social Sciences & Science and Technology & 3.92\\\\\n% Medicine & Science and Technology & 2.25\\\\\n% Math & Science and Technology & 2.04\\\\\n% \\midrule\n% Language & Humanities & 17.39\\\\\n% Philosophy & Humanities & 3.51\\\\\n% \\midrule\n% Government & Institutions & 15.90\\\\\n% Military & Institutions & 3.81\\\\\n% \\midrule\n% Activism & Public Figure & 1.23\\\\\n% \\midrule\n% Individual Sports & Sports & 1.28\\\\\n% \\midrule\n% Business & Business and Law & 1.07\\\\\n% \\bottomrule\n% \\end{tabular}\\label{tab1}\n% \\end{table}\n\n\n% \\begin{table*}[ht]\n% \\caption{Example of identification and classification of a sentence using Semaphor and FrameNet.}\n% \\label{tab2}\n% \\centering\n% \\begin{tabular}{ccccc}\n% \\textbf{Snippet} & \\textbf{Predicate} & \\textbf{Frame} & \\textbf{Place} & \\textbf{Time}\\\\\n% \\toprule\n% \"Paul moved to Chicago in 1934, &&&&\\\\ \n% where he continued to & moved & Motion & Chicago & in 1936 \\\\\n% perform on radio.\"&&&&\\\\\n% \\bottomrule\n% \\end{tabular}\n% \\end{table*}\n\n\n% \\begin{table}[ht]\n% \\caption{Results obtained by processing Les Paul's biography using the Ramble-On pipeline.}\n% \\label{tab3}\n% \\centering\n% \\begin{tabular}{l|lll}\n% & Birth & Movement 1 & Death\\\\\n% \\toprule\n% date & 19150000 & 19340000 & 20091231\\\\\n% place & Waukesha & Chicago & White Plains\\\\\n% latitude & 43.0117 & 41.8369 & 41.0400\\\\\n% longitude & -88.2317 & -87.6847 & -73.7786\\\\\n% predicate & null & moved & null\\\\\n% resource & dbpedia & FrameNet & dbpedia\\\\\n% place frame & null & @Goal & null\\\\\n% resource frame & Birth & Motion & Death\\\\\n% \\bottomrule\n% \\end{tabular}\n% \\end{table}\n \n% \\begin{table*}[ht]\n% \\caption{Models' performances. Results obtained for the five models on predicting the number of destinations, the radius of gyration, and the distribution of the length of the migration jumps. 
The metrics used are the adjusted-$R^2$, the Pearson correlation coefficient, $\\rho$, between models and data, the Kullback-Leibler distance (K-L dist), and the first Wasserstein distance (Wasserstein dist).}\n% \\centering\n% \\resizebox{\\textwidth}{!}{\n% \\begin{tabular}{rcccc}\n% \\textbf{Model} & $\\mathbf{adj-R^2}$ & \\textbf{Pearson }$\\mathbf{\\rho}$ & \\textbf{K-L dist} & \\textbf{Wass. dist}\\\\\n% \\toprule\n% \\textbf{\\textit{Radius of gyration}}&&&&\\\\\n% \\textbf{pop-notable-multilevel}&$0.2414\\pm0.0027$&$0.962^{***}$&$\\mathbf{0.00554\\pm0.00004}$&$\\mathbf{0.000100\\pm 2e-7}$\\\\\n% pop-multilevel&$-0.2004\\pm0.0034$&$0.953^{***}$&$0.00655\\pm0.00005$&$0.000125\\pm1.e-7$\\\\\n% notable-multilevel&$-0.6849\\pm0.0041$&$0.947^{***}$&$0.00836\\pm0.00005$&$0.000139\\pm1e-7$\\\\\n% notable-singlelevel&$-1.0249\\pm0.0048$&$0.923^{***}$&$0.01006\\pm0.00006$&$0.000143\\pm1e-7$\\\\\n% random-singlelevel&$-2.2673\\pm0.0054$&$0.886^{***}$&$0.01559\\pm0.00009$&$0.000173\\pm1e-7$\\\\\n% \\midrule\n% \\textbf{\\textit{Different destinations}}&&&\\\\\n% \\textbf{pop-notable-multilevel}&$0.9547\\pm0.0004$&$0.978^{***}$&$0.0649\\pm 0.002$&$\\mathbf{0.0150\\pm0.0001}$\\\\\n% pop-multilevel&$0.9612\\pm0.0004$&$0.981^{***}$&$\\mathbf{0.0561\\pm0.001}$&$0.0154\\pm0.0001$\\\\\n% notable-multilevel&$0.9619\\pm0.0003$&$0.982^{***}$&$0.0570\\pm0.001$&$0.0155\\pm0.0001$\\\\\n% notable-singlelevel&$0.9624\\pm0.0003$&$0.982^{***}$&$0.0623\\pm0.002$&$0.0159\\pm0.0001$\\\\\n% random-singlelevel&$0.9606\\pm0.0004$&$0.982^{***}$&$0.0724\\pm0.002$&$0.0163\\pm0.0001$\\\\\n% \\midrule\n% \\textbf{\\textit{Length of migration jumps}}&\\\\\n% \\textbf{pop-notable-multilevel}&$0.5104\\pm0.0019$&$0.982^{***}$&$\\mathbf{0.00533\\pm0.00005}$&$\\mathbf{0.000080\\pm1e-7}$\\\\\n% pop-multilevel&$0.2249\\pm0.0023$&$0.974^{***}$&$0.00686\\pm0.00005$&$0.000099\\pm1e-7$\\\\\n% notable-multilevel&$-0.0640\\pm0.0029$&$0.967^{***}$&$0.00795\\pm0.00005$&$0.000109\\pm1e-7$\\\\\n% notable-singlelevel&$-0.2192\\pm0.0029$&$0.962^{***}$&$0.00790\\pm0.00006$&$0.000112\\pm1e-7$\\\\\n% random-singlelevel&$-0.8313\\pm0.0034$&$0.947^{***}$&$0.01265\\pm0.00006$&$0.000131\\pm1e-7$\\\\\n% \\bottomrule\n% \\end{tabular}}\n% \\label{tab4}\n% \\end{table*} \n \n\n %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n %% %%\n %% Additional Files %%\n %% %%\n %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n \n\n \n\\end{backmatter}\n```\n\n[^1]: Code available here: \n\n[^2]: \n\n[^3]: ","meta":{"dup_signals":{"dup_doc_count":15,"dup_dump_count":13,"dup_details":{"curated_sources":2,"2022-40":1,"2022-21":1,"2021-43":1,"2021-31":1,"2021-17":1,"2021-04":1,"2020-40":1,"2020-29":1,"2020-16":1,"2020-05":2,"2023-50":1,"2024-18":1}},"filename":"out\/1912.07551_extract_bmc_article.tex.md"},"subset":"arxiv"} +{"text":"abstract: There is increasing circumstantial evidence that the cuprate superconductors, and correlated-electron materials generally, defy simple materials categorization because of their proximity to one or more continuous zero-temperature phase transitions. This implies that the fifteen-year confusion about the cuprates is not fundamental at all but simply overinterpreted quantum criticality\u2014an effect that seems mysterious by virtue of its hypersensitivity to perturbations, *i.e.* to sample imperfections in experiment and small modifications of approximation schemes in theoretical modeling, but is really just an unremarkable phase transition of some kind masquerading as something important, a sheep in wolf's clothing. 
This conclusion is extremely difficult for most physicists even to think about because it requires admitting that an identifiable physical phenomenon might cause the scientific method to fail in some cases. For this reason I have decided to explain the problem in a way that is nonthreatening, easy to read, and fun\u2014as a satire modeled after a similar piece of Lewis Carroll's I once read. My story is humorous fiction. Any similarity of the characters to living persons is accidental. My apologies to Henry W. Longfellow. \\[Published as Annals of Improbable Research **10**, No. 6 (May\/June 2004), p. 8.\\]\nauthor: R. B. Laughlin\ndate: January 1, 2004\ntitle: Hiawatha's Valence Bonding\n\n# Introduction\n\n> Since all men have imperfections \n> Hanging bones inside their closets \n> That they trust no one will notice \n> Absent tips on where to find them, \n> It will shock no one to learn that \n> Even mighty Hiawatha \n> Famous Chief of myth and legend \n> Did some things he was not proud of \n> While a brother in a frat house \n> With a surly reputation \n> At an unknown little college \n> That his father helped to finance \n> So that he would get admitted \n> By the shores of Gitche-Gumee. \n\n> Far from loving fields and flowers \n> And the odor of the forest \n> As one reads in all the textbooks \n> Hiawatha hated woodlands \n> And the animals one finds there, \n> Whom he felt were always pooping, \n> And the plants the critters fed on \n> Down in dank and swampy bottoms, \n> Nearly perfect grounds for breeding \n> Mighty hordes of great mosquitoes \n> Who were always lean and hungry \n> And equipped with maps and radar \n> Could detect where you were hiding \n> To inflict their bites and torments, \n> With their sneaky friends the black flies, \n> And their angry friends the green flies, \n> And the rocks ensnared by tree roots \n> That existed just to trip you \n> And would look improved as concrete \n> In foundation for a condo. \n\n> Thus the kindly, thoughtful image \n> Of a noble man of Nature \n> Was a total fabrication \n> Of a team of gifted spin docs \n> Hired later for this purpose. \n> He was really just a tech nerd \n> Who cared only for equations \n> And explaining all behavior \n> From the basic laws of physics \n> Armed with only mathematics. \n\n> Thus, instead of lakes and forests, \n> Hiawatha worshipped Newton, \n> Whose account of Kepler's orbits \n> Built on rules that Galileo \n> Had inferred from observation \n> Plus the innocent assumption \n> Of a law of gravitation \n> Was a cosmic inspiration; \n> And the brilliant Sadie Carnot, \n> Whose insightful laws of heat flow \n> Were deduced from working engines \n> Absent microscopic theories; \n> And the tragic Ludwig Boltzmann \n> Who ascribed these laws to counting \n> But fell victim to depression \n> When he found no one believed him \n> And so killed himself by jumping \n> From an Adriatic tower. \n> Hiawatha saw that Maxwell's \n> Guessing missing laws of motion \n> Needed for predicting light waves, \n> Was the most transcendent genius, \n> As was Albert Einstein's insight \n> That the speed of light being constant \n> Must mean time was not consistent \n> And that mass could be converted \n> Into heat and vice versa. \n> Just as clear was that the Planck law \n> Must imply DeBroglie's wavelength \n> Was in force in any matter \n> So that sharp atomic spectra \n> And distinct atomic sizes \n> And the laws of bond formation \n> Came from quantum interference. 
\n\n# Hiawatha's Mistake\n\n> Thus it was that Hiawatha \n> Came to be infatuated \n> With the laws of quantum matter, \n> Which means liquid noble gases, \n> Neutrons in a burnt-out star core, \n> Or just rocks so cryogenic \n> They cannot get any colder, \n> Even with improved equipment, \n> Like the state of too much sliding \n> On the ice of Gitche-Gumee \n> After dark in dead of winter \n> In an inexpensive loincloth. \n> Pain and danger notwithstanding \n> Quantum matter's simple structure \n> Makes the eager physics tyro \n> Quite unable to resist it. \n> Hiawatha learned how atoms \n> Self-assemble into crystals, \n> How electrons move right through them, \n> Waving past the rigid ions \n> Thereby making them metallic \n> In the absence of a bandgap \n> Which arises from diffraction \n> And prevents the charge from moving \n> Thereby causing insulation, \n> But by means of wires and doping \n> With atomic imperfections \n> When the bandgap is a small one \n> Can be used to make transistors. \n> In addition to the basics \n> He learned how electric forces \n> Like those seen in clinging woolens \n> Cause some things to be magnetic \n> Up until the lowly phonon, \n> Quantum particle of sound wave, \n> Storing heat the way that light does \n> Mediates a strong attraction \n> That can pair up two electrons \n> Causing them to move together \n> Overcoming all resistance \n> And producing other magic \n> Such as quantum oscillations. \n\n> At this quite untimely moment \n> Of his fragile student history \n> When his mind was most suggestive \n> Our poor hapless Hiawatha \n> Had the terrible misfortune \n> To fall in with wicked people \n> Who were little more than con men \n> And advanced in their profession \n> Making theories of such matter \n> That were not at all deductive \n> But instead used mathematics \n> As a way to sow confusion \n> So that no one would discover \n> That their stuff was pure opinion \n> Spiced with politics and chutzpah \n> So it looked somewhat like science \n> Even though it really wasn't. \n\n> How they did this was ingenious \n> For it's not a simple matter \n> To produce concrete equations \n> That are absolutely hokum \n> And escape without detection \n> When they represent relations \n> Of some quantities one measures \n> Written down as abstract symbols \n> That could easily be tested. \n> What they did was deftly prey on \n> Prejudicial ways of thinking \n> That their colleagues thought were reasoned \n> But were simply misconceptions, \n> Generated during training \n> They had all received as students, \n> That the properties one wanted \n> Were completely universal \n> So details did not matter. \n> But the data did not say this \n> And, moreover, had they done so \n> There would have been no good reason \n> To think any more about it. \n> So, while everyone was watching, \n> They swapped in some new equations \n> That they said would solve the problem \n> On account of being much simpler \n> But in fact described a system \n> Very different from the first one \n> And, moreover, was unstable, \n> Balanced at competing phases, \n> So that nobody could solve it \n> Thus betraying the deception. \n\n> Adding to the dazzling brilliance \n> Of this coldly thought-out swindle \n> They declared it *fundamental* \n> So that all the strange creations \n> Made by people trying to solve it \n> And quite clearly not succeeding \n> Proved it was a fount of deepness \n> One should struggle to unravel \n> Even if it took a lifetime. 
\n> As a nifty added bonus \n> Any hint you dropped in public \n> That it might have no solution \n> Simply meant you weren't a genius, \n> Told the world that you were stupid, \n> That you were a hopeless failure \n> Who should not command a pencil. \n> No one wanted to admit this \n> So they'd cover up their failure \n> And pretend that they had solved it \n> Even though they clearly hadn't. \n> This succeeded, for the most part, \n> But in one respect it didn't, \n> For their desperate need to publish \n> And thereby maintain their funding \n> Caused a massive flood of papers, \n> Each quite different from the others, \n> To descend upon the journals \n> And to overwhelm and clog them. \n> This would have been very funny \n> Had it not been so pathetic. \n\n> Hiawatha bought the story \n> Took the bait, hook, line, and sinker \n> And, like many other students \n> Who'd been victimized before him, \n> Got convinced that his strong math skills, \n> Far exceeding those of others, \n> Would reveal nature's mysteries \n> When he solved the Hubbard model \n> And its child the t-J model \n> And the lattice Kondo model \n> And the quantum spin glass model, \n> All of which possessed the feature \n> That no human being could solve them. \n\n# Hiawatha Meets the Cuprates\n\n> Nature has a sense of humor, \n> As one learns by working with it, \n> But it is an opportunist, \n> So that life's most bitter lessons \n> Often wind up learned the hard way \n> When it moves to take advantage \n> Of a single bad decision \n> And compound it with some mischief \n> Custom made for the occasion. \n\n> Just when he'd resolved to strike out \n> On his suicidal mission \n> There occurred a bold announcement \n> In a well-known German journal \n> That a tiny lab near Z\u00fcrich \n> Had discovered a material \n> With the structure of perovskite \n> Made of oxygen and copper \n> And some other stuff like strontium \n> That when cooled to thirty kelvin \n> Lost all traces of resistance. \n> This event was simply shocking \n> For existing quantum theory \n> Said it had to get much colder \n> For this special thing to happen, \n> As did all the careful surveys \n> Of the properties of metals, \n> Which were very comprehensive \n> And agreed well with the theory. \n> Since the chemists were ambitious \n> To somehow transcend this limit, \n> Which they thought too academic, \n> And someday kill all resistance \n> Using no refrigeration, \n> There ensued a feeding frenzy \n> Worthy of a horror movie, \n> Like what happens when a trawler \n> Dumps its hold of tuna entrails \n> Off a reef in north Australia. \n\n> One example of this madness \n> Was the *Physics Woodstock* conference \n> That took place in mid-Manhattan \n> Shortly after the announcement \n> Where attendees got together, \n> Comandeered a giant ballroom, \n> And gave talks not on the program \n> In a special all-night session \n> Dedicated to the cuprates \n> Which was packed to overflowing. \n> There was talk of maglev transport, \n> New kinds of computer circuit, \n> Mighty, compact little motors \n> And efficient power cables, \n> All of which would soon be coming \n> Thanks to this momentous breakthrough. \n> But it turns out we don't have them \n> For they weren't a big improvement \n> Over things we had already \n> And were hopelessly expensive. 
\n\n> Then there were the frantic searches \n> To find compounds that were better, \n> Which one knew could be accomplished \n> If one spent enough time looking, \n> Since this stuff had lots of phases \n> Subtly different from each other, \n> And there had to be a best one. \n> There was very rapid progress \n> Culminating in a patent \n> For a more complex material \n> In the same broad class of structure \n> Which performed at ninety kelvin, \n> So much higher than the theory \n> Would allow to ever happen \n> Even with extreme assumptions \n> That one knew it was in trouble. \n\n> Almost overnight one found that \n> Every spectrum known to science \n> Had been taken on a cuprate. \n> Their alleged profound importance \n> Was, of course, a major factor, \n> But what mattered most was tactics. \n> Without need to tell one's funders, \n> Since it could be done so quickly, \n> One could telephone a chemist, \n> Cut a deal to get some samples, \n> Put them in one's apparatus\u2014 \n> Presto! Out would come a paper \n> That would instantly get published \n> Even if it was a stinker. \n> This produced a pile of data, \n> Growing without bound, like cancer, \n> That completely overwhelmed you \n> By being mostly unimportant, \n> Like the growing list of options \n> Coming from your cable service. \n\n> Often spectra weren't consistent, \n> But, instead of getting angry \n> As one would have in the old days, \n> One would handle it maturely \n> And just chalk it up to errors \n> That occur when one is hasty \n> Or has had bad luck with samples. \n> But this tolerance, it turns out, \n> Was a bargain with the devil \n> For it later was discovered \n> That enormous variation \n> Was endemic to the cuprates, \n> And that things not reproducing \n> Due to complex phase inclusions, \n> Foreign atoms in the sample, \n> Careless oxygen annealing, \n> Surface preparation methods, \n> And a thousand other factors \n> Was essential to their nature. \n\n> Sadly, by the time this surfaced \n> Shameful habits of denying \n> That the differences existed \n> Had become enshrined in writing, \n> And so wedded to the culture, \n> That they could not be corrected. \n> It was now accepted practice \n> In a public presentation \n> Of experimental findings \n> Not to mention other data \n> Even if your own group took them. \n> Grounds for this were rarely stated, \n> Other than the innuendo \n> That one's sorry competition \n> Were a hopeless bunch of bozos \n> Who did not know how to measure \n> And therefore could not be trusted. \n> It was likewise viewed as kosher \n> To make up a little theory \n> Or adopt somebody else's \n> That gave all your findings meaning\u2014 \n> Although not those of your colleagues, \n> Which were, sadly, so imperfect \n> They were simply inconsistent. \n> But one never heard recanting, \n> Since it would have meant admission \n> That one's judgement had been faulty. \n\n> Thus the cuprates' weird caprices \n> Long escaping understanding \n> Transformed into pseudotheories \n> That, like gods on Mount Olympus, \n> Were political creations \n> That could not be killed with reason \n> And, empowered as immortals, \n> Took control of their creators, \n> Warred among themselves for power, \n> Schemed to have a lot of children, \n> And, in general, made a circus \n> Of the scientific method. \n\n> Hiawatha, being a student, \n> And, quite frankly, rather callow \n> Did not have the slightest inkling \n> That such nonsense ever happened. 
\n> He believed the claims of science \n> To be rather more objective \n> Than competing kinds of knowledge \n> On account of its precision \n> And the fact that you could test it. \n> Rather than the yawning snake pit \n> Seething with disinformation \n> That was really there before him, \n> Certain death for young beginners, \n> He saw just a chance for glory \n> Something of immense importance, \n> Judging from the acrimony \n> Coursing through the talks and papers, \n> And a vast supply of data \n> On which one could build a theory \n> And thereby become a hero, \n> Much the way the dumber brother \n> Of the famous brave Odysseus \n> That no one has ever heard of, \n> Sure he could outfox the sirens, \n> Ordered that the men unbind him \n> And, of course, succumbing quickly \n> Dove right in and bashed his brains out. \n\n# Hiawatha Escapes Reality\n\n> Hiawatha's misconceptions \n> Of the nature of the problem \n> He was setting out to conquer \n> Were not shared by everybody. \n> Just as buzzards, with keen noses, \n> Circling high above their breakfast \n> Wait until it cannot hurt them, \n> To swoop down and get to business, \n> And ichneuman wasps impregnate \n> Larval caterpillar victims \n> With some eggs that grow to eat them, \n> Thus not let them reach adulthood \n> When they might be hard to handle, \n> Hiawatha's crafty mentors \n> Sensed that science had stopped working \n> In the sub-field of the cuprates, \n> As it had before in others \n> Where their scams had been successful. \n> Smelling death was close upon it, \n> They resolved the time was ready. \n\n> What ensued was simply awesome, \n> Destined to go down in legend. \n> They proposed a cuprate theory \n> So magnificent in concept, \n> So much bolder than the others \n> That it blasted them to pieces \n> Like some big atomic warhead, \n> So outshined them in its glory \n> Like a nova in the heavens \n> That it blinded any person \n> Who would dare to gaze upon it. \n> Cuprates did these things, it stated, \n> Just because a quirk of nature \n> Made them like the *Hubbard model*, \n> Which, as had been long established, \n> Did some things quite fundamental, \n> Not yet known to modern science, \n> Which explained the crazy data, \n> So to understand the cuprates \n> One would have to solve this model. \n> How colossal! How stupendous! \n> It was absolutely foolproof! \n> No one could disprove this theory \n> With existing mathematics \n> Or experimental data \n> For exactly the same reasons \n> Nor could they admit they couldn't, \n> So they'd spend their whole lives trying, \n> Blame themselves for being so stupid, \n> And pay homage in each paper \n> With the requisite citation! \n\n> They left clues in great abundance \n> That they'd made a vast deception \n> Far surpassing P. T. Barnum's \n> Most creative whims and musings \n> Trusting that no one would catch them \n> On account of being so guileless, \n> Which they knew was part of science, \n> Rather like the clever killer, \n> Sure he can outsmart Columbo, \n> Leaving marks upon the crime scene \n> Then in later verbal sparring \n> Hints at them in brazen taunting. \n> One was that its short description, \n> Resonating bonds of valence, \n> Was the name that Linus Pauling \n> Used for common bonds of benzene, \n> Something so profoundly different \n> From the physics of the cuprates \n> That its use on this occasion \n> Seemed to show a lousy word sense. 
\n> But, in fact, it was inspired, \n> For the permanent confusion \n> Left by its uncertain meaning \n> Like the data it reflected, \n> Was defense against attackers, \n> Made it very hard to target, \n> Left its enemies bewildered. \n> And the thoughtful usurpation \n> Of a well-established brand name \n> Had the lovely added feature \n> Of dispatching pesky Pauling, \n> Who had always been a nuisance, \n> Down to Davy Jones's locker \n> In the minds of younger people. \n>\n> There was also the assertion \n> Running rampant through the theory \n> That the essence of the cuprates \n> Was coulombic insulation, \n> Which, on close inspection, turned out \n> No one could define precisely, \n> With a few concrete equations, \n> But was nonetheless a concept \n> People thought they comprehended, \n> Like the fancy secret contents \n> Of competing brands of toothpaste \n> That, of course, are total fictions \n> Made up during lunch by ad guys. \n> But the best clue by some margin \n> Was the *deus ex machina* \n> Known as Gutzwiller Projection, \n> Which began life as a method \n> For controlling the equations \n> But was morphed on this occasion \n> To a monsterous distortion \n> Of the basic mathematics \n> On the grounds it was insightful. \n> But, in fact, it came from nowhere, \n> And was just a simple diktat \n> That an off-the-shelf conductor \n> Could not be a quantum magnet \n> While one forced it to become one \n> Thus creating awful conflict \n> When, in fact, there simply was none. \n\n> Hiawatha, being clever, \n> Quickly saw that he could do this, \n> Saw that such manipulations \n> Were, in fact, extremely easy, \n> That a high school kid could do them, \n> Once he got the key idea \n> That one should evade the problem \n> Of deducing the behavior \n> From the actual equations \n> By declaring that some answer \n> Was correct because one said so \n> And proceeding to defend it \n> With a lot of complex symbols \n> Simply cooked up to confuse things. \n> Thus emboldened to abandon \n> His perverse outdated fear of \n> Uncontrolled approximations \n> Hiawatha bit the bullet \n> And jumped into cuprate theory \n> With the fury of a madman, \n> Doing reckless calculations \n> Based on nothing but some gas fumes \n> That produced some fine predictions, \n> As one was inclined to call them, \n> Matching some existing data \n> But, of course, not matching others, \n> Since they were not all consistent. \n> He would then just pick and choose them \n> As one would an orange or lemon \n> In the local supermarket \n> And declare the rest defective. \n> Then he wrote up his conclusions \n> In a little physics paper \n> Loaded up with fearsome symbols \n> Proving that he had credentials \n> To make all these speculations, \n> Sent it in for publication \n> And then found an awful problem \n> He had not anticipated. \n> For the paper to be published \n> It must get past refereeing \n> Which, in theory, was for stopping \n> False results from being reported \n> But, in practice, was to censor \n> Anyone whose work you hated, \n> Somewhat of a sticky wicket \n> For someone whose main objective \n> Was to publish speculation. \n> Hiawatha soon discovered \n> Through the process of rejection \n> That his papers could not make it \n> If they championed new ideas \n> Or in any way conflicted \n> With the viewpoints of the experts \n> Which, of course, were simply made up. 
\n\n> Thus the mighty Hiawatha \n> Found his plans to be a scholar \n> Had an unexpected down side \n> That would later prove quite fatal \n> In that he was forced to pander \n> In his writing for the public \n> To a set of flakey concepts \n> That he'd found extremely useful \n> But had not had time to question, \n> In exchange for recognition \n> Needed for career advancement. \n> For a while it did not matter \n> But the problem slowly festered \n> And one day poor Hiawatha, \n> Waking to a huge disaster, \n> Found himself up to his eyeballs \n> In a soup of black corruption. \n\n# Hiawatha and the Experiments\n\n> Hiawatha's revelation \n> Took a while to find its footing \n> For, as happens in such cases, \n> Many awful misconceptions \n> Were embedded in his thinking \n> Where they had been put on purpose \n> And could only be uncovered, \n> If at all, through painful hours \n> Scrutinizing tiny details, \n> Contemplating reams of data, \n> Finding out who's stuff was careful, \n> Tracking down suspicious rumors, \n> Reading through a mass of papers, \n> Slowly tossing out the bad ones, \n> Racking up the airline mileage \n> Going to humongous meetings, \n> Thereby building up a fact base \n> Cleansed of all manipulations. \n> Over time, as things got clearer, \n> Hiawatha grew unhappy \n> Trying to reconcile his viewpoint \n> With the facts that he had winnowed, \n> Always finding that he couldn't. \n\n> Hiawatha studied transport \n> Both electrical and thermal \n> That, one argued, showed the absence \n> Of the Landau fermi surface \n> Symptomatic of a metal \n> Thereby proving one was dealing \n> With a strange new state of matter. \n> But he found in every instance \n> That a sample made its coldest, \n> So one knew what one was doing, \n> Either showed disorder problems \n> Generated by the chemists \n> Or agreed with classic theory. \n> Thus, like all those dot-com profits \n> That they claimed would make you wealthy, \n> But, in fact, were nonexistent, \n> Arguments for novel physics \n> Built upon the facts of transport \n> Did not hold up on inspection. \n\n> Hiawatha studied optics \n> By and large his favorite spectrum \n> For he knew that light reflection \n> Measured dielectric functions \n> In a way that used no theory, \n> And it showed how loose electrons \n> Moved about and caused the bonding. \n> But, alas, the data varied \n> From one sample to another \n> Even after years of efforts \n> To ensure that they were stable! \n> This left lack of clear consensus \n> Even over things that mattered. \n> Understanding why this happened \n> Was not really rocket science, \n> For the Kramers-Kr\u00f6nig process \n> Amplified the defect signals \n> That were there in great abundance, \n> Even though they all denied it, \n> And depended on the process \n> By which one prepared the sample, \n> Something different for each grower \n> And a closely-guarded secret. \n> Also, things would change with doping, \n> Something very hard to measure \n> And which often wasn't constant \n> As one moved across the sample \n> Due to troubles in the furnace \n> Which they claimed they'd licked but hadn't. \n> Thus the stories of new physics \n> Built upon results of optics, \n> Like the troubled U. S. 
census \n> Or the the streets of downtown Boston \n> After weeks of too much snowing, \n> Were polluted by disorder, \n> And, moreover, were deceptive \n> In that aspects of the spectra \n> That were reasonably stable \n> Like the strange non-Drude lineshapes \n> Happened at such tiny wavelengths \n> One could plausibly ascribe them \n> To a nearby phase transition \n> Rather than the state in question. \n> Thus the stories were fantastic, \n> And, like those that Richard Nixon \n> Told while he was in the White House, \n> Or that pop star Michael Jackson \n> Claimed occurred in Los Olivos \n> For the pleasure of the children, \n> In the end would not hold water. \n\n> Hiawatha studied neutrons \n> Which he found he liked immensely \n> Since they flowed from a reactor \n> With big purple signs upon it \n> Warning you of radiation \n> That would kill you if allowed to, \n> Since the neutrons went right through you \n> But would sometimes choose to stop there \n> And decay like little time bombs, \n> Thus inducing stomach cancer. \n> But they went through cuprates also, \n> And that made them very useful, \n> Since a few of them would scatter, \n> And detecting those that did so \n> Gave you lots of information \n> From down deep inside the sample, \n> Such as how the atoms ordered, \n> How they moved when something hit them \n> And if they were little magnets. \n> But the bad news was the signal \n> Was quite small and hard to measure, \n> So one needed a detector \n> Bigger than a Dempsey Dumpster \n> And a truly mammoth sample, \n> Leading to big compromises \n> In the sample growing process \n> They preferred deemphasizing \n> But one knew was wreaking havoc \n> On the meaning of the data. \n> They would also never tell you \n> What the measurement itself was, \n> Since the neutron kinematics \n> Made it sensitive to factors \n> Like the speed spread of the neutrons \n> And the tip of the detector \n> And the path on which one moved things \n> To survey deflection angles \n> That were messy and annoying, \n> So they'd first massage the data \n> Using big computer programs \n> To remove these nasty factors \n> And report the program output, \n> Representing you should trust it \n> Just because they were the experts. \n> But, of course, there were those upgrades \n> And the quiet little tweaking \n> That one always did at run time \n> That one never heard reported. \n> Once he caught these key omissions \n> Hiawatha got suspicious, \n> And quite quickly found the practice \n> Of reporting neutron spectra \n> In some secret custom units \n> Given names like \"counts\" to fool you, \n> Like those helpful content labels \n> Found on packs of sandwich slices \n> Listing salt and beef by-products, \n> Thus preventing one from telling \n> There was very poor agreement. \n> All this made a clearer picture \n> But it also meant the data \n> Like the air-brushed prints in *Playboy* \n> Were, in fact, manipulated, \n> And that many strange behaviors \n> Like the famous funny phonon \n> Dogma said was nonexistent \n> Got removed as standard practice \n> On the grounds they should not be there. \n> Thus his plan to use those spectra \n> To pin down the magnetism \n> Present sometimes in the cuprates \n> On account of all the errors \n> Ended up a dismal failure. 
\n\n> Hiawatha studied currents \n> Made when cold electrons tunnel \n> Right across an insulator \n> Where they should have been forbidden, \n> Something very close to magic \n> Rather like the twinkly transport \n> People undergo on Star Trek, \n> And it's also quite revealing \n> Of important quantum pairing \n> That goes on inside the cuprate. \n> In the old days one would simply \n> Oxidize a thin-film sample, \n> Coat the oxide with another, \n> Solder on two tiny contacts, \n> Dunk the whole thing into vapors \n> Made so cold that they were liquid, \n> Then just measure plain resistance \n> Of the two protruding wires, \n> Which would vary with the voltage \n> Thus producing useful data. \n> Hiawatha read these papers \n> With a mounting sense of horror, \n> For the wild disagreement \n> Even in the basic features \n> From one sample to another \n> Was so large it left one breathless. \n> And, of course, the accusations \n> That the other guys were morons \n> Who just could not make good junctions \n> Rose to unmatched heights of grandeur \n> Even though the real villain, \n> Obvious from spectral sharpness, \n> Was the sample variation. \n> Hiawatha's indignation \n> Escalated when he found that \n> Over time this fact got buried \n> Since each group soon found a method \n> Of preparing stable samples \n> Different from that used by others \n> And producing different spectra \n> That they marketed as products, \n> Thus evading any need to \n> Answer penetrating questions. \n> An important fact, however, \n> That emerged from all these studies, \n> Was that steady lossless currents \n> Could indeed be made to flow from \n> Films of lead into the cuprates \n> If one made a pitted surface, \n> Proving that the state of matter \n> Operating in the cuprates \n> Was not new and was not different. \n\n> Hiawatha studied spectra \n> Made when light shined on a sample \n> Causes it to lose electrons \n> Which fly out in all directions \n> And one can detect by counting, \n> Thus obtaining information \n> Of their status in the sample \n> Just before the light removed them. \n> Hiawatha saw at once that \n> Peaks for plain undressed electrons \n> That were not supposed to be there \n> In this great new state of matter \n> Always were and had a sharpness \n> At the resolution limit \n> Of the latest new detector \n> For the special ones at threshold, \n> Where one knew what one was doing. \n> In addition they were beaming \n> In a lovely fourfold pattern \n> With the symmetry of d-wave, \n> Something that had been suggested \n> They might do if they were simple, \n> Just like those in other metals. \n> Thus the arguments for strangeness \n> Based on counting these electrons \n> Lost their force as things got better, \n> And in time were proved a failure. \n\n> Hiawatha studied muons, \n> Which he thought were even neater \n> Than the more prosaic neutrons, \n> Since they came from atom smashers \n> That could also quickly kill you \n> If you chose to be so careless, \n> But they'd stop inside much better \n> And once there, decay to gammas \n> That were easily detected \n> Since they'd even go through concrete, \n> And, moreover, they'd be beaming \n> In the muon's spin direction \n> Just before it went to heaven. \n> Thus, implanted in a cuprate \n> They'd arrest at some location \n> Known to no one but their Maker \n> And precess like little searchlights, \n> If there was some magnetism, \n> Thus allowing you to see it \n> Way deep down inside the sample. 
\n> Thus with knowledge of their trapping \n> And a batch of big detectors \n> One could then back out the distance \n> Of magnetic penetration. \n> Hiawatha found this distance \n> Shortened with increasing doping \n> Just as theory said should happen, \n> If one forced the hubbard model \n> Not to be a quantum magnet \n> By just saying that it wasn't, \n> Which might well have been important \n> Had it not been for the problem \n> That this depth would not continue \n> To decline with increased doping \n> But instead would turn and lengthen. \n> This effect was quite perplexing, \n> Since no theory of the cuprates \n> Even twisted hubbard models, \n> Could account for such behavior, \n> For it violated sum rules, \n> Hence one just did not discuss it. \n> But the meaning was transparent \n> If one faced the facts with courage, \n> For the samples were degrading \n> In extremes of overdoping \n> In some ways that weren't predicted \n> And, moreover, weren't detected \n> By techniques except for this one. \n> This, in turn, implied these problems \n> Might occur at other dopings \n> And likewise escape detection \n> Or, what's worse, be used to argue \n> That new physics was occurring \n> When, in fact, it was just garbage. \n> Thus the trail blazed by muons \n> Led out in the woods to nowhere. \n\n> Hiawatha studied spin flips \n> That the nuclei of atoms \n> Undergo in great big magnets \n> Near a radio transmitter \n> Causing them to be antennas, \n> Which absorb with complex lineshapes \n> One can read if one's a genius \n> But not, sadly, if one isn't, \n> Since they, by and large, consist of \n> Just a simple blobby bell curve \n> With a width and displaced center, \n> To which one must give some meaning\u2014 \n> Not a simple undertaking. \n> Thus the all-important Knight shift \n> And spin-lattice relaxation, \n> Noms de plume for width and center, \n> Vastly different for the copper \n> And the oxygen of cuprates, \n> Were the source of endless theories, \n> Often very thought-provoking, \n> Stunning in sophistication, \n> But, like all those glossy pamphlets \n> Found in waiting rooms of dentists \n> Urging you to practice flossing, \n> Soon began to make you tired, \n> Since the data mainly showed you \n> That the stuff was not a metal \n> In the sense of gold or iron \n> Which, in fact, one knew already \n> And was not a revelation. \n\n> Hiawatha studied structure \n> Of the surfaces of cuprates \n> Freshly cleaved inside a vacuum \n> So that air would not get on them \n> And then probed with tiny needles \n> One could move with great precision, \n> By adjusting some piezos \n> On which everything was standing. \n> What he found was quite disturbing, \n> For while atoms at the surface \n> All had unperturbed positions, \n> Showing that the cleave succeeded, \n> There were also complex patterns \n> On the scale of twenty atoms \n> That appeared to be diffraction. \n> This behavior might have come from \n> Atoms underneath the surface \n> That were missing or defective \n> Or some novel magnetism \n> Of a kind unknown to science, \n> But the thing that so upset him \n> Was that quantum interference \n> Of the kind that he was seeing \n> Could not happen if the lifetimes \n> Were as short as he had thought them, \n> And which had been used to argue \n> For a brand new state of matter. \n> Thus he soberly concluded \n> That this matter wasn't different \n> And the whole confounded story \n> Was a misinterpretation \n> Of a plain materials problem. 
\n\n> Thus the Mighty Hiawatha \n> Through the patient application \n> Of the practices of science \n> Tested over generations \n> Slowly sloughed off misconceptions \n> And, in face of mounting failure, \n> Sadly came to the conclusion \n> He'd been taken to the cleaners. \n\n# Hiawatha Befriends the Robots\n\n> Given all the clever swindles \n> Lurking there to take our money, \n> That, of course, are part of living, \n> Like a virus for pneumonia \n> Or a hungry venus fly trap, \n> We must all be very thankful \n> That the celebrated Law of Murphy \n> Strikes at random without warning \n> Causing even brilliant concepts, \n> That appear completely foolproof \n> Like distributing tobacco \n> Or the business plan of Enron, \n> To sometimes become derailed \n> Due to something unexpected \n> One was sure could never happen, \n> Like a lawsuit from consumers, \n> That requires intervention \n> Of the most creative nature \n> To prevent strategic meltdown. \n\n> As it turns out, the idea \n> That the conflict in the models \n> One was using for the cuprates \n> Due to nearby phase transitions \n> Would both hamper their solution \n> And engender rampant fibbing, \n> Thus enshrining mass confusion \n> One would then call proof of meaning \n> With no need to fear exposure \n> Had the unexpected weakness \n> That someone might *solve* the model \n> Using tons and tons of money \n> And some capable computers \n> To a crude degree sufficient \n> To unmask the real problem \n> Thus revealing the deception. \n\n> Sure enough, that's just what happened. \n> When the cuprates were discovered \n> And the whole endeavor started \n> One had not the slightest worry \n> That these guys would ever solve it, \n> Since the accuracy needed \n> Was not clear in the beginning, \n> So they uniformly low-balled \n> With the too-familiar outcome \n> That results were inconsistent. \n> So they quarrelled over method \n> And who had convergence problems \n> And whose code was most clairvoyant \n> Even though a child could see that \n> They were different apparati, \n> So the test that they were working \n> Was agreement with each other. \n> But, unlike the other issues \n> That had come and gone before it, \n> Cuprates lingered on as timely \n> Long enough to cause a shake-out, \n> For the money kept increasing \n> Even as machines got cheaper \n> And their power kept on growing\u2014 \n> Due, of course, to needs of gaming, \n> Rather than the ones of Lanczos \n> Or the quantum monte carlo \n> That one used for basic physics. \n> So the robots kept on plugging \n> As their owners upped the ante \n> Very slowly, as did Wagner \n> When composing *Ring* and *Tristan* \n> And their stuff began converging! \n> There, of course, was no agreement \n> Over matters of the phases \n> Such as whether it conducted \n> When one cooled it down to zero, \n> Since a crystal of electrons \n> Was one state in competition. \n> But at length scales one could access \n> There was clearly dissipation \n> Of a most peculiar nature \n> In the dielectric function \n> And the quantum magnetism, \n> Just exactly as predicted \n> By an ancient bunch of papers \n> Over quantum phase transitions, \n> Which these guys had never studied \n> Since it was too esoteric \n> And had not been seen in nature \n> And was hated by their funders. 
\n> But the thing that really clinched it \n> Was the endless disagreement, \n> That got worse as things proceeded \n> And was very clearly cronic, \n> Over type and shape of edges \n> That would best produce convergence, \n> Since one found that subtle changes \n> In the way one built the model \n> Would turn on and off the striping \n> And therefore the insulation, \n> So that whether it was present \n> In the limit of large sample \n> Simply could not be determined \n> With the codes that they had written. \n\n> This, of course, was a disaster \n> For the plan to keep things murky \n> And required drastic action \n> To somehow repair the damage \n> All this progress had created, \n> And prevent these guys from seeing \n> What was right beneath their noses. \n\n> And one was not disappointed. \n> Once again a flash of brilliance \n> Like a great big city-buster \n> Brighter than the sun at midday, \n> Blazed across the dome of heaven \n> Toward its final destination \n> In the Guinness Book of Records. \n> They declared the problem *over*! \n> The computer guys had solved it! \n> For their codes had proved the cuprates \n> Were indeed the Hubbard model, \n> And that's why the stuff conducted. \n> Thus there was no urgent reason \n> To pursue the matter further! \n> One could zero out their budgets \n> With no loss to human knowledge \n> And, in fact, perhaps improve it \n> Since this money was incentive \n> To continue calculations \n> That were clearly unimportant \n> And report them in the journals \n> Thus just adding to the clutter. \n\n> Hiawatha, now much wiser \n> Through his labors as a scholar \n> And, quite frankly, some maturing \n> Watched these things unfold before him, \n> As he had on past occasions, \n> But this time with eyes wide open \n> And was filled with understanding. \n> It was not a happy moment, \n> For it meant that his own judgement \n> As to what was good and worthy \n> Had been faulty from the outset, \n> Something for which he must answer. \n> But instead of indignation \n> And a passion to get even \n> That he might have felt when younger \n> Hiawatha, deep in thinking, \n> Found himself consumed with sadness. \n> He was not the only victim, \n> For the guys who manned those robots, \n> And were heroes of the cuprates\u2014 \n> For through focussed dedication \n> They had stumbled on the answer \n> That the models were unstable \n> And did *not* describe the cuprates, \n> Since a modest perturbation \n> Would profoundly change their nature\u2014 \n> Were about to have their triumph \n> Snatched from them by clever scoundrels \n> Who, pretending to befriend them, \n> Would then redefine their output \n> To mean something that it didn't, \n> Thus protecting their investment, \n> But, of course, destroying others. \n\n# Hiawatha's Lamentation\n\n> Hiawatha's knowing sadness, \n> Like the darkening at twilight \n> Or a gathering storm in winter, \n> Slowly gained in strength and deepened \n> As he spent time in reflection, \n> Working through the implications \n> Of the things that he had witnessed \n> For the cause of noble science \n> That thus far had so beguiled him. \n> It would simply not be manly \n> To pretend he wasn't guilty \n> Of ignoring frequent warnings \n> That the needed path to nature \n> Was obscured or nonexistent. \n> It was clear that he'd been foolish \n> To have bought this awful fiction \n> And that blame must fall quite squarely \n> On himself and not on others. 
\n> But this candid *mea culpa*, \n> Made in silence where it mattered, \n> While it comforted his conscience, \n> Did not quite assuage the wounding, \n> For it begged the nagging question \n> Of how they could have succeeded \n> In hoodwinking all the people \n> For so long without some doubting. \n> It was simply not an option \n> To presume these guys were stupid, \n> Since the instruments they dealt with, \n> Often built by hand from nothing, \n> Needed great sophistication \n> To deploy and mine for data. \n> There was clearly something larger \n> And extremely fundamental \n> Working in the group dynamic \n> That involved access to funding \n> And the policy of journals \n> And the need to service markets \n> And the mythos of the subject \n> One must use to make a living \n> That these crooks had first deciphered, \n> Then reduced with understanding, \n> Then usurped to do their bidding. \n\n> Hiawatha, turning inward, \n> Thought for weeks about this problem \n> During which he was obnoxious \n> Due to his preoccupation. \n> But at last he got an answer \n> That made sense and was quite simple, \n> Thus withstanding Occam's razor, \n> So he thought that he believed it. \n> When he'd set out on his mission \n> He had understood the challenge \n> Of the mastery of nature \n> But not basic economics \n> And the fact that art and science \n> Both require sacrifices \n> Of a clear financial nature \n> That one sometimes just can't handle \n> Nor, in fairness, should one do so \n> Since a good guy pays the mortgage \n> And supports the kids in college \n> And the other things a body \n> Has to do to keep the lights on. \n> But, in fact, the compromises \n> That one makes as part of living \n> Such as saying what one has to \n> For maintaining healthy cash flow \n> Often toss big monkey wrenches \n> In the fine machine of science \n> And can stop it altogether \n> In conflicted situations. \n> Then the body, badly weakened, \n> Barely able to keep breathing, \n> Gets exploited by diseases, \n> Such as villains lacking scruples \n> Who descend on it like termites \n> To a house that's been neglected, \n> Wreaking terrible destruction \n> On the lives of those affected. \n\n> The conclusion of this story \n> Is well known from all the textbooks. \n> Hiawatha never wavered \n> In his deep respect for physics, \n> But he came by this adventure \n> To the deeper understanding \n> That to get things done that mattered \n> Often was a social question, \n> Not just logical abstraction, \n> And, as well, a part of nature, \n> Just the thing he thought he'd hated \n> And had thrilled at desecrating \n> As a tender freshman student \n> In the little private college \n> By the shores of Gitchee-Gumee. \n> It was true that all the creatures \n> Living in those swamps and woodlands \n> Generated lots of pooping, \n> But then so did real people, \n> And the people poop was stronger, \n> So that one could not ignore it. \n> But one really would not want to, \n> For the lesson of the cuprates \n> Was that lack of understanding \n> Of these basic group dynamics, \n> Was a recipe for failure \n> Since they were the central issue \n> For most things that were essential. \n\n> Thus the mighty Hiawatha \n> Turned his mind to other problems \n> Such as how to use resources \n> That were his by luck and birthright \n> Through the power of his father \n> Which he'd been inclined to squander, \n> But now realized he shouldn't. 
\n> Thus he studied like a madman \n> To acquire the skills of statecraft, \n> Such as how to plan a project, \n> How to give effective orders, \n> How to make sure they were followed, \n> How to get things done with meetings, \n> And to leave the money grubbing \n> Up to folks his father hired \n> Such as all those gifted spin docs \n> Who created key revisions \n> Necessary for his image \n> To be something people honored. \n> Thus the pain of too much sliding \n> On the ice in dead of winter \n> In an inexpensive loincloth \n> And his other misadventures \n> Got removed, as did the cuprates, \n> From his long official story. \n> But the memory persisted \n> And it helped to make him wiser \n> For, of course, as he got older \n> He had many bad encounters \n> Not so different from the cuprates. \n\n> But whenever he was troubled \n> With a problem that would vex him \n> He would cheer himself by thinking \n> Of the special room in Hades \n> Into which these happy people \n> On account of their transgressions \n> Would be ushered when they bagged it \n> And be stuck in there forever, \n> Forced to listen to each other \n> Giving lectures on the cuprates. \n> It would always leave him smiling.","meta":{"dup_signals":{"dup_doc_count":12,"dup_dump_count":3,"dup_details":{"curated_sources":2,"2024-26":1,"unknown":9}},"filename":"out\/physics0408066_extract_p01jan04.tex.md"},"subset":"arxiv"} +{"text":"abstract: We present a number of notable results from the VLT-FLAMES Tarantula Survey (VFTS), an ESO Large Program during which we obtained multi-epoch medium-resolution optical spectroscopy of a very large sample of over 800 massive stars in the 30 Doradus region of the Large Magellanic Cloud (LMC). This unprecedented data-set has enabled us to address some key questions regarding atmospheres and winds, as well as the evolution of (very) massive stars. Here we focus on O-type runaways, the width of the main sequence, and the mass-loss rates for (very) massive stars. We also provide indications for the presence of a *top-heavy* initial mass function (IMF) in 30 Dor.\nauthor: Jorick S. Vink$^1$, C.J. Evans$^2$, J. Bestenlehner$^{1,3}$, C. McEvoy$^4$, O. Ram\u00edrez-Agudelo$^{2}$, H. Sana$^5$, F. Schneider$^{6}$; VFTS\ntitle: The VLT-FLAMES Tarantula Survey\n\n# Introduction\n\nMassive star evolution is important for many fields of Astrophysics including supernovae (SNe; Levesque, these proceedings). Yet, it remains largely unconstrained (Langer 2012; Meynet these proceedings). Progress can be made using high-quality observations from nearby sources, as well as from large data-sets such as VFTS (Evans et al. 2011) discussed here. In parallel, VFTS data are analysed using state-of-the-art model atmospheres such as CMFGEN (Hiller & Miller 1998) and FASTWIND (Puls et al. 2005), as well as automatic fitting tools (Sab\u0131\u0301n-Sanjuli\u00e1n et al. 2014; Bestenlehner et al. 2014; Ram\u00edrez-Agudelo et al. 2017).\n\nIn addition to this observational progress, our VFTS collaboration strives to make theoretical progress on stellar winds and evolution, and we are in the unique position to confront our new models against VFTS data. In the following, we highlight a number of recent results that we argue make a real difference to our knowledge of massive stars.\n\n## Motivation for the Tarantula region\n\nThe Tarantula region (30 Doradus) is the largest active star-forming region in our Local Universe for which individual spectra of the massive-star population can be obtained. 
Because it is the largest region, it provides a unique opportunity to study the most massive stars, including very massive stars (VMS) with masses up to 200-300 $M_{\\odot}$ (Crowther et al. 2010; Bestenlehner et al. 2014; Martins 2015; Vink et al. 2015). This allows us to properly investigate whether the upper-IMF may be top-heavy (Schneider et al. 2017). Answering this question is important as these VMS are thought to dominate the ionizing radiation and wind feedback from massive stars (Doran et al. 2013).\n\nAnother reason to study 30 Doradus is that testing massive star evolution *requires* large data-sets. For instance, the issue of the location of the terminal-age main sequence (TAMS) can only be addressed when the sample-size is sufficiently large to populate both the main-sequence with O-type stars (Sab\u0131\u0301n-Sanjuli\u00e1n et al. 2017; Ram\u00edrez-Agudelo et al. 2017) and B supergiants (McEvoy et al. 2015).\n\n# Results on binarity, rotation rates, and runaways\n\nThe aims of VFTS were to determine the stellar parameters, such as $T_{\\rm eff}$, $\\log g$ & $\\log L$ to place our objects on the HR-diagram; the mass and $\\dot{M}$ to determine the evolution & fate of massive stars; and the helium (He) and nitrogen (N) abundances to test (rotational) mixing processes (Grin et al. 2016; Rivero-Gonzalez et al. 2012). All these parameters require sophisticated atmosphere modeling, but VFTS also offered some model-independent parameters including the rotational velocities $v$ $\\sin$ $i$ and radial velocities (RVs) thanks to the multi-epoch nature of the survey. The latter allowed us to obtain information on the $\\sim$ 50% binary frequency in 30 Dor (Sana et al. 2013) and the opportunity to study the dominant mechanism for runaways (Fig.\u2006; Sana et al. in prep.).\n\nFigure\u2006 might allow us to disentangle the dynamical runaway scenario (Gies & Bolton 1986) from the binary-SN kick scenario (Stone 1991), as the first scenario might produce relatively fast runaways, whilst one would expect the binary SN kick scenario to produce rapid rotators. Obviously, definitive conclusions can only be obtained when more sophisticated models become available.\n\n# The width of the main-sequence and constraints on core overshooting\n\nFigure\u2006 shows a zoomed-in version of the Hertzsprung-Russell diagram for both single and binary B supergiants from McEvoy et al. (2015). The position of the dark line indicates the position of the TAMS, with its location determined by the value of the core overshooting parameter ($\\alpha_{\\rm ov}$) which is basically a \"free\" parameter (e.g. Vink et al. 2010; Brott et al. 2011) until astro-seismology on a large number of OB supergiants becomes available. The Brott et al. models employ a value of $\\alpha_{\\rm ov} = 0.335$, whilst the Geneva models (Georgy these proceedings) employ a smaller value. The VFTS results shown in Fig.\u2006 appear to suggest a *larger* value of $\\alpha_{\\rm ov}$ than 0.335.\n\nLarger $\\alpha_{\\rm ov}$ makes bi-stability braking (BSB; Vink et al. 2010; Keszthelyi et al. 2017) feasible, which we test by showing $v$ $\\sin$ $i$ of both VFTS and previous FLAMES-I results (Hunter et al. 2008) versus $T_{\\rm eff}$ in Fig.\u2006. Note the presence of another \"Region-of-Avoidance\"[^1], where rapidly-rotating \"cool\" (cooler than the bi-stability location of 20\u2006000\u2006K; Petrov et al. 2016) B supergiants are simply not observed. 
The reason for this avoidance below 20\u2006000\u2006K could either involve BSB, or it might be that the slowly rotating cool B supergiants are He-burning objects (due to post red-supergiant evolution or binarity).\n\n# The mass-loss rates\n\nThe mass-loss rates for O-type dwarfs were discussed by Sab\u0131\u0301n-Sanjuli\u00e1n et al. (2014; 2017), whilst those for the O giants and supergiants are plotted in the form of the wind-momentum-luminosity relation (WLR; Kudritzki & Puls 2000; Puls et al. 2008) in Fig.\u2006. Interestingly, the empirical WLR lies above the theoretical WLR (of Vink et al. 2001). Usually a discrepancy between theoretical and empirical values would be interpreted such that the theoretical rates would be too low, but here it is different, as it is widely accepted that empirical modeling is more dependent on wind clumping and porosity than theory (see Muijres et al. 2011 for theoretical expectations).\n\nIndeed, it is more likely that the empirical WLR is too high, as a result of wind clumping, which has not been included in the analysis. This would imply that the empirical WLR would need to be lowered by a factor $\\sqrt{D}$, where $D$ is the clumping factor, which is as yet uncertain. However, given the model-independent (from clumping & porosity) transition mass-loss rate (Vink & Gr\u00e4fener 2012; next Sect.) a value of $D \\simeq 10$ (with a mass-loss rate and WLR reduction of $\\sim$3) would bring the empirical WLR and theory into reasonable agreement. None of this means that the theoretical rates for lower mass-and-luminosity O stars necessarily need to be correct. Therefore, spectral analyses of large data-sets of O-stars including clumping & porosity (Surlan et al. 2013; Sundqvist et al. 2014) are needed to provide definitive answers.\n\n# Very Massive Stars\n\nThe most massive stars in VFTS were analysed by Bestenlehner et al. (2014), plotted in the HRD of Fig.\u2006. Over-plotted are VMS evolutionary tracks and the location of the ZAMS. The HRD shows the presence of 12 VMS (with $M$ $>$ 100 $M_{\\odot}$; Vink et al. 2015), which enables us to derive the upper-IMF of 30 Dor for the first time. Figure compares the preferred value for the mass function to that of Salpeter. It is found that the slope is different to that of Salpeter (at $\\sim$85% confidence), and also that a Salpeter IMF cannot reproduce the larger number of massive stars above 30\u2006$M_{\\odot}$ at $>$99% confidence (Schneider et al. 2017). As this result is obtained using the largest spectroscopic data-set ever obtained, and analysed with the most sophisticated analysis tools, we consider this the most robust test to date. A top-heavy IMF would have major implications for the interpretation of spectral modelling of high-redshift galaxies, as well as the ionizing radiation and kinetic wind energy input into galaxies. Answers will strongly depend on the mass-loss rates of these VMS, as discussed next.\n\nFigure\u2006 shows VFTS results of the mass-loss rates of the most massive stars in 30 Dor (Bestenlehner et al. 2014). Whilst at relatively low values of the Eddington parameter $\\Gamma$, the slope of the empirical data is consistent with that for O stars, those above the crossover point are not. Here the mass-loss rate kinks upwards, with a steeper slope. The winds have become optically thick, and show WR-like spectra. 
Also, above this critical $\\Gamma$ point, the wind efficiency crosses unity, enabling a calibration of the absolute mass-loss rates for the first time (Vink & Gr\u00e4fener 2012). Moreover, Bestenlehner et al. (2014) found profound changes in the surface He abundances exactly coinciding with the luminosity threshold where mass loss is enhanced. This suggests that Of\/WN and WNh stars are objects whose H-rich layers have been stripped by enhanced mass-loss during their main-sequence life. Note that this mass-loss enhancement for VMS has not been included in most stellar evolution calculations, and this implies there will be many exciting surprises for extra-galactic applications of massive stars in the near future!\n\n# Final Words\n\nThe VFTS has conclusively shown that binaries are common in 30 Dor. With a corrected close-binary fraction of $\\sim$50% (Sana et al. 2013), we do not yet know whether this hints at a lower binary frequency at low metallicity, or if it is still consistent with the larger Galactic frequency of $\\sim$70% when evolutionary considerations are taken into account. Either way, we now know we require both single & binary evolutionary models to make progress. Another interesting finding is that there is a high-velocity tail present in single O-type supergiants (Ram\u00edrez-Agudelo et al. 2013), which is not present in the spectroscopic binaries (Ram\u00edrez-Agudelo et al. 2015). This suggests that binary interactions need to be accounted for to understand the underlying rotational distribution.\n\nThe VFTS results also indicate that the main-sequence needs widening. This hints at a larger value for the core overshooting parameter than usually adopted. Finally, VMS up to at least 200$M_{\\odot}$ are common in 30 Dor, but VMS mass-loss rates have been *under*estimated.\n\n[^1]: The perceived lack of rapid rotators on the hot side of the diagram is not real: there are many rapidly rotating O-type stars. These O-stars are just not included here.","meta":{"dup_signals":{"dup_doc_count":14,"dup_dump_count":2,"dup_details":{"curated_sources":7,"unknown":7}},"filename":"out\/1710.11220_extract_vink-iaus329-resubmit.tex.md"},"subset":"arxiv"} +{"text":"abstract: This paper explains the math behind a generative adversarial network (GAN)\u00a0 model and why it is hard to train. Wasserstein GAN is intended to improve GANs' training by adopting a smooth metric for measuring the distance between two probability distributions.\nauthor: Lilian\u00a0Weng \nOpenAI \n`email@example.com`\ntitle: From GAN to WGAN\n\n# Introduction\n\nGenerative adversarial network (GAN)\u00a0 has shown great results in many generative tasks to replicate rich real-world content such as images, human language, and music. It is inspired by game theory: two models, a generator and a critic, are competing with each other while making each other stronger at the same time. 
However, it is rather challenging to train a GAN model, as people face issues like training instability or failure to converge.\n\nHere I would like to explain the math behind the generative adversarial network framework, why it is hard to train, and finally introduce a modified version of GAN intended to solve the training difficulties.\n\n# Kullback\u2013Leibler and Jensen\u2013Shannon Divergence\n\nBefore we start examining GANs closely, let us first review two metrics for quantifying the similarity between two probability distributions.\n\n\\(1\\) **KL (Kullback\u2013Leibler) Divergence** measures how one probability distribution $p$ diverges from a second expected probability distribution $q$.\n\n$$D_{KL}(p \\| q) = \\int_x p(x) \\log \\frac{p(x)}{q(x)} dx$$\n\n$D_{KL}$ achieves the minimum zero when $p(x) = q(x)$ everywhere.\n\nIt is noticeable from the formula that KL divergence is asymmetric. In cases where $p(x)$ is close to zero, but $q(x)$ is significantly non-zero, the effect of $q$ is disregarded. It could cause buggy results when we just want to measure the similarity between two equally important distributions.\n\n\\(2\\) **Jensen\u2013Shannon Divergence** is another measure of similarity between two probability distributions, bounded by $[0, 1]$. JS divergence is symmetric and smoother. Check this [post](https:\/\/www.quora.com\/Why-isnt-the-Jensen-Shannon-divergence-used-more-often-than-the-Kullback-Leibler-since-JS-is-symmetric-thus-possibly-a-better-indicator-of-distance) if you are interested in reading more about the comparison between KL divergence and JS divergence.\n\n$$D_{JS}(p \\| q) = \\frac{1}{2} D_{KL}(p \\| \\frac{p + q}{2}) + \\frac{1}{2} D_{KL}(q \\| \\frac{p + q}{2})$$\n\nSome\u00a0 believe that one reason behind GANs' big success is switching the loss function from the asymmetric KL divergence in the traditional maximum-likelihood approach to the symmetric JS divergence. We will discuss more on this point in the next section.\n\n# Generative Adversarial Network\n\nGAN consists of two models:\n\n- A discriminator $D$ estimates the probability of a given sample coming from the real dataset. It works as a critic and is optimized to tell the fake samples from the real ones.\n\n- A generator $G$ outputs synthetic samples given a noise variable input $z$ ($z$ brings in potential output diversity). It is trained to capture the real data distribution so that its generative samples can be as real as possible, or in other words, can trick the discriminator into offering a high probability.\n\nThese two models compete against each other during the training process: the generator $G$ is trying hard to trick the discriminator, while the critic model $D$ is trying hard not to be cheated. This interesting zero-sum game between two models motivates both to improve their functionalities.\n\nGiven,\n\n| **Symbol** | **Meaning** | **Notes** |\n|:--:|:---|:---|\n| $p_{z}$ | Data distribution over noise input $z$ | Usually, just uniform. |\n| $p_{g}$ | The generator's distribution over data $x$ | |\n| $p_{r}$ | Data distribution over real sample $x$ | |\n\nOn one hand, we want to make sure the discriminator $D$'s decisions over real data are accurate by maximizing $\\mathbb{E}_{x \\sim p_{r}(x)} [\\log D(x)]$. 
Meanwhile, given a fake sample $G(z), z \\sim p_z(z)$, the discriminator is expected to output a probability, $D(G(z))$, close to zero by maximizing $\\mathbb{E}_{z \\sim p_{z}(z)} [\\log (1 - D(G(z)))]$.\n\nOn the other hand, the generator is trained to increase the chances of $D$ producing a high probability for a fake example, thus to minimize $\\mathbb{E}_{z \\sim p_{z}(z)} [\\log (1 - D(G(z)))]$.\n\nWhen combining both aspects together, $D$ and $G$ are playing a *minimax game* in which we should optimize the following loss function:\n\n$$\\begin{aligned}\n\\min_G \\max_D L(D, G) \n& = \\mathbb{E}_{x \\sim p_{r}(x)} [\\log D(x)] + \\mathbb{E}_{z \\sim p_z(z)} [\\log(1 - D(G(z)))] \\\\\n& = \\mathbb{E}_{x \\sim p_{r}(x)} [\\log D(x)] + \\mathbb{E}_{x \\sim p_g(x)} [\\log(1 - D(x)]\n\\end{aligned}$$\n\nwhere $\\mathbb{E}_{x \\sim p_{r}(x)} [\\log D(x)]$ has no impact on $G$ during gradient descent updates.\n\n## What is the Optimal Value for D?\n\nNow we have a well-defined loss function. Let's first examine what is the best value for $D$.\n\n$$L(G, D) = \\int_x \\bigg( p_{r}(x) \\log(D(x)) + p_g (x) \\log(1 - D(x)) \\bigg) dx$$\n\nSince we are interested in what is the best value of $D(x)$ to maximize $L(G, D)$, let us label\n\n$$\\tilde{x} = D(x), \nA=p_{r}(x), \nB=p_g(x)$$\n\nAnd then what is inside the integral (we can safely ignore the integral because $x$ is sampled over all the possible values) is:\n\n$$\\begin{aligned}\nf(\\tilde{x}) \n& = A log\\tilde{x} + B log(1-\\tilde{x}) \\\\\n\\frac{d f(\\tilde{x})}{d \\tilde{x}}\n& = A \\frac{1}{ln10} \\frac{1}{\\tilde{x}} - B \\frac{1}{ln10} \\frac{1}{1 - \\tilde{x}} \\\\\n& = \\frac{1}{ln10} (\\frac{A}{\\tilde{x}} - \\frac{B}{1-\\tilde{x}}) \\\\\n& = \\frac{1}{ln10} \\frac{A - (A + B)\\tilde{x}}{\\tilde{x} (1 - \\tilde{x})} \\\\\n\\end{aligned}$$\n\nThus, set $\\frac{d f(\\tilde{x})}{d \\tilde{x}} = 0$, we get the best value of the discriminator: $D^*(x) = \\tilde{x}^* = \\frac{A}{A + B} = \\frac{p_{r}(x)}{p_{r}(x) + p_g(x)} \\in [0, 1]$. Once the generator is trained to its optimal, $p_g$ gets very close to $p_{r}$. When $p_g = p_{r}$, $D^*(x)$ becomes $1\/2$.\n\n## What is the Global Optimal? \n\nWhen both $G$ and $D$ are at their optimal values, we have $p_g = p_{r}$ and $D^*(x) = 1\/2$ and the loss function becomes:\n\n$$\\begin{aligned}\nL(G, D^*) \n&= \\int_x \\bigg( p_{r}(x) \\log(D^*(x)) + p_g (x) \\log(1 - D^*(x)) \\bigg) dx \\\\\n&= \\log \\frac{1}{2} \\int_x p_{r}(x) dx + \\log \\frac{1}{2} \\int_x p_g(x) dx \\\\\n&= -2\\log2\n\\end{aligned}$$\n\n## What does the Loss Function Represent?\n\nAccording to the formula listed in Sec.\u00a0, JS divergence between $p_{r}$ and $p_g$ can be computed as:\n\n$$\\begin{aligned}\nD_{JS}(p_{r} \\| p_g) \n=& \\frac{1}{2} D_{KL}(p_{r} || \\frac{p_{r} + p_g}{2}) + \\frac{1}{2} D_{KL}(p_{g} || \\frac{p_{r} + p_g}{2}) \\\\\n=& \\frac{1}{2} \\bigg( \\log2 + \\int_x p_{r}(x) \\log \\frac{p_{r}(x)}{p_{r} + p_g(x)} dx \\bigg) + \\\\& \\frac{1}{2} \\bigg( \\log2 + \\int_x p_g(x) \\log \\frac{p_g(x)}{p_{r} + p_g(x)} dx \\bigg) \\\\\n=& \\frac{1}{2} \\bigg( \\log4 + L(G, D^*) \\bigg)\n\\end{aligned}$$\n\nThus,\n\n$$L(G, D^*) = 2D_{JS}(p_{r} \\| p_g) - 2\\log2$$\n\nEssentially the loss function of GAN quantifies the similarity between the generative data distribution $p_g$ and the real sample distribution $p_{r}$ by JS divergence when the discriminator is optimal. 
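\n\nAs a quick numerical sanity check of this identity, here is a minimal sketch in Python (assuming NumPy; the two discrete toy distributions standing in for $p_{r}$ and $p_g$ are hypothetical values chosen purely for illustration). Evaluating the GAN loss at the optimal discriminator $D^*(x) = \\frac{p_{r}(x)}{p_{r}(x) + p_g(x)}$ reproduces $2D_{JS}(p_{r} \\| p_g) - 2\\log2$:\n\n```python\nimport numpy as np\n\ndef kl(p, q):\n    # KL divergence between two discrete distributions (natural log)\n    return np.sum(p * np.log(p \/ q))\n\ndef js(p, q):\n    # JS divergence built from two KL terms against the mixture m\n    m = 0.5 * (p + q)\n    return 0.5 * kl(p, m) + 0.5 * kl(q, m)\n\n# hypothetical discrete stand-ins for p_r and p_g over four outcomes\np_r = np.array([0.1, 0.4, 0.3, 0.2])\np_g = np.array([0.25, 0.25, 0.25, 0.25])\n\n# optimal discriminator D*(x) = p_r(x) \/ (p_r(x) + p_g(x))\nd_star = p_r \/ (p_r + p_g)\n\n# GAN loss evaluated at D*\nloss = np.sum(p_r * np.log(d_star) + p_g * np.log(1.0 - d_star))\n\nprint(loss, 2 * js(p_r, p_g) - 2 * np.log(2))  # the two numbers agree\n```\n\nSetting `p_r` equal to `p_g` in the same script returns $-2\\log2 \\approx -1.386$, consistent with the global optimum derived above.\n\n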
The best $G^*$ that replicates the real data distribution leads to the minimum $L(G^*, D^*) = -2\\log2$, which is consistent with the equations above.\n\n**Other Variations of GAN**: There are many variations of GANs in different contexts or designed for different tasks. For example, for semi-supervised learning, one idea is to update the discriminator to output real class labels, $1, \\dots, K-1$, as well as one fake class label $K$. The generator model aims to trick the discriminator into outputting a classification label smaller than $K$.\n\n# Problems in GANs\n\nAlthough GANs have shown great success in realistic image generation, training is not easy; the process is known to be slow and unstable.\n\n## Hard to Achieve Nash Equilibrium\n\nThe problem with GAN's gradient-descent-based training procedure has been discussed in the literature. Two models are trained simultaneously to find a Nash equilibrium of a two-player non-cooperative game. However, each model updates its cost independently, with no regard to the other player in the game. Updating the gradients of both models concurrently cannot guarantee convergence.\n\nLet's check out a simple example to better understand why it is difficult to find a Nash equilibrium in a non-cooperative game. Suppose one player takes control of $x$ to minimize $f_1(x) = xy$, while at the same time the other player constantly updates $y$ to minimize $f_2(y) = -xy$.\n\nBecause $\\frac{\\partial f_1}{\\partial x} = y$ and $\\frac{\\partial f_2}{\\partial y} = -x$, we update $x$ with $x-\\eta \\cdot y$ and $y$ with $y+ \\eta \\cdot x$ simultaneously in one iteration, where $\\eta$ is the learning rate. Once $x$ and $y$ have different signs, every following gradient update causes huge oscillation and the instability gets worse over time, as shown in Fig. 3 and sketched numerically below.\n\n## Low Dimensional Supports\n\n| **Term** | **Explanation** |\n|:--:|:---|\n| Manifold | A topological space that locally resembles Euclidean space near each point. Precisely, when this Euclidean space is of dimension $n$, the manifold is referred to as an $n$-manifold. |\n| Support | The support of a real-valued function $f$ is the subset of the domain containing those elements which are not mapped to zero. |\n\nThe problem of the supports of $p_r$ and $p_g$ lying on low dimensional manifolds, and how this contributes to the instability of GAN training, has been discussed thoroughly in the literature.\n\nThe dimensions of many real-world datasets, as represented by $p_r$, only appear to be *artificially high*. They have been found to concentrate in a lower dimensional manifold. This is actually the fundamental assumption of *Manifold Learning*. Think of real-world images: once the theme or the contained object is fixed, the images have many restrictions to follow, e.g., a dog should have two ears and a tail, and a skyscraper should have a straight and tall body. These restrictions keep images away from having a high-dimensional free form.\n\n$p_g$ lies in a low dimensional manifold, too. Whenever the generator is asked to produce a much larger image, say 64x64, from a low-dimensional noise variable input $z$, say of dimension 100, the distribution of colors over these 4096 pixels is defined by the small 100-dimensional random number vector and can hardly fill up the whole high dimensional space.\n\nBecause both $p_g$ and $p_r$ rest in low dimensional manifolds, they are almost certainly disjoint (see Fig.\u00a0). 
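\n\nBefore moving on, here is the numerical sketch of the toy two-player game promised above (illustrative code added here; the learning rate and starting point are arbitrary choices): simultaneous gradient updates on $f_1(x) = xy$ and $f_2(y) = -xy$ spiral outwards instead of settling at the equilibrium $(0, 0)$.\n\n```python\nimport numpy as np\n\neta = 0.1          # learning rate (arbitrary choice)\nx, y = 1.0, 1.0    # arbitrary starting point\nradii = []\nfor step in range(200):\n    # Simultaneous descent steps: x tries to minimize f1 = x*y, y tries to minimize f2 = -x*y.\n    grad_x, grad_y = y, -x\n    x, y = x - eta * grad_x, y - eta * grad_y\n    radii.append(np.hypot(x, y))\n\nprint(radii[0], radii[99], radii[199])  # the distance from (0, 0) keeps growing\n```\n\nEach simultaneous update multiplies the distance from the origin by $\\sqrt{1 + \\eta^2} > 1$, so the oscillation amplitude grows without bound. With that aside, recall the point of this subsection: both $p_r$ and $p_g$ concentrate on thin, low dimensional manifolds inside a much higher dimensional ambient space.\n\n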
When they have disjoint supports, we are always capable of finding a perfect discriminator that separates real and fake samples 100% correctly.\n\n## Vanishing Gradient\n\nWhen the discriminator is perfect, we have $D(x) = 1, \\forall x \\in p_r$ and $D(x) = 0, \\forall x \\in p_g$. Therefore the loss function $L$ falls to zero and we end up with no gradient to update the generator during learning iterations. Fig. 5 demonstrates an experiment in which, as the discriminator gets better, the gradient vanishes fast.\n\nAs a result, training a GAN faces a dilemma:\n\n- If the discriminator behaves badly, the generator does not have accurate feedback and the loss function does not represent reality.\n\n- If the discriminator does a great job, the gradient of the loss function drops to nearly zero and the learning becomes very slow or even stalls.\n\nThis dilemma clearly can make GAN training very tough.\n\n## Mode Collapse\n\nDuring training, the generator may collapse to a setting where it always produces the same outputs. This is a common failure case for GANs, referred to as *Mode Collapse*. Even though the generator might be able to trick the corresponding discriminator, it fails to learn to represent the complex real-world data distribution and gets stuck in a small space with extremely low variety.\n\n## Lack of a Proper Evaluation Metric\n\nGenerative adversarial networks are not born with a good objective function that can inform us of the training progress. Without a good evaluation metric, it is like working in the dark: there is no good sign to tell when to stop, and no good indicator to compare the performance of multiple models.\n\n# Improved GAN Training\n\nThe following suggestions are proposed to help stabilize and improve the training of GANs.\n\nThe first five methods are practical techniques to achieve faster convergence of GAN training. The last two are proposed to solve the problem of disjoint distributions.\n\n\\(1\\) **Feature Matching**\n\nFeature matching suggests optimizing the discriminator to inspect whether the generator's output matches the expected statistics of the real samples. In such a scenario, the new loss function is defined as $\\| \\mathbb{E}_{x \\sim p_r} f(x) - \\mathbb{E}_{z \\sim p_z(z)}f(G(z)) \\|_2^2$, where $f(x)$ can be any computation of statistics of features, such as mean or median.\n\n\\(2\\) **Minibatch Discrimination**\n\nWith minibatch discrimination, the discriminator is able to digest the relationship between training data points in one batch, instead of processing each point independently.\n\nIn one minibatch, we approximate the closeness between every pair of samples, $c(x_i, x_j)$, and get the overall summary of one data point by summing up how close it is to other samples in the same batch, $o(x_i) = \\sum_{j} c(x_i, x_j)$. Then $o(x_i)$ is explicitly added to the input of the model.\n\n\\(3\\) **Historical Averaging**\n\nFor both models, add $\\| \\Theta - \\frac{1}{t} \\sum_{i=1}^t \\Theta_i \\|^2$ to the loss function, where $\\Theta$ is the model parameter and $\\Theta_i$ is the parameter configuration at past training time $i$. This added term penalizes $\\Theta$ changing too dramatically over time.\n\n\\(4\\) **One-sided Label Smoothing**\n\nWhen feeding the discriminator, instead of providing 1 and 0 labels, use softened values such as 0.9 and 0.1, as sketched below. 
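\n\nAs a minimal illustration of what this looks like in practice (a sketch added here, not code from the technique's authors; note that in the strictly \"one-sided\" variant only the real-sample targets are softened, while the fake targets stay at 0), the discriminator's binary cross-entropy targets are simply built with the softened value:\n\n```python\nimport numpy as np\n\ndef bce(pred, target, eps=1e-12):\n    # Plain binary cross-entropy, averaged over the batch.\n    pred = np.clip(pred, eps, 1 - eps)\n    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))\n\nd_real = np.array([0.95, 0.80, 0.99])  # D's outputs on a batch of real samples\nd_fake = np.array([0.10, 0.30, 0.05])  # D's outputs on a batch of generated samples\n\nreal_targets = np.full_like(d_real, 0.9)  # smoothed, instead of hard 1.0\nfake_targets = np.zeros_like(d_fake)      # one-sided: fake targets remain 0\n\nd_loss = bce(d_real, real_targets) + bce(d_fake, fake_targets)\nprint(d_loss)\n```\n\n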
This smoothing has been shown to reduce the networks' vulnerability.\n\n\\(5\\) **Virtual Batch Normalization (VBN)**\n\nEach data sample is normalized based on a fixed batch (*\"reference batch\"*) of data rather than within its minibatch. The reference batch is chosen once at the beginning and stays the same through the training.\n\n\\(6\\) **Adding Noise**\n\nBased on the discussion in the sections above, we now know that $p_r$ and $p_g$ are disjoint in a high dimensional space, which causes the problem of vanishing gradients. To artificially \"spread out\" the distributions and to create higher chances for the two probability distributions to overlap, one solution is to add continuous noise to the inputs of the discriminator $D$.\n\n\\(7\\) **Use a Better Metric of Distribution Similarity**\n\nThe loss function of the vanilla GAN measures the JS divergence between the distributions $p_r$ and $p_g$. This metric fails to provide a meaningful value when the two distributions are disjoint.\n\nThe Wasserstein metric has been proposed to replace JS divergence because it has a much smoother value space. See more in the next section.\n\n# Wasserstein GAN (WGAN)\n\n## What is Wasserstein Distance?\n\n*Wasserstein Distance* is a measure of the distance between two probability distributions. It is also called *Earth Mover's distance* (EM distance for short), because informally it can be interpreted as the minimum energy cost of moving and transforming a pile of dirt in the shape of one probability distribution to the shape of the other distribution. The cost is quantified as the amount of dirt moved times the moving distance.\n\nLet us first look at a simple case where the probability domain is discrete. For example, suppose we have two distributions $P$ and $Q$, each with four piles of dirt and ten shovelfuls of dirt in total. The numbers of shovelfuls in each dirt pile are assigned as follows:\n\n$$P_1 = 3, P_2 = 2, P_3 = 1, P_4 = 4\\\\\nQ_1 = 1, Q_2 = 2, Q_3 = 4, Q_4 = 3$$\n\nIn order to change $P$ to look like $Q$, as illustrated in Fig.\u00a0, we:\n\n- First move 2 shovelfuls from $P_1$ to $P_2$ =\\> $(P_1, Q_1)$ match up.\n\n- Then move 2 shovelfuls from $P_2$ to $P_3$ =\\> $(P_2, Q_2)$ match up.\n\n- Finally move 1 shovelful from $Q_3$ to $Q_4$ =\\> $(P_3, Q_3)$ and $(P_4, Q_4)$ match up.\n\nIf we label the cost of making $P_i$ and $Q_i$ match as $\\delta_i$, we have $\\delta_{i+1} = \\delta_i + P_i - Q_i$, and in the example:\n\n$$\\begin{aligned}\n\\delta_0 &= 0\\\\\n\\delta_1 &= 0 + 3 - 1 = 2\\\\\n\\delta_2 &= 2 + 2 - 2 = 2\\\\\n\\delta_3 &= 2 + 1 - 4 = -1\\\\\n\\delta_4 &= -1 + 4 - 3 = 0\n\\end{aligned}$$\n\nFinally the Earth Mover's distance is $W = \\sum \\vert \\delta_i \\vert = 5$.\n\nWhen dealing with a continuous probability domain, the distance formula becomes:\n\n$$W(p_r, p_g) = \\inf_{\\gamma \\sim \\Pi(p_r, p_g)} \\mathbb{E}_{(x, y) \\sim \\gamma}[\\| x-y \\|]$$\n\nIn the formula above, $\\Pi(p_r, p_g)$ is the set of all possible joint probability distributions between $p_r$ and $p_g$. One joint distribution $\\gamma \\in \\Pi(p_r, p_g)$ describes one dirt transport plan, just like the discrete example above, but in the continuous probability space. Precisely, $\\gamma(x, y)$ states the fraction of dirt that should be transported from point $x$ to point $y$ so as to make $x$ follow the same probability distribution as $y$. 
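\n\nAs a quick sanity check of the discrete example above (a tiny script added here; the pile values are exactly the ones from the example), the recursion $\\delta_{i+1} = \\delta_i + P_i - Q_i$ can be evaluated directly:\n\n```python\nP = [3, 2, 1, 4]\nQ = [1, 2, 4, 3]\n\ndelta = [0]\nfor p_i, q_i in zip(P, Q):\n    # delta_{i+1} = delta_i + P_i - Q_i\n    delta.append(delta[-1] + p_i - q_i)\n\nem_distance = sum(abs(d) for d in delta)\nprint(delta)        # [0, 2, 2, -1, 0]\nprint(em_distance)  # 5, matching W above\n```\n\nIn the continuous case, the transport plan $\\gamma(x, y)$ plays exactly the role of these moves: it specifies how much mass travels from $x$ to $y$ so that, after all moves, the mass at each $y$ matches $p_g$.\n\n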
That's why marginalizing $\\gamma$ over $x$ yields $p_g$, $\\sum_{x} \\gamma(x, y) = p_g(y)$ (once we finish moving the planned amount of dirt from every possible $x$ to the target $y$, we end up with exactly what $y$ has according to $p_g$), and vice versa $\\sum_{y} \\gamma(x, y) = p_r(x)$.\n\nWhen treating $x$ as the starting point and $y$ as the destination, the total amount of dirt moved is $\\gamma(x, y)$ and the traveling distance is $\\| x-y \\|$, and thus the cost is $\\gamma(x, y) \\cdot \\| x-y \\|$. The expected cost averaged across all the $(x,y)$ pairs can be easily computed as:\n\n$$\\sum_{x, y} \\gamma(x, y) \\| x-y \\| \n= \\mathbb{E}_{x, y \\sim \\gamma} \\| x-y \\|$$\n\nFinally, we take the minimum among the costs of all dirt-moving plans as the EM distance. In the definition of Wasserstein distance, the $\\inf$ (infimum, also known as the *greatest lower bound*) indicates that we are only interested in the smallest cost.\n\n## Why is Wasserstein Better than JS or KL Divergence?\n\nEven when two distributions are located on lower dimensional manifolds without overlaps, Wasserstein distance can still provide a meaningful and smooth representation of the distance in between.\n\nThe WGAN paper exemplified the idea with a simple example.\n\nSuppose we have two probability distributions, $P$ and $Q$:\n\n$$\\forall (x, y) \\in P, x = 0 \\text{ and } y \\sim U(0, 1)\\\\\n\\forall (x, y) \\in Q, x = \\theta, 0 \\leq \\theta \\leq 1 \\text{ and } y \\sim U(0, 1)\\\\\n$$\n\nWhen $\\theta \\neq 0$:\n\n$$\\begin{aligned}\nD_{KL}(P \\| Q) &= \\sum_{x=0, y \\sim U(0, 1)} 1 \\cdot \\log\\frac{1}{0} = +\\infty \\\\\nD_{KL}(Q \\| P) &= \\sum_{x=\\theta, y \\sim U(0, 1)} 1 \\cdot \\log\\frac{1}{0} = +\\infty \\\\\nD_{JS}(P, Q) &= \\frac{1}{2}(\\sum_{x=0, y \\sim U(0, 1)} 1 \\cdot \\log\\frac{1}{1\/2} + \\sum_{x=\\theta, y \\sim U(0, 1)} 1 \\cdot \\log\\frac{1}{1\/2}) = \\log 2\\\\\nW(P, Q) &= |\\theta|\n\\end{aligned}$$\n\nBut when $\\theta = 0$, the two distributions fully overlap:\n\n$$\\begin{aligned}\nD_{KL}(P \\| Q) &= D_{KL}(Q \\| P) = D_{JS}(P, Q) = 0\\\\\nW(P, Q) &= 0 = \\lvert \\theta \\rvert\n\\end{aligned}$$\n\n$D_{KL}$ gives us infinity when the two distributions are disjoint. The value of $D_{JS}$ has a sudden jump and is not differentiable at $\\theta = 0$. Only the Wasserstein metric provides a smooth measure, which is very helpful for a stable learning process using gradient descent.\n\n## Use Wasserstein Distance as GAN Loss Function\n\nIt is intractable to exhaust all the possible joint distributions in $\\Pi(p_r, p_g)$ to compute $\\inf_{\\gamma \\sim \\Pi(p_r, p_g)}$. Thus the authors proposed a smart transformation of the formula based on the Kantorovich-Rubinstein duality:\n\n$$W(p_r, p_g) = \\frac{1}{K} \\sup_{\\| f \\|_L \\leq K} \\mathbb{E}_{x \\sim p_r}[f(x)] - \\mathbb{E}_{x \\sim p_g}[f(x)]$$\n\nwhere $\\sup$ (supremum) is the opposite of $\\inf$ (infimum); we want to measure the least upper bound or, in even simpler words, the maximum value.\n\n### Lipschitz Continuity\n\nThe function $f$ in the new form of the Wasserstein metric is required to satisfy $\\| f \\|_L \\leq K$, meaning it should be *K-Lipschitz continuous*.\n\nA real-valued function $f: \\mathbb{R} \\rightarrow \\mathbb{R}$ is called $K$-Lipschitz continuous if there exists a real constant $K \\geq 0$ such that, for all $x_1, x_2 \\in \\mathbb{R}$,\n\n$$\\lvert f(x_1) - f(x_2) \\rvert \\leq K \\lvert x_1 - x_2 \\rvert$$\n\nHere $K$ is known as a Lipschitz constant for the function $f(\\cdot)$. 
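\n\nAs a small illustration (a toy check added here, with arbitrarily chosen functions), the smallest such $K$ can be estimated numerically by sampling many pairs and taking the largest difference quotient:\n\n```python\nimport numpy as np\n\nrng = np.random.default_rng(0)\nx1 = rng.uniform(-5, 5, 100000)\nx2 = rng.uniform(-5, 5, 100000)\nmask = x1 != x2   # avoid division by zero for identical draws\n\nfor name, f in [(\"sin(x)\", np.sin), (\"|x|\", np.abs)]:\n    # Largest observed |f(x1) - f(x2)| \/ |x1 - x2| approximates the Lipschitz constant.\n    ratio = np.abs(f(x1[mask]) - f(x2[mask])) \/ np.abs(x1[mask] - x2[mask])\n    print(name, ratio.max())  # both values come out close to 1\n```\n\n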
Functions that are everywhere continuously differentiable with a bounded derivative are Lipschitz continuous, because the difference quotient $\\frac{\\lvert f(x_1) - f(x_2) \\rvert}{\\lvert x_1 - x_2 \\rvert}$ is then bounded. However, a Lipschitz continuous function may not be everywhere differentiable, such as $f(x) = \\lvert x \\rvert$.\n\nExplaining how the transformation happens on the Wasserstein distance formula is worthy of a long post by itself, so I skip the details here. If you are interested in how to compute the Wasserstein metric using linear programming, or how to transform the Wasserstein metric into its dual form according to the Kantorovich-Rubinstein duality, read this awesome [post](https:\/\/vincentherrmann.github.io\/blog\/wasserstein\/).\n\n### Wasserstein Loss Function\n\nSuppose this function $f$ comes from a family of $K$-Lipschitz continuous functions, $\\{ f_w \\}_{w \\in W}$, parameterized by $w$. In the modified Wasserstein-GAN, the \"discriminator\" model is used to learn $w$ to find a good $f_w$, and the loss function is configured to measure the Wasserstein distance between $p_r$ and $p_g$.\n\n$$L(p_r, p_g) = W(p_r, p_g) = \\max_{w \\in W} \\mathbb{E}_{x \\sim p_r}[f_w(x)] - \\mathbb{E}_{z \\sim p_z(z)}[f_w(g_\\theta(z))]$$\n\nThus the \"discriminator\" is no longer a direct critic telling the fake samples apart from the real ones. Instead, it is trained to learn a $K$-Lipschitz continuous function that helps compute the Wasserstein distance. As the loss function decreases during training, the Wasserstein distance gets smaller and the generator model's output grows closer to the real data distribution.\n\nOne big problem is maintaining the $K$-Lipschitz continuity of $f_w$ during the training in order to make everything work out. The paper presents a simple but very practical trick: after every gradient update, clamp the weights $w$ to a small window, such as $[-0.01, 0.01]$, resulting in a compact parameter space $W$; thus $f_w$ obtains lower and upper bounds that preserve the Lipschitz continuity.\n\nCompared to the original GAN algorithm, the WGAN makes the following changes:\n\n- After every gradient update on the critic function, clamp the weights to a small fixed range, $[-c, c]$.\n\n- Use a new loss function derived from the Wasserstein distance, with no logarithm anymore. The \"discriminator\" model does not act as a direct critic but as a helper for estimating the Wasserstein metric between the real and generated data distributions.\n\n- Empirically, the authors recommended the RMSProp optimizer for the critic, rather than a momentum-based optimizer such as Adam, which could cause instability in the model training. I haven't seen a clear theoretical explanation on this point though.\n\nSadly, Wasserstein GAN is not perfect. Even the authors of the original WGAN paper mentioned that *\"Weight clipping is a clearly terrible way to enforce a Lipschitz constraint\"*. 
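\n\nFor concreteness, the criticized clipping step sits inside the critic's update loop roughly as follows (a schematic sketch added here, with a toy linear critic and made-up data, not the authors' implementation; the values $c = 0.01$ and a learning rate of $5 \\cdot 10^{-5}$ follow the defaults suggested in the paper):\n\n```python\nimport numpy as np\n\nrng = np.random.default_rng(0)\nc, lr = 0.01, 5e-5                 # clipping window and learning rate\nw = rng.normal(scale=0.1, size=2)  # toy linear critic f_w(x) = w . x\n\nfor step in range(1000):\n    real = rng.normal(loc=1.0, size=(64, 2))   # stand-in for samples from p_r\n    fake = rng.normal(loc=-1.0, size=(64, 2))  # stand-in for generator samples\n    # Gradient ascent on E[f_w(real)] - E[f_w(fake)]; for a linear critic the\n    # gradient w.r.t. w is simply the difference of the batch means.\n    grad_w = real.mean(axis=0) - fake.mean(axis=0)\n    w = w + lr * grad_w\n    # Weight clipping: keep w inside [-c, c] to (crudely) enforce the Lipschitz constraint.\n    w = np.clip(w, -c, c)\n\nprint(w)  # both coordinates end up pinned at the clipping boundary\n```\n\nIn a real WGAN the critic is a neural network and the update uses RMSProp, but the clip-after-every-update pattern is the same.\n\n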
WGAN still suffers from unstable training, slow convergence after weight clipping (when clipping window is too large), and vanishing gradients (when clipping window is too small).\n\nSome improvement, precisely replacing weight clipping with *gradient penalty*, has been discussed in\u00a0.","meta":{"dup_signals":{"dup_doc_count":12,"dup_dump_count":2,"dup_details":{"curated_sources":2,"unknown":10}},"filename":"out\/1904.08994_extract_gan.tex.md"},"subset":"arxiv"} +{"text":"title: The Karlskrona manifesto for sustainability design\n\n**Introduction** *Version 1.0, May 2015* \nAs software practitioners and researchers, we are part of the group of people who design the software systems that run our world. Our work has made us increasingly aware of the impact of these systems and the responsibility that comes with our role, at a time when information and communication technologies are shaping the future. We struggle to reconcile our concern for planet Earth and its societies with the work that we do. Through this work we have come to understand that we need to redefine the narrative on sustainability and the role it plays in our profession. \nWhat is sustainability, really? We often define it too narrowly. Sustainability is at its heart a systemic concept and has to be understood on a set of dimensions, including social, individual, environmental, economic, and technical. \nSustainability is fundamental to our society. The current state of our world is unsustainable in more ways that we often recognize. Technology is part of the dilemma and part of possible responses. We often talk about the immediate impact of technology, but rarely acknowledge its indirect and systemic effects. These effects play out across all dimensions of sustainability over the short, medium and long term. \nSoftware in particular plays a central role in sustainability. It can push us towards growing consumption of resources, growing inequality in society, and lack of individual self-worth. But it can also create communities and enable thriving of individual freedom, democratic processes, and resource conservation. As designers of software technology, we are responsible for the long-term consequences of our designs. Design is the process of understanding the world and articulating an alternative conception on how it should be shaped, according to the designer's intentions. Through design, we cause change and shape our environment. If we don't take sustainability into account when designing, no matter in which domain and for what purpose, we miss the opportunity to cause positive change. \n**We recognize that** there is a rapidly increasing awareness of the fundamental need and desire for a more sustainable world, and a lot of genuine desire and goodwill - but this alone can be ineffective unless we come to understand that: \n\nAs a result, even though the importance of sustainability is increasingly recognized, many software systems are unsustainable, and the broader impacts of most software systems on sustainability are unknown. \n**Thus, we propose the following initial set of principles and commitments:** \n**Sustainability is systemic.** Sustainability is never an isolated property. Systems thinking has to be the starting point for the transdisciplinary common ground of sustainability. \n**Sustainability has multiple dimensions.** We have to include those dimensions into our analysis if we are to understand the nature of sustainability in any given situation. 
\n**Sustainability transcends multiple disciplines.** Working in sustainability means working with people from across many disciplines, addressing the challenges from multiple perspectives. \n**Sustainability is a concern independent of the purpose of the system.** Sustainability has to be considered even if the primary focus of the system under design is not sustainability. \n**Sustainability applies to both a system and its wider contexts** There are at least two spheres to consider in system design: the sustainability of the system itself and how it affects sustainability of the wider system of which it will be part. \n**Sustainability requires action on multiple levels.** Some interventions have more leverage on a system than others. Whenever we take action towards sustainability, we should consider opportunity costs: action at other levels may offer more effective forms of intervention. \n**System visibility is a necessary precondition and enabler for sustainability design.** The status of the system and its context should be visible at different levels of abstraction and perspectives to enable participation and informed responsible choice. \n**Sustainability requires long-term thinking.** We should assess benefits and impacts on multiple timescales, and include longer-term indicators in assessment and decisions. \n**It is possible to meet the needs of future generations without sacrificing the prosperity of the current generation.** Innovation in sustainability can play out as decoupling present and future needs. By moving away from the language of conflict and the trade-off mindset, we can identify and enact choices that benefit both present and future. \nSustainability design in the context of software systems is the process of designing systems with sustainability as a primary concern, based on a commitment to these principles.\n\n**So what now? How do we start?** \nEach of the following stakeholders can do something right now to get started. \n**Software practitioners**: Try to identify effects of your project on technical, economic, environmental sustainability. Start asking questions about how to incorporate the principles into daily practice. Think about the social and individual dimensions. Talk about it with your colleagues. \n**Researchers**: Identify one research question in your field that can help us to better understand sustainability design. Discuss it with your peers and think about how sustainability impacts your research area. \n**Professional associations**: Revise code of ethics and practice to incorporate principles and explicitly acknowledge the need to consider sustainability as part of professional practice. \n**Educators**: Integrate sustainability design in curricula for software engineering and other disciplines and articulate competencies required for successful sustainability design. \n**Customers**: Put the concern on the table. Demand it in the next project. 
\n**Users**: Demand that the products you use demonstrate that their designers have considered all dimensions of sustainability.\n\n**Signed,**\n\n*Christoph Becker*, University of Toronto & Vienna University of Technology\n\n*Ruzanna Chitchyan*, University of Leicester\n\n*Leticia Duboc*, State University of Rio de Janeiro\n\n*Steve Easterbrook*, University of Toronto\n\n*Martin Mahaux*, University of Namur\n\n*Birgit Penzenstadler*, California State University Long Beach\n\n*Guillermo Rodriguez-Navas*, Malardalen University\n\n*Camille Salinesi*, Universite Paris 1\n\n*Norbert Seyff*, University of Zurich\n\n*Colin C. Venters*, University of Huddersfield\n\n*Coral Calero*, University of Castilla-La Mancha\n\n*Sedef Akinli Kocak*, Ryerson University\n\n*Stefanie Betz*, Karlsruhe Institute of Technology","meta":{"dup_signals":{"dup_doc_count":75,"dup_dump_count":56,"dup_details":{"curated_sources":2,"2023-14":1,"2023-06":1,"2022-49":1,"2022-40":1,"2022-33":1,"2022-21":1,"2021-49":1,"2021-39":1,"2021-31":1,"2021-25":1,"2021-21":1,"2021-17":1,"2021-10":2,"2021-04":1,"2020-45":1,"2020-34":2,"2020-29":1,"2020-24":1,"2020-16":1,"2020-10":1,"2020-05":1,"2019-51":1,"2019-47":1,"2019-43":1,"2019-39":1,"2019-35":1,"2019-30":2,"2019-26":2,"2019-18":2,"2019-13":1,"2019-09":2,"2019-04":1,"2018-51":2,"2018-47":1,"2018-43":1,"2018-39":2,"2018-34":1,"2018-30":2,"2018-26":1,"2018-22":2,"2018-17":1,"2018-13":1,"2018-09":2,"2018-05":1,"2017-51":2,"2017-43":2,"2017-39":1,"2017-34":2,"2017-26":2,"2017-17":2,"2023-50":1,"2024-18":1,"2024-10":2,"2017-13":2,"2024-26":1}},"filename":"out\/1410.6968.tex.md"},"subset":"arxiv"} +{"text":"abstract: The experimental conditions by which electromagnetic signals (EMS) of low frequency can be emitted by diluted aqueous solutions of some bacterial and viral DNAs are described. That the recorded EMS and nanostructures induced in water carry the DNA information (sequence) is shown by retrieval of that same DNA by classical PCR amplification using the TAQ polymerase, including both primers and nucleotides. Moreover, such a transduction process has also been observed in living human cells exposed to EMS irradiation. These experiments suggest that coherent long range molecular interaction must be at work in water so to allow the observed features. The quantum field theory analysis of the phenomenon is presented.\nauthor: Luc Montagnier${}^{a,b}$; Emilio Del Giudice${}^{c,}$[^1]; Jamal A\u00efssa${}^{b}$; Claude Lavallee${}^{a}$; Steven Motschwiller${}^{d}$; Antonio Capolupo${}^{e,f}$; Albino Polcari${}^{g}$; Paola Romano${}^{g,h}$; Alberto Tedeschi${}^{i}$; Giuseppe Vitiello${}^{e,f}$\ntitle: Transduction of DNA information through water and electromagnetic waves\n\n# INTRODUCTION\n\nThis paper is an overview of what we have achieved during the past ten years in this new field of Biology: the role of water memory and electromagnetic waves in biological processes, including pathological conditions. The reported data is not only of theoretical interest, but leads to many medical applications.\n\nThis work could not have been done and analyzed without the constant interaction of biologists and physicists. 
The quantum field theoretical analysis of the phenomenon points to the crucial role played by coherent molecular dynamics.\n\n# ELECTROMAGNETIC SIGNALING OF DNA\n\n## The detection of electromagnetic signals (EMS)\n\nOn 13 July 2005 (the eve of Bastille Day in France), by using a device previously designed by the Jacques Benveniste team to detect electromagnetic signals in water dilutions of biologically active compounds, and with the help of one of his former collaborators, Dr. Jamal A\u00efssa, two of us (LM, JA) observed for the first time an increase of amplitude and frequency of the recorded electric signals emitted by some high dilutions of filtrates of bacteria (Mycoplasma pirum, then Escherichia coli). This was the beginning of an extensive investigation on the role and the molecular origin of this new phenomenon (Montagnier, A\u00efssa, Ferris, et al., 2009; Montagnier, A\u00efssa, Lavallee, et al., 2009; Montagnier, A\u00efssa, Del Giudice, et al., 2011).\n\nWe soon discovered that DNA was the main source of the initiation of electromagnetic signals in water. In contrast to fresh preparations of biological fluids (blood plasma, culture media), which lose their capacity of inducing EMS in water upon freezing, DNA extraction could be done from frozen material without losing its EMS capacity.\n\nIn fact, as we will see below, this property of some bacterial and viral DNA sequences of emitting EMS is like an indelible tag, and is faithfully transmitted to water structures. Bacterial species with pathological potential cultured in standard growth media yield DNA with EMS capacity. However, we noticed that one apathogenic strain of E. coli used for DNA cloning lacks this capacity, as does a probiotic bacterium (Lactobacillus).\n\nThe size of DNA fragments emitting EMS ranges from 104 base pairs (LTR fragment of HIV) to several kilobases (adhesin gene of Mycoplasma pirum: 1.5 - 3 kbp). Some PCR (Polymerase Chain Reaction) amplicons[^2], of 400 - 500 bp, have been found to be good emitters for the transduction experiments (see below). For the capture of EMS in water dilutions, the conditions are very strict, and are the same for the extracted DNA as for the fresh unfrozen samples of plasma or of culture medium:\n\nFiltration\n: through 450 nm and then 100 nm Millipore filters, for detection of EMS having a bacterial origin; or 450 nm then 20 nm (Whatman anotop) for EMS of viral origin (only tested for some small virus DNA and HIV DNA). The usual starting concentration is 2 ng of DNA\/1 ml diluted 100 times (10 mls) for filtration.\n\nSerial dilutions\n: several decimal dilutions are made in conic plastic tubes (Eppendorf), usually 0.1 ml\/0.9 ml of water, under a laminar flow hood (Fig. 1). Strong vortex shaking (for 15 seconds) is made at each dilution at room temperature. Water is purchased from commercial firms (usually 5 prime water, DNase - RNase free from Sigma). Usually, the EMS-emitting dilutions are between $10^{-7}$ and $10^{-13}$ for bacteria, $10^{-6}$ and $10^{-10}$ for small viruses (such as HIV1). The first lower dilutions are apparently \"silent\", not emitting detectable signals.\n\nElectromagnetic excitation\n: As the EMS are produced by resonance (Montagnier, A\u00efssa, Del Giudice, et al., 2011), we use either an artificial excitation (7 Hz, which was found to be the minimally active frequency) in a closed environment shielded by mu-metal, or, in open air, exposure to the background ambient noise. 
Usually, this background noise predominates in the 50 - 300 Hz range, so that the positive EMS, which are in the range of 500 - 3000 Hz, are easily detected.\n\nIn the measurement room, cell phones should be turned off (battery removed) as some phones are regulated by low frequency signals.\n\n## Evidence that EMS emission depends on specific modification of the DNA molecule\n\nThe diversity of the DNA sequences emitting EMS does not indicate any clue as to attribute this EMS emission property to specific sequences.\n\nHowever, we have studied an interesting situation in the case of HIV-infected patients: here, besides EMS produced by HIV DNA (nanostuctures filtering at 20 nm pores), we detected EMS filtering at 100nm pores and retained at 20 nm pores produced by DNA of an intracellular bacterium present in red blood cells. Surprisingly, these \"bacterial\" EMS were found to be produced in part by human DNA sequences integrated in or strongly associated with the bacterial DNA. The same DNA sequences belonging to the chromosomal genome of the same patient never produced EMS.\n\nMoreover, the same sequence was found present in the red blood cells of some healthy individuals, HIV negative; but in these HIV negative individuals this sequence was found not to emit signals. This would indicate that the modification of this DNA resulting in EMS emission occurred only under pathogenic conditions. This modification was maintained in all molecules derived by PCR amplification (amplicon).\n\nAs EMS are so far only detected in patients suffering of various chronic diseases, it is tempting to speculate that there is a common biochemical modification of the DNA of infectious bacteria and\/or viruses present in such diseases. This modification, which remains to be determined, should be different from a base change (mutation), since there is no difference in the base sequence of the previously mentioned amplicons in diseased patients, compared to healthy individuals.\n\n## Water nanostructures and EMS do carry DNA information\n\nOur formerly reported experiments (Fig.\u00a02 and ref. (Montagnier, A\u00efssa, Lavallee, et al., 2009)) indicate that the ability of EMS production can be transmitted from tube 1 containing an emitter DNA dilution to tube 2 of \"naive\" water, provided the system is excited overnight by electromagnetic waves of a minimal frequency of 7 Hz. Presumably tube 1 transmits waves to the water in tube 2, which did not originally contain any trace of the DNA at the origin of the signals.\n\nThe emission of EMS by the exposed tubes is thus a resonance phenomenon, dependent on external wave input. More importantly, these EMS carry specific information of the initial DNA, as shown by retrieving the DNA by PCR in the recipient tube.\n\nThis experiment has been repeated many times in our laboratory, with extraordinary precautions taken to avoid contamination in the PCR step, and many controls were always done. Omission of any of the main parameters of the procedure (7 Hz excitation, mu metal shield, time of exposure to the 7 Hz excitation, any ingredient of the PCR) as well as any minor detail of the protocol will result in failure of the experiment.\n\nTo further make the demonstration unassailable, the EMS carrying the DNA information were recorded as a digital file and sent via Internet to a recipient laboratory where work on this DNA or on the bacterium or virus which was the source of that DNA had never been done (Fig.\u00a03). 
Several labs in Italy and Germany accepted the challenge.\n\nHere, as an example, we show the results obtained at G\u00f6ttingen University using a recorded file (digitized in a lap top computer in our laboratory) of ribosomal 16S DNA from Borrelia burgdorferi (Fig.\u00a04).\n\nIn the German laboratory, the electric current resulting from the digital file communicated by our lab was converted to analog and was amplified. The current was then connected to a solenoid. A tube of water was inserted in the solenoid and in this way was submitted to the induced modulated magnetic field for one hour. Then the PCR ingredients were introduced in an aliquot of water from the tube, and after 40 PCR cycles of amplification the original DNA was detected, as shown by a specific band in gel electrophoresis of the expected molecular weight.\n\nThese intriguing results raised several questions :\n\n1\\) How a DNA polymerase (the TAQ polymerase of a thermophilic bacterium) can \"read\" a genetic code on water structures?\n\n2\\) What about other DNA polymerases of procaryotic and eucaryotic cells? Do they have the same capacity?\n\nAlthough still at its early stages, the theoretical study of how water structures can store molecular information and transport it by electromagnetic waves gives a crucial role to the coherent molecular dynamics in the formation of water nanostructures (see below and (Montagnier, A\u00efssa, Del Giudice, et al., 2011)). We need, however, further theoretical analysis for a complete understanding of the phenomenon, especially because we have recent evidence that some other DNA polymerases have the same capacity as the TAQ polymerase to read water messages and can act in living cells.\n\n## Transduction of DNA in living cells\n\nThe modified DNA transduction system is shown in figure 5.\n\nInstead of magnetizing water in a tube placed inside a solenoid reading the modulated current from the recorded EMS signature, we placed inside the solenoid a flask containing cultured cells; the flask was placed in a vertical position for cells growing in suspension, and in a horizontal position for cells adhering to the surface of the flask. The voltage (between 2 - 4 volts) applied to the solenoid was adjusted in order to not generate heat damaging the cultured cells. This weak intensity was compensated by the duration time of exposure, between 5 to 10 days. A control flask was always placed outside the solenoid in the same 37 ${}^{\\circ} C$ incubator as the exposed flask.\n\nWe used several recorded EMS files, including the 16S Borrelia and the 194 bp HIV1 LTR amplicon all having been previously shown to be good at transducing their DNA through water.\n\nWe tested several immortalized human cell lines derived from leukemias, or cancers: the HL60, originated from a myeloblastic leukemia, the U937, derived from a lung lymphoma, the MCF7, derived from a breast adenocarcinoma. In addition, we tested normal cells: the MRC5 diploid fibroblast cell line, derived from the lung of a human embryo, T lymphocytes from a healthy blood donor activated with PHA, and interleukin 2.\n\nResults were striking: All cells of tumor origin synthesized Borrelia 16S DNA after they were exposed for several days to magnetic field modulated by the EMS of Borellia 16S DNA. At the same time, cell growth was inhibited, ending in cell death. 
DNA was extracted from the dying cells and the Borrelia amplicon was detected by PCR.\n\nRemarkably the resulting amplicon was found to be EMS emitter, showing that this initial property was not lost during the complex transmission of DNA information. The normal differenciated MRC5 cells and the T lymphocytes were not affected in their growth under the same culture conditions and the Borrelia amplicon could not be detected in these cells. The 194 bp HIV LTR amplicon had no effect on the tumor cells.\n\nThese preliminary results indicate that the tumoral cell lines so far investigated do possess the enzymatic ability of reading the water nanostructures carrying the DNA information. It remains to be determined whether or not normal embryonic totipotent stem cells have the same ability to read the DNA sequence signals.\n\n# THEORETICAL ANALYSIS\n\nIn the previous Section we have reported the experimental observation that EMS can be emitted by diluted aqueous solutions of bacterial and viral DNA under proper conditions. Moreover, it has been observed that duplication of the emitting DNA segment can be obtained by using pure water exposed to the corresponding DNA EMS and, upon addition of enzymes, primers, etc., submitted to PCR cycles. Such a transduction process has been observed to occur also in EMS exposed living cells of tumoral origin. These experimental observations suggest that long range molecular interaction must be at work in water so to allow the observed properties. Indeed, since in the transduction process the high level of sequential ordering among several hundreds of nucleotides entering the transduced DNA chain is obtained, we are clearly in the presence of collective molecular dynamical behavior of water. In quantum field theory (QFT) it is known that the ordering of the elementary components of a system is achieved as a result of the spontaneous breakdown of symmetry and constitutes the observable manifestation of coherence (Blasone, Jizba and Vitiello, 2011; Fr\u00f6hlich, 1977; Umezawa, 1993; Vitiello, 1998;). Ordering is thus not the result of short range forces, but of long range collective coherent correlation. The classical behavior of the ordered pattern derives from the fact that in coherent states the ratio between the quantum fluctuation $\\Delta n$ in the correlation modes and their condensate number $n$ is $\\Delta n\/n = 1\/{|\\alpha|}$ and quantum fluctuations are thus negligible for high $|\\alpha|$, which denotes the coherent strength. In the present case, the symmetry which gets broken is the rotational symmetry of the electrical dipoles of the water molecules and correlation modes are the ones associated to the dipole waves (similar to spin waves in ferromagnets) (Del Giudice, Doglia, Milani and Vitiello, 1985, 1986).\n\nWe thus conclude that the observed properties of the DNA-water system provide an indication of (may be accounted by) the coherent molecular dynamics. The theoretical analysis based on quantum electrodynamics (QED) shows (Montagnier, A\u00efssa, Del Giudice, et al., 2011) that liquid water appears to behave as an active medium able to perform through very low frequency electromagnetic fields (e.m.f.). Short range H-bond and electric dipole-dipole static interactions among liquid water molecules set in as the consequence of the molecule interaction with time-dependent radiative e.m.f. 
over an extended region called coherence domain (CD) (Del Giudice, Doglia, Milani and Vitiello, 1985, 1986; Del Giudice, Preparata and Vitiello, 1988; Del Giudice and Vitiello, 2006; Bono, Del Giudice, Gamberale and Henry, 2012). Short range H-bond and electric dipole-dipole static interactions are themselves the dynamical effects caused by the most fundamental long range molecular and radiative e.m.f. interaction. This last one is thus responsible for the dynamic origin of short range interactions. This can be better understood by recalling a few points of the discussion presented in (Montagnier, A\u00efssa, Del Giudice, et al., 2011).\n\nAbove a density threshold and below a critical temperature, an ensemble of molecules interacting with the e.m.f. undergoes a transition to a dynamical regime characterized by a minimum energy state where the phase oscillations of the molecules are no longer uncorrelated. Such a minimum energy state implies a configuration of the system where all molecules enclosed within the CD oscillate in unison in tune with the e.m.f. trapped within the CD (phase locking). The linear size of the CD is determined by the wavelength $\\lambda$ of the trapped e.m.f. (typically of the order of 100 nm). The dynamical mechanism ruling the CD formation is the one of the spontaneous breakdown of symmetry and it is described in (Del Giudice, Doglia, Milani and Vitiello, 1985, 1986; Del Giudice, Preparata and Vitiello, 1988; Del Giudice, Spinetti and Tedeschi, 2010; Del Giudice and Tedeschi, 2009; Del Giudice and Vitiello, 2006). Its mathematical formulation (Matsumoto, Papastematiou, Umezawa and Vitiello, 1975) is similar to the one of the Anderson-Higgs-Kibble mechanism (Anderson, 1958; Higgs, 1966; Kibble, 1967) which has led to the recent discovery of the Higgs particle. One important aspect of such a general QFT mechanism is that the transition to the coherent dynamical regime can be triggered by a vanishingly weak external input. Due to the weakness of the input, the system does not get \"slaved\" by it, but reacts to it according to its own internal dynamics and, provided that the mentioned conditions of temperature and density are satisfied, the system sets in a coherent state, whose phase is determined by the phase of the triggering input (Blasone, Jizba and Vitiello, 2011; Umezawa, 1993). Its coherence strength, however, does not depend on the input strength[^3]. In the Appendix we will comment on the question whether coherence is not destroyed by, or theoretically incompatible with the decoherence phenomenon in quantum mechanics (QM). This is an important issue, which shows that the DNA transduction process is indeed a *quantum field theory* process; it could not be understood and described in QM, where the decoherence phenomenon does in fact occur (Alfinito, Viglione and Vitiello, 2001). Our framework, however, is the one of QFT.\n\nThese features already help us in the understanding of some of the experimental observations. In particular, it is immediately recognized the observed relevance of extremely low signal (ELS) in the phenomena under study. The stimulation caused by the electromagnetic background of very low frequency is indeed observed to be essential in order for the DNA-water system to emit the EMS. 
In the experiments, the background ELS is either produced from natural sources (the Schumann resonances which start at 7.83 Hz (Montagnier, A\u00efssa, Del Giudice, et al., 2011; Nickolaenko and Hayakawa, 2002)) or from artificial sources.\n\nThe fragment of DNA embedded in the water also acts as a trigger in the surrounding water, causing the spontaneous formation of CDs, which appear as a self-produced cavity with molecular coherence strength independent from the specific input strength. However, there is \"phase locking\" between the specific DNA molecular structure and the water molecules. Such a specific feature of the DNA-water coherent coupling accounts for the experimental observations. In the part of the experiment concerning the DNA transduction, the dynamical phase locking is shared with pure water in a tube when it is irradiated by the EMS emitted by the aqueous DNA solution system. For brevity, we omit to report further analysis of the interplay between the size of the CD, the e.m.f. wavelength and the e.m.f. self-trapping and frequency inside the CD. For the reader convenience, we report in the Appendix a brief summary of some other features of CD discussed in (Montagnier, A\u00efssa, Del Giudice, et al., 2011).\n\nThe question now naturally arises: does the EMS have any specific property related to the coherent dynamical structure discussed above? The question is particularly relevant because the emitted EMS, acting on water molecular dynamics, produces coherent structures such that in PCR processes the DNA transduction occurs with the same nucleotide sequence as the one of the parent DNA. The answer to the question is provided by observing that the EMS appears to carry not only the specific information of its frequency spectrum, amplitude and phase modulation (the syntactic level), but it also describes the dynamics out of which it is generated. In other words, beside the syntactic level of pure information (\u00e0 la Shannon), there is a *semantic* content, which manifests itself in the underlying coherent dynamics of the DNA-water system responsible of the polymerization (highly ordered sequence) of hundreds of nucleotides. We refer to such a semantic content as to the \"meaning\" of the EMS. In some recent papers it has been shown that an isomorphism exists between (squeezed) coherent states and self-similar fractal properties (Vitiello, 2009a, 2009b, 2012, 2014). Here, for brevity we do not proceed further with our analysis on this point. Results will be presented elsewhere.\n\nAbove we have mentioned the mechanism of phase locking and phase content in water molecular dynamics. Let us close this Section by stressing indeed the crucial role played by the phase in the considered processes. Due to the relation between phase and electromagnetic potential, non-trivial topological properties, with associated Bohm-Aharonov-like effects, have a non-secondary part in the molecular dynamical properties of water (see e.g. (Del Giudice, Fuchs and Vitiello, 2010; Del Giudice and Vitiello, 2006) and discussions there reported). Such a remark may turn out to be important when considering charge transfer along the double helix, produced by oxidative agents observed in (Genereux and Barton, 2010). The question if such a current could contribute to the production of EMS may have an affirmative answer since the charge transfer along the DNA produces magnetic field in the surroundings. 
On the other hand, the oscillations of electric dipoles of the DNA macromolecule may propagate on the DNA in wave form so to contribute to the electromagnetic signal emission. These EMS are considered in (Del Giudice, Doglia, Milani and Vitiello, 1985, 1986) where it is shown that they produce symmetry breakdown in the water in which the dipole chain (DNA or protein chain) is embedded and the mathematical details of the proof are reported.\n\nFinally, concerning the question 1 in Section II.C (how a DNA polymerase, the TAQ polymerase of a thermophilic bacterium, can \"read\" a genetic code on water structures) the whole dynamical scenario above presented provides the answer to it. It is a complex scenario founded on QFT of coherent systems and is then not surprising that those who are unaware of it could not conceive the positive answer, with all of its complex but clear details, to that question.\n\n# CONCLUSION AND PERSPECTIVES\n\nThis 10 year long collaborative work has yielded some scientific facts and concepts in a new domain of Science at the frontier of Biology and quantum field Physics.\n\nA new property of some DNA molecules has been discovered, that of emitting low frequency electromagnetic waves in water dilutions. These DNAs originate from pathogenic agents or agents endowed with pathogenic potential. It may be not pure coincidence that such EMS are associated with diseases, particularly chronic diseases.\n\nUnder natural conditions EMS and water nanostructures may play a role of stealthy elements carrying DNA information of infectious agents while being undetected by the immune system or being insensitive to conventional therapies (Fig.\u00a06). However, one cannot discard the possibility that DNA waves can play a role in the physiology of living entities.\n\nMoreover, in the laboratory, we have shown for the first time that EMS can be re-transcribed into DNA in living cells. These cells are so far of tumoral origin, opening the way to non-invasive treatments of cancers, assuming that normal stem cells are not affected, or less affected.\n\nThus, this new biology that we can call after Jacques Benveniste, Digital Biology, has a very promising future, both at the level of quantum Physics, and in numerous medical applications.\n\nFrom the point of view of the theoretical understanding of the observed phenomena, the discussion above presented suggests that the dynamical law of coherence acts as a *law of form* inducing guided polymerization processes (controlling morphogenesis): the specific polymerization so obtained is the expression of the semantic content of the EMS mentioned in the previous Section, by us referred to as the signal meaning. The dynamics of coherence appears to play the role of dynamic paradigm ruling the ordering (polymerization) processes through dissipative non-equilibrium dynamics controlled by entropy variations and the consequent appearance of the arrow of time (breakdown of time-reversal symmetry) (Celeghini, Rasetti and Vitiello, 1992; Vitiello, 2012, 2014).\n\nThe experiments discussed in this paper suggest that also in the *usual* PCR processes the DNA duplication is obtained due to the EMS emitted by the parent DNA in the environment of reciprocal inter-actions with water molecules, enzymes, primers and nucleotides in the solution.\n\nThe EMS appears thus to be the carrier of the coherence expressed in the DNA code. 
One might conjecture (Vitiello, 2014) that modifications are induced in the properties of the EMS resulting in the \"deformation\"[^4] of coherence (e.g. such as those, but not only those, induced by the observed bacterial actions; cf. Section II.B). This may play a role in epigenetic modifications, thus revealing the appearance of \"new meanings\" (in the above mentioned sense) associated to deformed properties of EMS. DNA appears to be the *vehicle* through which coherence and its dynamical deformations propagate in living matter (Vitiello, 2014).\n\n# Acknowledgements\n\nPartial financial support from MIUR and INFN is acknowledged. We thank Pr. E. Schutz, and Dr. H. Urnovitz, Chronix Biomedical, for allowing transduction experiments in their laboratory. Mrs. Laila A\u00efssa is acknowledged for her skilled technician assistance. Mrs. S. McDonnell is acknowledged for her constant participation in the management of this work.\n\n# On some properties of water molecular dynamics\n\nOne of the peculiarities of water consists in the fact that water molecules in the CD coherently oscillate between the ground state and an excited state lying at 12.06 eV, just below the 12.60 eV ionization threshold (Bono, Del Giudice, Gamberale and Henry, 2012; Montagnier, A\u00efssa, Del Giudice, et al., 2011). The almost free electrons in the CD can be excited by external inputs so to form coherent excitations (vortices), whose entropy is lower than the entropy of the energy supplied by the inputs. Vortices, due to coherence and their non-trivial topology, are not easily destroyed by small, additional external supplies of energy. On the contrary, additional energetic inputs may add up to form a unique vortex, thus storing in the CDs an amount of energy which may be large enough to activate chemical reactions among molecules, otherwise below the activation energy threshold. Thus, small energy contributions coming from many high entropy inputs add up to form low entropy ordered patterns of upgraded high energy (Del Giudice, Fuchs and Vitiello, 2010; Voeikov and Del Giudice, 2009).\n\nAn important remark is that DNA and proteins are polyelectrolytes, and are surrounded by positive counterions. Ions having a cyclotron frequency, $\\nu_{c} = q \\, B\/(2\\, \\pi\\, m)$, where $q$ and $m$ are the electric charge and the mass of the ion, respectively, and $B$ is the magnetic field (Liboff, 1997; Liboff, Smith and McLeod, 1987), may play an important role in obtaining a collective performance of water CDs, a coherence of coherent domains. The observed dependence of the signal emission on the aqueous dilution may be understood (Montagnier, A\u00efssa, Del Giudice, et al., 2011) as follows: suppose that a low magnetic field (for example a natural or artificially produced background magnetic field) matches the ion cyclotron frequency; suppose it may be then able to extract $n$ ions per CD. Then, due to angular momentum conservation, the plasma of $N$ quasi-free electrons in the CDs starts to counter-rotate with a frequency much higher than the ion cyclotron frequency since electron mass is much smaller than the ion mass. This frequency depends on the number of involved ions, namely on their concentration, which therefore is the only relevant variable. This occurs on all the CDs of the system, whose number is irrelevant for the frequency purpose, in agreement with observations. The magnetic component of EMS is produced by the so induced rotation of the plasma of the quasi-free electrons in the CDs. 
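\n\nTo give a sense of the orders of magnitude involved in the cyclotron condition $\\nu_{c} = q \\, B\/(2\\, \\pi\\, m)$ recalled above, the following small computation (an illustrative sketch added here; the field value is an assumed geomagnetic-strength $B = 50\\ \\mu\\mathrm{T}$ and the ion list is only an example) evaluates $\\nu_c$ for a few common counterions:\n\n```python\nimport math\n\nE_CHARGE = 1.602176634e-19  # elementary charge, C\nAMU = 1.66053906660e-27     # atomic mass unit, kg\nB = 50e-6                   # assumed field of geomagnetic strength, T\n\n# (ion, charge number, mass number) -- an illustrative selection\nions = [(\"H+\", 1, 1), (\"Na+\", 1, 23), (\"K+\", 1, 39), (\"Ca2+\", 2, 40)]\n\nfor name, z, a in ions:\n    nu_c = z * E_CHARGE * B \/ (2 * math.pi * a * AMU)\n    print(name, round(nu_c, 1), \"Hz\")\n```\n\nFor a field of this strength, the computed frequencies range from a few tens of hertz for the heavier ions (roughly 38 Hz for Ca2+ and 20 Hz for K+) up to several hundred hertz for the proton, i.e. within the low-frequency window relevant to the measurements described in this paper.\n\n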
As a further effect, a co-resonating field appears in the surroundings of the rotating CDs depending on the ion concentration, i.e. on the DNA solution dilution. It could be at the origin of an extended coherence among CDs. The existence of the observed window of dilutions for the occurrence of the EMS emission could be understood by presuming that the signal produced by the lower dilutions could have a frequency higher than the interval of the values detectable by the used instruments. Higher dilutions, on the contrary, could produce no signal because the ion concentration is decreased below the threshold able to excite the CDs (Montagnier, A\u00efssa, Del Giudice, et al., 2011).\n\nWe observe that thermal collisions could be in competition with electrodynamic attraction of molecules inside the CD and produce permanent fluxes of molecules between a coherent regime and a non-coherent one, and vice-versa, although the total number of coherent and non-coherent molecules are constant for a given temperature T. Water is thus not a homogeneous liquid, rather it appears as a two fluid system, with coexisting coherent and non-coherent phases, like in the Landau theory of liquid Helium (Landau, and Lifshitz,1959). We have thus a mixed structure system, consistent also with experimental findings (Taschin, Bartolini, Eramo et. al. 2013), which may appear in observations only when observation time is very short with respect to the time of flickering between the two phases. Near surfaces, the coherent phase may be more stable due to the interaction between water molecules and the surface (Pollack, 2001, 2013). For example, in living matter, water, which is bound to membranes or to biomolecules, could more easily manifest the properties of coherence.\n\nLet us finally consider the question whether the phenomenon of decoherence in quantum mechanics (QM) might contradict our analysis based on the formation and non-vanishing life-time of coherent structures in water. We remark that the *belief* that coherence is possible only at very low temperatures is disproved by the fact that coherence is observed in a wide range of temperature, from very low to very high ones: the diamond crystal loses its coherence (it melts) at a temperature of about $+3545 ~{}^{0}C$ in the absence of oxigen; the common kitchen salt $NaCl$ melts at $+804 ~{}^{0}C$; in the iron the coherence of the elementary magnets is lost at $+770 ~{}^{0}C$. In superconductors the critical temperature $T_c$ are much lower, in some compound of niobium they are not higher than $- 252 ~{}^{0}C$ and for some high $T_c$ superconductors it is a little above $-153 ~{}^{0}C$. Also the so called BEC systems (which are mostly condensates of atoms) need very low temperatures and are not so stable. All these systems are macroscopic systems and they are described in quantum field theory in terms of coherent Bose-Einstein condensates. The bosons that condense in a crystal are called the phonons, i.e. the quanta of the elastic waves responsible of the ordering in crystals; in the magnets, they are called the magnons, namely the quanta of the spin waves in magnets; in the water, they are called \"dipole wave quanta\" (DWQ), the quanta of the fluctuating molecular dipole waves; and so on. This teaches us that the ordered patterns we observe at a macroscopic scale in these systems are sustained and generated by long range correlations maintained by these waves (Alfinito, Viglione and Vitiello, 2001). 
One would never be able to construct any of these systems by using short range interaction among the nearest neighbours. Short range interaction, if it is there, is made possible by the long range one which brings \"near\" the components (e.g., making possible the formation of $H$-bonds in water). Decoherence in QM would forbid the existence of crystals, magnets, superconductors, etc.. However, these systems do exist and are observed since they are QFT systems.\n\n# Bibliography\n\nAlfinito, E., Viglione, R. and Vitiello, G. (2001). The decoherence criterion. Mod. Phys. Lett. B15: 127-135.\n\nAnderson, P.W. (1958). Coherent Excited States in the Theory of Superconductivity: Gauge Invariance and the Meissner Effect. Phys. Rev. 110:827-835.\n\nBlasone, M., Jizba, P. and Vitiello, G. (2011). Quantum field theory and its macroscopic manifestations. London: Imperial College Press.\n\nBono, I., Del Giudice, E., Gamberale, L. and Henry, M. (2012). Emergence of the Coherent Structure of Liquid Water. Water 4(3): 510-532. doi:10.3390\/w4030510\n\nCeleghini, E., De Martino, S., De Siena, S., Rasetti, M. and Vitiello, G. (1995). Quantum groups, coherent states, squeezing and lattice quantum mechanics. Annals of Phys.(N.Y.), 241:50-67.\n\nCeleghini, E., Rasetti, M. and Vitiello, G. (1992). Quantum dissipation. Nucl. Phys. B215:156-170.\n\nDel Giudice, E. Doglia, S. Milani, M. and Vitiello, G. (1985). A quantum field theoretical approach to the collective behavior of biological systems. Nucl. Phys. B251 (FS 13):375-400.\n\nDel Giudice, E. Doglia, S. Milani, M. and Vitiello, G. (1986). Electromagnetic field and spontaneous symmetry breakdown in biological matter. Nucl. Phys. B275 (FS 17):185-199.\n\nDel Giudice, E., Fuchs, E. C. and Vitiello, G. (2010). Collective Molecular Dynamics of a Floating Water Bridge. Water J. 2:69-82. doi.org\/10.14294\/WATER.2010.5\n\nDel Giudice, E., Preparata, G. and Vitiello, G. (1988). Water as a free electron laser. Phys. Rev. Lett. 61:1085-1088.\n\nDel Giudice, E. and Tedeschi, A. (2009). Water and the Autocatalysis in Living Matter. Electr. Biol. Med. 28(1):46-52.\n\nDel Giudice, E., Spinetti, P. R. and Tedeschi, A. (2010). Water dynamics at the root of metamorphosis in living organisms. Water. Water J. 2:566-586.\n\nDel Giudice, E. and Vitiello, G. (2006). Role of the electromagnetic field in the formation of domains in the process of symmetry-breaking phase transitions. Phys. Rev. A 74:022105.\n\nFr\u00f6hlich, H. (1977). Long-range coherence in biological systems. Rivista del Nuovo Cim. 7:399-418.\n\nGenereux, J. C. and Barton, J. K. (2010). Mechanisms for DNA Charge Transport. Chem Rev. 110(3):1642-1662.\n\nHiggs, P. W. (1966). Spontaneous symmetry breakdown without massless bosons. Phys. Rev. 145:1156.\n\nKibble, T.W. B. (1967). Symmetry breaking in non-Abelian gauge theories. Phys. Rev. 155:1554\n\nLandau, L.D. and Lifshitz, E. M. (1959). Fluids Mechanics. Reading, Mass.: Addison-Wesley, Ch. XVI.\n\nLiboff, A. R. (1997). Electric-field ion cyclotron resonance. Bioelectromagnetics 18(1):85-7.\n\nLiboff, A. R. Smith, S. D., McLeod, B. R. (1987). Experimental Evidence for Ion Cyclotron Resonance Mediation of Membrane Transport. In Mechanistic Approaches to Interactions of Electric and Electromagnetic Fields with Living Systems, Editors: Blank, M., Findl, E.. New York:Springer-Verlag, pp 109-132.\n\nMatsumoto, H., Papastamatiou, L., Umezawa, H. and Vitiello, G. (1975). Dynamical rearrangement of Symmetry in the Anderson-Higgs-Kibble Mechanism. Nucl. Phys. 
B97:61-89.\n\nMontagnier, L., A\u00efssa, J., Ferris, S., Montagnier, J-L., Lavallee, C. (2009). Electromagnetic Signals Are Produced by Aqueous Nanostructures Derived from Bacterial DNA Sequences. Interdiscip. Sci. Comput. Life Sci. 1:81-90.\n\nMontagnier, L., A\u00efssa, J., Lavallee, C., Mireille Mbamy, M., Varon, J., Chenal, H. (2009). Electromagnetic detection of HIV DNA in the blood of AIDS patients treated by antiretroviral therapy. Interdiscip Sci. Comput. Life Sci. 1:245-253.\n\nMontagnier, L., A\u00efssa, J., Del Giudice, E. Lavallee, C., Tedeschi, A., Vitiello, G. (2011). DNA waves and water. Journal of Physics: Conference Series 306:012007.\n\nNickolaenko, A. P. and Hayakawa M. (2002). Resonances in the Earth-ionosphere Cavity. Dordrecht-Boston-London: Kluwer Academic Publishers.\n\nPollack, G.H. (2001). Cells, Gels and the Engines of Life. Seattle, WA: Ebner and Sons.\n\nPollack, G.H. (2013). The fourth phase of water: Beyond Solid, Liquid, and Vapor. Seattle, WA: Ebner and Sons.\n\nTaschin, A., Bartolini, P., Eramo, R. et. al. (2013). Evidence of two distinct local structures of water from ambient to supercooled conditions. Nature Comm. 4:2041-2045. \ndoi:10.1038\/ncomms3041\n\nUmezawa, H. (1993). Advanced Field theory: micro, macro and thermal concepts. New York: American Institute of Physics.\n\nVitiello, G. (1998). Structure and function. An open letter to Patricia Churchland. In Hameroff, S.R. Kaszniak, A. W. and Scott, A.C. Eds.. Toward a science of consciousness II. The second Tucson Discussions and debates. Cambridge: MIT Press. p. 231-236.\n\nVitiello, G. (2009a). Coherent states, Fractals and brain waves. New Mathematics and Natural Computation 5: 245-264.\n\nVitiello, G. (2009b). Fractals and the Fock-Bargmann representation of coherent states. In P. Bruza, D. Sofge, et al. Eds., Quantum Interaction. Lecture Notes in Artificial Intelligence, Edited by R.Goebel, J. Siekmann, W.Wahlster. Berlin: Springer-Verlag, p. 6-16.\n\nVitiello, G. (2012). Fractals, coherent states and self-similarity induced noncommutative geometry. Phys. Lett. A 376:2527-2532.\n\nVitiello, G. (2014). On the Isomorphism between Dissipative Systems, Fractal Self-Similarity and Electrodynamics. Toward an Integrated Vision of Nature. Systems 2:203-216. doi:10.3390\/systems2020203\n\nVoeikov, V. L. and Del Giudice, E. (2009). Water respiration: the base of the living state. Water J. 1:52-75.\n\nYuen, H. P. (1976). Two-photon coherent states of the radiation field. Phys. Rev. A 13:2226.\n\n[^1]: deceased 31 January 2014\n\n[^2]: An amplicon is a piece of DNA or RNA that is the source and\/or product of natural or artificial amplification or replication events, usually PCR.\n\n[^3]: We stress that a strong input may drive the system shielding its own internal dynamics. In such a case, the symmetry is said to be explicitly broken and one has a substantial modification of the original system by inclusion of the strong perturbing agent. However, this is not what we are interested in in the present case, and in general in Biology, where small perturbing inputs may trigger relevant reaction of the system driven by its own internal dynamics\n\n[^4]: In the jargon of coherent states, the word deformation, or also $q$-deformation, refers to the technically well defined process of \"squeezing\" of the coherent state. For technical details see (Yuen, 1976; Celeghini, De Martino, De Siena, et al. 
1995).","meta":{"dup_signals":{"dup_doc_count":14,"dup_dump_count":3,"dup_details":{"curated_sources":1,"2024-22":1,"unknown":12}},"filename":"out\/1501.01620_extract_Transduc_of_DNA_24October2014.tex.md"},"subset":"arxiv"} +{"text":"abstract: The Adaptive Gain Integrating Pixel Detector (AGIPD) is an x-ray imager, custom designed for the European x-ray Free-Electron Laser (XFEL). It is a fast, low noise integrating detector, with an adaptive gain amplifier per pixel. This has an equivalent noise of less than 1\u00a0keV when detecting single photons and, when switched into another gain state, a dynamic range of more than 10$^4$ photons of 12\u00a0keV. In burst mode the system is able to store 352 images while running at up to 6.5\u00a0MHz, which is compatible with the 4.5\u00a0MHz frame rate at the European XFEL. The AGIPD system was installed and commissioned in August 2017, and successfully used for the first experiments at the Single Particles, Clusters and Biomolecules (SPB) experimental station at the European XFEL since September 2017. This paper describes the principal components and performance parameters of the system.\nauthor: Aschkan; Julian; Annette; Roberto; Peter; Dominic; Beat; Helmut; Manuela; Robert; Alexander; Hans; Sabine; Torsten; Alessandro; Davide; Aldo; Magdalena; Jennifer; Joern; Igor; Xintian; Sergej; Lothar; Jolanta; Ulrich; Qingqing; Mourad; Jiaguo; Manfred; Bernd\ntitle: The Adaptive Gain Integrating Pixel Detector at the European XFEL\n\nAllahgholi\n\nBecker\n\nDelfs\n\nDinapoli\n\nGoettlicher\n\nGreiffenberg\n\nHenrich\n\nHirsemann\n\nKuhn\n\nKlanner\n\nKlyuev\n\nKrueger\n\nLange\n\nLaurus\n\nMarras\n\nMezza\n\nMozzanica\n\nNiemann\n\nPoehlsen\n\nSchwandt\n\nSheviakov\n\nShi\n\nSmoljanin\n\nSteffen\n\nSztuk-Dambietz\n\nTrunk\n\nXia\n\nZeribi\n\nZhang\n\nZimmer\n\nSchmidt\n\n# Introduction\n\nWith the start of the European XFEL, a new milestone is set in the field of x-ray research and many related fields due to the high coherence, pulse intensity and repetition rate of the x-ray pulses available at this facility. The superconducting accelerator provides up to 600\u00a0s long trains with up to 2700 pulses followed by an inter-train gap of 99.4\u00a0ms. Inside each train, consecutive pulses of typically less than 100\u00a0fs duration are spaced approximately 220\u00a0ns apart. This corresponds to an effective repetition rate of 4.5\u00a0MHz during a train. Each pulse contains up to 10$^{12}$ photons , which in many cases is sufficient to produce a complete scattering pattern from the sample with a single pulse. This means that the area detectors at the European XFEL not only have to be compatible with the high repetition rate of the source, but also need to have a dynamic range from single photons to 10$^4$ photons\/pixel\/pulse. More detailed and complete requirements can be found in . Dedicated detector development programs for the European XFEL were started more than 10 years before its inauguration, as it was clear that no existing detector would be able to meet the requirements imposed by the new facility. The Adaptive Gain Integrating Pixel Detector (AGIPD) was designed to fulfill as many of the general requirements as possible with a focus on the requirements most important for scattering experiments in the energy range from 7 to 15\u00a0keV, for which is has been successfully used in first user experiments and . 
There has been interest to use the system at other bright sources to study fast dynamics down to the microsecond scale.\n\nThe AGIPD is not the only camera developed for use at the European XFEL. The LPD system is currently in use at the FXE beamline and the DSSC system will be available soon. Other FELs are using other custom developed camera systems like the CSPAD and ePIX detectors at LCLS or the JUNGFRAU detector at the SwissFEL.\n\n# System layout\n\nThe AGIPD camera consists of four individually moveable quadrants, each having four detector tiles with 512 x 128 pixels per tile, giving a total of 1024 x 1024, or roughly 1 million pixels. Fig. a shows a CAD design of the AGIPD 1 million pixel detector with cuts to expose the arrangement of the electronics inside and outside of the vacuum vessel.\n\nEach detector tile consists of a front end module (FEM), an in-vacuum board that provides power to the FEM and routes signals, two ADC boards and a control and data IO board. Fig. b and c show a CAD drawing and a photograph of these boards, respectively.\n\nBeing glued to a ceramic board, each FEM consists of a monolithic pixelated silicon sensor responsible for absorbing the x-ray photons and creating an electrical signal per pixel which proportional to the sum of the energies of all simultaneously absorbed photons in that pixel. The silicon sensor is bump-bonded to 2\u00a0x\u00a08 pixelated Application Specific Integrated Circuits (ASICs), which are responsible for signal integration and intermediate image storage.\n\nThe front-end modules protrude into the attached sample interaction chamber and can be cooled to stabilize their temperature and improve performance. Although there are many components in the vacuum chamber, vacuum levels better than $10^{-5}$\u00a0mbar have been reached in our labs. Note that this was without attaching the system to the sample interaction chambers at the European XFEL, so the vacuum levels obtained during XFEL experiments might be different.\n\nThe detector logically and electrically divides into two wings. Each wing consists of the ADC boards and control and data IO board of each tile (one set each per tile, 8 tiles per wing) as well as a vacuum backplane board, which acts as a vacuum barrier and routes signals in and out of the vacuum vessel, a micro controller board for slow control and a master FPGA board. These boards are located outside the vacuum chamber[^1] in a thermally sealed, water cooled housing.\n\nThe two master FPGA boards, one for each side, provide the interface to the clock and control system, and control the detector tiles, including the FEMs. In the following sections the individual components and operational concepts are described in more detail.\n\n# Mechanics\n\nFig. a shows the front view of the AGIPD 1 million pixel system installed at the SPB station of the European XFEL. The four independently movable quadrants allow the formation of a horizontal slit or a rectangular central hole with user selectable size for the direct beam to pass through. The movement is realized by mounting each cooling block on a motion stage formed by two wedges (Fig. b).\n\nThe cooling blocks were made by a combination of milling and electroforming techniques. In the first step, the basic shape of the cooling block was milled out of a solid block of HCP copper. This basic shape omitted details that would be included in the final milling step, but already included the cooling channels. 
To enhance the turbulence of the flow of the silicone oil coolant, holes were drilled into the channels and pins were inserted into the holes. Afterwards, the channels were covered by copper using an electroforming process. Initially omitted details, like connector feedthroughs, were defined in a final milling run, which also ensured the overall dimensions and tolerances of the cooling block. Fig. c and d show the channels with inserted pins before the cover deposition.\n\n# Front end modules\n\nAn AGIPD 1 million pixel detector incorporates 16 front end modules (FEMs). Each front end module uses a bump bonded hybrid of a monolithic silicon sensor with 128 x 512 pixels and 2 x 8 AGIPD ASICs. The power and signal contacts of the ASICs are wire bonded to gold plated pads on an LTCC (Low Temperature Co-fired Ceramics) board. The bias voltage of the sensor is provided via wire bond connections between sensor and LTCC on all four corners of the assembly. A PT1000 temperature sensor is located on the backside of the LTCC, next to a Samtec 500 pin connector, which connects the FEM to the interface electronics. The hybrid assembly is glued to a silicon heat spreader, which reduces thermal gradients across the tile. For the bonding between heat spreader and LTCC an adhesive with high thermal conductance is used that is able to handle the different coefficients of thermal expansion of silicon and ceramics.\n\nFor mounting and handling, the wire bonded assembly is screwed onto an interposer made from a copper alloy which provides sufficient rigidity and heat conductivity. The interposer features an insertion pin for defined mounting to the cooling block. To reduce the thermal resistance between the layers, the interface between LTCC and interposer is filled with vacuum compatible liquid gap filler and the interface between interposer and cooling block with graphite. Fig. a shows a macro photograph of the edge of a FEM, detailing sensor, ASICs, heat spreader, LTCC and interposer. Fig. b shows two fully assembled FEMs and a bare interposer.\n\nIn our lab the system has been tested at temperatures as low as -20\u00a0C on the LTCC, but the system can run at temperatures above that, which was also the case for early user experiments at the SPB beamline. The system is also compatible with operation at ambient pressure and temperature.\n\n## The ASIC\n\nThe AGIPD 1.1 ASIC forms the heart of the system; each ASIC incorporates 64 x 64 pixels and the necessary readout and control circuitry. It is manufactured in 130\u00a0nm CMOS technology, using radiation hardened layout techniques in most parts of the circuitry.\n\nFig. shows a schematic diagram of the pixel circuitry: the input of each pixel is formed by a resettable charge sensitive preamplifier, built around an inverter core. Its output feeds a discriminator and a correlated double-sampling (CDS) stage with two globally adjustable gain settings. Once the discriminator output exceeds the globally defined threshold, additional feedback capacitors of 3 or 10\u00a0pF are added to the 60\u00a0fF of the un-switched preamplifier feedback loop. This way the sensitivity of the preamplifier is adaptively decreased and the dynamic range is extended in two steps , where each pixel automatically and independently adapts its gain to the incoming signal. 
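To illustrate how switching in the additional feedback capacitance trades sensitivity for dynamic range, the following sketch treats the preamplifier as an ideal charge integrator (output voltage equal to collected charge divided by feedback capacitance). Whether the 10 pF capacitor adds to the 3 pF one or replaces it is not specified above; the cumulative case is assumed here purely for illustration.

```python
# Idealized charge-sensitive preamplifier: V_out = Q_in / C_feedback.
# Illustrates how adding feedback capacitance lowers the gain (sensitivity)
# and extends the dynamic range. Capacitor values are taken from the text;
# the assumption that the capacitors switch in cumulatively is illustrative.
E_PER_KEV = 1000.0 / 3.65          # electron-hole pairs per keV in silicon (approx.)
Q_E = 1.602e-19                    # elementary charge [C]

C_HIGH = 60e-15                    # un-switched feedback capacitance [F]
C_MED = C_HIGH + 3e-12             # after switching in the 3 pF capacitor
C_LOW = C_MED + 10e-12             # after additionally switching in the 10 pF capacitor

def preamp_output(n_photons_12kev, c_feedback):
    """Output voltage of an ideal charge integrator for a given photon signal."""
    charge = n_photons_12kev * 12.0 * E_PER_KEV * Q_E
    return charge / c_feedback

for label, c in [("high", C_HIGH), ("medium", C_MED), ("low", C_LOW)]:
    print(f"{label:6s} gain: relative sensitivity = {C_HIGH / c:6.4f}, "
          f"1 photon -> {preamp_output(1, c) * 1e3:.3f} mV")
# The relative sensitivities (1 : ~1/51 : ~1/218 here) illustrate the two
# coarse steps by which the dynamic range is extended.
```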
The CDS stage is used to remove noise from the reset switch and to suppress low frequency noise components .\n\nThe pixel response is recorded to an in-pixel memory at high speed, compatible with the 4.5 MHz requirement for the European XFEL . Each memory address contains two separate pieces of information: a) the output voltage of the CDS stage, which is proportional to the detected signal, is stored on a 200 fF capacitor and b) the gain state of the pixel, encoded as a voltage, is stored on a 30 fF capacitor.\n\nThe memory matrix occupies about 80% of the pixel area and can store up to 352 images consisting of signal and gain information. The pixel size of (200\u00a0m)$^2$ is a compromise between resolution, analog performance and number of memory cells.\n\nThe memory can be randomly accessed, providing the option of overwriting images or frame selective readout. At the European XFEL it is used to implement a veto system .\n\nDuring readout of the chip another charge sensitive amplifier in each pixel is used to read the memory, which happens in parallel for each row of pixels. The further readout uses two interleaved column buses and four multiplexers, each serializing data from a block of 16 x 64 pixels. This parallelizing onto 4 outputs reduces the power consumption of the readout circuitry by reducing its speed. Instrumentation amplifiers convert the signal to differential levels, which are driven off-chip for subsequent digitization.\n\nA command based control circuit provides all the signals for memory access, read and write operations to the pixels. It uses a 3 line serial current mode logic interface and also provides slow control tasks, like the programming of internal timings and on-chip biases generated by digital to analog converters .\n\n## The Sensor\n\nMuch like the ASIC, the sensor design was driven by the requirements set forth by the expected experiments and performance of the European XFEL . The main requirements were a thickness of 500\u00a0m in order to reach sufficiently high quantum efficiency and a tolerance of a total dose of 1\u00a0GGy during the expected lifetime of the detector. All of this should be accomplished while making large area sensors that allow minimizing the dead area of the final detector system.\n\nIn addition to the challenge of radiation tolerance, the impact of plasma effects due to the high number of instantaneously absorbed photons was investigated . When many photons are locally absorbed, sufficient electron-hole pairs are created to form a plasma. This plasma shields its core from the electric field required to drift the charges to the readout ASICs.\n\nDiffusion will eventually expand the plasma and lower its density enough for the drift field to take over, but this process takes time. As a result the charge cloud will spread laterally, potentially degrading the spatial resolution, and the total charge collection time will increase, potentially piling up with the next photon pulse.\n\nThese studies concluded that the sensor should be constructed using p$^+$ electrodes in a highly resistive n-type bulk, thus collecting holes. Also a voltage of at least 500\u00a0V should be applied to the AGIPD sensor to suppress the consequences of the plasma effects as much as possible. 
At this voltage the time to collect at least 95% of the deposited charge is less than 60\u00a0ns for tightly focused spots (3\u00a0m rms) of up to 10$^5$ 12\u00a0keV photons.\n\nSurface damages are the dominant type of radiation damage for the sensor, as the damage threshold for silicon bulk damage is far above 12 keV, the original design photon energy of the European XFEL.\n\nTherefore, surface damages, namely the creation of positive charges in the SiO$_2$, and the introduction of traps at the Si-SiO$_2$ interface, were studied in detail with numerous irradiation campaigns to establish the relevant parameters, which in turn were used for sensor optimization studies .\n\nThese studies showed that several design parameters, i.e. oxide thickness, pixel implant depth, and metal overhang, have significant influence on the ability to operate the sensor at a high voltage. Some of the findings show conflicting results for the situation before and after irradiation, i.e., a thick oxide is preferred before irradiation and a thin oxide after irradiation .\n\nFinally, a compromise layout was chosen that fulfilled the design specifications and showed a breakdown voltage above 800\u00a0V in simulation . This layout was produced and its radiation tolerance was studied , finding sufficient performance for operation at the European XFEL.\n\nThe dead area of the final system is minimized for each front end module by using a monolithic sensor bump bonded to 2\u00a0x\u00a08 individual ASICs. The large sensor guarantees that there are no blind spots within the sensitive surface (0% dead area), however since the ASICs cannot physically touch each other the pixels horizontally in between two ASICs are twice as wide (400\u00a0m x 200\u00a0m), covering the area of two ordinary pixels. Pixels vertically in between two ASICs have normal size.\n\nThe entire sensitive area is surrounded by a guard ring that is 1.2 mm (or 6 pixels) wide. A detailed description of the sensitive and non-sensitive areas and the implications for coherent diffraction imaging at the SPB instrument of the European XFEL can be found in . Disregarding intentional gap between quadrants caused by the moving apparatus each quadrant has less than 15% insensitive area, which increases to less than 18% for the whole system, still disregarding intentional gaps.\n\n# Signal handling, data preparation and control\n\nAs indicated earlier, the AGIPD 1 million pixel detector is made of two electronically independent halves. Each half consists of a master FPGA board, a micro controller based slow control unit, an 8 port vacuum barrier board and eight tiles consisting of front end modules and their readout boards.\n\nThe 8-port vacuum barrier board, which interconnects between the different boards, is realized using a multi-layer printed circuit board which acts as a vacuum barrier and as a backplane for signal distribution.\n\nThe master FPGA acts as an interface device between the detector, the clock and control system of the European XFEL and the control PC. 
It receives configuration, 'start' and 'stop' signals from the control computer; receives bunch synchronized clocks and classification flags; synchronizes the operation of the ASICs and fast ADCs; and triggers the read out FPGAs of the tiles.\n\nThe boards constituting the readout electronics of each tile are located either inside the vacuum vessel or in the enclosed air environment of the external housing.\n\nFrom each front end module 64 differential analog signal lines are guided via the in-vacuum board and vacuum barrier board to the analog boards. The flexible part of the in-vacuum board (indicated in Fig. a and b) allows the movement of the quadrants.\n\nThe ADCs (Analog Devices AD9257) sample at a resolution of 14\u00a0bit and operate at a frequency of up to 33\u00a0MHz, which results in a minimum possible read out time of approximately 22\u00a0ms for 352 frames (signal amplitude and gain state information are read separately, no overheads), well within the 99.4\u00a0ms inter train spacing at the European XFEL. At 14\u00a0bit resolution the ADC noise and the quantization errors do not contribute significantly to the overall noise of the system.\n\nEach readout FPGA orders the information of the 64 ADC channels of its tile into a single frame and sends the data via a 10\u00a0GbE optical link to the data acquisition system of the European XFEL.\n\nThe slow control board monitors the status of the detector by collecting data on supply currents and voltages, as well as temperature, humidity and cooling fan information.\n\nDuring the detector start-up phase the experimental control computer sends commands to the micro controller to power the electronics sequentially. Collected monitor and status information can be queried by the slow control computer of the experiment which adds time stamps and makes the data available to the XFEL system.\n\nIn addition, the slow control board serves as a second level interlock, transmitting hard wired flags to the experimental programmable logic controller (PLC) to initiate a shutdown, if operational conditions are outside pre-defined safety margins. The architecture of the electronics is discussed in more detail in .\n\n# Calibration\n\nThe performance of the AGIPD ASIC has been extensively tested and documented over the years . Since every pixel can be in any of its three gain states in any image (i.e., memory cell) the current calibration procedure calibrates each memory cell individually for all three gain settings requiring more than 2.8 billion calibration parameters for a 1 million pixel detector system.\n\nIn a big system of one million pixels or more, calibrating all relevant parameters using only external sources is not feasible in a time efficient manner. Therefore the ASIC includes two internal calibration sources - a current source and a pulsed capacitor - to enable dynamic range scans without the need for external stimulus. Both sources combine elements that are global to the chip and local in each pixel. The current source is implemented as a distributed current mirror and each pixel features a calibration capacitor while the voltage step is generated in the periphery.\n\nEach data point from the detector contains two values per pixel per memory cell: the analog signal value and the encoded gain state information. Each value is expressed as Analog to Digital converter Units (ADU) in the raw data stream of the detector.\n\nThe internal sources allow the determination of valuable information. 
For the analog signal value, that is the relative gain of all 3 states within each pixel and the relative gain of the pixels with respect to each other. For the encoded gain state used, that is the discrimination threshold that indicates the different states of the gain.\n\nThe offsets for each gain state are measured without illumination (dark frames) and the gain of the high gain state is measured for each pixel using a flood illumination with characteristic x-rays. The characteristic x-rays were generated by illuminating a metal foil with an x-ray tube. The distance between foil and sensor was approximately 10-20\u00a0cm, depending on the foil, to ensure sufficient count rate in all pixels without creating a strong gradient in count rate over the module.\n\nIn a last step the calibration data are matched and merged, resulting in independent calibration constants for each memory cell in each pixel. Each memory cell is characterized by 8 parameters: 3 offsets (in ADU, one per state), 3 gain values (in ADU\/keV, one per state), and 2 thresholds (in ADU) for state discrimination. This totals more than 2.8 billion calibration parameters in a 1 million pixel detector system.\n\nFigure shows examples of intermediate results during the calibration procedure. Fig. a shows the histogram of 10,000 analog signal values measured for a single memory cell of a single pixel. The data were acquired using illumination with characteristic x-rays from a molybdenum foil. The absolute gain of this memory cell of this pixel is given by the distance between the peaks in the histogram (noise to single photon or photon to photon). The average gain value of a typical FEM is 7.7 ADU\/keV +- 3.4 %.\n\nThe integration time during data taking was increased to 50\u00a0s, a value significantly beyond the 130\u00a0ns typically used for experiments at the European XFEL. This was done to increase the number of detected photons per frame to at least 1000 and thereby reduce the statistical uncertainty of the fit. Due to the long integration time the noise of the system (width of the peaks) is much higher than for typical experimental conditions. The noise measurement of the high gain state is extracted from dark frames taken under experimental conditions (not shown) and typical values for the Equivalent Noise Charge (ENC) are 320 and 240 electrons in CDS gain low or high, respectively. For the other gain states the noise is higher, but still below the Poisson limit .\n\nThe internal current sources allow extrapolation of the absolute calibration of the high gain state to other states. The procedure using the internal current source utilizes injecting a constant current to the input of the pre-amplifier while sweeping the integration time.\n\nAn example of a current source scan is shown in Fig. b. From linear fits to the data the ratios of low, medium and high gain are determined[^2]. The high- and medium gain regime can also be scanned with a pulsed capacitor as shown in Fig. c.\n\nThe pulsed capacitor scans the dynamic range by gradually increasing the height of the applied voltage step while keeping the integration time constant. The encoded gain state values and the corresponding discrimination thresholds can be extracted from both current source and pulsed capacitor scans, and are shown by the green dots in Fig. b and c.\n\nRecording the required data to extract the calibration data typically takes 12-14 hours and occupies more than 20\u00a0TB of disk space. 
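Once extracted, these constants are applied per pixel and per memory cell to convert the raw values into photon energies; the sketch below illustrates this correction step. The array layout, the function name, and the sign convention of the encoded gain value are assumptions made for the example; only the ingredients (three offsets, three gains, and two thresholds per memory cell) come from the description above.

```python
# Illustrative per-pixel, per-memory-cell correction using the 8 calibration
# constants described in the text: 3 offsets [ADU], 3 gains [ADU/keV] and
# 2 thresholds [ADU] separating the encoded gain states.
# Data layout, names, and the sign convention of the encoded gain value are
# assumptions for this sketch.
import numpy as np

def correct_frame(raw_adu, gain_adu, cell, offsets, gains, thresholds):
    """Convert raw analog values (ADU) of one frame to energy (keV).

    raw_adu, gain_adu : 2-D arrays (pixels) of analog signal and encoded gain state
    cell              : index of the memory cell this frame was stored in
    offsets, gains    : arrays of shape (3, n_cells, ny, nx)
    thresholds        : array of shape (2, n_cells, ny, nx)
    """
    # Decide the gain state per pixel from the encoded gain value
    # (here assumed to increase when the pixel switches to a lower gain).
    state = np.zeros_like(raw_adu, dtype=np.uint8)   # 0 = high gain
    state[gain_adu > thresholds[0, cell]] = 1        # 1 = medium gain
    state[gain_adu > thresholds[1, cell]] = 2        # 2 = low gain

    off = np.choose(state, offsets[:, cell])
    slope = np.choose(state, gains[:, cell])
    return (raw_adu - off) / slope                   # energy in keV

# Size of the constant set for a 1-Mpixel detector with 352 memory cells:
print(8 * 352 * 1024 * 1024)   # ~2.95e9, i.e. "more than 2.8 billion"
```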
The largest contribution to this is the current source scan, which typically takes 10-11 hours and generates roughly half of the data volume. The smallest contribution to this has the x-ray illumination, which is the only time the detector must be illuminated during the calibration data taking process, with about 30 min duration and 0.4\u00a0TB of data volume.\n\nAnalysis of the data on the DESY MAXWELL cluster using routines that have been optimized for parallel data processing can be performed in less than a day.\n\nThe electrical calibration described so far is essential for every experiment. In addition to this, the mechanical calibration, which determines the position of each pixel in all three dimensions and especially in relation to the beam, is of great importance for many experiments as well.\n\nThe detector mechanics is designed and built with high precision, but the movability of the four quadrants requires a quick, robust and accurate method of calibration for the position and orientation of the FEMs at any time. On top of the uncertainties introduced by the movements, small displacements and tilts of the FEMs[^3] with respect to ideal positions are unavoidable as their manufacture involves gluing steps which are naturally limited in their positioning accuracy.\n\nA commonly used approach to calibrate the absolute position of each FEM in space is to take diffraction data of a well-known sample and fit the detector geometry .\n\nThe detector has just entered routine operation at the European XFEL and first crystal structures have been successfully determined . For these experiments the detector was calibrated using the procedures described here. However the procedures and recommended intervals are likely to evolve as the system gets used more often.\n\nCurrently, we recommend to take dark data for offset determination before and after each practical block of scientific data taking to account for any small drifts, e.g., of the temperature, that might occur during the scientific data taking process. Rechecking of the absolute gain and the gain ratios between the gain states using x-rays and the internal sources should be done periodically, as these might change over longer periods of time, e.g., due to accumulation of radiation damage.\n\nWhile the system has been designed to be radiation hard, as detailed in earlier sections, forecasting the point at which radiation damage effects do show up is currently not possible for us.\n\nFurther, we recommend an electrical recalibration of the detector every time there has been a beam damage incident and if modules were exchanged. Mechanical position calibration is recommended every time the detector position is changed unless knowledge of the exact module positions is not required for the experiment.\n\nFinding a quicker way to reliably determine high quality calibration constants is an ongoing development effort together with the detector group from the European XFEL.\n\n# Performance data and imaging example\n\nIn order to test the noise performance over the entire dynamic range a pulsed IR laser was used, where the deposited energy in a pixel was varied between 1 and $10^4$ 12.4\u00a0keV photon equivalents by reducing the intensity of the IR pulses with calibrated filters.\n\nThe noise of the system is higher in medium and low gain mode (approximately equivalent to 3.5\u00a0keV and 18\u00a0keV, respectively), but it was shown that it always remained significantly below the Poisson noise of the incoming signal . 
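A quick numerical illustration of this statement, using the noise-equivalent energies quoted above and arbitrarily chosen photon numbers:

```python
# Compare the detector noise (expressed in keV, values quoted in the text for
# the medium and low gain states) with the Poisson noise of the photon signal
# itself. The photon numbers are arbitrary illustrative choices.
import math

PHOTON_KEV = 12.4
for n_photons, detector_noise_kev in [(100, 3.5), (5000, 18.0)]:
    poisson_noise_kev = math.sqrt(n_photons) * PHOTON_KEV
    print(f"N = {n_photons:5d}: Poisson noise = {poisson_noise_kev:7.1f} keV, "
          f"electronic noise = {detector_noise_kev:5.1f} keV")
# Even at moderate signals the sqrt(N) photon shot noise exceeds the
# electronic noise by one to two orders of magnitude.
```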
The same IR laser was used to scan the dynamic range of the system, which was measured to be 34.4\u00a0x\u00a010$^6$ electrons, corresponding to approximately 10$^4$ photons of 12.4\u00a0keV. The non-linearity of the low gain state proved to be better than 0.5% up to 5\u00a0x\u00a010$^3$ photons.\n\nFig. a shows the x-ray image of a printed circuit board taken with the setup shown in Fig. b. The x-ray image shown is the average of 10,000 individual images.\n\nEach individual image is corrected for pixel offset and gain on a per-pixel basis. These corrections compensate fixed-pattern 'zero-level' variations and pixel-to-pixel sensitivity variations that are commonly caused by process variations during the production of the ASIC, but can have many causes.\n\nSome artifacts remain after the correction. Especially in the medium-intensity region, vertical stripes can be observed. These originate in the double-sized pixels (400\u00a0\u00b5m\u00a0x\u00a0200\u00a0\u00b5m) between ASICs. These pixels collect, on average, twice the amount of photons, hence they appear brighter.\n\nFig. c shows the x-ray image of a pen drive. For this image 30,000 individual images were averaged.\n\nEach image was offset and gain corrected as explained above; in addition, a flat field correction was applied. The flat field correction accounts for the effective size of the pixels, removing the effect of the double-sized pixels, and removes artifacts from non-uniform illumination. Compensating the double-sized pixels with a flat field correction is neither the only nor necessarily the best approach to correct for these pixels[^4]. The image not only shows the structure of the plastic cover of the pen drive, but also the sticky tape which was used to hold the pen drive to the acrylic glass in front of the FEM, demonstrating the high sensitivity of the system.\n\nLastly, the results of first user experiments show that the detector is capable of determining protein crystal structures for both known and previously unknown proteins. Of course this success is possible in combination with all the other infrastructure of the beamline at the European XFEL.\n\n# Summary\n\nThe adaptive gain integrating pixel detector, AGIPD, is an x-ray camera developed for use at the European XFEL. It was officially inaugurated together with the European XFEL in August 2017.\n\nThis paper reviewed the complete system of the AGIPD 1 million pixel camera currently installed at the SPB beamline of the European XFEL.\n\nThe system has a complex mechanical mounting that includes an in-vacuum movement system and many electronic boards, some of which are inside the vacuum vessel, some outside in an external housing. Its four independently movable quadrants can be arranged to form a horizontal slit or a rectangular hole with user-selectable size.\n\nThe system is built from monolithic blocks of 2\u00a0x\u00a08 ASICs forming a matrix of 128\u00a0x\u00a0512 pixels of (200\u00a0\u00b5m)$^2$ size[^5]. Each pixel automatically adjusts to the incoming signal such that it can detect any number of photons from single photons to 10$^4$ photons of 12.4\u00a0keV above its noise floor. 
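As a consistency check of these summary numbers, the collected charge corresponding to 10^4 photons of 12.4 keV can be estimated from the commonly used value of roughly 3.65 eV per electron-hole pair in silicon (a textbook value, not taken from this paper):

```python
# Rough consistency check: 1e4 photons of 12.4 keV expressed as collected charge,
# assuming ~3.65 eV per electron-hole pair in silicon (textbook value).
n_photons = 1e4
pair_creation_ev = 3.65
electrons = n_photons * 12.4e3 / pair_creation_ev
print(f"{electrons:.2e} electrons")   # ~3.4e7, consistent with the dynamic
                                      # range of 34.4e6 electrons quoted above
```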
Images are stored in one of 352 memory cells during the XFEL pulse train and read out in between trains.\n\nThe detector noise is approximately 1\u00a0keV (750\u00a0eV for high CDS gain), which is sufficient to detect single photons in many experiments and most of the early experiments at the SPB beamline have used the AGIPD with great success .\n\nThe MID beamline of the European XFEL is scheduled to have a similar system installed in 2018.\n\nThe authors are deeply indebted to the seemingly countless number of people that have contributed to this project. Beyond the former team members that are not included on the author list these are the people 'behind the scenes' at DESY, PSI, Hamburg University, Bonn University and the European XFEL, who, with their tireless efforts, created a joint environment that made the development of this camera possible.\n\nWe would like to explicitly thank the European XFEL detector group and the involved XFEL groups CAS and ITDM. Integrating a system as complex as the AGIPD into the environment at the European XFEL was an enormous undertaking that required a huge collaborative effort and was successfully done in time to operate the detector at the inauguration of the European XFEL facility.\n\nWe are thankful for the help from many of the DESY infrastructure groups (esp. ZM1, ZM2, ZM3, ZE, FEA and FEB) during the development of the AGIPD.\n\nWe tested many prototypes, sometimes with help from external facilities. We would like to thank all of the people involved in these tests, which were crucial to the development process.\n\nLast, but not least, we are very thankful for the valuable discussions with and the proof-reading of this manuscript by David Pennicard.\n\n[^1]: The vacuum backplane forms a vacuum interface.\n\n[^2]: The deviations from the ideal behavior at the beginning and end of each gain state are excluded from this fit.\n\n[^3]: Since each FEM uses a monolithic silicon sensor which is defined by photolithography to precisions much better than 1\u00a0m the displacements of the pixels in each module w.r.t. their ideal positions in the pixel matrix is negligible compared to the other displacements described here.\n\n[^4]: For most experiments at the European XFEL these pixels are currently excluded from the data analysis.\n\n[^5]: If the double sized pixels are logically split the matrix is 128\u00a0x\u00a0526 pixels.","meta":{"dup_signals":{"dup_doc_count":11,"dup_dump_count":3,"dup_details":{"curated_sources":1,"2024-26":2,"unknown":8}},"filename":"out\/1808.00256_extract_manuscript.tex.md"},"subset":"arxiv"} +{"text":"abstract: Recently there has been tremendous increase in the number of identified extra-solar planetary systems. Our understanding of their formation is tied to exoplanet internal structure models, which rely upon equations of state of light elements and compounds like water. Here we present shock compression data for water with unprecedented accuracy that shows water equations of state commonly used in planetary modeling significantly overestimate the compressibility at conditions relevant to planetary interiors. Furthermore, we show its behavior at these conditions, including reflectivity and isentropic response, is well described by a recent first-principles based equation of state. These findings advocate this water model be used as the standard for modeling Neptune, Uranus, and \"hot Neptune\" exoplanets, and should improve our understanding of these types of planets.\nauthor: M.D. Knudson; M.P.\u00a0Desjarlais; R.W. 
Lemke; T.R. Mattsson; M. French; N. Nettelmann; R. Redmer\ndate: 2024-10-01\ntitle: Probing the interiors of the ice giants: Shock compression of water to 700 GPa and 3.8 g\/cm$^3$\n\nThe past several years have seen a virtual explosion in the number of extra-solar planets discovered. Two rapidly growing populations of exoplanets are ice giants referred to as \"hot Neptunes\" and \"mini-Neptunes\";\u00a0 planets roughly the same size as or, respectively, smaller than Neptune and Uranus that transit their host stars at significantly smaller radii, resulting in higher temperatures than the ice giants in our solar system. Understanding of the composition and formation of these planets, and thus development of these planetary systems, relies on our knowledge of the equation of state (EOS) of light elements and compounds like water, over a wide pressure and temperature range. To date much of the modeling of ice giants has employed the ANEOS\u00a0 and Sesame\u00a0 models for water that were developed decades ago\u00a0. Discrepancies between these EOS models lead to significant differences in predicted radius evolution of Neptune-mass planets. Depending upon the total amount of heavy elements, and their distribution within the planetary interior, the resulting variation in predicted radius at a given age due to the water EOS can range between 5 and 30% \u00a0. This is a major factor in preventing accurate determination of exoplanet internal composition from their observed radius.\n\nRecent quantum molecular dynamics (QMD) calculations of water\u00a0 suggest an EOS that differs significantly from ANEOS and Sesame. Notably, when incorporated into planetary models, this first-principles (FP) based EOS predicts a $\\sim$``{=html}20% cooler core temperature for Neptune and Uranus\u00a0. The conductivity properties of this FP model are also noteworthy\u00a0, suggesting that water is super-ionic\u00a0 at high densities, $\\rho$, and low temperatures, $T$, relevant to planets such as Uranus and Neptune. This predicted property plays a key role in dynamo models to explain the enigmatic magnetic field structure of these planets\u00a0. Another important result is derived from the predicted phase diagram of water: the icy giants Uranus and Neptune perhaps contain no \"ice\" but dissociated water at a high ionic conductivity, even less so would close-in exoplanets. Hot and mini-Neptunes may even comprise water plasma with substantial electronic conduction. However, the FP EOS for water has not been widely accepted due to its inability to reproduce results from laser driven shock wave experiments in the Mbar regime\u00a0.\n\nWe present results of magnetically accelerated flyer-plate experiments on water performed at the Sandia Z machine\u00a0, a pulsed power accelerator capable of producing extremely large current ($\\sim$``{=html}20 MA) and magnetic field densities ($\\sim$``{=html}10 MG) within a short circuit load. These data, in the range of 100-450 GPa along the Hugoniot \u2013 the locus of end states achievable through compression by large amplitude shock waves \u2013 have considerably higher precision than data obtained with previously used methods, and support the FP EOS for water. The high precision stems from the ability to perform well-defined flyer-plate experiments on Z; the magnetic pressure (\\>500 GPa) can propel the outer anode to velocities approaching 30 km\/s, enabling high-precision, plate-impact EOS measurements in the TPa regime\u00a0. 
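For orientation, the magnetic drive pressure follows directly from the field strength via P = B^2/(2 mu_0); the field values in the sketch below are illustrative round numbers of the order quoted above (about 10 MG), not measured quantities.

```python
# Magnetic pressure P = B^2 / (2*mu0) for fields of the order quoted for the
# Z machine (~10 MG = 1000 T). The field values are illustrative round numbers.
import math

MU0 = 4.0e-7 * math.pi  # vacuum permeability [T*m/A]

for b_tesla in (1000.0, 1200.0):
    p_gpa = b_tesla**2 / (2.0 * MU0) / 1e9
    print(f"B = {b_tesla:6.0f} T ({b_tesla/100:.0f} MG): P = {p_gpa:5.0f} GPa")
# ~400 GPa at 10 MG and ~570 GPa at 12 MG, i.e. the several-hundred-GPa
# pressures needed to launch the flyer plates.
```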
Furthermore, and more significantly, the present work obtained re-shock data of water in the range of 200-700 GPa. These data, at high $\\rho$ and low $T$, provide a stringent test of the isentropic response of water in the several Mbar regime, which is directly relevant to the conditions of interest for planetary modeling of Neptune, Uranus\u00a0, and presumably water-rich exoplanets such as the hot Neptune GJ436b\u00a0. Finally, reflectivity on the Hugoniot was measured and compared to FP calculations for water\u00a0.\n\nAn aluminum flyer-plate\u00a0 was magnetically accelerated to peak velocities of 12-27 km\/s across a 3-4 mm vacuum gap\u00a0. The flyer-plate velocity was monitored throughout the entire trajectory using a Velocity Interferometer System for Any Reflector (VISAR\u00a0), at locations above and below an aluminum water cell\u00a0. A rear quartz window in the cell provided optical access to the sample. In some cases an additional quartz plate was placed between the aluminum drive plate and the water sample, enabling data to be obtained using two different materials as the high-pressure standard, thereby increasing confidence in the measurements. Impact with the cell generated a strong, multi-Mbar shock wave in the aluminum drive plate. This shock was then transmitted either directly into the water sample, or into a quartz plate and then into the water sample. Upon reaching the rear quartz window, the shock was transmitted into the window and reflected back into the water,which re-shocked the water to a higher $P$ and $\\rho$. In all cases the shock waves in the water and quartz were of sufficient amplitude that the resulting shocked material was reflecting\u00a0, enabling the shock velocities to be directly measured using the VISAR. A total of 18 diagnostic channels were utilized for each experiment, enabling multiple, redundant measurements to be made, resulting in an overall uncertainty in the measured flyer-plate and shock velocities of a few tenths of a percent\u00a0.\n\nThe shocked state of the water was determined using the impedance matching technique and the Rankine-Hugoniot (RH) jump relations\u00a0, a set of conditions derived by considering conservation of mass, momentum, and energy across a steady propagating shock wave. The shocked state of the aluminum (quartz) drive plate was determined from the known Hugoniot of aluminum\u00a0 (quartz\u00a0) and the measured flyer-plate (quartz shock) velocity; this defined a point in the pressure - particle velocity ($P-u_p$) plane, as shown in Fig.\u00a0. When the shock transits into the water, a release wave propagates back toward the flyer-plate, and thus the state of the drive plate is constrained to lie on a release adiabat from this point in the $P-u_p$ plane, shown in Fig.\u00a0 as the green line. The shocked state of the water is constrained to lie along a chord in the $P-u_p$ plane with slope given by the product of the measured shock velocity of water, $U_{sw}$, and the known initial density. The intersection of these two curves provides $P$ and $u_p$, shown in Fig.\u00a0 as $(P_1,u_{p1})$; The RH jump relations then provide $\\rho$ in the shocked state. Uncertainties in all kinematic values were determined through a Monte Carlo technique, which uses a statistical process for propagation of all random measurement errors and systematic errors in the standards\u00a0. 
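The essence of this analysis can be illustrated with a strongly simplified sketch: a symmetric aluminum-on-aluminum impact, the aluminum release adiabat approximated by its Hugoniot reflected about the first-shock state, and a Monte Carlo loop over assumed velocity uncertainties. The Hugoniot parameters and input velocities below are representative round numbers for illustration only; the actual analysis uses a full release model and the tabulated standards cited in the text.

```python
# Simplified impedance-matching sketch for a symmetric Al flyer / Al drive-plate
# impact into water, followed by a Monte Carlo propagation of measurement
# uncertainties. The aluminium release is approximated by the Hugoniot
# reflected about the first-shock state; parameters are representative
# literature-style round numbers, not the values used in the experiments.
import numpy as np

RHO0_AL = 2.703            # g/cm^3
C0_AL, S_AL = 5.35, 1.34   # linear Us-up Hugoniot of aluminium (km/s, dimensionless)
RHO0_W = 0.998             # g/cm^3, ambient water

def p_hugoniot_al(up):
    """Pressure (GPa) on the Al principal Hugoniot at particle velocity up (km/s)."""
    return RHO0_AL * (C0_AL + S_AL * up) * up

def water_state(v_flyer, us_water):
    """Impedance-match a symmetric Al impact into water (reflected-Hugoniot release)."""
    up1 = 0.5 * v_flyer                       # first-shock particle velocity in Al
    # Find up where the reflected Al Hugoniot meets the water chord P = rho0*Us*up;
    # the intersection lies between up1 and 2*up1 for a lower-impedance sample.
    lo, hi = up1, 2.0 * up1
    for _ in range(60):                        # simple bisection
        mid = 0.5 * (lo + hi)
        release = p_hugoniot_al(2.0 * up1 - mid)
        chord = RHO0_W * us_water * mid
        if chord < release:
            lo = mid
        else:
            hi = mid
    up = 0.5 * (lo + hi)
    p = RHO0_W * us_water * up                 # GPa
    rho = RHO0_W * us_water / (us_water - up)  # g/cm^3
    return p, rho

# Nominal inputs (illustrative): 20 km/s flyer, 22 km/s shock velocity in water.
print(water_state(20.0, 22.0))

# Monte Carlo propagation of 0.3% velocity uncertainties (illustrative values).
rng = np.random.default_rng(0)
samples = np.array([water_state(rng.normal(20.0, 0.06), rng.normal(22.0, 0.066))
                    for _ in range(2000)])
print("P   = %.1f +/- %.1f GPa" % (samples[:, 0].mean(), samples[:, 0].std()))
print("rho = %.3f +/- %.3f g/cm^3" % (samples[:, 1].mean(), samples[:, 1].std()))
```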
Using this technique, the one-sigma uncertainties in $P$ and $\\rho$ were found to be \u00a00.5% and \u00a01%, respectively.\n\nA total of 8 Hugoniot experiments were performed over the range of 100 to 450 GPa. Results of these experiments are shown as the red symbols in Fig.\u00a0(a). Also shown are Hugoniot data of Mitchell and Nellis\u00a0, Volkov *et al.*\u00a0, Celliers *et al.*\u00a0, and Podurets *et al.*\u00a0, and the predicted Hugoniot response from ANEOS\u00a0, Sesame 7150\u00a0, and the recent FP EOS model of French *et al.*\u00a0. Note that a reanalysis of the nuclear driven datum of Podurets *et al.*, using an improved aluminum standard for impedance matching\u00a0, resulted in a slight decrease in $\\rho$. The low-$P$ end of our data is in good agreement with the gas gun data of Mitchell and Nellis and the explosively driven shock data of Volkov *et al.* In contrast, our data are significantly less compressible than the laser driven data of Celliers *et al.*, which tend to support the much more compressible ANEOS and Sesame Hugoniots, albeit with significantly large uncertainty and scatter. The vastly reduced uncertainty in $\\rho$ for our data, roughly an order of magnitude, strongly suggest that water is much less compressible than the ANEOS and Sesame models predict, and that water is instead very accurately described by the FP EOS of French *et al.* Furthermore, the reanalyzed Podurets *et al.* datum is also in very good agreement with the FP EOS. Thus, with the exception of the Celliers *et al.* data, the FP based model for water matches all experimental Hugoniot data up to \u00a01.4 TPa.\n\nIn all 8 of the Hugoniot experiments described above, the reflected shock from the rear quartz window drove the water from the Hugoniot state to a re-shocked state at higher $P$ and $\\rho$. The measured shock velocity in the water immediately prior to reflection from the rear quartz window defined the initial shocked state of the water. The measured shock velocity in the rear quartz window and the known Hugoniot of quartz provided the double-shocked $P$ and $u_p$ for water, shown in Fig.\u00a0 as $(P_2,u_{p2})$. The velocity of the second shock in the water, $U_{sw2}$, was then determined by the RH jump relations using the change in $P$, $(P_2-P_1 )$, and $u_p$, $(u_{p2}-u_{p1} )$. The re-shock $\\rho$ was then determined from $U_{sw2}$, the first shock $\\rho$, and $(u_{p2}-u_{p1})$. Using the Monte Carlo technique, the one-sigma uncertainties in $P$ and $\\rho$ for the re-shock states were found to be \u00a00.5-1% and \u00a01-2%, respectively. Although the uncertainty for the re-shock data is larger than that for the principal Hugoniot data (entirely due to the larger uncertainty in the initial state), the accuracy of the present data is a significant improvement over previous re-shock data of Mitchell and Nellis\u00a0 (uncertainty in $\\rho$ of 4-14%) and the pre-compressed Hugoniot data of Lee *et al.*\u00a0, (uncertainty in $\\rho$ of 5-10%).\n\nThe re-shock data for water are shown in Fig.\u00a0(b), where first and second shock states are correlated by like symbols. Also shown are several FP re-shock Hugoniots (thin red lines) and isentropes (thin black lines) for comparison\u00a0. These re-shock Hugoniots along with the known Hugoniot of quartz\u00a0 were used to determine the double-shock envelopes \u2013- the locus of end states achievable through shock and re-shock using a quartz anvil: FP (orange line), ANEOS\u00a0 (pink line), and Sesame\u00a0 (gray line). 
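The determination of the second-shock state from the measured quantities can be summarized compactly. The sketch below applies the jump conditions in the rest frame of the singly-shocked water, so the shock speed it returns is measured relative to that material rather than in the laboratory frame; all numerical inputs are invented round numbers.

```python
# Re-shock state from the Rankine-Hugoniot jump conditions, written in the rest
# frame of the singly-shocked water: the second shock runs into material of
# density rho1 at pressure P1 and changes its particle velocity by du = up1 - up2.
#   Momentum: P2 - P1 = rho1 * D * du        Mass: rho2 = rho1 * D / (D - du)
# (D is the second-shock speed relative to the singly-shocked water.)
# Units: GPa, g/cm^3 and km/s are mutually consistent (GPa = g/cm^3 * (km/s)^2).
# The input numbers are illustrative, not measured values.

def reshock_state(p1_gpa, rho1, up1, p2_gpa, up2):
    du = up1 - up2                       # km/s, change in particle velocity
    d = (p2_gpa - p1_gpa) / (rho1 * du)  # km/s, shock speed in the pre-shocked frame
    rho2 = rho1 * d / (d - du)           # g/cm^3, density behind the second shock
    return d, rho2

# Example: a first-shock state (200 GPa, 2.9 g/cm^3, up1 = 10 km/s) re-shocked by
# the quartz window to (350 GPa, up2 = 8 km/s) -- purely illustrative numbers.
print(reshock_state(200.0, 2.9, 10.0, 350.0, 8.0))
```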
These re-shock data further confirm the less compressible response of water above \u00a0100 GPa.\n\nNote that the FP re-shock Hugoniots (red) and isentropes (black) are nearly coincident over the $\\rho$ range accessed through the re-shock experiments. This is due to a second order contact for the Hugoniot and isentrope at the initial state\u00a0, which is most easily seen by expanding the entropy as a function of volume in a Taylor series. This implies that the Hugoniot and isentrope are very close in $P$ and $\\rho$ until, at large compression, the rise in $T$ associated with the irreversible shock becomes large enough that thermal pressures become significant. In the range investigated in this study, the difference in $T$ between the re-shock Hugoniot states and the isentrope at the re-shock $\\rho$, as determined by the FP EOS\u00a0, ranged from 200K (out of 6800K) to 330K (out of 40000K) at the lowest and highest $P$, respectively. This makes such a re-shock measurement the best possible test of the isentropic response of the EOS model in this range of $P$ and $\\rho$. Thus the present data validates the isentropic response of the FP EOS in the $P$ and $\\rho$ regime that is intersected by the water-rich models of Neptune and Uranus\u00a0, shown in green, and the exoplanet GJ436b\u00a0, shown in blue.\n\nThe VISAR was also used to infer reflectivity, $R$, of water (at 532 nm) along the Hugoniot. A quadrature VISAR was used for all experiments, which provides four measures of the interference signal at $90\\,^{\\circ}$ intervals. The signals at $180\\,^{\\circ}$ intervals can be subtracted, ensuring the remaining signal only includes coherent reflected laser light (incoherent light, such as self-emission from the hot plasma, would equally contribute to all four quadrature signals). Comparison of the magnitude of these subtracted signals before and after shock breakout from the water to the quartz rear window provides a relative measure of the shocked water $R$ with respect to shocked quartz\u00a0. The uncertainty in $R$ was taken to be the linear sum of the standard deviation of the inferred $R$ from the nine independent VISAR signals obtained from each water cell and the reported uncertainty in $R$ of shocked quartz\u00a0.\n\n$R$ data along the Hugoniot are shown in Fig.\u00a0. Also shown are data from Celliers *et al.*\u00a0 and the predicted $R$ from FP calculations of French and Redmer\u00a0 using both the Perdew, Burke, and Ernzerhof (PBE) and Heyd, Scuseria, and Ernzerhof (HSE) functionals for exchange and correlation. It was anticipated that the HSE functional, which includes the nonlocal Fock exchange, would prove to be more accurate in the calculation of $R$, as this functional has been shown to better reproduce the band gap in semiconductor materials (PBE is known to significantly underestimate the band gap). In comparison with $R$ data of Celliers *et al.*\u00a0 it would appear that the HSE calculations are less accurate. However, our data suggest a much lower peak $R$, which is in significantly better agreement with the HSE calculations. We note that two recent data points ($\\sim$``{=html}140 and 260 GPa) from a group\u00a0 at the Gekko laser in Japan also suggest lower $R$, in very good agreement with our results. These new results lend confidence to the FP calculations, which also predict a super-ionic phase of water at low $T$ and high $\\rho$ conditions relevant to planetary interiors. 
Furthermore, these results strongly suggest that at these conditions water is in a plasma phase, which would imply that a $T$=0 K EOS for water is not sufficient for modeling of hot and mini-Neptunes, and that water would be expected to mix in the H\/He envelope rather than form an ice shell separate from an outer H\/He envelope.\n\nWe presented data with unprecedented accuracy for shock compression of water to 0.7 TPa and 3.8 g\/cc in a regime relevant to water-rich models of Uranus, Neptune and the exoplanet GJ436b. The experimental $P$, $\\rho$, and $R$ are in excellent agreement with density functional theory predictions, thereby validating first-principles thermodynamic calculations as a sound basis for planetary modeling, and strongly advocating the FP EOS be the standard in modeling water in Neptune, Uranus, and \"hot Neptune\" exoplanets. In particular this work supports the prediction of a $\\sim$``{=html}20% cooler core temperature for Neptune and Uranus\u00a0. As the calculated amount of H and He in the planets decreases with the stiffness of the water EOS, confidence in the presence of a few percent H and He in the deep interior of Neptune and Uranus, as derived from the (rather stiff) FP EOS based models \u00a0, is strengthened by this work. As H would be metallic, this might influence the generation of the magnetic field. Furthermore, the validation of the FP EOS in the regime relevant to planetary interiors all but eliminates one significant source of uncertainty in the predicted radius evolution of Neptune-mass planets within assumed composition models. This will improve our understanding of the interior structure of these planets, and perhaps our understanding of these planetary systems.\n\nWe acknowledge the crew of the Sandia Z facility for their contributions to these experiments, AB and MB for assistance in numerical calculations, and support from the DFG via the SFB 652 and the grant Re 882\/11-1. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.","meta":{"dup_signals":{"dup_doc_count":13,"dup_dump_count":13,"dup_details":{"curated_sources":1,"2016-44":1,"2016-40":1,"2016-36":1,"2016-30":1,"2016-26":1,"2016-22":1,"2016-18":1,"2016-07":1,"2015-48":1,"2015-40":1,"2015-32":1,"2016-50":1}},"filename":"out\/1201.2622_extract_article.tex.md"},"subset":"arxiv"} +{"text":"abstract: A capsule is a group of neurons whose activity vector represents the instantiation parameters of a specific type of entity such as an object or an object part. We use the length of the activity vector to represent the probability that the entity exists and its orientation to represent the instantiation parameters. Active capsules at one level make predictions, via transformation matrices, for the instantiation parameters of higher-level capsules. When multiple predictions agree, a higher level capsule becomes active. We show that a discrimininatively trained, multi-layer capsule system achieves state-of-the-art performance on MNIST and is considerably better than a convolutional net at recognizing highly overlapping digits. 
To achieve these results we use an iterative routing-by-agreement mechanism: A lower-level capsule prefers to send its output to higher level capsules whose activity vectors have a big scalar product with the prediction coming from the lower-level capsule.\nauthor: Sara Sabour \nNicholas Frosst \nGeoffrey E. Hinton \nGoogle Brain \nToronto \n`{sasabour, frosst, email@example.com` \nbibliography: nips.bib\ndate: May 2017\ntitle: Dynamic Routing Between Capsules\n\n# Introduction\n\nHuman vision ignores irrelevant details by using a carefully determined sequence of fixation points to ensure that only a tiny fraction of the optic array is ever processed at the highest resolution. Introspection is a poor guide to understanding how much of our knowledge of a scene comes from the sequence of fixations and how much we glean from a single fixation, but in this paper we will assume that a single fixation gives us much more than just a single identified object and its properties. We assume that our multi-layer visual system creates a parse tree-like structure on each fixation, and we ignore the issue of how these single-fixation parse trees are coordinated over multiple fixations.\n\nParse trees are generally constructed on the fly by dynamically allocating memory. Following , however, we shall assume that, for a single fixation, a parse tree is carved out of a fixed multilayer neural network like a sculpture is carved from a rock. Each layer will be divided into many small groups of neurons called \"capsules\" () and each node in the parse tree will correspond to an active capsule. Using an iterative routing process, each active capsule will choose a capsule in the layer above to be its parent in the tree. For the higher levels of a visual system, this iterative process will be solving the problem of assigning parts to wholes.\n\nThe activities of the neurons within an active capsule represent the various properties of a particular entity that is present in the image. These properties can include many different types of instantiation parameter such as pose (position, size, orientation), deformation, velocity, albedo, hue, texture, etc. One very special property is the existence of the instantiated entity in the image. An obvious way to represent existence is by using a separate logistic unit whose output is the probability that the entity exists. In this paper we explore an interesting alternative which is to use the overall length of the vector of instantiation parameters to represent the existence of the entity and to force the orientation of the vector to represent the properties of the entity[^1]. We ensure that the length of the vector output of a capsule cannot exceed $1$ by applying a non-linearity that leaves the orientation of the vector unchanged but scales down its magnitude.\n\nThe fact that the output of a capsule is a vector makes it possible to use a powerful dynamic routing mechanism to ensure that the output of the capsule gets sent to an appropriate parent in the layer above. Initially, the output is routed to all possible parents but is scaled down by coupling coefficients that sum to $1$. For each possible parent, the capsule computes a \"prediction vector\" by multiplying its own output by a weight matrix. If this prediction vector has a large scalar product with the output of a possible parent, there is top-down feedback which increases the coupling coefficient for that parent and decreasing it for other parents. 
This increases the contribution that the capsule makes to that parent thus further increasing the scalar product of the capsule's prediction with the parent's output. This type of \"routing-by-agreement\" should be far more effective than the very primitive form of routing implemented by max-pooling, which allows neurons in one layer to ignore all but the most active feature detector in a local pool in the layer below. We demonstrate that our dynamic routing mechanism is an effective way to implement the \"explaining away\" that is needed for segmenting highly overlapping objects.\n\nConvolutional neural networks (CNNs) use translated replicas of learned feature detectors. This allows them to translate knowledge about good weight values acquired at one position in an image to other positions. This has proven extremely helpful in image interpretation. Even though we are replacing the scalar-output feature detectors of CNNs with vector-output capsules and max-pooling with routing-by-agreement, we would still like to replicate learned knowledge across space. To achieve this, we make all but the last layer of capsules be convolutional. As with CNNs, we make higher-level capsules cover larger regions of the image. Unlike max-pooling however, we do not throw away information about the precise position of the entity within the region. For low level capsules, location information is \"place-coded\" by which capsule is active. As we ascend the hierarchy, more and more of the positional information is \"rate-coded\" in the real-valued components of the output vector of a capsule. This shift from place-coding to rate-coding combined with the fact that higher-level capsules represent more complex entities with more degrees of freedom suggests that the dimensionality of capsules should increase as we ascend the hierarchy.\n\n# How the vector inputs and outputs of a capsule are computed\n\nThere are many possible ways to implement the general idea of capsules. The aim of this paper is not to explore this whole space but simply to show that one fairly straightforward implementation works well and that dynamic routing helps.\n\nWe want the length of the output vector of a capsule to represent the probability that the entity represented by the capsule is present in the current input. We therefore use a non-linear **\"squashing\"** function to ensure that short vectors get shrunk to almost zero length and long vectors get shrunk to a length slightly below $1$. We leave it to discriminative learning to make good use of this non-linearity. 
$${\\bf v}_j = \\frac{||{\\bf s}_j||^2}{1+||{\\bf s}_j||^2} \\frac{{\\bf s}_j}{||{\\bf s}_j||}\n\\label{squash}$$ where ${\\bf v}_j$ is the vector output of capsule $j$ and ${\\mathbf{s}}_j$ is its total input.\n\nFor all but the first layer of capsules, the total input to a capsule ${\\bf s}_j$ is a weighted sum over all \"prediction vectors\" ${\\bf \\hat{u}}_{j|i}$ from the capsules in the layer below and is produced by multiplying the output ${\\bf u}_i$ of a capsule in the layer below by a weight matrix ${\\bf W}_{ij}$ $${\\bf s}_j = \\sum_i c_{ij} {\\bf \\hat{u}}_{j|i} \\ , \\ \\ \\ \\ \\ \\ \\ \n{\\bf \\hat{u}}_{j|i} = {\\bf W}_{ij}{\\bf u}_i$$where the $c_{ij}$ are coupling coefficients that are determined by the iterative dynamic routing process.\n\nThe coupling coefficients between capsule $i$ and all the capsules in the layer above sum to $1$ and are determined by a \"routing softmax\" whose initial logits $b_{ij}$ are the log prior probabilities that capsule\u00a0$i$ should be coupled to capsule\u00a0$j$. $$c_{ij} = \\frac{\\exp(b_{ij})}{\\sum_k \\exp(b_{ik})}\n\\label{softmax}$$ The log priors can be learned discriminatively at the same time as all the other weights. They depend on the location and type of the two capsules but not on the current input image[^2]. The initial coupling coefficients are then iteratively refined by measuring the agreement between the current output ${\\bf v}_j$ of each capsule, $j$, in the layer above and the prediction ${\\bf \\hat{u}}_{j|i}$ made by capsule $i$.\n\nThe agreement is simply the scalar product $a_{ij} = {\\bf v}_j . {\\bf \\hat{ u}}_{j|i}$. This agreement is treated as if it was a log likelihood and is added to the initial logit, $b_{ij}$ before computing the new values for all the coupling coefficients linking capsule\u00a0$i$ to higher level capsules.\n\nIn convolutional capsule layers, each capsule outputs a local grid of vectors to each type of capsule in the layer above using different transformation matrices for each member of the grid as well as for each type of capsule.\n\n# Margin loss for digit existence\n\nWe are using the length of the instantiation vector to represent the probability that a capsule's entity exists. We would like the top-level capsule for digit class $k$ to have a long instantiation vector if and only if that digit is present in the image. To allow for multiple digits, we use a separate margin loss, $L_k$ for each digit capsule, $k$: $$L_k = T_k \\ \\max(0, m^{+} - ||{\\bf v}_k||)^2 + \\lambda \\ (1-T_k) \\ \\max(0, ||{\\bf v}_k|| - m^{-})^2\n\\label{digit-loss}$$ where $T_k=1$ iff a digit of class $k$ is present[^3] and $m^{+} = 0.9$ and $m^{-} = 0.1$. The $\\lambda$ down-weighting of the loss for absent digit classes stops the initial learning from shrinking the lengths of the activity vectors of all the digit capsules. We use $\\lambda=0.5$. The total loss is simply the sum of the losses of all digit capsules.\n\n# CapsNet architecture\n\nA simple CapsNet architecture is shown in Fig.\u00a0. The architecture is shallow with only two convolutional layers and one fully connected layer. Conv$1$ has $256$, $9 \\times 9$ convolution kernels with a stride of 1 and ReLU activation. 
This layer converts pixel intensities to the activities of local feature detectors that are then used as inputs to the *primary* capsules.\n\nThe primary capsules are the lowest level of multi-dimensional entities and, from an inverse graphics perspective, activating the primary capsules corresponds to inverting the rendering process. This is a very different type of computation than piecing instantiated parts together to make familiar wholes, which is what capsules are designed to be good at.\n\nThe second layer (PrimaryCapsules) is a convolutional capsule layer with $32$ channels of convolutional $8$D capsules (*i.e.* each primary capsule contains 8 convolutional units with a $9 \\times 9$ kernel and a stride of 2). Each primary capsule output sees the outputs of all $256 \\times 81$ Conv$1$ units whose receptive fields overlap with the location of the center of the capsule. In total PrimaryCapsules has $[32 \\times 6 \\times 6]$ capsule outputs (each output is an $8$D vector) and each capsule in the $[6 \\times 6]$ grid is sharing their weights with each other. One can see PrimaryCapsules as a Convolution layer with Eq.\u00a0 as its block non-linearity. The final Layer (DigitCaps) has one $16$D capsule per digit class and each of these capsules receives input from all the capsules in the layer below.\n\nWe have routing only between two consecutive capsule layers (e.g. PrimaryCapsules and DigitCaps). Since Conv$1$ output is $1$D, there is no orientation in its space to agree on. Therefore, no routing is used between Conv$1$ and PrimaryCapsules. All the routing logits ($b_{ij}$) are initialized to zero. Therefore, initially a capsule output (${\\bf u}_i$) is sent to all parent capsules (${\\bf v}_0...{\\bf v}_{9}$) with equal probability ($c_{ij}$). \nOur implementation is in TensorFlow () and we use the Adam optimizer () with its TensorFlow default parameters, including the exponentially decaying learning rate, to minimize the sum of the margin losses in Eq. .\n\n## Reconstruction as a regularization method\n\n```latex\n\\begin{tabular}{>{\\centering\\arraybackslash} m{1cm} | >{\\centering\\arraybackslash} m{1.5cm} | >{\\centering\\arraybackslash} m{1.5cm} | >{\\centering\\arraybackslash} m{1.5cm} | >{\\centering\\arraybackslash}m{1.5cm} ? >{\\centering\\arraybackslash} m{1.5cm} | >{\\centering\\arraybackslash} m{1.5cm}}\n$(l,p,r)$ & $(2,2,2)$ & $(5,5,5)$ & $(8,8,8)$ & $(9,9,9)$ & $(5,3,5)$ & $(5,3,3)$\\\\ \\hline\n\\pbox{1cm}{Input \\\\ \\\\ \\\\ \\\\ Output}&\n%input &\n\\includegraphics[height=3cm]{recons\/2_258} &\n\\includegraphics[height=3cm]{recons\/5_153} &\n\\includegraphics[height=3cm]{recons\/8_226} &\n\\includegraphics[height=3cm]{recons\/9_125} &\n\\includegraphics[height=3cm]{recons\/5_2035} &\n\\includegraphics[height=3cm]{recons\/5_2035p}\n\\end{tabular}\n```\n\nWe use an additional reconstruction loss to encourage the digit capsules to encode the instantiation parameters of the input digit. During training, we mask out all but the activity vector of the correct digit capsule. Then we use this activity vector to reconstruct the input image. The output of the digit capsule is fed into a decoder consisting of $3$ fully connected layers that model the pixel intensities as described in Fig.\u00a0. We minimize the sum of squared differences between the outputs of the logistic units and the pixel intensities. We scale down this reconstruction loss by $0.0005$ so that it does not dominate the margin loss during training. 
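The margin loss of Eq. (digit-loss) and the down-weighted reconstruction term can be written compactly as follows. This is a minimal NumPy sketch: the constants are those given above, while the batch-mean reduction is an assumption of the sketch.

```python
import numpy as np

def margin_loss(v_lengths, targets, m_pos=0.9, m_neg=0.1, lam=0.5):
    # v_lengths: (batch, 10) capsule lengths ||v_k||; targets: (batch, 10) with T_k in {0, 1}.
    L = (targets * np.maximum(0.0, m_pos - v_lengths) ** 2
         + lam * (1.0 - targets) * np.maximum(0.0, v_lengths - m_neg) ** 2)
    return L.sum(axis=1).mean()

def total_loss(v_lengths, targets, reconstruction, images, recon_weight=0.0005):
    # Sum-of-squared-errors reconstruction term, scaled so it does not dominate the margin loss.
    recon = np.sum((reconstruction - images.reshape(len(images), -1)) ** 2, axis=1).mean()
    return margin_loss(v_lengths, targets) + recon_weight * recon
```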
As illustrated in Fig.\u00a0 the reconstructions from the $16$D output of the CapsNet are robust while keeping only important details.\n\n# Capsules on MNIST\n\nTraining is performed on $28 \\times 28$ MNIST () images that have been shifted by up to $2$ pixels in each direction with zero padding. No other data augmentation\/deformation is used. The dataset has $60$K and $10$K images for training and testing respectively.\n\nWe test using a single model without any model averaging. achieves $0.21$% test error with ensembling and augmenting the data with rotation and scaling. They achieve $0.39$% without them. We get a low test error ($\\bm{0.25}$%) on a $3$ layer network previously only achieved by deeper networks. Tab.\u00a0 reports the test error rate on MNIST for different CapsNet setups and shows the importance of routing and reconstruction regularizer. Adding the reconstruction regularizer boosts the routing performance by enforcing the pose encoding in the capsule vector.\n\nThe baseline is a standard CNN with three convolutional layers of $256, 256, 128$ channels. Each has 5x5 kernels and stride of 1. The last convolutional layers are followed by two fully connected layers of size $328, 192$. The last fully connected layer is connected with dropout to a $10$ class softmax layer with cross entropy loss. The baseline is also trained on 2-pixel shifted MNIST with Adam optimizer. The baseline is designed to achieve the best performance on MNIST while keeping the computation cost as close as to CapsNet. In terms of number of parameters the baseline has 35.4M while CapsNet has 8.2M parameters and 6.8M parameters without the reconstruction subnetwork.\n\n| Method | Routing | Reconstruction | MNIST (%) | MultiMNIST (%) | |\n|:--------:|:-------:|:--------------:|:-----------------------:|:--------------:|:---:|\n| Baseline | \\- | \\- | $0.39$ | $8.1$ | |\n| CapsNet | 1 | no | $0.34_{\\pm 0.032}$ | \\- | |\n| CapsNet | 1 | yes | $0.29_{\\pm 0.011}$ | $7.5$ | |\n| CapsNet | 3 | no | $0.35_{\\pm 0.036}$ | \\- | |\n| CapsNet | 3 | yes | $\\bm{0.25}_{\\pm 0.005}$ | $\\bm{5.2}$ | |\n\nCapsNet classification test accuracy. The MNIST average and standard deviation results are reported from $3$ trials.\n\n## What the individual dimensions of a capsule represent\n\n| Scale and thickness | \"image\" |\n|:---|:---|\n| Localized part | \"image\" |\n| Stroke thickness | \"image\" |\n| Localized skew | \"image\" |\n| Width and translation | \"image\" |\n| Localized part | \"image\" |\n\nSince we are passing the encoding of only one digit and zeroing out other digits, the dimensions of a digit capsule should learn to span the space of variations in the way digits of that class are instantiated. These variations include stroke thickness, skew and width. They also include digit-specific variations such as the length of the tail of a 2. We can see what the individual dimensions represent by making use of the decoder network. After computing the activity vector for the correct digit capsule, we can feed a perturbed version of this activity vector to the decoder network and see how the perturbation affects the reconstruction. Examples of these perturbations are shown in Fig.\u00a0. We found that one dimension (out of $16$) of the capsule almost always represents the width of the digit. While some dimensions represent combinations of global variations, there are other dimensions that represent variation in a localized part of the digit. 
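A perturbation sweep of the kind shown in the figure takes only a few lines; in the sketch below, `decoder` stands for the trained reconstruction network and is a hypothetical callable, and the perturbation range and step count are illustrative choices.

```python
import numpy as np

def perturb_dimension(v, dim, decoder, low=-0.25, high=0.25, steps=11):
    # v: the 16-D activity vector of the correct digit capsule.
    # Returns one reconstruction per perturbed value of dimension `dim`.
    images = []
    for delta in np.linspace(low, high, steps):
        v_tweaked = v.copy()
        v_tweaked[dim] += delta
        images.append(decoder(v_tweaked))
    return np.stack(images)
```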
For example, different dimensions are used for the length of the ascender of a 6 and the size of the loop.\n\n## Robustness to Affine Transformations\n\nExperiments show that each DigitCaps capsule learns a more robust representation for each class than a traditional convolutional network. Because there is natural variance in skew, rotation, style, etc in hand written digits, the trained CapsNet is moderately robust to small affine transformations of the training data.\n\nTo test the robustness of CapsNet to affine transformations, we trained a CapsNet and a traditional convolutional network (with MaxPooling and DropOut) on a padded and translated MNIST training set, in which each example is an MNIST digit placed randomly on a black background of $40\\times40$ pixels. We then tested this network on the affNIST[^4] data set, in which each example is an MNIST digit with a random small affine transformation. Our models were never trained with affine transformations other than translation and any natural transformation seen in the standard MNIST. An under-trained CapsNet with early stopping which achieved 99.23% accuracy on the expanded MNIST test set achieved 79% accuracy on the affnist test set. A traditional convolutional model with a similar number of parameters which achieved similar accuracy (99.22%) on the expanded mnist test set only achieved 66% on the affnist test set.\n\n# Segmenting highly overlapping digits\n\nDynamic routing can be viewed as a parallel attention mechanism that allows each capsule at one level to attend to some active capsules at the level below and to ignore others. This should allow the model to recognize multiple objects in the image even if objects overlap. Hinton et al. propose the task of segmenting and recognizing highly overlapping digits ( and others have tested their networks in a similar domain (, , ). 
The routing-by-agreement should make it possible to use a prior about the shape of objects to help segmentation and it should obviate the need to make higher-level segmentation decisions in the domain of pixels.\n\n## MultiMNIST dataset\n\n```latex\n\\begin{tabular}{c|c|c|c?c|c?c|c}\nR:$(2,7)$ & R:$(6,0)$ & R:$(6,8)$ & R:$(7,1)$ & *R:$(5,7)$ & *R:$(2,3)$ & R:$(2,8)$ & R:P:$(2,7)$\\\\\nL:$(2,7)$ & L:$(6,0)$ & L:$(6,8)$ & L:$(7,1)$ & L:$(5,0)$ & L:$(4,3)$ &L:$(2,8)$ & L:$(2,8)$\\\\ \\hline\n\\includegraphics[height=3cm]{recons\/27} &\n\\includegraphics[height=3cm]{recons\/60} &\n\\includegraphics[height=3cm]{recons\/68} &\n\\includegraphics[height=3cm]{recons\/71} &\n\\includegraphics[height=3cm]{recons\/0_5_5_0_332} &\n\\includegraphics[height=3cm]{recons\/4_3_3_4_397} &\n\\includegraphics[height=3cm]{recons\/2_8_2_7_152} &\n\\includegraphics[height=3cm]{recons\/2_8_2_7_153p} \\\\ \\hline\nR:$(8,7)$ & R:$(9,4)$ & R:$(9,5)$ & R:$(8,4)$ & *R:$(0,8)$ & *R:$(1,6)$ & R:$(4,9)$ & R:P:$(4, 0)$\\\\\nL:$(8,7)$ & L:$(9,4)$ & L:$(9,5)$ & L:$(8,4)$ & L:$(1,8)$ & L:$(7,6)$ & L:$(4, 9)$ &L:$(4, 9)$\\\\ \\hline\n\\includegraphics[height=3cm]{recons\/87} &\n\\includegraphics[height=3cm]{recons\/94} &\n\\includegraphics[height=3cm]{recons\/95} &\n\\includegraphics[height=3cm]{recons\/84} &\n\\includegraphics[height=3cm]{recons\/1_8_8_1_264} &\n\\includegraphics[height=3cm]{recons\/7_6_6_7_4} &\n\\includegraphics[height=3cm]{recons\/4_9_4_0_453} &\n\\includegraphics[height=3cm]{recons\/4_9_4_0_454p} \\\\ \\hline\n\\end{tabular}\n```\n\nWe generate the MultiMNIST training and test dataset by overlaying a digit on top of another digit from the same set (training or test) but different class. Each digit is shifted up to $4$ pixels in each direction resulting in a $36\\times36$ image. Considering a digit in a $28\\times28$ image is bounded in a $20\\times20$ box, two digits bounding boxes on average have $80$% overlap. For each digit in the MNIST dataset we generate $1$K MultiMNIST examples. So the training set size is $60$M and the test set size is $10$M.\n\n## MultiMNIST results\n\nOur $3$ layer CapsNet model trained from scratch on MultiMNIST training data achieves higher test classification accuracy than our baseline convolutional model. We are achieving the same classification error rate of $5.0$% on highly overlapping digit pairs as the sequential attention model of achieves on a much easier task that has far less overlap ($80$% overlap of the boxes around the two digits in our case vs $<4$% for ). On test images, which are composed of pairs of images from the test set, we treat the two most active digit capsules as the classification produced by the capsules network. During reconstruction we pick one digit at a time and use the activity vector of the chosen digit capsule to reconstruct the image of the chosen digit (we know this image because we used it to generate the composite image). The only difference with our MNIST model is that we increased the period of the decay step for the learning rate to be $10\\times$ larger because the training dataset is larger.\n\nThe reconstructions illustrated in Fig.\u00a0 show that CapsNet is able to segment the image into the two original digits. Since this segmentation is not at pixel level we observe that the model is able to deal correctly with the overlaps (a pixel is on in both digits) while accounting for all the pixels. The position and the style of each digit is encoded in DigitCaps. The decoder has learned to reconstruct a digit given the encoding. 
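For reference, the MultiMNIST composition described above can be sketched as follows; merging the two shifted digits by pixel-wise maximum is an assumption of the sketch (any saturating overlay behaves similarly).

```python
import numpy as np

def make_multimnist_example(digit_a, digit_b, rng, shift=4):
    # digit_a, digit_b: 28x28 arrays of different classes, intensities in [0, 1].
    # Each digit is shifted by up to `shift` pixels in each direction on a 36x36 canvas.
    size = 28 + 2 * shift
    layers = np.zeros((2, size, size), dtype=np.float32)
    for k, digit in enumerate((digit_a, digit_b)):
        dx, dy = rng.randint(0, 2 * shift + 1, size=2)
        layers[k, dy:dy + 28, dx:dx + 28] = digit
    composite = np.maximum(layers[0], layers[1])
    return composite, layers          # layers[0] and layers[1] serve as reconstruction targets
```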
The fact that it is able to reconstruct digits regardless of the overlap shows that each digit capsule can pick up the style and position from the votes it is receiving from PrimaryCapsules layer.\n\nTab.\u00a0 emphasizes the importance of capsules with routing on this task. As a baseline for the classification of CapsNet accuracy we trained a convolution network with two convolution layers and two fully connected layers on top of them. The first layer has $512$ convolution kernels of size $9\\times9$ and stride $1$. The second layer has $256$ kernels of size $5\\times5$ and stride $1$. After each convolution layer the model has a pooling layer of size $2\\times2$ and stride $2$. The third layer is a $1024$D fully connected layer. All three layers have ReLU non-linearities. The final layer of $10$ units is fully connected. We use the TensorFlow default Adam optimizer () to train a sigmoid cross entropy loss on the output of final layer. This model has $24.56$M parameters which is $2$ times more parameters than CapsNet with $11.36$M parameters. We started with a smaller CNN ($32$ and $64$ convolutional kernels of $5\\times5$ and stride of $1$ and a $512$D fully connected layer) and incrementally increased the width of the network until we reached the best test accuracy on a $10$K subset of the MultiMNIST data. We also searched for the right decay step on the $10$K validation set.\n\nWe decode the two most active DigitCaps capsules one at a time and get two images. Then by assigning any pixel with non-zero intensity to each digit we get the segmentation results for each digit.\n\n# Other datasets\n\nWe tested our capsule model on CIFAR10 and achieved 10.6% error with an ensemble of 7 models each of which is trained with $3$ routing iterations on $24 \\times 24$ patches of the image. Each model has the same architecture as the simple model we used for MNIST except that there are three color channels and we used $64$ different types of primary capsule. We also found that it helped to introduce a \"none-of-the-above\" category for the routing softmaxes, since we do not expect the final layer of ten capsules to explain everything in the image. 10.6% test error is about what standard convolutional nets achieved when they were first applied to CIFAR10 ().\n\nOne drawback of Capsules which it shares with generative models is that it likes to account for everything in the image so it does better when it can model the clutter than when it just uses an additional \"orphan\" category in the dynamic routing. In CIFAR-10, the backgrounds are much too varied to model in a reasonable sized net which helps to account for the poorer performance.\n\nWe also tested the exact same architecture as we used for MNIST on smallNORB () and achieved $2.7\\%$ test error rate, which is on-par with the state-of-the-art (). The smallNORB dataset consists of 96x96 stereo grey-scale images. We resized the images to 48x48 and during training processed random 32x32 crops of them. We passed the central 32x32 patch during test.\n\nWe also trained a smaller network on the small training set of SVHN () with only 73257 images. We reduced the number of first convolutional layer channels to 64, the primary capsule layer to 16 $6D$-capsules with $8D$ final capsule layer at the end and achieved $4.3\\%$ on the test set.\n\n# Discussion and previous work\n\nFor thirty years, the state-of-the-art in speech recognition used hidden Markov models with Gaussian mixtures as output distributions. 
These models were easy to learn on small computers, but they had a representational limitation that was ultimately fatal: The one-of-n representations they use are exponentially inefficient compared with, say, a recurrent neural network that uses distributed representations. To double the amount of information that an HMM can remember about the string it has generated so far, we need to square the number of hidden nodes. For a recurrent net we only need to double the number of hidden neurons.\n\nNow that convolutional neural networks have become the dominant approach to object recognition, it makes sense to ask whether there are any exponential inefficiencies that may lead to their demise. A good candidate is the difficulty that convolutional nets have in generalizing to novel viewpoints. The ability to deal with translation is built in, but for the other dimensions of an affine transformation we have to choose between replicating feature detectors on a grid that grows exponentially with the number of dimensions, or increasing the size of the labelled training set in a similarly exponential way. Capsules () avoid these exponential inefficiencies by converting pixel intensities into vectors of instantiation parameters of recognized fragments and then applying transformation matrices to the fragments to predict the instantiation parameters of larger fragments. Transformation matrices that learn to encode the intrinsic spatial relationship between a part and a whole constitute viewpoint invariant knowledge that automatically generalizes to novel viewpoints. proposed transforming autoencoders to generate the instantiation parameters of the PrimaryCapsule layer and their system required transformation matrices to be supplied externally. We propose a complete system that also answers \"how larger and more complex visual entities can be recognized by using agreements of the poses predicted by active, lower-level capsules\".\n\nCapsules make a very strong representational assumption: At each location in the image, there is at most one instance of the type of entity that a capsule represents. This assumption, which was motivated by the perceptual phenomenon called \"crowding\" (), eliminates the binding problem () and allows a capsule to use a distributed representation (its activity vector) to encode the instantiation parameters of *the* entity of that type at a given location. This distributed representation is exponentially more efficient than encoding the instantiation parameters by activating a point on a high-dimensional grid, and with the right distributed representation, capsules can then take full advantage of the fact that spatial relationships can be modelled by matrix multiplies.\n\nCapsules use neural activities that vary as viewpoint varies rather than trying to eliminate viewpoint variation from the activities. This gives them an advantage over \"normalization\" methods like spatial transformer networks (): They can deal with multiple different affine transformations of different objects or object parts at the same time.\n\nCapsules are also very good for dealing with segmentation, which is another of the toughest problems in vision, because the vector of instantiation parameters allows them to use routing-by-agreement, as we have demonstrated in this paper. The importance of the dynamic routing procedure is also backed by biologically plausible models of invariant pattern recognition in the visual cortex. 
proposes dynamic connections and canonical object based frames of reference to generate shape descriptions that can be used for object recognition. improves upon dynamic connections and presents a biologically plausible, position and scale invariant model of object representations.\n\nResearch on capsules is now at a similar stage to research on recurrent neural networks for speech recognition at the beginning of this century. There are fundamental representational reasons for believing that it is a better approach but it probably requires a lot more small insights before it can out-perform a highly developed technology. The fact that a simple capsules system already gives unparalleled performance at segmenting overlapping digits is an early indication that capsules are a direction worth exploring.\n\n**Acknowledgement.** Of the many who provided us with constructive comments, we are specially grateful to Robert Gens, Eric Langlois, Vincent Vanhoucke, Chris Williams, and the reviewers for their fruitful comments and corrections.\n\n# How many routing iterations to use?\n\nIn order to experimentally verify the convergence of the routing algorithm we plot the average change in the routing logits at each routing iteration. Fig.\u00a0 shows the average $b_{ij}$ change after each routing iteration. Experimentally we observe that there is negligible change in the routing by $5$ iteration from the start of training. Average change in the $2^{nd}$ pass of the routing settles down after 500 epochs of training to 0.007 while at routing iteration 5 the logits only change by $1e-5$ on average.\n\nWe observed that in general more routing iterations increases the network capacity and tends to overfit to the training dataset. Fig.\u00a0 shows a comparison of Capsule training loss on Cifar10 when trained with 1 iteration of routing vs $3$ iteration of routing. Motivated by Fig.\u00a0 and Fig.\u00a0 we suggest 3 iteration of routing for all experiments.\n\n[^1]: This makes biological sense as it does not use large activities to get accurate representations of things that probably don't exist.\n\n[^2]: For MNIST we found that it was sufficient to set all of these priors to be equal.\n\n[^3]: We do not allow an image to contain two instances of the same digit class. We address this weakness of capsules in the discussion section.\n\n[^4]: Available at .","meta":{"dup_signals":{"dup_doc_count":15},"filename":"out\/1710.09829_extract_nips.tex.md"},"subset":"arxiv"} +{"text":"abstract: Theano is a Python library that allows to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently. Since its introduction in\u00a0 it has been one of the most used CPU and GPU mathematical compilers \u2013 especially in the machine learning community\u00a0 \u2013 and has shown steady performance improvements\u00a0. Theano is being actively and continuously developed since 2008, multiple frameworks have been built on top of it and it has been used to produce many state-of-the-art machine learning models.\n .\n The present article is structured as follows. Section\u00a0 provides an overview of the Theano software and its community. Section\u00a0 presents the principal features of Theano and how to use them, and compares them with other similar projects. Section\u00a0 focuses on recently-introduced functionalities and improvements. Section\u00a0 compares the performance of Theano against Torch7\u00a0 and TensorFlow\u00a0 on several machine learning models. 
Section\u00a0 discusses current limitations of Theano and potential ways of improving it.\nbibliography: theano2016.bib\ntitle: Theano: A Python framework for fast computation of mathematical expressions\n\n[^1]\n\n# Overview\n\n## Vision\n\nTheano allows a user to symbolically define mathematical expressions and have them compiled in a highly optimized fashion either on CPUs or GPUs (the latter using CUDA)[^2], just by modifying a configuration flag. Furthermore, Theano can automatically compute symbolic differentiation of complex expressions, ignore the variables that are not required to compute the final output, reuse partial results to avoid redundant computations, apply mathematical simplifications, compute operations in place when possible to minimize the memory usage, and apply numerical stability optimization to overcome or minimize the error due to hardware approximations. To achieve this, the mathematical expressions defined by the user are stored as a graph of variables and operations, that is pruned and optimized at compilation time.\n\nThe interface to Theano is Python, a powerful and flexible language that allows for rapid prototyping and provides a fast and easy way to interact with the data. The downside of Python is its interpreter, that is in many cases a poor engine for executing mathematical calculations both in terms of memory usage and speed. Theano overcomes this limitation, by exploiting the compactness and ductility of the Python language and combining them with a fast and optimized computation engine.\n\nTheano's API mimics NumPy\u00a0, a widely adopted Python library that provides an n-dimensional array data type and many functions for indexing, reshaping, and performing elementary computations (exp, log, sin, etc.) on entire arrays at once. This allows Python users to rapidly switch to Theano using a familiar syntax and set of instructions \u2013 extended with advanced features, such as automatic gradient computation, numerical stability improvements and optimization \u2013 and generate a high-performance code for CPU as well as for GPU, without requiring changes to the user code. Theano has also been designed for easy and fast extensibility through the definition of custom graph expressions written in Python, C++, or CUDA.\n\n## Community\n\nTheano is a free, open-source software, licensed under the New (3-clause) BSD license. It relies on a wide and very active community of developers and users worldwide.\n\nThe main communication channels with the developers are the project's GitHub page[^3] for bug reports, feature requests, and pull requests, and the theano-dev mailing list,[^4] which has 675 subscribers. Support for users is provided by the community at theano-users[^5] (more than 3000 members) and on StackOverflow[^6] (more than 1000 questions asked). PyPI[^7] counted 38k downloads of Theano packages during the last month.\n\nSince the project development migrated to GitHub in 2011, Theano has been forked 1280 times. Around 250 developers have actively contributed to the code base, and numerous others have played a role in the community, asking, answering or curating questions, helping discussing the development needs, and writing documentation, tutorials,[^8] or even full-fledged software projects based on Theano.\n\n## Software based on Theano\n\nSeveral software packages have been developed to build on the strengths of Theano, with a higher-level user interface, more suitable for certain goals. 
For instance, machine learning and deep learning packages, such as Pylearn2\u00a0, Blocks\u00a0, Lasagne\u00a0, and Keras\u00a0, have been developed with the goal of making it easier to express the architecture of deep learning models, and training algorithms, as mathematical expressions to be evaluated by Theano.\n\nAnother example is PyMC3\u00a0, a probabilistic programming framework that uses Theano to derive expressions for gradients automatically, and to generate C code for fast execution.\n\n# Main features\n\nTheano defines a *language* to represent mathematical expressions and manipulate them (Section\u00a0), a *compiler* to create functions that can compute values for these expressions (Section\u00a0), and a *library* which will execute these functions when evaluated on numeric values (Section\u00a0). We also explain how Theano can be extended (Section\u00a0). Finally, we provide some comparison points with related software (Section\u00a0).\n\n## Mathematical expressions\n\n### Graph structure\n\nTheano represents symbolic mathematical expressions as directed, acyclic graphs. These graphs are also bipartite, containing two kinds of nodes:\n\n- **Variable** nodes (or variables), which represent *data*, usually tensors;\n\n- **Apply** nodes, which represent the application of *mathematical operations*.\n\nIn practice, variables are used for graph inputs and outputs, as well as for intermediate values. During the execution phase, values will be provided for input variables, and computed for intermediate and output ones. An Apply node has inputs and outputs, which are Variable nodes; it represents the application of a mathematical operation (or Op) on its input variables. A Variable node can be the input to several Apply nodes, but can be the output of at most one (graph inputs are not the result of any computation). This corresponds to the single static assignment (SSA) form in compiler design, in that a variable is the result of only one assignation.\n\nThis structure is similar to dataflow graphs\u00a0, where Apply nodes would correspond to operations nodes (the only kind of nodes), and Variable nodes would correspond to arcs in the dataflow graph. The main difference is that a single intermediate Variable node can be an input to several Apply nodes, whereas a dataflow graph would require different arcs, one for each of the next operations.\n\nVariables are strongly typed, they enforce some conditions on the values that can be associated with them. These types are known since the construction of the graph. The main categories of types are:\n\n- `TensorType`, which represents n-dimensional arrays in the main memory, the values associated with variables of that type are NumPy `ndarray` objects;\n\n- `CudaNdarrayType`, which represents n-dimensional arrays in GPU memory, associated with `CudaNdarray` objects, used in the legacy GPU back-end;\n\n- `GpuArrayType`, associated with `GpuArray` objects, its equivalent in the new GPU back-end;\n\n- `Sparse`, for main-memory sparse matrices, represented by SciPy CSC or CSR matrices.\n\nThe number of dimensions and the data type (float32, int64, etc.) are part of the type, as well as what we call the *broadcastable pattern*, which indicates which dimensions are guaranteed to have a shape of 1. Otherwise, the shape is not part of the type, and neither is the memory layout (strides).\n\n### Building a graph\n\nA computation graph is usually constructed by creating free symbolic variables first, corresponding to the inputs of the graph. 
Since variables are strongly typed in Theano, the type of these variables has to be specified at creation time. By calling Python functions on variables, the user can then interact with them in a direct and natural way. This is reflected under the hood by the creation of Apply nodes and new Variable nodes that extend the graph. The `tensor` module exposes many of the functions provided by NumPy for tensor operations, to present a familiar interface to users. Some of these add a single Apply node and its output to the graph, returning the output Variable node, while other build more complex graphs with Apply nodes corresponding to different Ops, combined in such a way that the returned variable represents the expected result.\n\nIt is also possible to clone an existing graph, or a part of it. In that case, what was an intermediate variable in the original graph could become a free input, or an output, of the cloned graph. It is also possible to clone with replacements, which make it possible to plug together different disconnected graphs, making inputs into intermediate Variable nodes.\n\n### Symbolic differentiation\n\nA useful way of deriving gradients is by applying the chain rule backwards through the graph, from a scalar cost towards the inputs (or parameters). This procedure is known as gradient back-propagation, or as the backward or reverse mode of differentiation. For instance, if we have three functions $f: \\mathbb R^M \\rightarrow \\mathbb R$, $g: \\mathbb R^N \\rightarrow \\mathbb R^M$, and $C: \\mathbb R^N \\rightarrow \\mathbb R$ so that $C(x) = f(g(x))$, then: $$\\left.\\frac{\\partial C}{\\partial x}\\right|_x =\n \\left.\\frac{\\partial f}{\\partial g}\\right|_{g(x)}\n \\cdot \\left.\\frac{\\partial g}{\\partial x}\\right|_x$$ Instead of computing (and storing in memory) explicitly the whole $M \\times N$ Jacobian matrix, $\\left.\\frac{\\partial g}{\\partial x}\\right|_x$, all we need is a function $\\nabla g_x: \\mathbb R^M \\rightarrow \\mathbb R^N, v \\mapsto v \\cdot \\left.\\frac{\\partial g}{\\partial x}\\right|_x$ that computes the vector-Jacobian dot product for any vector $v$. This can be generalized easily to functions with several inputs, which can be multi-dimensional arrays.\n\nMost of Theano Ops implement a `grad` method that, given symbolic variables for $x$ and $v$, will return a symbolic expression of $\\nabla g_x(v)$, where $g$ is the function represented by that Op. `theano.grad` traverses the graph following the usual back-propagation algorithm, calling the `grad` method on each Apply node's Op, passing that node's input as $x$ and the gradient coming from the subsequent operations as $v$. This builds a symbolic expression for the gradient of the cost with respect to variables. These gradients are symbolic variables that are part of the graph as well, so it is possible to use them as parts of other symbolic expressions (to express a learning rule, for instance), and even to traverse the graph again to obtain higher-order derivatives.\n\nMany Theano Ops also implement an `R_op` method, computing a symbolic expression for the the Jacobian-vector dot product, $R g_x: {\\mathbb R^N \\rightarrow \\mathbb R^M}, {v \\mapsto \\left.\\frac{\\partial g}{\\partial x}\\right|_x \\cdot v}$. This is the R-operator introduced by\u00a0, and corresponds to the forward mode of differentiation. 
`theano.Rop` traverses the graph from inputs to outputs, calling the `R_op` method on each Apply node's Op.\n\n### Scan: Symbolic loops\n\nSince the computation graph is acyclic, and its structure is fixed and independent from the actual data, it can be a challenge to express loops symbolically. One option, when the number of steps in the loop is fixed, is to explicitly unroll the loop, adding to the computation graph the computation of each of the iterations multiple times. Unfortunately, this makes it impossible to iterate over a sequence of unknown length, or to iterate a variable number of times depending on the value of the data.\n\nTo sidestep these issues, Theano implements a special Op called *Scan*, which abstracts the entire loop in a single Apply node in the graph. That single node contains a full computation graph, isolated from the main one, that represents the computation done during each iteration of the loop. The scan node handles the communication between the *external* or *outer* computation graph it belongs to, and the *internal* or *inner* graph. It is also responsible to manage the bookkeeping between the different iterations.\n\nThe gradient of a Scan operation is implemented as another Scan operation, which iterates over reversed sequences, computing the same gradient as if the loop had been unrolled, implementing what is known as *back-propagation through time*. Similarly, the R operator is also a Scan operation that goes through the loop in the same order as the original Scan.\n\n## The compilation phase\n\nThe compilation phase produces a Theano function (a Python callable object) able to compute values for specified *output* symbolic variables, given values for *input* variables. The set of input and output variables have to be provided when compiling the function, but the inputs do not have to be inputs to the full computation graph, and outputs do not have to be ultimate outputs either. It is possible to compile a function going from some intermediate variables of the graph to other intermediate variables, as long as the set of inputs contains all the information to compute the set of outputs. Several Theano functions can be compiled, computing different parts of the same computation graph.\n\nDuring the compilation of a Theano function, first the relevant portion of the computation graph is cloned, then it gets rewritten by the application of *graph optimizations*, next some optimized C++ or CUDA code gets generated and compiled if necessary, and finally a callable object is built and returned to the user.\n\n### Graph optimizations\n\nThe computation graph structure makes it possible to replace parts of the graph. For instance, a Variable node which is the output of one particular Apply node could be replaced by the output of a different Apply node, as long as they have the same type. Optimizations specify how to perform replacements of variables by other variables representing an equivalent computation. Some of them are *local*, which means they only look at one Apply node and can replace its outputs, some of them are *global*, and can examine the whole computation graph and perform arbitrary substitutions. Optimizations are mostly organized into the stages described below, even if there is some overlap.\n\n- *Canonicalize:* Put the graph in a canonical form, to ease the task of subsequent optimizations (for instance, $x*x \\Rightarrow x^2$). 
It performs some simplifications as well, like removing duplicate computations, removing some unnecessary computations ($xy\/y \\Rightarrow x$), and computing the value of expressions if all their inputs are known (constant-folding, $2 + 2 \\Rightarrow 4$).\n\n- *Stabilize:* Increase numerical stability, for instance $\\log{(1+x)} \\Rightarrow \\mathrm{log1p}(x)$, where log1p is a stable implementation for small $x$.\n\n- *Specialize:* Insert faster implementations of operations. For instance, successive element-wise operations are fused together to avoid having to loop over a tensor several times.\n\n- *GPU:* Replace the default version of Ops and variables by GPU-specific versions, using either the old or new back-end, if a GPU is requested. Transfer Ops (CPU-to-GPU or GPU-to-CPU) are inserted so that the type of inputs and outputs is preserved, and around CPU-only operations.\n\n- *Inplace:* Replace the default version of Ops by a version that can work in-place, as a view or destructive operation over its inputs. The array types used by Theano, like `ndarray`, support arbitrarily-strided arrays, so all transposition operations, as well as basic slicing, can happen in place, in constant time. Some operations, like most element-wise ones, can overwrite their input and return it, to avoid allocating memory. Since destructive operations introduce additional dependencies between Apply nodes (a value can only be overwritten by the *last* operation to read it), dependency cycles have to be detected and prevented.\n\n- *Scan:* Optimize performance and memory use of Scan nodes. For instance, only keep the value for the last step of an output in memory if the whole sequence is not needed, merge different Scan nodes to perform computations only once, and move invariants out of the loop.\n\nWhile individual optimizations or groups of optimizations can be individually enabled or disabled, some optimizers (sets of optimizations) are predefined: `'None'` does not include any optimization, `'fast_compile'` includes only canonicalization and transfer to the GPU, and `'fast_run'` (the default) includes most optimizations except for experimental and \"unsafe\" ones (removing assertions).\n\n### Shared variables\n\nShared variables are symbolic variables that are associated with persistent values, that are shared between Theano functions. They can only be input variables (not intermediate ones), since their value is not the result of the computation of an Apply node. Shared variables are implicit inputs to all the Theano functions using them.\n\nWhen compiling a Theano function, it is possible to specify *update expressions* for shared variables. These expressions are symbolic variables that represent the new value to assign the the shared variables at the end of each function execution. They are implicit outputs of the function, and will be computed along with the other outputs, before the value gets updated. Such update rules make it possible to update the array in-place in some cases, rather than returning a different array.\n\nIt is also possible to explicitly assign a new value to an existing shared variable, outside of a Theano function, as long as it is compatible with its type. Since the shape is not part of the type, it is possible for the shape of a shared variable to change. 
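A typical use of shared variables and update expressions, continuing the toy example above (the learning rule and step size are illustrative):

```python
import numpy as np
import theano
import theano.tensor as T

floatX = theano.config.floatX
x = T.matrix('x')
y = T.vector('y')

# Shared variables hold persistent state and are implicit inputs to the function.
w = theano.shared(np.zeros(3, dtype=floatX), name='w')
b = theano.shared(np.asarray(0.0, dtype=floatX), name='b')

p = T.nnet.sigmoid(T.dot(x, w) + b)
cost = T.mean(T.nnet.binary_crossentropy(p, y))
gw, gb = T.grad(cost, [w, b])

# The update expressions are implicit outputs, applied after each call.
train = theano.function([x, y], cost,
                        updates=[(w, w - 0.1 * gw), (b, b - 0.1 * gb)])

# Explicit assignment outside of a Theano function is also possible.
w.set_value(np.ones(3, dtype=floatX))
print(w.get_value())
```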
If a GPU is enabled, shared variables will be created on the GPU by default, to avoid transfers (this only works for `float32` arrays in the old back-end).\n\n### C code compilation and caching\n\nThe code to compute output values given input values for each Op can be implemented either in Python or in C++ (or CUDA for GPU Ops), using the C API from Python and NumPy (and from CudaNdarray or GpuArray for GPU).\n\nAfter the function graph is optimized, each Op generates the C++ or CUDA code for a Python module implementing that computation (including reading and writing from the right storage map), which is then compiled, and imported.\n\nA persistent cache on disk makes it possible to avoid generating code twice for the same Op, and to avoid compiling again when different Ops generate the same code (this can happen for the same operation applied on different data types, or different numbers of dimensions, for instance).\n\n## Function execution\n\nTheano includes a runtime engine that, upon a Theano function call, determines the computation to be executed on which data and in what order, and orchestrate their evaluation. This was originally done by forward-traversing graphs from input to output, requiring all branches to be evaluated before outputs could be returned. The default runtime now uses a virtual machine (VM) system. By running small code units (each corresponding to an Apply node for one Op) and ignoring branches not necessary for correct computations, lazy evaluation is now possible.\n\nThe runtime uses a data structure containing pointers to storage for each variable (inputs and outputs of each Apply node), ordering constraints, pointers to the functions performing the computations, and information on what has been computed and needs to be computed in the current call. If the speed of execution is more important than memory usage, it is possible to keep references to ndarrays containing intermediate results, to prevent Python's garbage collection from freeing them, and to re-use it for the next run of the function, through the configuration flag `allow_gc=False`. The default is to allow the garbage collector to free the storage of intermediate values.\n\nThe C implementation of that VM (CVM) is the default runtime. Not only does this increase performance by running the runtime loop in C, if a C implementation of an Op is available, the CVM can directly execute it. This eliminates the overhead from a Python function call, which is especially advantageous when performing many operations on small operands.\n\nA Python implementation is also available. It is more flexible and easier to instrument, which is useful to collect more profiling information (for instance, memory usage) and add callbacks for debugging.\n\n## Extending Theano\n\nIf the existing Theano library does not include the operations required for a particular model, the framework was designed for easy extensibility. New Ops can be written by specifying the type of their input and output variables, and providing Python code to perform the evaluation. That Python code can use bindings to external high-performance libraries, or Cython, for instance. Methods can also be added to specify expressions for gradients and the R-operator (see Section\u00a0), and shape inference. Theano's self-testing functions can be used to validate outputs and check symbolic gradients against numeric evaluations among others.\n\nAs mentioned above, operators can also be implemented directly in C++ or CUDA. 
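Before turning to the C and CUDA path, the pure-Python extension route described in the previous paragraph can be sketched with a toy element-wise Op. This is only an illustration of the interface, not an Op shipped with Theano.

```python
import theano
import theano.tensor as T
from theano.gof import Op, Apply

class DoubleOp(Op):
    """Toy Op computing y = 2 * x: declare input/output types in make_node,
    compute values in perform, and optionally supply a gradient expression."""
    __props__ = ()

    def make_node(self, x):
        x = T.as_tensor_variable(x)
        return Apply(self, [x], [x.type()])

    def perform(self, node, inputs, output_storage):
        (x,) = inputs
        output_storage[0][0] = 2 * x

    def grad(self, inputs, output_grads):
        return [2 * output_grads[0]]

    def infer_shape(self, node, input_shapes):
        return input_shapes

x = T.matrix('x')
f = theano.function([x], DoubleOp()(x))
```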
The raw code can be supplied as a string that the Python code uses to produce the code used by the graph compiler. For added convenience, Theano can now load code from an external C-like file with the `COp` class. The file is divided into sections that map to the different pieces of code that Theano requires. Keeping the Python and C code separate allows more readable code with better indentation. It also enables a clearer view of the C code itself since you can use your favorite C editor to modify that file with syntax highlighting.\n\nA user can then write a new *optimization* to automatically insert that optimized operation in the computation graph, instead of the more na\u00efve or slow version. This is especially useful when implementing an operation on GPU.\n\n## Related software\n\nAlthough Theano is developed and mainly used for research in machine learning and deep learning, it is not a deep learning framework in itself (see Section\u00a0 for some machine learning frameworks based on Theano). However, it makes sense to compare the core features of such systems with Theano, as they all support the definition of a mathematical model in a symbolic way, and implement some automatic gradient computation.\n\nTensorFlow\u00a0 has a core in C++ and includes most of the features from Theano, in particular the graph-compiling approach, and symbolic differentiation (on full layers as well as on elementary operations), all directly accessible from Python through the API. In addition, it has a focus on distributed, multi-node computation. Even though a graph-rewriting engine is present (and used to distribute computation across devices, for instance) it does not seem to be used for mathematical expressions simplification or kernel fusion at the moment.\n\nTorch7\u00a0 has a different approach: it implements efficient CPU and GPU computation kernels in C and makes them available in Lua, but does not provide gradient expressions for elementary operations. Instead, packages like 'nn' and 'cunn' feature higher-level *layers* that can store parameters and provide methods to compute values for forward propagation, gradient back-propagation, and parameter updates. Many packages extend Torch's features, in particular Autograd[^9] provides automatic differentiation of code written in Torch, by building a graph that records the evaluation of expressions (even through loops and conditionals), and playing those records back to build an expression graph for gradients. That graph is symbolic as well, making it possible to express higher-order gradients. Moreover, an optimizer can rewrite the graph to make it more efficient to evaluate.\n\nMXNet\u00a0 and Caffe\u00a0, both written in C++, feature the same kind of higher-level layers as Torch. MXNet can also express the gradients through those layers as symbolic layers themselves, giving more flexibility for the dispatching of the computation to different devices, and for memory reuse. It also allows distributed computation over multiple nodes. Caffe2[^10] is an experimental rewrite of Caffe that features explicit symbolic gradients in the computation graph, rather than a \"backward\" method of the layers.\n\nNeon[^11] and Chainer\u00a0 are two other machine learning frameworks written in Python, with GPU kernels, that feature symbolic computation graphs and symbolic differentiation. Neon's most prominent feature is its collection of highly-optimized GPU kernels, in particular for operations used in neural networks. 
Chainer instead builds its computation graph dynamically at the same time as its first evaluation, making it easier to express loops and conditionals.\n\n# New features\n\nOver the last couple of years, multiple improvements have been made in Theano, in particular for faster execution, including support for more operations on the GPU and multiple-GPU support (Section\u00a0), faster graph optimization, especially for larger graphs (Section\u00a0), and ease of use, with better error messages and tools for introspection, visualization, and debugging (Section\u00a0).\n\n## Increased performance\n\n### Abstract Ops and 2D convolutions\n\nConvolution operations are at the core of Convolutional Neural Networks (CNNs), which have led to spectacular advances in machine learning problems involving visual data\u00a0. A more detailed description of the convolution operations can be found in\u00a0.\n\nThe multiplication of convolution implementations available in Theano (CPU-GEMM, GPU-cuDNN, GPU-GEMM, FFT, \u2026) has increased the need for a flexible convolution interface that makes it easy to switch between those implementations, each of which has a different speed and memory trade-off, as well as different software dependencies. To suit this need, Theano 0.8 introduces abstract Ops that disentangle the interface of an Op from its actual implementation. An abstract Op introduces a place-holder Apply node in the graph, corresponding to a given operation, that does not provide an actual implementation. For each optimized implementation of that operation, there is an optimization that will insert an Apply node for that optimized Op instead of the abstract Apply node during the compilation phase.\n\nIn particular, Theano proposes three abstract Ops for convolution: `AbstractConv2d`, `AbstractConv2d_gradInputs`, and `AbstractConv2d_gradWeights`, that correspond respectively to the forward convolution, the convolution gradient w.r.t.\u00a0inputs and the convolution gradient w.r.t.\u00a0weights. Each abstract Op can be replaced by one of the different implementations. By default, if a GPU is enabled and cuDNN is available, Theano will use it (see Section\u00a0), otherwise it will fall back to using the GEMM version. A slow, Python-only implementation is part of the abstract Ops for debugging purposes. The optimizations can be included or excluded using the configuration flags, which makes it possible to manually select a specific convolution implementation.\n\n### Using cuDNN\n\nEfficient CUDA primitives for neural networks are implemented in the cuDNN library\u00a0, in particular convolutions, pooling, and their gradients. Several implementations of convolutions (and their gradients) are provided, all with the same interface, with performance and memory usage that depend on the actual shape of the data and filters. Since the best implementation can be different for different convolutions in the same model (depending on their size) and on different hardware (depending on the available memory), cuDNN also provides a heuristic to guess the best algorithm given shapes, and to actually time the different implementations (that are feasible given the available free memory) and select the fastest one.\n\nTheano wraps cuDNN 2D and 3D convolutions and their gradients, and provides options to select the algorithm to use, either explicitly or using one of the following special values: `'guess_once'`, `'guess_on_shape_change'`, `'time_once'`, or `'time_on_shape_change'`. 
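From user code, convolutions are reached through this abstract interface, and the optimizer decides at compilation time which implementation actually runs. A minimal sketch follows; the tensor shapes use the usual (batch, channels, rows, columns) convention, and the flag in the final comment is one way of requesting cuDNN algorithm timing.

```python
import theano
import theano.tensor as T
from theano.tensor.nnet import conv2d

images = T.tensor4('images')     # (batch, in_channels, rows, cols)
filters = T.tensor4('filters')   # (out_channels, in_channels, rows, cols)

# Builds an abstract convolution node; the optimizer later substitutes a
# cuDNN, GEMM, or other implementation depending on the configuration.
out = conv2d(images, filters, border_mode='valid', subsample=(1, 1))
f = theano.function([images, filters], out)

# Selecting the cuDNN algorithm globally can be done through configuration
# flags, for instance: THEANO_FLAGS='dnn.conv.algo_fwd=time_once'
```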
This selection can be done individually for each Apply node in the graph, and configuration flags select the global default for the forward convolution, the gradient w.r.t. the data, and the gradient w.r.t. the weights. Theano also wraps pooling operations, as well as softmax and log-softmax operations. More operations will be added in the future.\n\n### CNMeM integration\n\nAnother improvement to GPU performance comes from integrating the CNMeM library,[^12] and using the allocator and deallocator it provides. The main issue was that calling `cudaFree` is synchronous, so it forces the synchronization of all the streams on the device, waiting for them to finish, which seriously limited the potential for parallel execution of different kernels. A previous option was to keep memory allocated for intermediate values between calls, as mentioned in Section\u00a0, but the amount of memory typically available on GPU devices is limited.\n\nCNMeM works by allocating large memory pools using `cudaMalloc`, returning chunks of them when its allocator is called, and keeping track of which ones are released by its deallocator. Theano makes it possible to reserve part of the GPU memory from the start, using `lib.cnmem=0.9` to reserve 90% of the memory for CNMeM. The new GPU back-end does not use CNMeM, but implements a similar strategy, with an asynchronous allocator and deallocator and a memory pool.\n\n### Improvements in Scan\n\nImportant speed improvements have been made to Scan, in addition to making it more stable and supporting more cases. The time to optimize and compile graphs containing Scan Apply nodes has been greatly reduced, and the execution time of the resulting function has improved as well.\n\nThe optimizations related to Scan (pushing computation out of the loop, removing useless computation) have been improved so they can be applied faster. Additional optimizations have been added, so that more computation can be moved out of the loop, for increased execution speed.\n\nThe execution back-end of Scan has been made more efficient as well, by removing some of the bookkeeping overhead, and making the internal function write directly into the right output buffer at each execution step, rather than having to copy the intermediate results each time.\n\nThe `grad` method of Scan has been rewritten to scale better in the case of large numbers of input and output variables, and to generate a cleaner graph. That cleaner graph can lead to a faster optimization time, since less rewriting is needed and the inner graph is smaller, and to faster execution as well. In the case of nested symbolic loops, the observed speed-up in compilation time was sometimes huge, going from hours to minutes.\n\nFinally, an additional keyword, `strict`, has been added to the `scan` function. It prevents shared variables from being implicitly added as non-sequence inputs to the inner function. This forces the user to explicitly provide all non-sequences needed in the inner function, which may not be the shared variables themselves, but rather outputs of some computation done on them. In that case, doing so prevents pulling that computation inside the loop, which can speed up the optimization as well as the execution.\n\n### New gpuarray-based back-end\n\nTheano now features a new GPU back-end based on libgpuarray\u00a0. This new back-end brings in several improvements over the previous one. The most visible improvement is that it supports all the usual data types, instead of being limited to float32 data.
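Before turning to the details of the new back-end, here is a minimal sketch of the `strict` keyword just described; the step function, sizes, and variable names are hypothetical. Passing the transposed weight matrix explicitly as a non-sequence keeps that transpose outside the loop, which is exactly the situation discussed above.

```python
import numpy as np
import theano
import theano.tensor as T

W = theano.shared(np.random.randn(4, 4), name="W")
x0 = T.dvector("x0")

def step(x_prev, W_T):
    # W_T arrives explicitly as a non-sequence; with strict=True, Theano
    # refuses to pull the shared variable W in implicitly.
    return T.tanh(T.dot(W_T, x_prev))

outputs, updates = theano.scan(
    fn=step,
    outputs_info=x0,
    non_sequences=[W.T],   # the transpose is computed once, outside the loop
    n_steps=10,
    strict=True)

f = theano.function([x0], outputs[-1], updates=updates)
```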
In particular, the new back-end supports half-precision floating point values (float16). As did the previous back-end, this one supports views and strides to avoid copies and reuse memory whenever possible.\n\nlibgpuarray[^13] is a separate project with the aim of providing an ndarray-like object on the GPU. It has a C interface so that it can be reused in other projects that don't use Python. It also supports 64-bit indexing, so that arrays with more than $2^{32}$ elements are supported.\n\nAnother noticeable improvement is that we have basic support for OpenCL; however, a sizable portion of the GPU Ops in Theano do not currently support it. This could be fixed with some porting effort.\n\nThe new back-end also allows using multiple GPUs in the same function to do model parallelism. One example of such a model is the two-stack variant of AlexNet\u00a0. This however may be hampered by the Python Global Interpreter Lock (GIL) in some cases, meaning that one will get correct results, but may lose parallelism.\n\nSeveral new features that help performance are present, but not obvious. One of these is that all computations are transparently asynchronous, which allows the CPU part of the Ops to execute in parallel with the GPU part. There is a mechanism keeping track of the dependencies between operations to ensure that the right data is always used. Data transfers are automatically done on a separate stream, so they can overlap with the computation.\n\nThe new back-end is now fully functional, and well tested for correctness. It supports almost all the operations of the old back-end on CUDA-capable devices, including wrapping cuDNN for efficient convolutions, but we are still in the process of tuning some of its kernels for better performance. In particular, int64-based indexing can be significantly slower than int32, so some adjustments have to be made.\n\n### Data parallelism with Platoon\n\nTo take advantage of multiple computing devices, there are two main approaches: model parallelism and data parallelism. Model parallelism consists in splitting the model itself into multiple parts and having those parts computed by different devices. It requires a careful balancing of the size of the parts and of the communication costs to ensure optimal performance. Data parallelism, on the other hand, is about splitting the input data into multiple parts and running multiple copies of the model. It requires attention to model synchronization, so that the copies don't drift apart too much during training, and to the way the results are aggregated.\n\nUsually, data parallelism on a single machine is done using multiple threads, but this approach is unworkable in Python because of the Python GIL. Because of this, we have to turn to multiple processes, and this presents a new set of challenges. Platoon[^14] is a package that has been developed to address those challenges and help train Theano models faster by using data parallelism.\n\nPlatoon features a central controller process that communicates with different worker processes, each using Theano to train a copy of the model on a CPU or GPU. It uses shared memory to share model parameters between workers, in order to avoid inter-process communication overhead. The communications with the central controller are sent asynchronously, so that the worker does not have to wait for a reply.
There is also a script to launch all the workers and monitor them while running that provides a central \"job\" to wait for on clusters.\n\nTwo ways of performing the updates on the central parameters are currently implemented: Asynchronous SGD (ASGD), similar to Downpour SGD\u00a0, and Elastic Averaging SGD (EASGD)\u00a0. Other algorithms can be added by implementing additional parameter synchronization rules.\n\n## Faster compilation of graphs\n\n### Faster, simpler optimizer\n\nAs mentioned in Section\u00a0, some sets of optimizations are pre-defined and can be easily specified. One of these optimizers, `'fast_compile'`, has recently been upgraded to include the optimizations that transfer computation to a GPU, as well as the optimizations necessary to make those optimizations apply. This drastically shortens the graph optimization time, at the cost of a slightly slower execution time and increased memory usage. That option can speed up the development or prototyping phase of a model, allowing the developer to iterate faster.\n\n### Swapping updates without recompiling\n\nIt is now possible to copy functions using the `function.copy()` method. This can be useful when creating functions that are similar but use different shared variables or update parameters, for instance when creating test and validation functions. Most importantly, the optimized graph of the original function is copied, meaning compilation only occurs once.\n\nThe interface for `copy` lets users specify which shared variables to swap, and whether or not updates are carried over. It is also possible to have copied functions share intermediate storage in memory (storage that is not input or output). When this is combined with disabled garbage collection, this can increase execution speed and save memory.\n\n### Save and reload optimized graphs\n\nOptimized computation graphs, such as the ones in Theano functions, can now be serialized using the `pickle` module, and get de-serialized without being optimized again. It is possible to force the re-optimization, for instance if the set of optional dependencies available has changed between saving and reloading, in which case the function may not run (if a dependency has been removed) or be sub-optimal (if one has been added). This is especially useful when check-pointing and restoring running experiments. Note that the C++ or CUDA code may still need to be recompiled.\n\n## Visualization, debugging, and diagnostic tools\n\nSince the definition of Theano functions is separate from their execution, some specific tools have been developed to help users visualize parts or the whole of the computation graph, pinpoint the origin of errors, and understand what is happening at execution time.\n\n### Interactive visualization with d3viz\n\nInteractive visualization of computation graphs is now possible with the `d3viz` module, which extends Theano's printing module. Instead of outputting a text representation (like `debugprint`) or creating a static picture (like `pydotprint`), it creates an HTML file, which can be opened with current web browsers. An example is shown in Figure\u00a0.\n\nSeveral features are supported. Users can zoom different regions, move graphs via drag and drop, and position nodes both manually and automatically. The visualisation can retrieve additional information about nodes and edges such as their data type or definition in the source code, edit node labels and visualize profiling information. 
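The sketch below strings together three of the conveniences described in this section: copying an already-compiled function with a swapped shared variable and its updates removed, pickling it, and rendering its graph with `d3viz`. The keyword names `swap` and `delete_updates` and the exact `d3viz` call are written here as assumptions based on the interface described above, and the toy model itself is hypothetical.

```python
import pickle
import numpy as np
import theano
import theano.tensor as T
from theano.d3viz import d3viz

x = T.dmatrix("x")
w_train = theano.shared(np.ones((3, 2)), name="w_train")
w_valid = theano.shared(np.ones((3, 2)), name="w_valid")

cost = T.dot(x, w_train).sum()
train = theano.function(
    [x], cost,
    updates=[(w_train, w_train - 0.01 * T.grad(cost, w_train))])

# Re-use the already-optimized graph for validation: swap the shared
# variable and drop the gradient updates, without recompiling.
valid = train.copy(swap={w_train: w_valid}, delete_updates=True)

# The optimized function can be pickled and later reloaded without
# triggering graph optimization again.
with open("train_function.pkl", "wb") as fh:
    pickle.dump(train, fh)

# Interactive HTML rendering of the optimized graph.
d3viz(train, "train_graph.html")
```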
Nested graphs such as `OpFromGraph` nodes can also be explored by expanding or shrinking the nodes as needed.\n\nInternally, `d3viz` represents a compute graph in the Graphviz DOT language, using the pydot package, and defines a front-end based on the d3.js library to visualize it. However, any other Graphviz front-end can be used, which allows to export graphs to different formats such as PNG and PDF.\n\n### Test values\n\nDetecting errors in the way a mathematical expression is implemented in Theano can be a challenge, since it is not possible to directly map an intermediate Variable node to the value that will be associated to it at execution time. To mitigate this problem, it is possible to associate a *test value* to input variables, and to compute automatically values associated to intermediate variables as soon as they are defined. This makes it much easier to detect shape mismatches, for instance, or unexpected values.\n\nNote that these values are computed only once, when the graph is built. That means that stability optimizations will not be applied to these values, so NaN (not-a-number) values could be produced during that phase, even if they would not be present when evaluating the optimized graph.\n\n### NanGuardMode\n\nA frequent symptom of issues when optimizing a model is the appearance of NaN (not-a-number), infinity, or very large values. They can indicate a wide range of issues, e.g., use of un-initialized memory, lack of numerical stability in the computation, divergence of the algorithm itself.\n\nTo help diagnosing the appearance of such values, NanGuardMode is an instrumented version of the runtime environment that can check the values of inputs and outputs of each Apply node during execution, and raise an error when some problematic values are detected.\n\n### The PdbBreakPoint Op\n\n`PdbBreakPoint` is an Op designed to check the value of a *condition*, which is a symbolic expression, during the execution of a Theano function. If the condition is met, then the program will drop into the Python debugger (`pdb`), and make available the values associated to a list of pre-defined *monitored* variables. This is especially useful when something goes wrong during the training of a model, but only after a number of iterations, so it is not practical to log all values all the time.\n\n### Keeping the creation stack trace\n\nWhen a variable is created, part of the stack trace is recorded, in particular the line of the call that created it. For instance, if variable `z` is created by calling `z = a + b`, then the line where that expression is called is associated to `z`. If evaluating that expression fails, for instance because `a` and `b` have incompatible shapes, then the error message will mention that file, line, and line number.\n\nA challenge of that mechanism is that, when optimizations are applied, the replacement variables are not created at the same place as the ones they replace (or that \"correspond\" to them in a more general sense). In fact, they are created inside the optimization, so no stack trace is associated to them. 
For instance, if the expression above is optimized to move `a` and `b` to a GPU, and `z` gets replaced by `host_from_gpu(gpu_z)` where `gpu_z = gpu_add(gpu_a, gpu_b)`, then the replacement for `z` can easily retain the original stack trace, but `gpu_z` would not.\n\nTo improve this feature, we are currently in the process of going through all optimizations, so that they assign the creation stack trace of the original variable (or variables) to the \"corresponding\" or equivalent one when they create replacements or new intermediate variables.\n\n# Benchmarks\n\nThis section aims at giving a sense of the performance one might expect from Theano against some of its largest competitors among machine learning research software, on different kinds of models. We used publicly-available software to compare against, when possible. We have already made some of the benchmarking code public, and will try to provide the remaining code in the future.\n\nThe goal of having more extensive benchmarks, on a wider variety of models and frameworks, is more easily attained by online projects, which can provide a more up-to-date picture. Among these projects, we can cite convnet-benchmarks,[^15] rnn-benchmarks,[^16] and hopefully DeepMark[^17] in the future.\n\nWe benchmarked Theano against Torch and TensorFlow (Section\u00a0), on three kinds of popular machine learning models: convolutional networks (Section\u00a0), recurrent neural networks (Section\u00a0), and recurrent neural networks for sequence-to-sequence mapping (Section\u00a0). Finally, we show how the computation speed scales when using multiple GPUs with Platoon (Section\u00a0).\n\n## Setup\n\nAll the benchmarks were run on an NVIDIA Digits DevBox, with 4 Titan X GPUs, and a Core i7-5930K CPU. All the benchmarks except for data-parallelism were run on only one GPU, which was not the one used for running the X server (using `CUDA_VISIBLE_DEVICES`). We used CUDA 7.5.17, with cuDNN v4 (version 4007), and data type float32, for all frameworks and all experiments.\n\nThe compared software was installed as follows:\n\n- Theano was installed from the development version, at commit `1bd371c`. The following configuration flags were used: `floatX=float32`, `lib.cnmem=0.45`, `device=gpu0`, `optimizer_including=unsafe`, `dnn.conv.algo_fwd=time_once`, `dnn.conv.algo_bwd_filter=time_once`, `dnn.conv.algo_bwd_data=time_once`. For fast_compile experiments, the additional option `optimizer=fast_compile` was provided.\n\n- TensorFlow 0.8 was installed from the binary package.\n\n- Torch7 was installed from at commit `ffffc39`.\n\n## Convolutional networks\n\nWe measure the performance of four different convolutional models that have been successfully used on the ImageNet dataset:\n\n- AlexNet, the one-column variant from\u00a0, with a batch size of 128;\n\n- OverFeat, the *fast* variant from\u00a0, with a batch size of 128;\n\n- VGG, also known as OxfordNet, model A\u00a0, with a batch size of 64;\n\n- GoogLeNet V1\u00a0, with a batch size of 128.\n\nWe used the code from at commit `84b5bb1` for Theano, Torch, and TensorFlow. We report the processing time per minibatch, for the forward and the backward pass.\n\nThe results, presented in Figure\u00a0, show that Theano is slightly slower than Torch and TensorFlow, but the performance is comparable, both for the forward and the backward passes.
Furthermore, using the `fast_compile` optimizer shows a slow-down between 10% and 25% only, which is a reasonable trade-off when developing or exploring a new model.\n\n## Recurrent neural networks: LSTM on Penn Treebank\n\nTo showcase recurrent network models, we benchmarked variants of the LSTM model applied to the Penn Treebank dataset described in\u00a0. We compared:\n\n- the Torch implementation available at ;\n\n- the TensorFlow implementation showcased at ;[^18] and\n\n- the Theano implementation available at .\n\nWe measured words per second during training, and report results on the following models:\n\n- Small: Single Layer, 200 hidden units, sequence length: 20;\n\n- Medium: Single Layer, 600 hidden units, sequence length: 40;\n\n- Large: Two Layers, 650 hidden units each, sequence length: 50.\n\nAll three models used dropout on non-recurrent connections during training, following\u00a0. The batch size was set to 20.\n\nFigure\u00a0 shows that Theano comes second behind TensorFlow for the small model, but is slightly faster on the medium and large model. Torch was slower than Theano on all three models, and perhaps more surprisingly, slower than the fast_compile version of Theano on the two larger models.\n\n## Sequence-to-sequence: Caption generation from video\n\nIn this section, we use the sequence-to-sequence mapping model from\u00a0. The input is a series of video frames and the output is a one-sentence English description of the input. Each input video frame is preprocessed by a GoogLeNet that was pre-trained for classification on ImageNet. The representation of the frame is thus a 1024 vector. The entire input is therefore represented by (M, F, 1024) where M is the minibatch size, and F is the number of frames. The output size is (M, L), where M is the minibatch size and L the sentence length (padding is used within a minibatch to ensure the same length, but different minibatches could have different L). Specifically, the model is written as $P(S|V)$, an LSTM on the sentence $S$, conditioned on the video $V$. $V$ is a weighted sum of frames representations.\n\nThe original code for\u00a0 is available at . We used simplified versions, in Theano and TensorFlow, instrumented for profiling, which will be made public in the future. There was no publicly available implementation in Torch. Theano with fast_compile could not run because it was requiring too much memory. We report the processing time per minibatch, for the forward and backward passes, using three different batch sizes.\n\nFigure\u00a0 shows a small advantage to Theano for the forward pass, but a disadvantage for the backward pass. The total time was comparable overall, with Theano being slightly faster on smaller batches, and TensorFlow being faster on larger ones. As expected, the time per minibatch grows slower than the minibatch size, because the potential for parallel computation is greater with larger batches.\n\n## Data parallelism for LSTM\n\nWe re-use the models from Section\u00a0, this time using Platoon to train on multiple GPUs on the same machine, using ASGD. We report results for 2 GPUs (using devices `gpu1` and `gpu2`) and 4 GPUs, compared against the results on 1 GPU obtained without Platoon and reported in Section\u00a0. We measured the overall processing speed (words per second) during training when synchronizing the models after every minibatch, and when synchronizing only every 100 batches. 
The benchmarking code using Platoon will be made public soon.\n\nFigure\u00a0 shows a consistent increase in processing speed when adding more GPUs. As can be seen on the left, communication and synchronization overhead make that scaling sub-linear when synchronizing after every single batch, we found a speed-up between 1.6 and 1.7 for 2 GPUs and around 3.2 for 4 GPUs across all three models. Synchronizing only every 100 batches, on the right, brings the computation speed-up close to the theoretical optimum, at 2 for 2 GPUs and between 3.9 and 4 for 4 GPUs.\n\n# Limitations and challenges\n\nDespite the progress made in recent years and our best efforts, there remain some limitations or shortcomings in Theano. Some of these issues have been addressed by competing frameworks mentioned in Section\u00a0, and by other projects like CGT (Computation Graph Toolkit).[^19]\n\n## Limitations from Python\n\nSince Theano uses Python as its core language, and uses NumPy arrays and other Python objects to store values, it is affected by Python's limitations. The main one is the Python GIL, that limits concurrent execution of threads. We have seen that it is possible to make single-threaded execution fast by compiling binary modules that are then loaded in Python (Sections\u00a0 and\u00a0), and it would also be possible to release the GIL during the execution of these functions. However, the GIL has to be acquired again each time references to Python objects are added or removed, when using the C API of Python and NumPy. Since the execution of such functions is usually quite short, most threads would spend their time waiting for the lock instead of performing actual computation.\n\nSince Python has a concept of threads and expects to be in charge of threading, it is also not possible to launch different, independent Python interpreters in different threads of the same process, as is possible with Lua for instance.\n\nTo avoid that issue, we could use a different n-dimensional array structure, that is accessible directly from C++ without actually being a Python object, like the one libgpuarray provides on the GPU. It would require Theano to explicitly manage memory allocation and deallocation, in a thread-safe way. It would also require to rewrite all the C++ and CUDA code for existing Ops, so that they use a different interface for reading their input data and writing their output data. Finally, it could make it harder to create new Ops by integrating existing Python code.\n\n## Graph optimization time\n\nThe execution time of the graph optimization phase is not scaling well with graph size. Currently, it is scaling supra-linearly relative to the number of nodes. One issue is that some groups of local optimizations try to apply over and over, until none of them can be applied any more, and the graph stops changing. In practice, it can force a number of passes through the whole graph that becomes bigger for bigger graphs (the chances of some local optimization applying somewhere are higher).\n\nAn option would be to completely reorganize the existing optimizations so that they are more lightweight, and can be applied in a fixed number of passes through the graph. It could be possible, for instance, to use a one-pass or two-pass optimization phase, like CGT does. 
Doing that without any regressions in the stability optimizations could be a large-scale project.\n\n## Code compilation time\n\nCurrently, the same Theano Op can generate a large quantity of different C++ or CUDA modules, depending on its properties at compile time, such as the data type of inputs and outputs, whether it will run in place, and other flags determining its behaviour. Compiling and loading those modules can take time and add a load on the file system.\n\nTo alleviate those issues, it would be possible in most cases to pass that information dynamically at runtime, instead of hard-coding it in the generated code. This approach is already being used in the new back-end to specify which GPU should be used for the execution of a particular Apply node, but it could be generalized.\n\n## Loops and control-flow structures\n\nUsing Scan for loops, and the `ifelse` lazy Op for conditionals, has proven a useful way of expressing control-flow operations. However, with an increasing need for more flexibility (attention mechanisms, nested loops, recursive loops, changes in shape between iterations of the same loop), we may need a more principled way of expressing these structures.\n\nOne appealing way would be to use *switch* and *merge* Apply nodes in the computation graph, like in a dataflow graph\u00a0. This is the approach taken by TensorFlow\u00a0 for symbolic loops. This would require adding support for cycles in the computation graph in these circumstances, extending the runtime to be able to recompute values inside the loop, and rewriting all the graph optimizations currently existing for Scan, including the ones limiting memory consumption.\n\n## Multi-node parallelism\n\nScaling model execution and training to multiple machines is outside of the scope of Theano's core, but additional packages could be developed to interface with Theano, in the same way Platoon does for multiple GPUs in a single node. In fact, tools like parameter servers and coordinators do not have to be specific to Theano, and could be common to different frameworks.\n\n## Improving memory usage\n\nGiven the limited availability of on-board GPU memory, memory consumption is often a bottleneck for training machine learning algorithms. This can limit the size and modelling power of trainable models, and make the processing power of GPUs under-used, for instance when batch sizes have to be reduced. In addition to storing intermediate values in a lower-precision format (for instance, storing data as float16 is supported in Theano's new GPU back-end), different options could be explored and combined:\n\n- Change the order of execution of computations, so the peak memory usage is reduced. This can be done statically before the function is executed, or dynamically, for instance by detecting that memory is insufficient and waiting for some other computation to finish and free intermediate values.\n\n- Move intermediate values to the main (CPU) memory, or to another GPU's memory, if it is not needed for a while, and transfer it back before it is used again. This method has been successfully implemented by\u00a0.\n\n- Free intermediate values, and recompute them when they are needed again. 
This approach has been used in\u00a0, and can be especially useful for fast operations that have large outputs.\n\n## The future of gradient-based computation frameworks\n\nTools like Theano and TensorFlow are compilers for mathematical expressions, in that they require the code (or computation graph) to be defined first, and then executed. On the other hand, Torch works more like an interpreter: the computation is done as soon as the expression is called. It could be interesting to explore how to apply JIT (just-in-time) compiler ideas to the computation graph, to combine the immediate response and flexibility of an interpreter (including using control flow statements like `if`, `for`, `while`, from the language directly), and the performance gains of a compiler when an expression has to be evaluated multiple times.\n\nMost machine-learning frameworks can now share efficient implementations of GPU kernels, such as the ones published by NVIDIA (cuDNN) and Nervana. Graph optimizations could be another component shared between projects, maybe through a common language to define computation graphs and such optimizations. It could be common to machine learning frameworks and computer algebra systems (CAS) such as SymPy\u00a0 and SympyCore.[^20]\n\n# Conclusion\n\nTheano pioneered ideas for efficient gradient-based computation that are now part of most mainstream machine-learning research libraries, for instance combining a high-level scripting language with highly-optimized computation kernels, especially using GPUs, symbolic computation graph, and symbolic differentiation. Some other features of Theano, like graph rewriting and optimizations, and automatic generation and compilation of kernels, are starting to become more widely used as well.\n\nContinuous improvements have been made to Theano's functionality, usability, and performance, for instance wrapping libraries like cuDNN, and integrating ideas that have been successfully explored and implemented by other frameworks, like data parallelism and model parallelism for distributed computation. Computation performance is on par with other major research software, like Torch and TensorFlow.\n\nThere are ways to improve Theano (and other frameworks as well) by taking inspiration from other machine learning software (sometimes more experimental). Longer-term improvements could be the result of collaborations with other fields, for instance CAS, and language and compiler design, in order to build a next generation of mathematical computation software.\n\n[^1]: code available at \n\n[^2]: Some OpenCL support is available in the new GPU back-end, but it is still limited and experimental.\n\n[^3]: \n\n[^4]: \n\n[^5]: \n\n[^6]: \n\n[^7]: \n\n[^8]: For instance, the deep learning tutorials at \n\n[^9]: \n\n[^10]: \n\n[^11]: \n\n[^12]: The original code is available at , Theano includes a copy of it.\n\n[^13]: , code available at \n\n[^14]: \n\n[^15]: \n\n[^16]: \n\n[^17]: \n\n[^18]: Code at \n\n[^19]: \n\n[^20]: ","meta":{"dup_signals":{"dup_doc_count":12},"filename":"out\/1605.02688_extract_theano2016.tex.md"},"subset":"arxiv"} +{"text":"abstract: This work proposes a markovian memoryless model for the DNA that simplifies enormously the complexity of it. We encode nucleotide sequences into symbolic sequences, called words, from which we establish meaningful length of words and group of words that share symbolic similarities. 
Interpreting a node to represent a group of similar words and edges to represent their functional connectivity allows us to construct a network of the grammatical rules governing the appearance of groups of words in the DNA. Our model allows us to predict the transitions between groups of words in the DNA with unprecedented accuracy, and to easily calculate many informational quantities to better characterize the DNA. In addition, we reduce the DNA of known bacteria to a network of only tens of nodes, show how our model can be used to detect similar (or dissimilar) genes in different organisms, and show which sequences of symbols are responsible for most of the information content of the DNA. Therefore, the DNA can indeed be treated as a language, a markovian language, where a \"word\" is an element of a group, and its grammar represents the rules behind the probability of transitions between any two groups.\nauthor: S. Srivastava$^{1}$ and M. S. Baptista$^{1}$\ntitle: Markovian language model of the DNA and its information content\n\n$^{1}$ Institute for Complex Systems and Mathematical Biology, SUPA, University of Aberdeen, Aberdeen, AB24 3UE, United Kingdom\n\n# Introduction\n\nOne of the most studied complex systems in biology is the genome of living organisms, composed of Deoxyribonucleic Acid (DNA). The central dogma of life is related to how DNA is transcribed into mRNA, which is finally translated into proteins. A big challenge is to understand the complexity of the dynamical organisation of the DNA, a system that is the result of millions of years of evolution. The genome of any organism is its hereditary information. It is encoded either in the form of DNA, composed of four different types of chemical molecules $[$*Adenine (A), Guanine (G), Cytosine (C) and Thymine (T)*$]$, or in the form of Ribonucleic Acid (RNA), as present in many viruses. The genome includes both the genes and the non-coding sequences of the DNA\/RNA . Sequencing and high-throughput experiments have contributed much to the genomic data in the last 10-15 years. Still, the data has to be analysed .\n\nNatural language analysis has been a topic of interest in the last decade . Natural language written texts can be considered as being composed of a series of letters, syllables, words or phrases. During the 19$^{th}$ century many linguists, like Schleicher and Haeckel, interpreted language as a living system . Based on this concept, Darwin also proposed that the evolution of species and of language are similar . Many researchers have introduced concepts from linguistics into biology . Brendel et al. used formal linguistic concepts to define a basic grammar for genes, based on the idea that mutating a piece of genetic information was similar to modifying words . Similar to works that aimed at finding the relevant words, their relationships and their information content in natural languages, many studies have focused on analysing genomic sequences like DNA and proteins as if they were a language, using similar methodological approaches to the ones used to model natural languages . Formal linguistic concepts were used in Ref. to define basic grammatical rules that describe how genes can mutate, inspired by the grammatical rules of languages that regulate how and which word follows a previous word. Gramatikoff *et
al* have used lexical statistics to identify and represent structural, functional and evolutionary relationships for multiple genomic sequences and texts from natural languages.\n\nDNA sequences have also been analysed using approaches to characterise complex systems, for example by converting the DNA sequences to numerical signals using different mappings . A commonly used mapping is to convert the DNA into binary sequences . Other mappings of the DNA look for spatial patterns by considering inter-nucleotide distances . These models of the DNA capture the recurrence property of codons (a word formed by 3 nucleotides) by measuring the statistics of the symbolic distance separation between two codons. In the work of Ref. , they have shown how to analyse the long DNA sequences by converting it to an image. The DNA was characterized by the fractal-like patterns appearing in this image as a result of the forbidden words.\n\nOur model is constructed using some concepts and tools from Ergodic Theory and Information Theory to interpret genomic data (nucleotide sequences) as if it were a language that can be analysed by the tools of symbolic dynamics. Each language has its rules. Our motivation in this work is to propose and study a meaningful language for the DNA. To establish a language for the DNA, we specify the length of words and a set of relevant groups of words, and create a network of words from functional connections, linking how topological complexity in the functional networks arises and how this complexity is connected to the complexity of life (production of proteins). In order to achieve this goal, we analysed the genome of *Escherichia coli* or *E.coli*, *Shigella dysenteriae*, *Rhodococcus fascians* and *Saccharomyces cerevisiae*. These organisms are commonly used model systems: their genome and genes are well known, well studied, and can be used to test mathematical approaches towards modelling the DNA. Firstly, we represent the genome of these organisms on a symbolic space. The words of the genome are encoded in such a way that the nucleotide sequences are represented by real numbers that can be plotted in a symbolic space. This space allows a straightforward characterisation of the DNA through informational quantities (Shannon entropy rate, mutual information rate, and statistical measures), and ergodic quantities (correlation decay, and transition probabilities). To group the words and specify their lengths, we find a partition of the symbolic space composed by $N^2$ equal boxes. A box is a region in this symbolic space whose points within encode and define a group of words of length $2L$ that are all formed by the same small sequence of symbols with length $2L_{n}$, where $L_n = \\frac{1}{2}\\log_{2}(N)$, $N=4^i$, $i \\in N$, and $L_{n}www.ons.gov.uk<\/a>). The constituency boundary areas were calculated from geographic shape files of the Ordnance Survey Boundary Line dataset. Crime data were obtained from the Home Office via their open data portal (). Property data were obtained from the Land Registry. These data were collated on the UKCrimeStats () data platform and provided as monthly reports. The crime data from 2014 were captured on 10\/6\/2015 and property transaction value data on 17\/7\/2015. Prior to analysis the monthly values from each constituency were summed over the 12 months of study. If a constituency did not have any crime or property transaction of a particular type over the 12 month period it was removed from the analysis. 
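A sketch of this aggregation step, assuming the monthly reports have been loaded into a long-format table with one row per constituency, month, and category; the file and column names are hypothetical.

```python
import pandas as pd

# Hypothetical long-format table of the monthly reports:
# columns: constituency, month, category, count
monthly = pd.read_csv("monthly_reports_2014.csv")

# Sum the twelve monthly values for each constituency and category.
annual = (monthly
          .groupby(["constituency", "category"], as_index=False)["count"]
          .sum())

# Drop constituency/category pairs with no reports over the 12-month period.
annual = annual[annual["count"] > 0]
```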
Only the Cities of London and Westminster (Semi-detached) and Bethnal Green and Bow (Detached) in England reported no property transactions of a particular type in the period and were dropped from the respective property analyses. The entire data set is maintained and made freely available by the Office of National Statistics and the UK Home Office. As these data are subject to updates, the snapshot has been provided as .\n\n```latex\n\\begin{adjustwidth}{1.5cm}{0in}\n\\begin{tabular}{l|l}\n\\hline\n\\multicolumn{2}{c}{Constituency metrics, $Y$}\\\\\n\\hline\n{Crime type} & {Property Type}\\\\\n\\hline\nAnti-Social Behavior (ASB) & Detached\\\\\nBike Theft & Flats\\\\\nBurglary & Freehold\\\\\nCriminal Damage and Arson (CD \\& A) & Leasehold \\\\\nDrugs & New \\\\\nOrder & Old\\\\\nOther Crime & Semi-detached\\\\\nOther Theft & Terraced \\\\\nRobbery & Total Property \\\\\nShoplifting & \\\\\nTheft from the Person & \\\\\nTotal Crime and ASB & \\\\\nVehicle Crime & \\\\\nViolence & \\\\\nWeapons & \\\\\n\\hline\n\\end{tabular}\n\\label{tab:0}\n\\end{adjustwidth}\n```\n\n## Overview of Parliamentary Regions\n\nParliamentary Constituencies were selected as regions with clearly defined shapes and similar populations while not being exclusively urban. Parliamentary constituency data were obtained for all 573 constituencies in England and Wales. The regions ranged in area from 331,440 ha (Penrith and The Border) down to 738 ha (Islington North). Constituency populations were from 56,651 (Aberconwy, Wales) to 163,398 (West Ham, England) while population density ranged from 0.22 people per hectare (Brecon and Radnorshire, Wales) up to 150 p\/ha (Westminster North, England). Similar values for daytime population were from 55,453 (Aberconwy, Wales) to 946,397 (Cities of London and Westminster, England) and daytime population densities from 0.22 p\/ha (Brecon and Radnorshire, Wales) to 550.3 p\/ha (Cities of London and Westminster, England). This range of population densities includes regions that exceed the density of many of the world's largest cities when considered as a whole. It is notable that constituency populations for England and Wales fall within a factor of 3; however, total reported crime and anti-social behavior varied by a factor of 17 and total property transactions by a factor of 65.\n\n# Results and Discussion\n\nUrban power-law scaling has been observed in many parts of the world\u00a0. Aspects remain controversial in part due to uncertainty about how best to define cities and concern about the use of population as a definitive metric\u00a0. Bettencourt *et al.*\u00a0 defined the urban scaling of a particular metric at a particular time as $$\\label{eq_usualscaling}\nY= Y_0\\, N^\\beta~~{\\text{or its linearized version}~~} \\log Y = \\log Y_0 + \\beta \\log N\\,.$$ In this, $Y$ is a metric (*e.g.* energy, patents, serious crime), $Y_0$ is a constant, $N$ is the population, and $\\beta$ the power-law (or allometric) exponent. When $\\beta < 1$ the metric decreases proportionally with scale (such as road surface or petrol stations) and when $\\beta > 1$ the metric accelerates (examples include GDP and new AIDS cases).\n\nThe form of Eq.\u00a0 can be adapted to consider other metrics. 
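For reference, fitting the linearized form of the scaling relation above reduces to ordinary least squares on log-transformed data. The sketch below uses synthetic placeholder values for $N$ and $Y$; only the fitting step reflects the procedure described here.

```python
import numpy as np
from scipy import stats

# Synthetic placeholder data: population N and one metric Y per constituency.
rng = np.random.default_rng(0)
N = rng.uniform(5e4, 1.6e5, size=573)
Y = 0.02 * N**1.16 * rng.lognormal(sigma=0.3, size=573)

# Linearized single power law: log Y = log Y0 + beta * log N.
fit = stats.linregress(np.log10(N), np.log10(Y))
beta, log_Y0 = fit.slope, fit.intercept
r_squared = fit.rvalue ** 2
```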
For our data, the scaling behavior of property transaction values and police reported crime were tested by comparing 8 models considering population, daytime population, population density, and daytime population density as predictors of crime and property metrics expressed directly (*e.g.* number of crimes) or as a density (crimes per hectare). For instance, when considering both population and indicator density, Eq.\u00a0 can be rewritten as $$\\label{eq_usualscaling_d}\n\\log y = \\log y_0 + \\beta \\log d\\,,$$ where $y=Y\/A$ is the indicator density (*e.g.* a particular crime per hectare) and $d=N\/A$ is the population density. Figure\u00a0 illustrates some of these models by showing scatter plots of $\\log Y \\times \\log N$, $\\log Y \\times \\log d$, $\\log y \\times \\log N$ and $\\log y \\times \\log d$ for the metrics total crime and total property value. By including all categories of crime and property together in a single analysis, we found that the density metrics were superior with daytime population density slightly better for prediction of crime and resident population density better for predicting property transaction values. For this set of metrics, both $R^2$ and predicted residual sum of squares (PRESS) statistics from general prediction models confirmed the density metrics were superior (Table\u00a0).\n\n```latex\n\\begin{adjustwidth}{-2.25in}{0in}\n%\\renewcommand{\\arraystretch}{.8}\n\\caption{\\textbf{Comparison of metrics for prediction of crime and property transaction values.} All models included categorical variables describing the type of crime or property as: Predictor, Type, Predictor*Type. The model with the best $R^2$ and PRESS statistics have been highlighted in bold. }\n\\centering\n\\begin{tabular}{llrr}\n\\hline\n{Dependent } & {Predictor} & $R^2$ (\\%) & PRESS \\\\\n\\hline\nLog(Crime) & Log(Population) & 84.97 & 541\\\\\nLog(Crime) & Log(Daytime Population) & 85.70 & 519 \\\\\nLog(Crime Density) & Log(Population Density) & 95.36 & 380 \\\\\n\\textbf{Log(Crime Density)} & \\textbf{Log(Daytime Population Density)} & \\textbf{95.92} & \\textbf{333} \\\\\nLog(Transaction Value) & Log(Population) & 59.63 & 703 \\\\\nLog(Transaction Value) & Log(Daytime Population) & 55.44 & 774 \\\\\n\\textbf{Log(Transaction Value Density)} & \\textbf{Log(Population Density)} & \\textbf{80.31} & \\textbf{689} \\\\\nLog(Transaction Value Density) & Log(Daytime Population Density) & 79.90 & 703 \\\\\n\\hline\n\\end{tabular}\n\\label{tab:1}\n\\end{adjustwidth}\n```\n\nTo appreciate the superiority of the density metrics, it is helpful to view the correlations in isolation (Fig\u00a0 and [S1\u00a0Fig](#S1_Fig)). A large improvement in correlation is seen when moving to density metrics and in some cases this changed the sign of the correlation. This was also apparent in the general models where a qualitative change from models dominated by categorical variables to ones dominated by continuous variables was observed when density was used. The improvement obtained from population density metrics was not surprising given the data set used. Parliamentary Constituencies were chosen due to having relatively small variations in total population while varying greatly in area.\n\nThe shift from population density to daytime population density gave a comparatively marginal change in outcome (Fig\u00a0). 
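The head-to-head comparison of the two density predictors reported in the next paragraph rests on simple binomial sign tests of how often one predictor out-correlates the other. A sketch using the crime counts quoted below (12 of 15 categories favouring daytime density), assuming a two-sided test, which reproduces the quoted p-value.

```python
from scipy import stats

# 12 of the 15 crime categories were more highly correlated with daytime
# population density; under the null hypothesis that the two predictors are
# equivalent, each category is a fair coin flip.
result = stats.binomtest(12, n=15, p=0.5)   # requires SciPy >= 1.7
print(result.pvalue)                        # ~0.035, matching the value quoted below
```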
Across all property and crime categories, 13\/24 were more highly correlated with daytime property density than resident population density, roughly the expectation if the two predictors were equal. However, property and crime had distinct profiles. For property, resident population density was always more highly correlated than daytime population density giving $p = 0.0078$ for a binomial test; however, it is not significantly better when considered in isolation (Fig\u00a0). For crime, 12 out of 15 categories ($p = 0.0352$ for a binomial test) were more highly correlated with daytime population density with only 2 cases (Other Theft and Shoplifting) significant when considered in isolation. Other theft includes a range of non-violent theft offenses where large daytime crowds may facilitate commission of the crime. Also, we find no significant difference between population density and daytime population density for all property and crime categories when considering the maximal information coefficient (MIC, [S1\u00a0Fig](#S3_Fig))\u00a0. As the improvement overall going to daytime population data was marginal and the availability of similar data across the world is limited, we focused on resident population density metrics in our subsequent presentation.\n\nAs in the case shown for total property (Fig\u00a0), we found that several density metrics displayed a more complex scaling behavior and a single power law (Eq.\u00a0) was insufficient to describe the observed data. Complex scaling has been observed in other types of scaling. For example, it has been noted in fluctuation scaling of crime\u00a0, disease\u00a0, and a variety of physical processes\u00a0 and scientists have been encouraged to test alternative models to power laws when appropriate\u00a0. Here, visually inspired by the behavior of our data, we tested whether a double power-law provided a significantly better fit between a density metric ($y$) and the population density ($d$) than a single power law, that is, $$\\label{eq_doublescaling}\n\\log y = \n\\begin{cases}\n\n\\log y_0 + \\beta_{\\text{L}}\\log d & (\\text{for}~\\log d\\leq \\log d^*)\\\\\n\\log y_0 + (\\beta_{\\text{L}}-\\beta_{\\text{H}})\\log d & (\\text{for}~\\log d> \\log d^*)\\\\\n\\end{cases}\\,,$$ where $d^*$ is a population density threshold, $\\beta_{\\text{L}}$ ($\\beta_{\\text{H}}$) is the power-law exponent for low (high) population density, $y_0$ and $y_1$ are constants. In particular, we have chosen $\\log y_1 = \\log y_0 + (\\beta_{\\text{L}}+\\beta_{\\text{H}})\\log d^*$, holding the continuity of $y(d)$. Thus, the model of Eq.\u00a0 has two additional parameters when compared with the single power-law model of Eq.\u00a0. This approach provides a picture of the data based on the prevailing view of population scaling (a single power law) against a simple alternative of a double power law. In all cases, the parameters reported were highly significant, which does not rule out that another function or set of functions may fit the data better.\n\nWe compared the models provided by Eqs.\u00a0 and\u00a0 and tested whether the double power-law model gave statistically significant improvement. For the single power-law (Eq.\u00a0), we employed ordinary least squares regression in the log transformed data for obtaining the parameters $y_0$ and $\\beta$ as well as the adjusted $R^2$. We then used bootstrapping to determine the confidence intervals for the adjusted $R^2$. 
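A minimal sketch of this bootstrap step for the single power-law fit, assuming that constituencies are resampled with replacement; the input arrays are hypothetical placeholders.

```python
import numpy as np

def adjusted_r2(log_d, log_y):
    """Adjusted R^2 of the OLS fit log y = log y0 + beta * log d."""
    beta, log_y0 = np.polyfit(log_d, log_y, 1)
    resid = log_y - (log_y0 + beta * log_d)
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((log_y - log_y.mean()) ** 2)
    n, k = len(log_y), 1
    r2 = 1.0 - ss_res / ss_tot
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)

def bootstrap_ci(log_d, log_y, n_boot=2000, alpha=0.01, seed=0):
    """Percentile bootstrap confidence interval for the adjusted R^2."""
    rng = np.random.default_rng(seed)
    n = len(log_y)
    values = [adjusted_r2(log_d[idx], log_y[idx])
              for idx in (rng.integers(0, n, size=n) for _ in range(n_boot))]
    return np.quantile(values, [alpha / 2, 1 - alpha / 2])
```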
Simulated annealing\u00a0 was used for fitting the double power-law model (Eq.\u00a0) to the log transformed data by considering the residual sum of squares as the cost function, yielding the parameters $y_0$, $y_1$, $\\beta_{\\text{L}}$ and $\\beta_{\\text{H}}$, and also the adjusted $R^2$. Again, the confidence intervals for the adjusted $R^2$ were calculated via bootstrapping. We further considered two-sample bootstrap tests for testing the null hypothesis that the adjusted $R^2$ from Eqs.\u00a0 and\u00a0 are equal\u00a0. Figure\u00a0 compares the values of the adjusted $R^2$ for both models, where we noticed that double power-law model is superior in 19 out 24 metrics. Similar conclusions regarding the model selection were obtained by considering the Akaike Information Criterion (AIC) or Bayesian Information Criterion (BIC)\u00a0 ([S2](#S2_Fig) and [S3\u00a0Figs](#S3_Fig)).\n\nThe improvement in the scaling laws using the density metrics thus revealed segmented scaling in several but not all metrics (Fig\u00a0) indicating the onset of complex scaling (Eq.\u00a0). The scaling parameters are shown in Table\u00a0, where we observe that 5 crime metrics followed a single scaling law over all densities with no evidence for a specifically \"urban\" scaling law, only a continuation of low density behavior. The remaining metrics all exhibited complex scaling and all thresholds fell between 10 and 70 p\/ha.\n\n```latex\n\\begin{adjustwidth}{-2.25in}{0in}\n%\\renewcommand{\\arraystretch}{.9}\n\\caption{\\textbf{Scaling parameters for police crime report density and property transaction value density with population density.}}\n\\centering\n\\begin{tabular}{lrrrrr}\n\\hline\n{Crime Type} & \\multicolumn{1}{c}{$\\log(y_0)$} & \\multicolumn{1}{c}{$\\beta_{\\text{L}}$ or $\\beta$} & \\multicolumn{1}{c}{$\\log(y_1)$} & \\multicolumn{1}{c}{$\\log(d^*)$} & \\multicolumn{1}{c}{$\\beta_{\\text{H}}$}\\\\\n\\hline\nASB & $-1.62\\pm0.02$ & $1.13\\pm0.02$ & $-1.30\\pm0.13$ & $1.47\\pm0.13$ & $0.91\\pm0.08$\\\\\nBike Theft & $-3.26\\pm0.02$ & $1.27\\pm0.02$ & $-4.62\\pm0.77$ & $1.80\\pm0.12$ & $2.03\\pm0.43$\\\\\nBurglary & $-2.35\\pm0.01$ & $1.18\\pm0.01$ & \\multicolumn{1}{c}{-} & \\multicolumn{1}{c}{-} & \\multicolumn{1}{c}{-}\\\\\nCD and A & $-2.21\\pm0.01$ & $1.14\\pm0.01$ & $-1.55\\pm0.11$ & $1.52\\pm0.05$ & $0.71\\pm0.07$\\\\\nDrugs & $-2.77\\pm0.02$ & $1.08\\pm0.03$ & $-3.13\\pm0.08$ & $1.13\\pm0.10$ & $1.40\\pm0.05$\\\\\nOrder & $-2.91\\pm0.02$ & $1.16\\pm0.03$ & $-3.20\\pm0.07$ & $1.06\\pm0.12$ & $1.43\\pm0.05$\\\\\nOther Crime & $-3.29\\pm0.01$ & $1.15\\pm0.01$ & \\multicolumn{1}{c}{-} & \\multicolumn{1}{c}{-} & \\multicolumn{1}{c}{-}\\\\\nOther Theft & $-2.26\\pm0.01$ & $1.11\\pm0.01$ & $-2.57\\pm0.08$ & $1.40\\pm0.09$ & $1.33\\pm0.05$\\\\\nRobbery & $-3.98\\pm0.02$ & $1.55\\pm0.03$ & $-4.73\\pm0.14$ & $1.32\\pm0.08$ & $2.12\\pm0.10$\\\\\nShoplifting & $-2.56\\pm0.02$ & $1.26\\pm0.02$ & $-1.61\\pm0.16$ & $1.50\\pm0.06$ & $0.63\\pm0.10$\\\\\nTheft from the Person & $-3.68\\pm0.03$ & $1.36\\pm0.03$ & $-4.84\\pm0.18$ & $1.39\\pm0.06$ & $2.20\\pm0.12$\\\\\nTotal Crime and ASB & $-1.22\\pm0.01$ & $1.16\\pm0.01$ & \\multicolumn{1}{c}{-} & \\multicolumn{1}{c}{-} & \\multicolumn{1}{c}{-}\\\\\nVehicle Crime & $-2.54\\pm0.01$ & $1.27\\pm0.01$ & \\multicolumn{1}{c}{-} & \\multicolumn{1}{c}{-} & \\multicolumn{1}{c}{-}\\\\\nViolence & $-2.06\\pm0.01$ & $1.12\\pm0.02$ & $-2.28\\pm0.06$ & $1.17\\pm0.13$ & $1.30\\pm0.04$\\\\\nWeapons & $-3.78\\pm0.02$ & $1.23\\pm0.02$ & \\multicolumn{1}{c}{-} & 
\\multicolumn{1}{c}{-} & \\multicolumn{1}{c}{-} \\\\\n\\hline\nProperty Type & & & & &\\\\\n\\hline\nDetached & $3.30\\pm0.03$ & $0.77\\pm0.04$ & $4.47\\pm0.14$ & $1.21\\pm0.06$ & $-0.20\\pm0.10$\\\\\nFlats & $2.13\\pm0.05$ & $1.13\\pm0.05$ & $-1.65\\pm0.48$ & $1.55\\pm0.04$ & $3.57\\pm0.30$\\\\\nFreehold & $3.55\\pm0.02$ & $0.83\\pm0.02$ & $2.48\\pm0.42$ & $1.70\\pm0.10$ & $1.46\\pm0.25$\\\\\nLeasehold & $2.24\\pm0.04$ & $1.26\\pm0.04$ & $-1.83\\pm0.69$ & $1.68\\pm0.04$ & $3.68\\pm0.40$\\\\\nNew & $2.30\\pm0.03$ & $0.86\\pm0.03$ & $-1.88\\pm1.06$ & $1.80\\pm0.05$ & $3.19\\pm0.58$\\\\\nOld & $3.55\\pm0.02$ & $0.89\\pm0.02$ & $0.92\\pm0.42$ & $1.71\\pm0.04$ & $2.43\\pm0.24$\\\\\nSemi Detached & $2.90\\pm0.02$ & $1.05\\pm0.03$ & $3.84\\pm0.14$ & $1.41\\pm0.06$ & $0.38\\pm0.09$\\\\\nTerraced & $2.83\\pm0.02$ & $1.00\\pm0.02$ & $1.23\\pm0.22$ & $1.55\\pm0.04$ & $2.04\\pm0.14$\\\\\nTotal Property & $3.57\\pm0.02$ & $0.90\\pm0.02$ & $0.69\\pm0.46$ & $1.73\\pm0.04$ & $2.56\\pm0.26$\\\\\n\\hline\n\\end{tabular}\n\\label{tab:2}\n\\end{adjustwidth}\n```\n\nComparison of exponents (Figs\u00a0 and\u00a0) revealed four types of density scaling including three specifically related to \"urban effects\" . The \"non-urban scaling\" was found for burglary, other crime, total crime and antisocial behavior, vehicle crime, and weapons. This designation was applied to metrics where no threshold value could be discerned in the data. Of the three types of urban scaling, the first is \"accelerated urban scaling\" where $\\beta_{\\text{L}} < \\beta_{\\text{H}}$. This was observed in the majority of metrics and applied to: bike theft, drugs, order, other theft, robbery, theft from the person, violence, flats, freehold, leasehold, new, old, terraced, and total property. Metrics following accelerated urban scaling are specifically enhanced in an urban environment. The second urban category is \"inhibited urban scaling\" ($\\beta_{\\text{L}}>\\beta_{\\text{H}} > 0$). Inhibited urban scaling was observed for anti-social behavior, criminal damage and arson (CD and A), shoplifting, and semi-detached properties. Metrics following inhibited urban scaling undergo specifically urban economies of scale. The last urban category is \"collapsed urban scaling\" ($\\beta_{\\text{L}}>\\beta_{\\text{H}}$, with $\\beta_{\\text{H}}<0$). Only a single category (detached housing) followed this type of scaling and to our knowledge this is the first time a negative exponent has been reported in the context of urban scaling.\n\nScaling studies of many of the crime metrics used here have not been reported nor have their transitions in urban environments. The variable effects of high population density are noteworthy. For example, criminal damage which undergoes inhibited urban scaling has been associated with binge drinking in the UK\u00a0 while property crimes including criminal damage have been linked to foreclosures in the US\u00a0. Finding general scaling laws for such behaviour suggests many of these have a wider context. In the case of criminal damage and arson, opportunities appear to be reduced at high population density and high amounts of property crime associated with foreclosures may be a symptom of loss of an inhibitory population density rather than foreclosures directly. A detailed review aligning the scaling laws reported here with the extensive criminological literature should provide considerable insight.\n\nSpecifically urban scaling phenomena were associated with transitions between 10 and 70 p\/ha. 
The high end of the density thresholds (63 p\/ha) exceeds the highest density threshold considered by Arcuate et al. (40 p\/ha)\u00a0. It also exceeds the average population density of London (59 p\/ha) and all the large cities in Europe and North America outside of Mexico (*e.g.* Moscow (35 p\/ha), Paris (38 p\/ha), and Zurich (32 p\/ha))\u00a0. It is noteworthy that the highest population density found in a US or Canadian city (Los Angeles) is 24 p\/ha when considered as a whole\u00a0. This suggests that most of the transitions seen here may be unobservable in much of Europe and North America unless cities are subdivided into high density regions as was done here.\n\n# Conclusion\n\nThis study significantly refines the urban scaling hypothesis. It set out to investigate regions that are reasonably well matched in population to accentuate scaling behaviours that might arise from inhomogeneity within cities and other density related features. Despite relatively small population variation, there is support for the existing view of population scaling, however in this data set density metrics were universally better. For some metrics, a single power law is sufficient to explain scaling at all population densities over a continuum from rural to urban. These metrics are subject to a single rural-urban scaling law and, in such cases, the scaling behavior of human environments is simpler than previously thought. As there is no clear distinction to be made between urban and rural environments for these metrics, there is less need to define city boundaries precisely. For other metrics, there is indisputable evidence for specifically rural and specifically urban scaling.\n\nThe results indicate that many metrics are not scale invariant in what are currently understood as urban settings. Observed transitions from rural to urban behaviour were in the range of 10-60 p\/ha which is roughly in the midrange of the top 1000 cities with $>$``{=html}500,000 of population when sorted by population density\u00a0. These scaling transitions are associated with acceleration, inhibition, or collapse of the scaling law within the high population density environments of cities. Such behavior is intuitive for some metrics. For example, detached housing is clearly an unsustainable property type at high population density and a collapse in transactions of this type is unsurprising in a high density urban environment. Finding a transition at urban population densities clearly supports the notion of uniquely urban behavior underlying the urban scaling hypothesis. However, most currently published studies have not examined the low side of these density thresholds in detail and will miss the transition from rural to urban scaling. It is also of interest to do more extensive studies on the great cities of Asia, Africa, and the Americas south of the Mexico-USA border. Cities in these parts of the world have particularly high population densities not found in Europe and other parts of North America and may yield more interesting behavior.\n\nImplicit in the design of this study is the notion that both rural and urban environments are non-uniform. A city the size of London is heterogeneous in its distribution of population, property and crime. 
Greater London includes 73 constituencies allowing the non-uniformity of this region to be considered in the scaling models rather than as a single monolithic conurbation or metropolitan region.\n\nFinally, this study adds evidence to the long-standing challenge to crime rates and per capita comparisons\u00a0. It is clear that high or low per capita crime rates are uninterpretable outside of the context of the scaling law to which they belong and, based on the current study, similar considerations are appropriate for the study of property transactions.\n\n# Supporting Information\n\n## S1 Dataset\n\n**Data employed in this study.** Snapshot of police reported crime captured 10\/6\/2015 and property transaction values captured 17\/7\/2015 for the 12 months of 2014. (XLSX)\n\n## S1 Fig.\n\n![image](figs0.pdf)\n\n**Comparison of maximal information coefficient (MIC) for different property and crime types.** Similarly to adjusted $R^2$, markedly improved correlations are observed using density metrics which were superior in all cases. Here the error bars stand for 99% confidence interval obtained via bootstrap. Unlike adjusted $R^2$, MIC indicates no significant difference between population density and day population density (via bootstrap two-sample mean test with 99% confidence) for other theft and shoplifting.\n\n## S2 Fig.\n\n![image](figs1.pdf)\n\n**Comparison of the single power-law model (Eq.\u00a0) and the double power-law model (Eq.\u00a0) using the Akaike Information Criterion (AIC).** Error bars stand for 99% bootstrap confidence intervals and the asterisk marks indicate a significant difference (via bootstrap two-sample mean test with 99% confidence). Notice that the AIC criteria differs from the adjusted $R^2$ only for bike theft.\n\n## S3 Fig.\n\n![image](figs2.pdf)\n\n**Comparison of the single power-law model (Eq.\u00a0) and the double power-law model (Eq.\u00a0) using the Bayesian Information Criterion (BIC).** Error bars stand for 99% bootstrap confidence intervals and the asterisk marks indicate a significant difference (via bootstrap two-sample mean test with 99% confidence). Notice that the BIC criteria differs from the adjusted $R^2$ only for ASB, bike theft and freehold.\n\n## S4 Fig.\n\n![image](figs3.pdf)\n\n**Comparison of the double power-law model statistics for adjusted $R^2$, AIC and BIC for ASB, Bike Theft and Freehold Property (Eq.\u00a0)** These three metrics were the only cases where the criteria diverged. These can be considered marginal cases of urban scaling transitions.\n\n# Acknowledgements\n\nThe authors are grateful to the Office of National Statistics and the UK Home Office for making these data publicly available. HVR thanks the financial support of CNPq (grant 440650\/2014-3)\n\n# Authors Contributions\n\nConceived and designed the experiments: QH and HVR. Performed the experiments: QH, HVR, and DL. Analyzed the data: QH and HVR . Contributed reagents\/materials\/analysis tools: QH, HVR, and DL . Wrote the paper: QH, HVR, and DL.","meta":{"dup_signals":{"dup_doc_count":11,"dup_dump_count":3,"dup_details":{"curated_sources":1,"2024-30":1,"unknown":9}},"filename":"out\/1602.05596_extract_density_scaling_arxiv.tex.md"},"subset":"arxiv"} +{"text":"abstract: I draw attention to statistical and probabilistic and computer science aspects of the very, very close by topic of quantum internet.\nbibliography: references.bib\ntitle: Quantum Computation, Data Science, and Bell games\n\n| | |\n|:------------------------------------------:|:---:|\n| Richard D. 
Gill | |\n| Mathematical Institute, Leiden University | |\n\n***Keywords:*** Bell game, loophole-free Bell experiments, Bell inequality, quantum non-locality, classical versus quantum distributed computing\n\n# Media Summary\n\nQuantum computing seems to be on the way, but still there are many hurdles to overcome and still there are some who have fundamental doubts that it will ever deliver the goods. In the meantime, a parallel research effort is being made in designing and implementing quantum internet. How about the prospect of uploading a quantum state to a quantum computer over a quantum communication channel? In fact, that is exactly what quantum teleportation is about. Quantum internet could also have application in generating shared classical cryptographic keys in a way which ensures that two distant parties can each be sure of having a copy of the key, and simultaneously be sure that nobody else can know it. I make a connection between quantum internet and one of Yazhen Wang's topics, namely the topic of Bell games. Already, collaboration between statisticians and probabilists on the one hand, and quantum physicists both experimental and theoretical on the other hand, has led to important developments, connected to the challenging problem of performing successful loophole-free Bell experiments. This story is by no means finished though 2015 saw major breakthroughs. The author contributed to that through his earlier work 20 years ago on whether or not it is possible to fake quantum correlations on distributed classical computers. The answer is, of course, that it cannot be done. Martingale methods have been used to found \"evidence based physics\". Randomisation can be used to neutralise important confounding factors such as time, and to lead to statistical tests of quantum-ness based on the probabilistic properties of the experimenters' randomisers, rather than on dubious statistical assumptions about the physics.\n\n# Discussion\n\nAs a mathematical statistician who has throughout his career both contributed to mathematical statistics and worked again and again in various applied fields, I have always considered myself a data-scientist, combining (I hope!) skills in pure mathematics (probability theory and mathematical statistics), data analysis (always taking advantage of advances in computational possibilities), and understanding of the applied field scientific or societal issues of my collaborators. They have been medical doctors (survival analysis and cancer research), lawyers (miscarriages of justice), and last but not least, quantum physicists, both theoretical and experimental. The best research groups in quantum computation and quantum information have excellent theoreticians as well as experimentalists on their teams. I first got involved in the field of quantum information more than 20 years ago, see Yazhen Wang's references \\[8\\] and \\[11\\]. It seemed that the statistical community was not ready to get involved, and I fear that they really missed the boat. Somehow, people with a classical statistical background, at least 20 years ago, were so in awe of theoretical physics, and so influenced by the folk-lore that quantum probability is not \"our\" probability, that almost nobody in my circles dared to get involved. Of course, when we started, there were no accessible introductions to quantum information theory. 
The bible of quantum information (see reference \\[44\\]) first came out in the year 2000.\n\nI hope that things are changing at last and I believe that Yazhen Wang's paper will help provide the needed impetus. I would like to comment on what is perhaps a side topic in his paper, but one where again there is enormous scope for data scientists to get involved. And that is because not only is quantum computing a burgeoning field, but also quantum internet is on its way; indeed, possibly it will be with us sooner.\n\nThe connection with Yazhen's paper lies in his Section 3.4: tests and nonlocal games. The archetypical situation here is of two distant laboratories, in each of which is some measurement apparatus. Alice and Bob are two friends in those two labs, and they built the apparatus just how they liked. Well coordinated by prior arrangement, they each insert a binary input into their apparatus. Think of those inputs as being supplied from the outside world, by an opponent who wants to thwart Alice and Bob's plans to coordinate the outputs they will see in relation to the inputs which they will be supplied by their opponent. The machines whir or hum and very fast, each delivers a binary output. The spatio-temporal arrangements are such that, even if transferred by a signal propagating at the speed of light, Alice's input would not be available at Bob's lab till after Bob's output has been generated, and vice-versa.\n\nNaturally, thanks to prior coordination, and having complete control of the apparatus in their labs, Alice and Bob can arrange that their results (on many repetitions of the same \"game\") are strongly correlated. However, if the inputs are completely unpredictable and completely uncorrelated with the stuff in the machines, it is not difficult to derive the Bell-CHSH inequality displayed in Yazhen's Section 3.4. Here I will derive what is actually an equivalent form, starting from a trivial deterministic result. And from that, I will derive a stronger probabilistic inequality.\n\nUnder a classical picture of the world, any mathematical physical model of the situation at hand would allow one to define so-called counterfactual variables, standing for the outcomes which Alice or Bob would have seen, if given either of the two possible inputs. Because of the space-time constraints, those potential outputs (one for each potential input and in each wing of the experiment) could not possibly also depend on the actual input given to their friend. I'll denote the two possible values of each input by '1', '2'. The outputs will take the numerical values $+\/-1$. (Many computer scientists prefer to rewrite all this using bits \"0\" and \"1\" for the possible values of both inputs and outputs.) I'll use the symbols $a$ and $b$ to denote inputs, and use the symbols $x$ and $y$ to denote outputs. (Some physicists use the opposite convention). The counterfactual outputs are denoted by $x_1, x_2, y_1, y_2$. Their relation with the actual inputs and outputs are the consistency relations $x = x_a$, $y = y_b$. In words: Alice determines whether the output she actually will observe will be $x_1$ or $x_2$ by whether she is supplied with $a=1$ or $a= 2$ by her opponent; Bob determines whether the output he actually observes will be $y_1$ or $y_2$ by whether he is supplied with $b=1$ or $b= 2$. 
The reader will see from my terminology that I am thinking in terms of the modern statistical theory of causality pioneered and promoted by Judea Pearl and many others.\n\nNotice that whatever values $\\pm1$ the four counterfactual outcomes take, $(x_1y_1)(x_2 y_1)(x_2 y_2)(x_1y_2)= +1$. It follows that the number of equalities $x_1= y_1$, $x_2 = y_1$, $x_2 = y_2$, $x_1 = y_2$ which are true must be even, and the number of inequalities which are true must be even, too. Three inequalities and one equality is impossible. Thus, given a list of one inequality and three equalities, at least one of the four statements must be true. Using the notation $I\\{\\dots\\}$ to stand for an indicator variable equal to 1 or 0 depending on whether the statement in curly brackets is true or not, this means that $$I\\{ x_1 \\ne y_1 \\} + I\\{x_2 = y_1\\} + I\\{x_2 = y_2\\} + I\\{x_1 = y_2\\}~ \\ge ~1.$$ At least one of the four indicator variables must take the value $1$. Now let $A$ and $B$ be independent fair coin tosses taking values in $\\{\\textrm{`1'}, \\textrm{`2'}\\}$. By $AB$ I will denote the random two-character string formed by concatenating $A$ and $B$. It takes the values '11', '12', '21', '22', with equal probabilities $1\/4$. Multiply the previous inequality throughout by $1\/4$. This gives us $$I\\{x_1 \\ne y_1\\}P(AB = 11) + I\\{x_2 = y_1\\} P(AB = 21) + I\\{x_2 = y_2\\} P(AB = 22) + I\\{x_1 = y_2\\} P(AB = 12) ~\\ge~ \\frac14.$$ The game which Alice and Bob (who previously built and set up their machines) play against an opponent (who supplies the inputs) actually consists of many rounds, and in each round, Alice and Bob win if both settings equal \"1\" and their outcomes are the same, or if not both settings equal \"1\" and their outcomes are different. We have just proven that the chance that Alice and Bob *lose* any particular round is *at least* 1\/4. The chance they win a round therefore cannot exceed 3\/4 = 0.75.\n\nEverything I just said is also true for each separate round, conditional on all preceding rounds. The probability of winning a given round, conditional on the entire past history of any number of rounds and given the current state of their machines, is bounded by 3\/4. It easily follows that if they play $N$ rounds, their number of wins is stochastically bounded by a binomial random variable with number of trials equal to $N$ and with success probability 3\/4.\n\nNote that it is allowed that the states of their apparatus in the $n$th round can depend on all past inputs and outputs in any way whatsoever. All kinds of physical parameters might be changing as time goes by, and this might happen in a way which is correlated in the two labs. Past results in Alice's lab might also influence what goes on in Bob's a short time later, and vice versa. As long as they just keep on fixing settings by tossing fair coins, it does not matter. In statistical terms, strict randomization of treatments of our $2N$ subjects means that we do not have to worry about the possible hidden confounder *time*.\n\nAccording to quantum mechanics, however, they could have a success probability of a little more than 0.85 if only they were able, using quantum internet, to set up in each round, before the settings of that round are delivered to them, a maximally entangled pair of qubits in the appropriate state. 
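As an aside, the gap between the classical bound of 0.75 and the quantum value just mentioned can be checked numerically. The sketch below is an illustration, not part of the original argument: it assumes the shared state is the maximally entangled two-qubit state and that each lab measures in a rotated real basis; the specific angles are the standard optimal choice for this game and are an assumption of the sketch. Conventions follow the text: settings '1' and '2', outcomes $\\pm1$, and the win rule just derived.

```python
import numpy as np
from itertools import product

# Shared maximally entangled two-qubit state, (|00> + |11>) normalised.
phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

def basis(theta):
    # Real measurement basis rotated by angle theta; key = outcome (+1 or -1).
    return {+1: np.array([np.cos(theta), np.sin(theta)]),
            -1: np.array([-np.sin(theta), np.cos(theta)])}

def prob(theta_a, theta_b, x, y):
    # Born rule: probability that Alice outputs x and Bob outputs y.
    amp = np.kron(basis(theta_a)[x], basis(theta_b)[y]) @ phi_plus
    return amp ** 2

# Illustrative (Tsirelson-optimal) measurement angles for settings 1 and 2.
alpha = {1: 0.0, 2: np.pi / 4}
beta = {1: -np.pi / 8, 2: -3 * np.pi / 8}

def win(a, b, x, y):
    # Win rule: equal outcomes if both settings are 1, different outcomes otherwise.
    return (x == y) if (a, b) == (1, 1) else (x != y)

p_win = sum(0.25 * prob(alpha[a], beta[b], x, y)      # settings are fair coin tosses
            for a, b in product([1, 2], repeat=2)
            for x, y in product([+1, -1], repeat=2)
            if win(a, b, x, y))

print(f"quantum win probability: {p_win:.4f}")        # about 0.8536, versus 0.75 classically
```

Replacing the entangled state in this sketch by any strategy built from pre-shared classical randomness brings the number back to at most 0.75, which is exactly the content of the inequality above.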
The 2015 loophole free Bell experiments played this game using quantum entanglement (and indeed, sophisticated entanglement swapping techniques and other innovations). They showed, modulo some statistical imperfections, that the Bell game can be won with a success probability larger than 0.75 by exploiting quantum non-locality. Even better experiments should be ready any day now. (The teams in Delft and Munich only just achieved statistical significance at the 5% level, their value of $N$ was rather small; their success rates were very nice, namely about $0.8$; the Vienna and NIST teams got an astronomically high statistical significance but a success rate of only about 0.750001)\n\nThis whole story goes back to papers I wrote in 2001 in which I studied distributed computer simulations of the Bell game. I needed to take account of the fact that a small network of computers simulating the different locations of the Bell game might reasonably be allowed to communicate between rounds and to change strategy as time goes on. A lot of statistical analysis goes into these experiments. Quantum informed data scientists need to be involved!\n\nThe techniques used in the 2015 Delft and Munich experiments used an elegant \"entanglement swapping\" technique, essentially a form of quantum teleportation. This technique is now being refined and extended to give us quantum repeater networks by which one will be able to teleport a qubit from one quantum computer to another. For some recent results, see","meta":{"dup_signals":{"dup_doc_count":13,"dup_dump_count":4,"dup_details":{"curated_sources":3,"2024-10":1,"2024-26":1,"unknown":8}},"filename":"out\/2111.07881_extract_comment-RDG-rev.tex.md"},"subset":"arxiv"} +{"text":"abstract: Traditional crime prediction models based on census data are limited, as they fail to capture the complexity and dynamics of human activity. With the rise of ubiquitous computing, there is the opportunity to improve such models with data that make for better proxies of human presence in cities. In this paper, we leverage large human mobility data to craft an extensive set of features for crime prediction, as informed by theories in criminology and urban studies. We employ averaging and boosting ensemble techniques from machine learning, to investigate their power in predicting yearly counts for different types of crimes occurring in New York City at census tract level. Our study shows that spatial and spatio-temporal features derived from Foursquare venues and checkins, subway rides, and taxi rides, improve the baseline models relying on census and POI data. The proposed models achieve absolute $R^2$ metrics of up to $65\\%$ (on a geographical out-of-sample test set) and up to $89\\%$ (on a temporal out-of-sample test set). This proves that, next to the residential population of an area, the ambient population there is strongly predictive of the area's crime levels. We deep-dive into the main crime categories, and find that the predictive gain of the human dynamics features varies across crime types: such features bring the biggest boost in case of grand larcenies, whereas assaults are already well predicted by the census features. Furthermore, we identify and discuss top predictive features for the main crime categories. 
These results offer valuable insights for those responsible for urban policy or law enforcement.\naddress: , , ,\nauthor: ; \nbibliography: epj.bib\ntitle: Mining large-scale human mobility data for long-term crime prediction\n\n# Introduction\n\nCrime prediction is inherently difficult. Crime analysis has already confirmed that crimes are unequally distributed in time and space . Furthermore, crime is a highly dynamic and complex phenomenon driven by the people and the environment where they meet , and scholars in different disciplines are still investigating various elements for predictive power. Knowing when and where crime is more likely to occur can help various actors engaged in crime reduction: urban planners to design safer cities and police forces to better direct their patrols .\n\nInitially, criminological studies have focused solely on socio-demographic attributes as factors correlating with victimization and have noticed that specific groups of people tend to have lifestyles that exposed them to higher risk of victimization compared to other groups \u2013 as explained by the *Lifestyle Exposure Theory* . For instance: men, young adults, and African Americans have been found to experience higher risk of victimization in general . Under the umbrella of the *Social Disorganization Theory*, a series of criminological studies have explained crime as a product of the ecological attributes of the neighborhood: ethnicity, income level, and residential stability .\n\nCohen and Felson extended the model beyond the attributes of the underlaying populations towards opportunity \u2013 according to their *Routine Activity Theory* there are three elements which need to be present in time and space for a crime to occur: a motivated offender, a suitable target, and a lack of guardianship. Finally, Brantingham and Brantingham analyzed criminogenic places in cities \u2013 places that make crime easy and profitable and are the by-products of the environments we build to support the requirements of everyday life (e.g. homes, shops, offices, government buildings, parks, bus stops or sports stadia) \u2013 and divided them into crime attractors and crime generators. Crime attractors are places which attract criminals, because there are known opportunities in those areas. As a consequence, the probability of a crime happening in those places is higher compared to other places (e.g. night life district). In turn, crime generators are places in which crime emerges at times where large number of people are attracted to those places for reasons other than to offend (e.g. massive sports events).[^1]\n\nOther, more qualitative works in urban planning, have also looked at the relationship between the built environment, population and safety. Specifically, two notable works do not agree whether the density and diversity of human activity within an area are attracting crime or not. In the *Eyes on the Street Theory* , Jacobs postulates that higher densities of people and buildings, pedestrian areas and a mix of activities in the neighborhood act as crime deterrents. On the other hand, Newman suggests that less built areas with more segregated activities are safer .\n\nIn terms of data, traditionally, quantitative models explaining crime have leveraged the socio-demographic and economical data available from the census, describing the resident population of a given neighborhood . 
From a theoretical point of view, these models have relied on the initial victimization theories in criminology.\n\nBut census data has an intrinsic limitation, in that it only offers a static and sometimes obsolete image of the city, without capturing the people dynamics over time and space. There is now the opportunity for non-conventional factors to be integrated in crime prediction models by tapping into novel data sources that reflect the structure and dynamics of our cities. With the emergence of mobile phones and other types of ubiquitous computing, a plethora of geo-tagged crowd-generated data can now offer an approximation of the ambient population. In particular, location-based social networks (LBSNs) like Foursquare offer a very vivid image of the city, being able to not only provide time and location of human activity, but also the context (like traveling, shopping, working, going out, etc.) in which activities occur. For example, researchers have successfully showed that Foursquare can be used to automatically infer urban clusters which reflect the local dynamics and character of life of the area . Furthermore, mobility data, such as public transportation or taxi data, have the capability of capturing the population in and outer flows in different parts of the city. For example, researchers have mined subway usage data to identify deprived areas in the city . All this leads to the current unique chance of empirically measuring aspects of criminological theories relying on dynamic data which was previously prohibited at large scale.\n\nHence, in this work, we investigate the potential of geo-tagged human dynamics data for long-term crime prediction models. We use such data to model crime attractors, crime generators and the ambient population in a neighborhood and add these factors on top of the classical factors from census that model the resident population in a neighborhood. The full models for the total number crime incidents achieve absolute $R^2$ metrics of up to $65\\%$ when testing on neighborhoods of the same city which have not been used during the training phase of the models, and up to $89\\%$ when testing on the full data of the next year. In comparison to the census-only baselines, this translates to improvements of 30 percentage points (on a geographical out-of-sample test set) and of 7 percentage points (on a temporal out-of-sample test set). Furthermore, we look at the major crime types and show that we can achieve improvements of up to 43 percentage points and of up to 9 percentage points, respectively (for the case of grand larcenies).\n\n# Related Work\n\n## Urban Computing\n\nNowadays, sensing technologies and large-scale computing infrastructures produce a variety of big data in urban spaces: geographical data, human mobility, traffic patterns, communication patterns, air quality, etc. The vision of urban computing, an emerging field coined by Zheng and collaborators , is to unlock the power of big and heterogeneous data collected in urban spaces and apply it to solve major issues our cities face today. They identify seven application areas of urban computing: urban planning, transportation systems, environmental issues, energy consumption, social applications, commercial applications, and public safety and security.\n\nA special category of this urban data consists of human dynamics data and researchers in the different application areas started to leverage it. 
For example, within the urban planning and transportation domains, the authors in attempt to infer the functions of different regions in the city of Beijing by analyzing the spatial distribution of commercial activities and GPS taxi traces, while the authors in mine different urban open data sources including LBSNs in the cities of Washington, D.C. and Hangzhou for optimal bike sharing station placement. Furthermore, for commercial purposes, researchers mine LBSNs for optimal retail store placement or the London metro data for insights into the financial spending of transport users , and a variety of urban big data sources for predicting commercial activeness . Within the public safety and security sector, scholars have just recently started to investigate the potential use of social media , of mobile data , and of taxi flow data for the purpose of crime inference\/prediction. In a related literature stream, authors in exploit POIs from different sources to build classifiers of urban deprivation (a composite score of seven domains, with crime being just one of them) for neighborhoods in the UK, while authors in , assess the potential of subway flow data to identify areas of high urban deprivation in the city.\n\n## Crime Prediction\n\nResearchers in a wide range of fields like criminology, physics and data mining have looked at predicting crime at various scales and using different techniques. In this section we present a short overview of the existing literature.\n\nOne approach is to model crime and cities as complex systems, through the lenses of **urban scaling laws**. A series of papers has found that crime indicators scale super-linearly with the population sizes of cities . In general, these studies carry out uni-variate or multi-variate analysis of crime, i.e. crime as a function of population or of other socio-economic variables, and at a high aggregation level (that of cities). Also, at lower resolution, researchers have confirmed that crime concentrates regardless of city and have found relevant allometric relations between peace disturbance and the resident population, as well as between property crimes and the floating population .\n\nAt intra-city level and using methods from statistical learning, we distinguish between two types of prediction models. The first type of models, consisting of **long-term crime prediction models**, aim at modeling long-term crime level by looking at aggregated crime rates over 1 to 5 years. In terms of techniques, these models rely on classical inference models like the, sometimes geographically-weighted , Poisson and Negative Binomial regressions, where the task is to predict crime levels and the performance of the model is evaluated in terms of in-sample goodness of fit. In terms of data, the traditional models in criminology make use of the classical demographic crime correlates, such as residential instability, ethnic heterogeneity, poverty rates, or income rates . Moving to the data mining community, authors in use census data, OpenstreetMap POI data, and features of the road network to predict annual burglary levels for municipalities in Switzerland by means of regularized linear regressions tested on a one year left-out sample. 
Most recent work on long-term crime prediction makes use of novel nodal features (Foursquare POI data next to demographic data) and edge features (geographical influence of direct neighbors or as computed by taxi flow data) to explain crime rates at community level by means of geographical linear and negative-binomial regressions. Similarly, authors in employ spatial econometrics techniques where they compare and contrast the explanatory power of a limited set of census and Foursquare features for aggregated census tract crime levels.\n\nThe next category of models is the category of **short-term crime prediction models**, also called spatio-temporal prediction models, where the dependent variable is aggregated over short time periods varying from 1 day to 1 month. The most basic and widely applied model for that is the hot spot model . It clusters past incidents into regions of high risk (the so-called hot spots) using statistical methods like kernel density estimation (KDE) or mixture models. In this case the past is prologue for the future: crime is likely to occur where crime has already occurred! Another set of models that use crime data only are repeat and near-repeat models. Here, researchers have characterized each location by a dynamic attractiveness variable and have represented each criminal as a random walker , or have adapted self-exciting point processes that were initially developed for earthquake modeling to crime modeling . The assumption is that some future crimes will occur very near to current crimes in time and place. The biggest disadvantage of models exploiting solely the historical crime records is that they cannot be generalized to areas without historical data. The spatio-temporal generalized additive model (ST-GAM) and the local spatio-temporal generalized additive model (LST-GAM) start looking at socio-demographic data (like population density, unemployment rate, education level, net income, social aid, etc.), and spatial data (like spatial proximity to bus stations, governmental buildings, pawn shops, night life establishments, stores, parks, etc.), and temporal data (like time of day\/week\/year, temporal proximity to special events such as football games, etc.) describing a criminal incident. These models are extensions of regression models on grids, where the features can be indexed by time. Only very recent research has started to utilize human dynamics data in short-term crime prediction models. Gerber has shown that combining topics derived from the Twitter stream with the historical crime density delivered by a standard KDE under a logistic regression model leads to an increase in the prediction performance of hotspots next day versus the standard KDE approach for most of the tested crime types. Combining for the first time demographic data and aggregated and anonymized human behavioral data derived from mobile data, Bogomolov and colleagues were able to obtain an accuracy of almost 70% when predicting whether a specific area in the city will be a crime hotspot or not within the next day .\n\n# Research Gap and Contributions\n\nOur work lies within the category of long-term crime prediction models. Compared to previous work in this literature stream, we make following contributions:\n\n1. 
in terms of data, we are the first to craft a comprehensive set of spatial and spatio-temporal features describing the dynamics of human activity in an area, as captured by the usage of social networks, public transportation, and road transportation and use this data describing the ambient population to enhance the traditional set of features describing the resident population as modeled by the census statistics.\n\n2. in terms of techniques, we employ latest averaging and boosting ensemble techniques from machine learning, which in comparison to the current linear models in literature, can deal with the large number of features described above.\n\n3. in terms of evaluation, we test the models on geographical and temporal out-of-sample test sets, to prove generalization and compare them against a weak-baseline based solely on census data and a against a strong-baseline based on census and POI data. We furthermore compare the individual predictive power of the considered data sources of human mobility: Foursquare venues\/checkins, NYC subway rides, and NYC yellow and green taxis rides.\n\n4. in terms of unit of analysis, we analyze crime at a granular level, with counts of various types of urban crime being effectively predicted at a high degree of geographic resolution, namely census tracts. We notice different degrees of predictive performance across the different crime types.\n\n5. in terms of interpretability and unlike most studies within the urban computing community, we motivate the choice of features in criminal theory and discuss and interpret the results of the models in this context.\n\n# Datasets\n\nNew York City (NYC) is a city that has experienced crime across time, though the levels have dropped since the 1990s , some attributing the success to new policing tactics and the end of the crack epidemic . Furthermore, as part of an initiative to improve the accessibility, transparency, and accountability of the city government, the NYC Open Data platform[^2] provides massive data in machine-readable formats on buildings, streets, infrastructure, businesses, permits, licenses, crime, 311 complaints, public transportation, and many more. Furthermore, NYC's 8.5 million inhabitants leave rich digital footprints of their daily activity in various location-based online services, NYC being the most popular city on Foursquare[^3] with about 132 million checkins as of May 2016[^4].\n\n## Crime Data\n\nThe raw crime dataset was downloaded from the NYC Open Data platform. For anonymization reasons, in case the offense has not occurred at an intersection, the New York Police Department (NYPD) projects the location of the incident to the center of the block (street segment). Furthermore, crime complaints which involve multiple offenses are classified according to the most serious offense[^5]. Next to the total number of incidents, we concentrate on the following five felony types: grand larceny (which is the theft of another's property, including money, over a certain value), robbery, burglary, felony assault, and grand larceny of motor vehicle \u2013 leaving out the murder and rape cases which have very different underlying causal mechanisms and are also reported on a higher aggregation level. We keep for analysis the data of the last 2 complete years (2014 and 2015). 
This yields a total number of 174,682 incidents across the five boroughs of NYC: Bronx, Brooklyn, Manhattan, Queens and Staten Island.\n\n## Census Data\n\nThe census data for NYC was obtained from two separate sources, the 2010 Decennial Census, as well as the 2010-2014 and the 2011-2015 American Community Survey (ACS). In both cases, the data was fetched from the FTP sites of the US Census Bureau[^6], and was filtered out to keep only the data on a census tract level.\n\nThe Decennial Census includes basic demographic figures, which are based on actual counts of persons dwelling in the US and is conducted only once every 10 years. The Summary File 1, used for this study, includes items describing the population, such as gender, age, race, origin, household relationship, household type and size, family type and size, etc. In addition, housing characteristics are captured through the occupancy\/vacancy status and tenure. The ACS estimates are based on yearly collected survey data over a sample of the US population. For the purposes of this study, the 5-year estimates were used, as the largest and most reliable sample, where the data is available on a census tract (and smaller) geography level. Apart from the demographics, ACS contains a rich set of social, housing and economic features, with residential stability, poverty and income being of interest for this study.\n\n## Foursquare Venues Data\n\nThe Foursquare dataset was collected via the Foursquare API, using the venues search and venue details endpoints. The Foursquare API has been serving both the Foursquare 8.0 and the Swarm apps since the 2014 split of the original Foursquare app. While Foursquare continues to provide a local search-and-discovery service for places near a user's current location, Swarm lets the user share their location with friends at different precision levels (at city and neighborhood levels, or by checking-in to a specific venue).\n\nThe collected data consists of NYC venues with compact metadata like id, name, location, checkins count (total checkins ever done in that venue), users count (total users who have ever checked in), associated categories, menu, opening and popular hours, user-generated tips, etc. We have queried the API by searching for venues in the proximity of every incident location described previously, and this resulted into an extensive database of 273,149 different venues, that have experienced in total over 122 million checkins since their creation on the platform until the time of the data collection (June 2016). From these, 250,926 venues have an assigned category. The Foursquare categories span a broad ontology, headed by the following top ten categories: Arts and Entertainment (11,794 venues), College and University (7,082), Event (84), Food (47,590), Nightlife Spot (11,140), Outdoors and Recreation (18,011), Professional and Other Places (64,055), Residence (14,632), Shop and Service (62,627), Travel and Transport (13,911). The distribution of the top categories across the venues is uneven and biased towards establishments where people go out for services, working, shopping, or dining.\n\n## Subway Usage Data\n\nSubway usage data, commonly referred to as turnstile data, is released regularly by the Metropolitan Transportation Authority (MTA) and contains entries and exits audit data, generated from the Control Areas from its three main divisions: Interborough Rapid Transit Company (IRT), Independent Subway System (IND) and Brooklyn-Manhattan Transit Company (BMT). 
While the original dataset contains data from several other associated agencies, for consistency reasons these were left out of the final dataset since the corresponding stations were not located within NYC, or represent train, bus or cable car stations. We downloaded the turnstile data from the New York State Open Data portal[^7] and the MTA website[^8] for the two full years of 2014 and 2015. In addition, a geocoded list of MTA stations was also obtained from the same portal[^9].\n\nTo perform the preliminary data cleaning and combine the two data sources, a careful manual examination of station names was conducted. The goal was to resolve situations where the same station appeared with different names in the turnstile dataset, e.g. both '18 AV' and '18 AVE' where coded as '18 AV', and to unify the names used in both datasets. Once the data was cleaned and merged, each station was further examined for location accuracy, by comparing and adjusting it with the corresponding station geolocation provided by Google Maps. In the end, 455 distinct subway station locations were compiled. In the two years of analysis, they have experienced almost 21 million turnstile updates (the turnstile counters updated every 4 hours).\n\n## Taxi Usage Data\n\nThe taxi dataset was downloaded from the official website of the City of New York, specifically the Taxi and Limousine Commission[^10] and combines the 2014 and 2015 complete records of both yellow and green taxi trips. These are the two types of services permitted to pick up passengers via street hails, thus offering a great footprint of human activity. Furthermore, yellow cabs are concentrated around Manhattan and the two main airports (JFK International Airport and LaGuardia Airport), while green cabs are allowed above the 110th Street in Manhattan and in the outer-boroughs of New York City. With the two datasets joined, we obtain a good coverage of the whole city. The trip records include fields capturing pick-up and drop-off timestamps and locations, next to other meta-data like driver-reported passenger counts and trip distances. We have processed in total over 340 millions taxi drives for this work.\n\n# Model Specification\n\n## Unit of Analysis\n\nWe cast the problem as a regression task on the log-transformed crime counts in each census tract. For each census tract, we sum all crime incidents (total and per crime type) occurring in 2014 and in 2015 within the census tract. We opt for crime counts and not crime rates (which are crime counts normalized by the census population), as we like to show the explicit effect of both the resident population (as recorded by census) and of the ambient population (as recorded by the different proxies) on the raw counts. As a technical remark: we look in the following at points situated in the area of each census track, buffered by 50 feet (which is half the width of the main Manhattan avenues), to account for potential precision inaccuracies in the different spatial data types and to integrate the crime locations that lie on the bordering streets. The same applies for venues, subway, and pickup\/drop-off locations.\n\nCensus tracts provide a stable set of geographic units for the presentation of statistical data and generally have a population size between 1,200 and 8,000 people, with an optimum size of 4,000 people[^11]. In the case of NYC they span a few blocks and offer a natural unit for crime analysis at a detailed level. NYC has a total of 2,167 official census tracts. 
A few of these consist only of water or shoreline areas, which have not been experiencing any crime incidents in either of the analysis years. Furthermore, some NYC census tracts consist fully of military posts or jail facilities (like e.g. Fort Hamilton and Rikers Island) which exhibit different crime reporting schemes, next to restricted human presence. We remove these census tracts, and remain with a final of $N = 2,154$ census tracts. Please note we still include many census tract with no resident population, like parks or airports, as these still experience crime, and now we have the possibility to model it by means of the ambient population measured by the alternative data sources. For visualization purposes, Figure\u00a0 depicts the 2015 aggregated crime counts per census tract, together with some example features computed at census tract level. All maps in this paper have been generated using the open source software QGIS[^12].\n\nTable\u00a0 presents the descriptive statistics of crime counts of all types, while Figure is depicting the histograms of the total incidents counts per census tract. We can observe that the distribution of the data is positively skewed with many observations having low count values. The various crime types expose also similar power law distributions, so for the prediction task below, we log-transform the dependent variable to correct for the positively skewed distribution, and use this as our dependent variable $y$.\n\n\\[0.9\\]\n\n| Incident type | Min | Q1 | Median | Mean | Q3 | Max |\n|:---------------:|:---:|:---:|:------:|:----:|:---:|:---:|\n| ***2015*** | | | | | | |\n| total incidents | 1 | 29 | 52 | 68 | 91 | 661 |\n| grand larceny | 0 | 10 | 18 | 28 | 31 | 519 |\n| robbery | 0 | 4 | 8 | 12 | 18 | 90 |\n| burglary | 0 | 4 | 7 | 9 | 12 | 90 |\n| assault | 0 | 4 | 8 | 13 | 19 | 95 |\n| vehicle larceny | 0 | 2 | 4 | 5 | 6 | 38 |\n| ***2014*** | | | | | | |\n| total incidents | 2 | 31 | 53 | 70 | 93 | 644 |\n| grand larceny | 0 | 11 | 19 | 29 | 33 | 512 |\n| robbery | 0 | 3 | 9 | 12 | 17 | 67 |\n| burglary | 0 | 5 | 8 | 10 | 14 | 83 |\n| assault | 0 | 3 | 8 | 13 | 19 | 92 |\n| vehicle larceny | 0 | 2 | 4 | 5 | 7 | 61 |\n\nDescriptive statistics of the crime data: counts per census tract for each year.\n\n## Prediction Features\n\nIn what concerns the independent variables $\\boldsymbol{x}$, we craft an extensive set of features based on the collected massive datasets. Each feature represents a numeric score that characterizes a given census tract and is motivated by domain knowledge in criminology or urban computing, as explained below. We classify the features into three broad categories: (1) socio-demographic and economical features derived from the census sources, (2) spatial features which exploit solely the static information about the venues and subway stations, and (3) spatio-temporal features which integrate knowledge about the way the population moves around the city (by means of check-ins, subway entries\/exits, taxi pick-ups\/drop-offs). We have imported all data into a PostGIS-enabled Postgres database[^13], which offers in-built optimized temporal and spatial queries that are required to process the data for feature generation per unit of analysis, as described in the remainder of this section.\n\n### Census Features\n\nTo account for the fact that the units of analysis are heterogeneous, we include the census tract's **area** (in square miles) and **total population** as controls in the regression. 
We then proceed with a standard set of factors deemed in past criminological studies as significantly influential of crime and have been used also in related work in data mining, like .\n\nWe start by operationalizing the concepts of the *Lifestyle Exposure Theory* and *Social Disorganization Theory*. We start with indicators of population at risk and of concentrated disadvantage : **fraction of male population**, **fraction of black population**, **fraction of hispanic population**, **fraction of population under the poverty level**. As violence has been associated with residential instability of neighborhoods , we compute the **fraction of vacant households**, the **fraction of rented households** from the occupied ones, and the **fraction of stable population** (individuals who moved in prior to 2010).\n\nFurthermore, population diversity has been shown to play a role in the crime phenomenon so we computed several diversity indexes based on the socio-demographic and economical information: a **racial ethnic diversity index**, an **age index**, and an **income diversity index**. The racial ethnic index is defined by the plurality of multiple ethnic and racial groups within a certain area and is computed based on five exhaustive and mutually exclusive aggregates (non-Hispanic whites, non-Hispanic blacks, Hispanics of any race, Asians, and others \u2013 Native Americans, members of other races, and multi-racial persons) . The age index measures the variance in ages of the residents across four main age groups (under 18, 18-34, 35-64, and over 65 years), and the income index measures the variance in household income across three main income levels (low, medium, and high-income households) .\n\n### Spatial Features\n\nThis category of features describes the characteristics of a neighborhood, as captured by the spatial distribution of the Foursquare venues and subway stations within its perimeter. In general, the venues can be seen as *crime attractors* \u2013 particular places to which offenders are attracted because of the known opportunities for particular types of crimes .\n\nThe **number of venues of each category** measures the venues counts within a census tract and it is a static popularity metric of that area. The **fractions of venues of each category** capture the specifics of the life within a census tract, and it is an empirical metric for the functional decomposition of that particular area in the city. The **venues diversity index** is then a single measurement capturing the diversity of this decomposition. Inspired by , we use the entropy measurement from information theory as a diversity metric. Intuitively, the entropy quantifies the uncertainty in predicting the category of a venue that is taken at random from the area.The final formula models the normalized Shannon diversity index (also called the Shannon equitability index ), which is the Shannon diversity index divided by the maximum diversity. 
For a given census tract $t_i$, we denote the count of included venues of category $c$ with $V_c(t_i)$ and the total number of included venues with $V(t_i)$ and formally define the venues diversity index of that census tract as follows (we employ smoothing by adding the constant 1 to the numerator and denominator to prevent zero divisions): $$- \\sum_{c \\in C} (\\frac{ 1 + V_c(t_i)}{ 1+ V(t_i)} \\times \\ln{\\frac{ 1 + V_c(t_i)}{ 1 + V(t_i)}}) \/ \\ln{|C|}$$ The higher the index, the more heterogeneous the area is in terms of types of places, and following that, in terms of functions and activities of the neighborhood, whereas a least entropic area would indicate an area with a dominant function. For example, a census tract dominated by venues from the College and University category, would indicate a part of the city where people primarily study and would have a low diversity index.\n\nMotivated by the work in , we generate a metric called the **offering advantage** which denotes to what extent a particular neighborhood offers more venues of a particular category in comparison to the average neighborhood. Intuitively, the presence of one venue of an unpopular category, is more informative in profiling a neighborhood than the presence of one venue from a well-spread category. The offering advantage of category $c$ in each census tract $t_i$ of the total $N$ census tracts in NYC, is computed with the following formula: $$\\frac{ 1 + V_c(t_i)}{ 1 + V(t_i)} \\times \\frac{total\\_venues}{\\sum_{i=1}^{N} V_c(t_i)}$$ where $total\\_venues$ is the number of total venues in NYC with an assigned category.\n\nFinally, based on the MTA dataset, we compute the **total number of subway stations** within each census tract, to reflect whether the area is subject to high-volume population transit from other parts of the city.\n\n### Spatio-temporal Features\n\nIn this section, we derive metrics of human activity in that area. We compute, analog to the census data, metrics of density and diversity \u2013 but, while the census features exploit information about the reported residential population, the human dynamics features are computed based on the ambient population, as measured by their usage of public venues and transportation. Overall, the features in this categories describe possible *crime generators*. Crime generators produce crime by creating particular times and places that provide appropriate concentrations of people and other targets . These features can also be connected to the *Routine Activity Theory*, as they model the activity nodes where motivated offenders meet vulnerable targets.\n\nThe **number of checkins per category** measure the popularity of the area. The empirically observed Foursquare checkins can be regarded as a more accurate measure of human activity than the traditional population density statistics from the census.\n\nWe further exploit Foursquare usage in each census tract, by looking at the popular hours of the venues (those times of the week where the venues experience most activity \u2013 checkins, reviews, etc.) and compute the **number of venues that are popular in a typical morning, afternoon, evening or night \u2013 split by weekdays and weekends** in each. These features give valuable information about the temporal break-down of human activity in the area.\n\nWe then compute, analog to the previous section, the **fraction of checkins of each category** in the area. 
These can be seen as measurements of the intensity of the different activity contexts in which the population engages. For instance, an area with many checkins in the Residence category would correspond to a residential neighborhood, which is very different to an entertainment district, that would in turn be characterized by a high number of checkins in the Food, Nightlife Spot, and Shop and Service categories. We proceed by computing the **checkins diversity index**, as an index of the distribution of human activity within the census tract. It can be seen, that the venues and checkins diversity indexes are the best operationalization of Jacobs' and Newman's concept of mixed land use.\n\nInspired by recent work on digital neighborhoods , we compute **local quotients** of (digital) social activity within an area. Let $C(t_i)$ denote the total number of checkins and $P(t_i)$ the total population count within a census tract. We then compute the concentrations of checkins relative to the number of businesses and to the reference census population: $$\\frac{ 1 + C(t_i)}{total\\_checkins} \\times \\frac{total\\_venues}{1 + V(t_i)}$$ $$\\frac{ 1 + C(t_i)}{total\\_checkins} \\times \\frac{total\\_population}{1 + P(t_i)}$$ where $total\\_checkins$ denotes the number of total checkins in NYC, and $total\\_population$ the total census population of NYC. Neighborhoods with local quotients $>> 1$ can be regarded as (digital) hot spots, while neighborhoods with local quotients $<< 1$ can be regarded as (digital) deserts.\n\nIt can be observed, that the offering advantage and the local quotient metrics are both refined measures of the *relative intensity of human activity* in an area as opposed to the whole city (one being based on the static distribution of the venues, and the other on the more dynamic distribution of the checkins).\n\nTo make use of the temporal dimension of the turnstile subway data, we aggregate it to **weekly averages of the number of individuals entering and exiting the subway stations \u2013 split into Mon-Fri and Sat-Sun intervals**. We also compute a **subway rides diversity index**, by considering these four different categories: subway entries\/exists in week\/weekend.\n\nFinally, we exploit the taxi ride data and computed **weekly averages of the number of passengers being picked up or dropped off in the census tracts \u2013 split into Mon-Fri and Sat-Sun intervals**. Complementary to the popular hours of the venues, and the subway features, these features should give an additional indication of the average in- and out-flows of the population traveling to and from the area. Finally, we compute a **taxi rides diversity index**, by considering the numbers of pick-up\/drop-off rides within the neighborhood.\n\nAcross all three feature categories, we end up with a total of 89 features. For exemplification purposes, Figure\u00a0 depicts a selection of the 2015 features computed at census tract level. Spearman correlation tests and linear regressions have revealed significant correlations between many of the features and the different $y$ variables \u2013 see supplementary material (section Descriptive Statistics). 
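To make the venue-based feature definitions above concrete, the following is a simplified sketch of how the normalised Shannon diversity index and the offering advantage could be computed from raw category counts; the checkin-based fractions and local quotients follow the same pattern. The tract counts are made up, the city-wide category totals are the ones reported in the Datasets section (restricted here to five categories), and the code is an illustration only, not the PostGIS pipeline used in the study.

```python
import numpy as np

# Venue counts per Foursquare top category for one census tract (toy numbers).
tract_counts = {"Food": 42, "Nightlife Spot": 11, "Shop and Service": 57,
                "Residence": 8, "Travel and Transport": 5}
# City-wide venue counts per category, as reported in the Datasets section.
city_counts = {"Food": 47590, "Nightlife Spot": 11140, "Shop and Service": 62627,
               "Residence": 14632, "Travel and Transport": 13911}

def diversity_index(counts):
    """Normalised, smoothed Shannon diversity over venue categories."""
    n_cat = len(counts)
    total = sum(counts.values())
    p = np.array([(1 + c) / (1 + total) for c in counts.values()])
    return float(-np.sum(p * np.log(p)) / np.log(n_cat))

def offering_advantage(counts, city):
    """Smoothed per-category share in the tract, relative to the category's
    share of all venues in the city."""
    total = sum(counts.values())
    city_total = sum(city.values())
    return {cat: ((1 + counts.get(cat, 0)) / (1 + total)) * (city_total / city[cat])
            for cat in city}

print(diversity_index(tract_counts))
print(offering_advantage(tract_counts, city_counts))
```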
We decided to keep them all for the following step, where the chosen machine learning algorithms, due to their internal structure, will be able to deal with a higher number of (potentially correlated) features and rank them according to their predictive power.\n\nWe ought to acknowledge that other approaches to generating features would have been possible, all the way to completely automatically generating higher-level features from the raw data using techniques such as deep learning. We chose the middle way where we exploit a high number of features but use domain knowledge to generate them. This approach is prevalent in the urban computing and data science literature, used for instance: to identify optimal retail store placement , to quantify the relationship between urban form and socio-economic indexes , or to understand economic behavior in the city .\n\n# Results\n\n## Model Evaluation\n\nWe train three different tree-based machine learning models: a Random Forest regressor , an Extra-Tree (Extremely Randomized Tree) regressor , and a Gradient-Boosting regressor \u2013 all known in the literature for their ability to yield competitive prediction quality in high-dimensional heterogeneous feature spaces. Due to their non-parametric nature, they make no assumption about the data and can work with many collinear features, while also requiring little preparation of the data . On the other hand, linear models assume that the explaining variables are non-collinear, which is not the case in our data-rich setup. Furthermore, a linear model has proved to yield poor performance on our datasets and is not reported.\n\nRandom forests are very popular in practice, as they are easy to use, robust, and yield good performance. An entire set of decision trees is grown at training time, and their mean prediction is output at testing time, thus lowering the variance of the individual learners. The Extra-Trees add a third level of randomization in comparison to the random forests, in that the split tests at each node of the decision trees are random, next to the chosen sub-sets of samples and features. In practice, they sometimes yield better performance thanks to the introduced smoothing effect, and also remove computational burdens linked to the determination of optimal cut-points in random forests. While these first two models are averaging models and build their constituent decision trees in parallel, Gradient-Boosting builds the model in a stage-wise fashion. It constructs additive regression models by sequentially fitting a simple base learner on the current pseudo-residuals. Boosted trees have been shown to be the best performing models across a variety of tasks, at least in the pre-deep-learning era .\n\nIn addition, all these tree-based ensemble methods can be exploited to infer the relative importance of the input variables (based on the order in which they appear in the constituent decision trees) and to rank them accordingly .\n\nInternally, the regressors always optimize the mean squared error ($MSE$) of the log-transformed incident counts $y$, and we report two metrics: $MSE$, as well as the coefficient of determination ($R^2$). The $MSE$ metric is given by $\\frac{1}{n}\\sum_{i=1}^{n}{(y_i-\\hat{y_i})}^2$, with lower scores being preferred. 
The $R^2$ metric measures the percentage of variance in the dependent variable that the model at hand explains: $1- \\frac{\\sum_{i=1}^{n}{(y_i-\\hat{y_i})}^2}{\\sum_{i=1}^{n}{(y_i-\\bar{y_i})}^2}$, where $y_i$ are the true values, $\\hat{y_i}$ are the predicted values, and $\\bar{y_i}$ is the mean of the sample. The best possible score is $1.00$, and the score can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of $y$, disregarding the input features, would get a score of $0.00$. The $R^2$ score primarily helps us to compare models across the different feature configurations, but it can also be used to compare the performance on the different incident types, as it is independent of the sample range.\n\nWe look at the performance of the algorithms across different model specifications, utilizing different subsets of the features introduced previously. The first model is a weak baseline consisting only of the socio-demographic and economic factors derived from the census sources. The second model is a strong baseline that additionally includes the numbers of Foursquare venues\/POIs per category. This model specification is designed to reproduce the nodal features from . We ought to note that the venues dataset might be slightly different from a standard dataset of POIs inferred for example from OpenStreetMap or Google Maps, as the Foursquare venues set is biased towards establishments where people spend time, and already maps better to the concept of crime attractors than standard POIs. Hence, we expect that venue counts would outperform standard POI counts as features in crime prediction models. The third model makes use of all human dynamics features inferred from the mobility data sources, while the fourth model is the full specification, exploiting the complete set of features.\n\nFurthermore, in the supplementary material, we create three further model specifications, each of which uses, in addition to the standard census features, the full feature set of a given data source: Foursquare, subway rides, and yellow\/green taxi rides. This enables a direct comparison of the ubiquitous data sources in terms of their predictive power for the crime domain \u2013 in case a model selection decision is required in practice.\n\nFor each combination of machine learning model, incident type, and feature subset, we estimate the performance of the algorithms on new unseen data. To assess their *geographical* out-of-sample generalization, we perform the following *model evaluation* experiment using nested cross-validation. In a nested cross-validation, two cross-validation loops are performed: one outer loop to measure the prediction performance of the estimator and one inner loop to choose the best hyper-parameters of the estimator. We implement this approach with 5 outer loops for **model assessment** (i.e. setting the size of the test set to 20%), and 2 inner loops for **model selection** (i.e. setting the size of the training and validation sets to 40%, respectively). Table presents the final average $MSE$ and $R^2$ scores and standard deviations of the models on the left-out test subsets. The resulting scores are therefore unbiased estimates of the prediction score on new geographical samples. We also provide a *temporal* evaluation of the approaches, by training a model on the complete 2014 data (with 5-fold CV for hyper-parameter tuning, i.e.
**model selection**) and testing it on the unseen 2015 data for **model assessment**.\n\nAcross all experiments, the hyper-parameters optimized in the validation phase of the Random Forest and Extra-Trees are the number of trees in the ensemble (values ranging from 50 to 400) and the maximal depth (values ranging from one third, to one half, to the full set of features). The first parameter controls the model complexity, while the second controls the level of pruning of the trees, in other words performing regularization to avoid overfitting. For Gradient Boosting, we perform a grid search over the number of trees (values ranging from 100 to 400), the maximal depth (values ranging from 1 to 4), and also the learning rate (values ranging from 0.01 to 0.2). The models were implemented in Python v2.7, with the help of the scikit-learn[^14] and pandas[^15] libraries. The supplementary material (section Model Assessment) presents validation and learning curves of the employed models. The validation curves show that we have properly chosen the parameter ranges for hyper-parameter tuning. Also, the learning curves show that, in our case, the models keep improving with more data, so we should use all available samples.\n\n```latex\n\\makebox[\\textwidth]{\n\\scalebox{0.65}[0.65]{\n\\small\n\\begin{tabular}{ccccccccc}\n\\hline\n&\\multicolumn{2}{c}{Census}\n&\\multicolumn{2}{c}{Census + POI}\n&\\multicolumn{2}{c}{Human Dynamics}\n&\\multicolumn{2}{c}{Census + Human Dynamics} \\\\\n\\hline\n&\\multicolumn{1}{c}{MSE}\n&\\multicolumn{1}{c}{$R^2$}\n&\\multicolumn{1}{c}{MSE}\n&\\multicolumn{1}{c}{$R^2$}\n&\\multicolumn{1}{c}{MSE}\n&\\multicolumn{1}{c}{$R^2$}\n&\\multicolumn{1}{c}{MSE}\n&\\multicolumn{1}{c}{$R^2$}\\\\\n\\hline\n%\\multicolumn{13}{|c|}{\\textbf{\\textit{2015}}}\\\\\n\\textbf{\\textit{2015}}\\\\\n\\hline\n\\textbf{Total incidents}\\\\\n\\hline \n%Linear Regression &{0.58}$\\pm${0.10} &{0.33}$\\pm${0.15} &{0.46}$\\pm${0.04} &{0.59}$\\pm${0.04} &{}$\\pm${} &{}$\\pm${}\n%&{0.65}$\\pm${0.38} &{-0.01}$\\pm${1.07}\\\\ \nRandom Forest &{0.58}$\\pm${0.11} &{0.33}$\\pm${0.19} &{0.46}$\\pm${0.05} &{0.58}$\\pm${0.07} &{0.55}$\\pm${0.07} &{0.38}$\\pm${0.20} &{0.44}$\\pm${0.03} &{0.62}$\\pm${0.03}\\\\ \nExtra-Tree &{0.57}$\\pm${0.10} &{0.35}$\\pm${0.16} &{0.45}$\\pm${0.04} &{0.60}$\\pm${0.06} &{0.55}$\\pm${0.03} &{0.40}$\\pm${0.07} &{0.43}$\\pm${0.03} &{0.63}$\\pm${0.03}\\\\ \nGradient Boosting &{0.57}$\\pm${0.10} &{0.35}$\\pm${0.16} &{0.44}$\\pm${0.04} &{0.61}$\\pm${0.06} &{0.57}$\\pm${0.06} &{0.36}$\\pm${0.08} &\\textbf{{0.42}$\\pm${0.03}} &\\textbf{{0.65}$\\pm${0.03}}\\\\ \n\\hline\n\\textbf{Grand larcenies}\\\\\n\\hline \nRandom Forest &{0.72}$\\pm${0.17} &{0.14}$\\pm${0.18} &{0.53}$\\pm${0.05} &{0.52}$\\pm${0.08} &{0.53}$\\pm${0.05} &{0.52}$\\pm${0.10} &{0.50}$\\pm${0.04} &{0.57}$\\pm${0.06}\\\\ \nExtra-Tree &{0.70}$\\pm${0.15} &{0.18}$\\pm${0.12} &{0.52}$\\pm${0.05} &{0.53}$\\pm${0.08} &{0.53}$\\pm${0.05} &{0.52}$\\pm${0.08} &{0.50}$\\pm${0.04} &{0.57}$\\pm${0.06}\\\\ \nGradient Boosting &{0.71}$\\pm${0.15} &{0.16}$\\pm${0.13} &{0.53}$\\pm${0.05} &{0.52}$\\pm${0.08} &{0.53}$\\pm${0.05} &{0.52}$\\pm${0.08} &\\textbf{{0.49}$\\pm${0.03}} &\\textbf{{0.59}$\\pm${0.07}}\\\\ \n\\hline\n\\textbf{Robberies}\\\\\n\\hline \nRandom Forest &{0.70}$\\pm${0.05} &{0.36}$\\pm${0.11} &{0.65}$\\pm${0.05} &{0.46}$\\pm${0.10} &{0.77}$\\pm${0.06} &{0.23}$\\pm${0.13} &\\textbf{{0.62}$\\pm${0.04}} &\\textbf{{0.50}$\\pm${0.08}}\\\\ \nExtra-Tree &{0.69}$\\pm${0.06} &{0.38}$\\pm${0.12} &{0.64}$\\pm${0.04} 
&{0.47}$\\pm${0.07} &{0.77}$\\pm${0.04} &{0.23}$\\pm${0.10} &{0.62}$\\pm${0.04} &{0.49}$\\pm${0.08}\\\\ \nGradient Boosting &{0.68}$\\pm${0.05} &{0.40}$\\pm${0.11} &{0.63}$\\pm${0.05} &{0.48}$\\pm${0.09} &{0.77}$\\pm${0.03} &{0.22}$\\pm${0.09} &{0.62}$\\pm${0.04} &{0.49}$\\pm${0.08}\\\\ \n\\hline\n\\textbf{Burglaries}\\\\\n\\hline \nRandom Forest &{0.60}$\\pm${0.04} &{0.19}$\\pm${0.03} &{0.55}$\\pm${0.03} &{0.31}$\\pm${0.05} &{0.62}$\\pm${0.04} &{0.13}$\\pm${0.12} &{0.56}$\\pm${0.03} &{0.30}$\\pm${0.06}\\\\ \nExtra-Tree &{0.59}$\\pm${0.04} &{0.21}$\\pm${0.06} &{0.56}$\\pm${0.03} &{0.31}$\\pm${0.04} &{0.61}$\\pm${0.03} &{0.16}$\\pm${0.08} &{0.55}$\\pm${0.04} &{0.31}$\\pm${0.05}\\\\ \nGradient Boosting &{0.57}$\\pm${0.03} &{0.27}$\\pm${0.04} &\\textbf{{0.55}$\\pm${0.03}} &\\textbf{{0.32}$\\pm${0.04}} &{0.63}$\\pm${0.02} &{0.11}$\\pm${0.06} &{0.56}$\\pm${0.03} &{0.29}$\\pm${0.04}\\\\ \n\\hline\n\\textbf{Assaults}\\\\\n\\hline \nRandom Forest &{0.68}$\\pm${0.04} &{0.46}$\\pm${0.09} &{0.61}$\\pm${0.03} &{0.56}$\\pm${0.05} &{0.78}$\\pm${0.05} &{0.27}$\\pm${0.14} &{0.61}$\\pm${0.03} &{0.56}$\\pm${0.07}\\\\ \nExtra-Tree &{0.67}$\\pm${0.02} &{0.47}$\\pm${0.07} &\\textbf{{0.60}$\\pm${0.04}} &\\textbf{{0.58}$\\pm${0.05}} &{0.79}$\\pm${0.03} &{0.27}$\\pm${0.10} &{0.60}$\\pm${0.03} &{0.58}$\\pm${0.06}\\\\ \nGradient Boosting &{0.66}$\\pm${0.04} &{0.48}$\\pm${0.07} &{0.61}$\\pm${0.04} &{0.57}$\\pm${0.06} &{0.80}$\\pm${0.05} &{0.26}$\\pm${0.08} &{0.60}$\\pm${0.03} &{0.57}$\\pm${0.07}\\\\ \n\\hline\n\\textbf{Vehicle larcenies}\\\\\n\\hline \nRandom Forest &{0.62}$\\pm${0.08} &{0.10}$\\pm${0.12} &{0.61}$\\pm${0.07} &{0.12}$\\pm${0.10} &{0.63}$\\pm${0.03} &{0.04}$\\pm${0.06} &\\textbf{{0.58}$\\pm${0.03}} &\\textbf{{0.19}$\\pm${0.04}}\\\\ \nExtra-Tree &{0.61}$\\pm${0.05} &{0.13}$\\pm${0.06} &{0.62}$\\pm${0.06} &{0.10}$\\pm${0.05} &{0.64}$\\pm${0.03} &{0.00}$\\pm${0.10} &{0.61}$\\pm${0.05} &{0.12}$\\pm${0.03}\\\\ \nGradient Boosting &{0.62}$\\pm${0.08} &{0.09}$\\pm${0.12} &{0.61}$\\pm${0.07} &{0.11}$\\pm${0.08} &{0.62}$\\pm${0.02} &{0.07}$\\pm${0.06} &{0.59}$\\pm${0.04} &{0.16}$\\pm${0.04}\\\\ \n\\hline\n\\textbf{\\textit{2014}}\\\\\n\\hline\n\\textbf{Total incidents}\\\\\n\\hline \nRandom Forest &{0.58}$\\pm${0.10} &{0.29}$\\pm${0.18} &{0.45}$\\pm${0.06} &{0.57}$\\pm${0.09} &{0.56}$\\pm${0.06} &{0.35}$\\pm${0.10} &\\textbf{{0.44}$\\pm${0.05}} &\\textbf{{0.59}$\\pm${0.06}}\\\\ \nExtra-Tree &{0.58}$\\pm${0.10} &{0.30}$\\pm${0.17} &{0.45}$\\pm${0.05} &{0.58}$\\pm${0.09} &{0.57}$\\pm${0.06} &{0.32}$\\pm${0.09} &{0.44}$\\pm${0.05} &{0.59}$\\pm${0.06}\\\\ \nGradient Boosting &{0.58}$\\pm${0.08} &{0.29}$\\pm${0.14} &{0.45}$\\pm${0.05} &{0.58}$\\pm${0.06} &{0.56}$\\pm${0.08} &{0.34}$\\pm${0.14} &{0.45}$\\pm${0.06} &{0.59}$\\pm${0.08}\\\\ \n\\hline\n\\textbf{Grand larcenies}\\\\\n\\hline\nRandom Forest &{0.70}$\\pm${0.15} &{0.13}$\\pm${0.17} &{0.52}$\\pm${0.07} &{0.52}$\\pm${0.09} &{0.53}$\\pm${0.06} &{0.49}$\\pm${0.08} &{0.50}$\\pm${0.06} &{0.56}$\\pm${0.07}\\\\ \nExtra-Tree &{0.69}$\\pm${0.14} &{0.17}$\\pm${0.13} &{0.51}$\\pm${0.07} &{0.53}$\\pm${0.08} &{0.54}$\\pm${0.06} &{0.49}$\\pm${0.06} &{0.50}$\\pm${0.07} &{0.56}$\\pm${0.06}\\\\ \nGradient Boosting &{0.72}$\\pm${0.16} &{0.09}$\\pm${0.17} &{0.52}$\\pm${0.08} &{0.52}$\\pm${0.08} &{0.53}$\\pm${0.07} &{0.49}$\\pm${0.05} &\\textbf{{0.49}$\\pm${0.06}} &\\textbf{{0.57}$\\pm${0.05}}\\\\ \n\\hline\n\\textbf{Robberies}\\\\\n\\hline\nRandom Forest &{0.70}$\\pm${0.04} &{0.35}$\\pm${0.11} &{0.65}$\\pm${0.06} &{0.44}$\\pm${0.12} &{0.80}$\\pm${0.05} &{0.16}$\\pm${0.16} 
&{0.64}$\\pm${0.05} &{0.47}$\\pm${0.10}\\\\ \nExtra-Tree &{0.70}$\\pm${0.04} &{0.36}$\\pm${0.11} &{0.64}$\\pm${0.06} &{0.47}$\\pm${0.10} &{0.81}$\\pm${0.05} &{0.13}$\\pm${0.18} &{0.63}$\\pm${0.05} &{0.48}$\\pm${0.10}\\\\ \nGradient Boosting &{0.69}$\\pm${0.05} &{0.37}$\\pm${0.12} &{0.64}$\\pm${0.06} &{0.46}$\\pm${0.11} &{0.81}$\\pm${0.08} &{0.12}$\\pm${0.25} &\\textbf{{0.62}$\\pm${0.04}} &\\textbf{{0.50}$\\pm${0.08}}\\\\ \n\\hline\n\\textbf{Burglaries}\\\\\n\\hline\nRandom Forest &{0.64}$\\pm${0.04} &{0.19}$\\pm${0.02} &{0.59}$\\pm${0.03} &{0.30}$\\pm${0.05} &{0.63}$\\pm${0.05} &{0.21}$\\pm${0.08} &{0.58}$\\pm${0.03} &{0.31}$\\pm${0.05}\\\\ \nExtra-Tree &{0.63}$\\pm${0.03} &{0.20}$\\pm${0.03} &{0.58}$\\pm${0.02} &{0.32}$\\pm${0.03} &{0.64}$\\pm${0.05} &{0.18}$\\pm${0.07} &{0.58}$\\pm${0.03} &{0.32}$\\pm${0.05}\\\\ \nGradient Boosting &{0.61}$\\pm${0.03} &{0.27}$\\pm${0.01} &\\textbf{{0.58}$\\pm${0.03}} &\\textbf{{0.33}$\\pm${0.02}} &{0.64}$\\pm${0.06} &{0.18}$\\pm${0.09} &{0.58}$\\pm${0.04} &{0.32}$\\pm${0.05}\\\\ \n\\hline\n\\textbf{Assaults}\\\\\n\\hline\nRandom Forest &{0.70}$\\pm${0.04} &{0.43}$\\pm${0.08} &{0.64}$\\pm${0.05} &{0.53}$\\pm${0.09} &{0.84}$\\pm${0.07} &{0.18}$\\pm${0.15} &{0.64}$\\pm${0.04} &{0.53}$\\pm${0.08}\\\\ \nExtra-Tree &{0.69}$\\pm${0.04} &{0.45}$\\pm${0.07} &\\textbf{{0.62}$\\pm${0.03}} &\\textbf{{0.56}$\\pm${0.06}} &{0.86}$\\pm${0.05} &{0.14}$\\pm${0.12} &{0.62}$\\pm${0.04} &{0.56}$\\pm${0.07}\\\\ \nGradient Boosting &{0.68}$\\pm${0.04} &{0.47}$\\pm${0.08} &{0.62}$\\pm${0.03} &{0.56}$\\pm${0.06} &{0.84}$\\pm${0.06} &{0.19}$\\pm${0.10} &{0.66}$\\pm${0.04} &{0.50}$\\pm${0.07}\\\\ \n\\hline\n\\textbf{Vehicle larcenies}\\\\\n\\hline \nRandom Forest &{0.61}$\\pm${0.04} &{0.11}$\\pm${0.09} &{0.62}$\\pm${0.04} &{0.10}$\\pm${0.08} &{0.63}$\\pm${0.02} &{0.05}$\\pm${0.05} &\\textbf{{0.59}$\\pm${0.03}} &\\textbf{{0.17}$\\pm${0.05}}\\\\ \nExtra-Tree &{0.62}$\\pm${0.03} &{0.08}$\\pm${0.05} &{0.63}$\\pm${0.04} &{0.07}$\\pm${0.08} &{0.63}$\\pm${0.02} &{0.05}$\\pm${0.05} &{0.60}$\\pm${0.02} &{0.14}$\\pm${0.03}\\\\ \nGradient Boosting &{0.62}$\\pm${0.05} &{0.08}$\\pm${0.09} &{0.62}$\\pm${0.04} &{0.08}$\\pm${0.08} &{0.64}$\\pm${0.02} &{0.03}$\\pm${0.04} &{0.59}$\\pm${0.03} &{0.16}$\\pm${0.04}\\\\ \n\\hline \n\\end{tabular}\n}\n}\n```\n\n### Geographical Evaluation\n\nLooking at the 2015 geographical prediction in Table , we observe that the novel behavioral features derived from the different data sources improve significantly the census-only and census + POI baselines for all incident types, with the exception of burglaries and assaults, where the models already saturate at the hard baseline of census + POI. For the total number of incidents we achieve a competitive $R^2$ score of 65%, followed by the grand larcenies, robberies, and assaults categories with scores from 50% to 59%, while for burglaries and especially for vehicle larcenies the scores are lower. This can be explained by the fact that the latter categories of crime are not driven by the population characteristics, but by the characteristics of the target: house and car, respectively. 
As we do not include attributes of the built environment and of the stolen goods in the models, it was expected that these two specific categories would generally perform worse in comparison to the other categories.\n\nFor the total number of incidents, the best model on the full data set achieves scores of 65%, which is 30 percentage points better than the best model in the weak-baseline and 4 percentage points better than the hard-baseline. But the highest improvement that we observe in comparison to the census-only baseline is in the case of grand larcenies: roughly 41 and 7 percentage points, respectively. This crime category includes different kinds of thefts, including pickpocketing. It was therefore expected that data describing the popularity of an area would be most informative, yet the magnitude of the improvement is striking. The weak baseline performs best for the assaults category. This category groups offenses that involve inflicting injury upon others, and it is already well explained by the collected socio-demographic and economic attributes of the neighborhood.\n\nFurthermore, for the case of total incidents and grand larcenies, we observe that models based solely on attributes of the ambient population outperform the models based on the classical demographic features \u2013 and, in the case of grand larcenies, even reach performance levels comparable with those of the census + POI baseline. Finally, comparing the datasource-specific models (provided in supplementary material \u2013 section Additional Model Specifications), we conclude that the census + FS model consistently outperforms the census + subway and the census + taxi models \u2013 with the exception of the vehicle larcenies crime category, which performs poorly across the board. Comparing the additional predictive power of the subway vs taxi rides, we notice a significant advantage of the taxi usage data in the case of the grand larcenies category.\n\nInspecting the results for the 2014 geographical prediction, we draw very similar insights: the full models for the total incidents, grand larcenies and robberies categories perform best, with their absolute MSE\/$R^2$ scores being slightly higher\/lower than on the 2015 data.\n\n```latex\n\\makebox[\\textwidth]{\n\\scalebox{0.65}[0.65]{\n\\small\n\\begin{tabular}{ccccccccc}\n\\hline\n&\\multicolumn{2}{c}{Census}\n&\\multicolumn{2}{c}{Census + POI}\n&\\multicolumn{2}{c}{Human Dynamics}\n&\\multicolumn{2}{c}{Census + Human Dynamics} \\\\\n\\hline\n&\\multicolumn{1}{c}{MSE}\n&\\multicolumn{1}{c}{$R^2$}\n&\\multicolumn{1}{c}{MSE}\n&\\multicolumn{1}{c}{$R^2$}\n&\\multicolumn{1}{c}{MSE}\n&\\multicolumn{1}{c}{$R^2$}\n&\\multicolumn{1}{c}{MSE}\n&\\multicolumn{1}{c}{$R^2$}\\\\\n\\hline\n\\textbf{Total incidents}\\\\\n\\hline \nRandom Forest &0.11 &0.82 &0.07 &0.88 \n&0.09 &0.84 &0.07 &0.88\\\\ \nExtra-Tree &0.11 &0.82 &0.07 &0.89 \n&0.08 &0.87 &\\textbf{0.07} &\\textbf{0.89}\\\\ \nGradient Boosting &0.22 &0.64 &0.09 &0.85 \n&0.12 &0.80 &0.08 &0.87\\\\ \n\\hline\n\\textbf{Grand larcenies}\\\\\n\\hline \nRandom Forest &0.19 &0.73 &0.14 &0.81 \n&0.14 &0.81 &\\textbf{0.13} &\\textbf{0.82}\\\\ \nExtra-Tree &0.21 &0.71 &0.14 &0.81 \n&0.14 &0.80 &0.14 &0.80\\\\ \nGradient Boosting &0.28 &0.61 &0.17 &0.77 \n&0.16 &0.78 &0.15 &0.79\\\\ \n\\hline\n\\textbf{Robberies}\\\\\n\\hline \nRandom Forest &0.27 &0.71 &0.24 &0.75 \n&0.28 &0.70 &\\textbf{0.23} &\\textbf{0.75}\\\\ \nExtra-Tree &0.26 &0.72 &0.23 &0.75 \n&0.27 &0.70 &0.27 &0.71\\\\ \nGradient Boosting &0.38 &0.59 &0.29 &0.69 \n&0.32 
&0.66 &0.28 &0.70\\\\ \n\\hline\n\\textbf{Burglaries}\\\\\n\\hline \nRandom Forest &0.25 &0.47 &0.24 &0.50 \n&0.25 &0.47 &0.24 &0.49\\\\ \nExtra-Tree &0.25 &0.47 &0.24 &0.50 \n&0.33 &0.32 &0.32 &0.34\\\\ \nGradient Boosting &0.30 &0.38 &0.25 &0.47 \n&0.27 &0.42 &\\textbf{0.23} &\\textbf{0.51}\\\\ \n\\hline\n\\textbf{Assaults}\\\\\n\\hline \nRandom Forest &0.24 &0.75 &0.22 &0.77 \n&0.28 &0.72 &0.22 &0.77\\\\ \nExtra-Tree &0.24 &0.76 &\\textbf{0.22} &\\textbf{0.78} \n&0.28 &0.71 &0.27 &0.73\\\\ \nGradient Boosting &0.34 &0.65 &0.29 &0.71 \n&0.46 &0.53 &0.24 &0.76\\\\ \n\\hline\n\\textbf{Vehicle larcenies}\\\\\n\\hline \nRandom Forest &0.31 &0.31 &0.29 &0.34 \n&0.31 &0.31 &0.30 &0.34\\\\ \nExtra-Tree &0.33 &0.27 &\\textbf{0.29} &\\textbf{0.36} \n&0.37 &0.16 &0.38 &0.15\\\\\nGradient Boosting &0.32 &0.28 &0.31 &0.30 \n&0.34 &0.23 &0.30 &0.33\\\\ \n\\hline\n\\end{tabular}\n}\n}\n```\n\n### Temporal Evaluation\n\nSwitching to the temporal prediction presented in Table and in the supplementary material (section Additional Model Specifications), we can observe that predicting future crime aggregates within the same neighborhoods appears to be easier than predicting crime aggregates in new neighborhoods, as the ecological attributes of a neighborhood, as well as the aggregated crime levels, do not vary that much between the two years. The total number of incidents proves to be the most predictable from one year to the other \u2013 with an $R^2$ score of 89%. In terms of crime sub-types: grand larcenies, robberies and assaults remain the types that can be best predicted by the data. Similarly to the geographical evaluation, the human-dynamics-only models outperform the census-only models in the case of total incidents and grand larcenies. With the exception of the census baseline, all model specifications including ubiquitous data perform similarly well, with the models including FS-derived features (census + POI, census + FS, and the full model) achieving the highest absolute scores.\n\n## Model Interpretation\n\nWe now turn to **model interpretation**, where the focus will be (1) on examining the importance and the contribution of the individual features defined in Section and (2) on understanding where in the city the ambient population features improve the baseline models.\n\n### Feature Importance\n\nThis exercise will return those features that proved to be most discriminative for the geographical crime prediction task. By examining them, we will be able to understand what types of factors are most relevant for the predictive algorithms, and also identify those criminological theories that have informed the best features. It is important to stress that these techniques do not allow us to infer any causal relationships between the features and the crime counts. The identified factors are the most discriminative in the context of the model used, but they do not necessarily best explain crime levels.\n\nThe supplementary material (section Feature Importances across Models) provides a complete view of the feature importance plots of all machine learning models, while here we concentrate on providing a stable ranking of the features within the best-suited model for this task: Gradient Boosting. To test the stability of the feature ranking, we perform the following bootstrapping procedure: we calculate the importance of the features for 100 different random samples (80% of the data each) and provide a box-plot of the resulting importances, ranked by their median across samples.
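\n\nA minimal sketch of this bootstrap, assuming a feature matrix X (a pandas DataFrame) and log-transformed incident counts y, and using scikit-learn's GradientBoostingRegressor with illustrative (not the tuned) hyper-parameter values:\n\n```python\nimport numpy as np\nimport pandas as pd\nfrom sklearn.ensemble import GradientBoostingRegressor\n\ndef bootstrap_importances(X, y, n_rounds=100, frac=0.8, seed=0):\n    # Re-fit the model on random 80% subsamples and collect the\n    # impurity-based feature importances of each fit.\n    rng = np.random.RandomState(seed)\n    rows = []\n    for _ in range(n_rounds):\n        idx = rng.choice(len(X), size=int(frac * len(X)), replace=False)\n        model = GradientBoostingRegressor(n_estimators=200, max_depth=3,\n                                          learning_rate=0.1)\n        model.fit(X.iloc[idx], y.iloc[idx])\n        rows.append(model.feature_importances_)\n    imp = pd.DataFrame(rows, columns=X.columns)\n    # Order columns by median importance across the fits; the result can\n    # be passed directly to a box-plot for the stability view.\n    return imp[imp.median().sort_values(ascending=False).index]\n```\n\nA call such as bootstrap_importances(X, y).boxplot() then produces a ranked box-plot of the kind described above.\n\n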
Figure visualizes the top third of the variables in these rankings: in white, features inferred from the census; in blue, features inferred from human mobility data.\n\nThe traditional census features indeed score high across all three crime categories and across all algorithms. Specifically, we observe their very high contribution in the assaults model. As already hinted in the previous section, this type of violent crime remains best predicted by the attributes of the residential population in an area.\n\nThe spatial features from Foursquare also make significant contributions across all models. The shopping venues contribute most in the grand larcenies category, followed by professional and travel venues. On the other hand, the food establishments, followed by the shopping establishments, have a significant contribution in the assaults models.\n\nIn terms of spatio-temporal features from Foursquare, we see importance assigned to many features derived from checkins data, such as checkins in food and shop venues and the checkins diversity index. We also see that the number of venues popular during weekday afternoons receives a high weight for the grand larcenies category.\n\nIn terms of human dynamics features inferred from the taxi data, we notice especially high loadings for the diversity index of the taxi rides and the total number of pick-ups, in the larcenies and total incidents categories. The human dynamics features inferred from the subway data have in general a lower predictive contribution, with the diversity index having the relatively higher scores in this feature subgroup and making it into the top features for total incidents and grand larcenies.\n\n### Partial Dependence Plots\n\nThe above feature importance rankings only tell us *which* features are predictive of crime, but not *how* they contribute to the models. There are several approaches to achieve this. One is to plot partial dependence plots of the gradient boosting learners; another is to fit simple decision trees on the top discriminative features of the full models and extract prediction rules.\n\nPartial dependence plots visualize the marginal effect of a given single feature on the crime outcome. Figure depicts the contributions of some of the features identified in the previous section as having higher predictive importance. We look at the same three types of crime: total incidents, grand larcenies, and assaults. The tick marks on the x-axis represent the deciles of the feature values in the training data. We notice that census tracts with higher population numbers, higher poverty, and a higher percentage of rented houses tend to have higher crime levels. Also, neighborhoods in NYC with a higher percentage of minorities tend to have higher crime levels, with a stronger effect noticed in the assaults category. On the other hand, we also notice that highly diverse neighborhoods might be slightly safer. The POI features exhibit strong marginal effects: in particular, census tracts with shopping establishments tend to experience more grand larcenies, and census tracts with food establishments tend to experience more assaults. From the spatio-temporal features, the taxi rides diversity index exhibits a positive relationship with the crime level across all three categories.
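\n\nSuch plots can be produced directly from a fitted model; the short sketch below uses the current scikit-learn inspection API, with placeholder column names standing in for the features discussed here:\n\n```python\nfrom sklearn.inspection import PartialDependenceDisplay\n\ndef plot_partial_dependence(model, X, feature_names):\n    # model: a fitted tree-based regressor (e.g. GradientBoostingRegressor);\n    # X: the training feature matrix (pandas DataFrame);\n    # feature_names: columns to inspect, e.g.\n    # ['population', 'poverty_rate', 'shop_venues', 'taxi_rides_diversity'].\n    return PartialDependenceDisplay.from_estimator(model, X, feature_names)\n```\n\n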
Finally, neighborhoods with more popular venues during weekday afternoons are associated with a higher number of larcenies.\n\n### Geographical Improvement\n\nTo understand the additive predictive power of the human dynamics features in the case of the temporal prediction, we perform a deeper analysis of the residuals. Figure presents the absolute error (computed as $y_i-\\hat{y_i}$, rounded to integer precision) of the best models (Random Forest regressors) on the different model specifications for the 2015 grand larcenies crime category. There are 1652 (out of 2154) census tracts with an absolute error between $-0.5$ and $0.5$ in the census weak-baseline. This number increases to 1838 in the census + POI strong-baseline, and to 1850 in the full model specification. Notably, the human dynamics specification achieves a competitively high number of 1808 census tracts with low errors. The supplementary material depicts the absolute errors achieved by the remaining model specifications.\n\nLooking at the different boroughs, the models incorporating features from FS and taxi trips consistently perform better in comparison to the census baseline in the Manhattan and Bronx boroughs, while some areas in Queens remain poorly predicted across all models. Looking at the function of the neighborhoods, these models bring improvements for parks (e.g. Central Park or Prospect Park), entertainment areas (e.g. around the NY Aquarium or the College Point Multiplex Cinemas), or the JFK airport. Between the hard baseline incorporating only FS venue information and the model also incorporating FS check-in information, we notice improvements for instance in the Brooklyn promenade recreational areas or in the shopping areas south-east of College Point.\n\n# Conclusions\n\n## Implications\n\nIn this paper, long-term crime prediction has been investigated at a fine-grained level, with yearly crime data being analyzed at census tract level and across several crime categories. In constructing the prediction features, we exploited census data, Foursquare venues data, subway usage data, and taxi usage data by operationalizing different concepts from criminological and urban theories. Our work has both theoretical and practical implications.\n\nFirst, we have identified new crime predictors derived from massive ubiquitous data sources and so extended the empirical literature in urban computing and computational social science. Our results show that enriching the traditional census features describing the characteristics of the residential population with spatial and spatio-temporal features describing the activities of the ambient population substantially improves the quality of the prediction models. Factors describing criminogenic places (crime attractors and crime generators) therefore prove essential for competitive crime prediction models. The highest improvement they bring has been observed in predicting crime in busy public parts of the city: recreational areas and parks, shopping areas, entertainment areas, and airports. The human dynamics features improve the baseline models for the total number of incidents, for grand larcenies, and for robberies. In terms of the analyzed sources of timestamped geo-referenced human activity data, LBSNs achieve the highest predictive power.
Enhancing the models with subway or with taxi data yields similar results, with the exception of the grand larcenies category, where the taxi features exhibit a higher predictive ability.\n\nIn general, the best performing novel features for all crime incidents have been: the total number of shopping\/eating\/travel venues and checkins as a proxy for the general popularity of the area, the number of popular venues in a normal afternoon as a proxy for the temporal break-down of human activity in the area, the total number of taxi pick-ups as a proxy for the population outflow to more remote areas, and the taxi rides diversity index as a proxy for the entropy of the human movement in the area. Many of these top features can be mapped as crime attractors or crime generators and have been informed by theories holding that the place and time where offenders and victims meet are strong crime predictors . While the mixed land use concept theorized by Jacobs and Newman has not been found to be particularly discriminative for crime prediction in comparison to the other features, Jacobs' metrics of raw human density and activity have been found to strongly improve the models. Furthermore, specific novel predictors emerge for specific crime types.\n\nFrom the census features, the metrics of concentrated disadvantage have scored highest across all crime types, which is aligned with findings within the framework of the Social Disorganization Theory .\n\nOn the practical side, a direct application of our results would be to provide a first estimation of the safety of new developments and public spaces, for instance shopping and recreational areas. So far, crime prevention through environmental design (CPTED) has concentrated mostly on the attributes of the built environment (e.g. lighting, visibility, access, and height of buildings) and less so on the human activity that will be generated within the newly created space. A derived product can also be used by individuals (either locals or tourists) to assess the incident risk when traveling, going out, or shopping in new areas that they are not familiar with. Furthermore, an extension of the presented prediction models could be operationally deployed by local police agencies for short-term risk assessment and effective deployment of patrol resources. Forces on the ground could better target specific types of crimes expected in a small geographical area. Current software solutions like PredPol[^16] only work effectively for burglaries and rely mostly on recent crime (near-repeat victimization) and less on attributes of the environment or of the ambient population. Our findings therefore expand the scope to street crimes and utilize further information on the time and place of potential crimes.\n\n## Discussion\n\nOur results add to the existing body of empirical literature. Compared to , we go beyond correlation analysis between human dynamics features and crime counts, and explore a highly multi-variate non-linear prediction setup. While our diversity and ratio metrics do not match theirs one-to-one, metrics similar to the ones used in that work also make it into our list of top discriminative features, e.g. the age diversity index. Yet, we are cautious about interpreting the results as supporting or opposing Jacobs'\/Newman's theories, as the relationships between the population density and diversity and crime are non-causal and non-linear in our case.
Similarly to , we generally find that features derived from the venues consistently improve the baseline models based solely on census data. In comparison to their work, we go beyond simple POI counts and derive second-order features from Foursquare informed by works in criminology and urban computing, and additionally exploit sources of mobility patterns: subway and taxi rides. While they employed standard regression models, we employed non-parametric machine learning models, which boosted the performance. Also, similar to , we demonstrate the potential of human dynamics features for the crime domain. In comparison to their work, we leverage Foursquare, subway, and taxi data instead of telecommunication data, which is arguably easier to access for research and raises fewer ethical questions. We also run a more comprehensive analysis leveraging: (1) more extensive datasets in terms of temporal coverage of the collected data (weeks versus years) and (2) several machine learning techniques for a more difficult prediction task (regression versus binary classification). Finally, compared to all of these previous works, we are the only ones to take a deeper dive into the different crime types and to perform careful model interpretation.\n\nWe also contribute to the methodological literature. The main strengths of the employed machine learning algorithms are their very high predictive power and their ability to deal with heterogeneous data sources and potentially collinear factors. This opens the door for the future incorporation of new variables for which, a priori, there is no substantive theory underlying their association with crime, but which might be found to have strong predictive power. The model interpretation techniques available for tree-based ensemble models (feature importance rankings, partial dependence plots) make the models more transparent and offer insights into the predictive power of each feature. On the downside, as for any supervised learning technique, the presented models can be used for prediction, but not for inferring a causal effect between the features and the dependent variable.\n\nWe should acknowledge the geographical (more urban areas) and social bias (younger, more educated, wealthier users) of Foursquare in general, though the choice of NYC (as the city with the most activity on Foursquare) and of the complete aggregated information at venue level (as opposed to the incomplete extracts of checkins at user level that are common in the literature) are good mitigation approaches. Quantifying such biases would become relevant when comparing different locations , but is for now out of scope for this study.\n\nAlso, we ought to acknowledge the reporting bias present in the crime data itself. Bias in police records can be attributed to: (1) levels of community trust in police, in the case of self-reported crimes, and (2) patrolling focus on certain ethnic groups and neighborhoods, in the case of police-reported crimes. Even if we do not have the ambition of solving the perpetuation of racial biases in police work, we should note that this can introduce dangerous biases . Training models on biased historical data and having police focus on certain communities will lead to even more arrests of minorities, but will not lead to solving the crime problem. The solution is not trivial, as it lies at the heart of the interaction between the police and the communities.
At higher levels of aggregation, \"ground truth\" crime data could be estimated from crime victimization surveys and demographically representative synthetic populations .\n\nFinally, to be aligned with previous work in criminology and to be able to benchmark against prior work on crime prediction , we have used the race of the inhabitants when crafting several of the census features for the prediction problem. A potential mitigation would be to show how well the models do without taking race into consideration, especially if planned to be used operationally. In this work, we have already shown that, for certain types of crime, models using only human mobility data can out-perform the models based only on the census data. We believe this to be a significant contribution and an important step towards more fairness in crime prediction.\n\n## Future Work\n\nFor future work and to make more general claims about the predictive power of such factors for long-term crime prediction globally, we plan to apply the same methodology on data from other major cities around the globe. Furthermore, the models can be enhanced by exploiting further ubiquitous data sources describing the pulse of our cities, like additional social media signals, 311 calls, and IoT devices. Especially for some specific type of crimes, like burglaries and vehicles thefts, incorporating spatial features describing the built environment (houses, streets, land use, etc.), has the potential to improve the models significantly. Finally, introducing temporal crime correlates (weather data, near-repeat patterns, entertainment events, etc.) has support in criminology and the potential to improve our prediction models towards short-term prediction.\n\n# Abbreviations\n\nACS - American Community Survey \nBMT - Brooklyn-Manhattan Transit Company \nET - Extra-Trees \nGB - Gradient Boosting \nIND - Independent Subway System \nIRT - Interborough Rapid Transit Company \nLBSN - Location-Based Social Networks \nMTA - Metropolitan Transportation Authority \nMSE - Mean Squared Error \nNYC - New York City \nNYPD - New York Police Department \nRF - Random Forests \n\n[^1]: We have limited our survey of theories in criminology to the main theories that look at victims and offenders and their routine activities, and are relevant for this study. Indeed, there are also other factors that influence criminal behavior, such the attributes of the built environment. For instance, Wilson and Kelling proposed in their *Broken Windows Theory* that degraded urban environments (such as broken windows, graffiti, excessive litter) enhance criminal activities in the area.\n\n[^2]: \n\n[^3]: \n\n[^4]: \n\n[^5]: \n\n[^6]: \n\n[^7]: \n\n[^8]: \n\n[^9]: \n\n[^10]: \n\n[^11]: \n\n[^12]: \n\n[^13]: \n\n[^14]: \n\n[^15]: \n\n[^16]: ","meta":{"dup_signals":{"dup_doc_count":19,"dup_dump_count":19,"dup_details":{"curated_sources":1,"2023-14":1,"2022-40":1,"2022-21":1,"2021-49":1,"2021-17":1,"2021-04":1,"2020-45":1,"2020-34":1,"2020-10":1,"2019-47":1,"2019-39":1,"2019-26":1,"2019-18":1,"2019-09":1,"2018-51":1,"2018-43":1,"2018-34":1,"2023-50":1}},"filename":"out\/1806.01400_extract_epj.tex.md"},"subset":"arxiv"} +{"text":"abstract: Bitcoin, the first peer-to-peer electronic cash system, opened the door to permissionless, private, and trustless transactions. Attempts to repurpose Bitcoin's underlying blockchain technology have run up against fundamental limitations to privacy, faithful execution, and transaction finality. 
We introduce *Strong Federations*: publicly verifiable, Byzantine-robust transaction networks that facilitate movement of any asset between disparate markets, without requiring third-party trust. *Strong Federations* enable commercial privacy, with support for transactions where asset types and amounts are opaque, while remaining publicly verifiable. As in Bitcoin, execution fidelity is cryptographically enforced; however, *Strong Federations* significantly lower capital requirements for market participants by reducing transaction latency and improving interoperability. To show how this innovative solution can be applied today, we describe *Liquid*: the first implementation of *Strong Federations* deployed in a Financial Market.\nauthor: \nEmail: johnny, andrew, jonathan, marta, ben, mark @blockstream.com\ntitle: Strong Federations: An Interoperable Blockchain Solution to Centralized Third-Party Risks\n\n# Introduction\n\nBitcoin, proposed by Satoshi Nakamoto in 2008, is based on the idea of a *blockchain*\u00a0. A blockchain consists of a series of blocks, each of which is composed of time-stamped sets of transactions and a hash of the previous block, which connects the two together, as presented in Figure\u00a0.\n\nThe underlying principle of Bitcoin's design is that all participants in its network are on equal footing. They jointly trust proof-of-work\u00a0 to validate and enforce the network's rules, which obviates the need for central authorities such as clearinghouses. As a result, Bitcoin empowers a wide range of participants to be their own banks \u2013 storing, transacting, and clearing for themselves without the need for a third-party intermediary. Bitcoin's network automatically enforces settlements between participants using publicly verifiable algorithms that avoid security compromises, expensive (or unavailable) legal infrastructure, third-party trust requirements, or the physical transportation of money. For the first time, users of a system have the ability to cryptographically verify other participants' behaviors, enforcing rules based on mathematics that anyone can check and no one can subvert.\n\nDue to its design, Bitcoin has characteristics that make it a vehicle of value unlike anything that previously existed. First, it eliminates most counterparty risk from transactions\u00a0. Second, it offers cryptographic proof of ownership of assets, as the knowledge of a cryptographic key defines ownership \u00a0. Third, it is a programmable asset, offering the ability to pay to a program, or a \"smart contract\", rather than a passive account or a singular public key\u00a0. Fourth, and finally, it is a disruptive market mechanism for use cases such as point-to-point real-time transfers, accelerated cross-border payment, B2B remittance, asset transfers, and micropayments\u00a0.\n\n## Problem Statement\n\nBecause it is a global consensus system, Bitcoin's decentralized network and public verifiability come with costs. Speed of execution and insufficient guarantees of privacy are two of Bitcoin's limitations.\n\nBitcoin's proof-of-work methodology was designed to process transactions on average only once every ten minutes, with large variance. As a result, Bitcoin is slow from a real-time transaction processing perspective. This creates spontaneous illiquidity for parties using bitcoin[^1] as an intermediary, volatility exposure for those holding bitcoin for any length of time, and obstacles for the use of Bitcoin's contracting features for fast settlements. 
Even after a transaction is processed, counterparties must generally wait until several additional blocks have been created before considering their transaction settled. This is because Bitcoin's global ledger is at constant risk of *reorganization*, wherein very recent history can be modified or rewritten. This latency undermines many commercial applications, which require real-time, or nearly instant, execution[^2]. Today, solving this requires a centralized counterparty, which introduces a third-party risk.\n\nDespite issues of short-term validation, Bitcoin excels on settlement finality, providing strong assurance against transaction reversals after adequate block confirmations. In contrast, legacy payment networks leave absolute final settlement in limbo for up to 120 days typically, though chargebacks have been allowed up to 8 years late\u00a0, depending on policies imposed by the centralized network owner\u00a0\u00a0.\n\nWhile a popular prevailing belief is that Bitcoin is anonymous\u00a0, its privacy properties are insufficient for many commercial use cases. Every transaction is published in a global ledger, which allows small amounts of information about users' financial activity (e.g., the identities of the participants in a single transaction\u00a0) to be amplified by statistical analysis\u00a0. This limits the commercial usefulness of the network and also harms individual privacy\u00a0, as user behavior frequently reflects the pervasive assumption that Bitcoin is an anonymous system. Further, it can damage the fungibility of the system, as coins that have differing histories can be identified and valued accordingly.\n\nOvercoming these two problems would be of significance and have positive impact on the Bitcoin industry and the broader global economy\u00a0. Unfortunately, previous attempts to solve similar tasks with electronic money have encountered a variety of issues: they fail to scale (e.g., BitGold\u00a0); they are centralized (e.g., the Liberty Reserve or Wei Dai's B-money\u00a0); or they raise other security concerns\u00a0. Moreover, higher trust requirements are often imposed through reliance on a centrally controlled system or on a single organization. This effectively replicates the problems of pre-Bitcoin systems by establishing highly permissioned arenas that have substantial regulatory disadvantages and user costs, that create onboarding and offboarding friction, and that introduce restrictions on both users and operators of the system\u00a0. Needless to say, if a solution is run by a central party, it is inevitably subject to systemic exposure, creating a single point of failure (SPOF) risk\u00a0. The recent Ripple attack is an example of such a situation. It has been shown, that although interesting and successful otherwise, both Ripple and Stellar face the SPOF risk\u00a0. Similarly, introduction of stronger trust requirements can lead to dangerous risks of consensus failure, as the consensus methods of Tendermint and Ethereum have proven\u00a0. Finally, there are exchanges and brokerages that require explicit trust in a third-party\u00a0. Such systems leak their intrinsic insecurity into any solutions built on top of them, creating a \"house of cards\" arrangement where any instability in the underlying system may result in a collapse of the dependent arrangements.\n\n## Contributions\n\nThis paper describes a new blockchain-based system that addresses these problems and contributes to the field in the following ways:\n\n1. 
**Public Verifiability** \u2013 While not fully decentralized, the system is distributed and publicly verifiable, leaving users with the ultimate spending authority over their assets.\n\n2. **Liquidity** \u2013 Users can move their assets into and out of the system, giving them access to its unique characteristics while also allowing them to exit at any time.\n\n3. **No Single Point of Failure** \u2013 The system maintains Bitcoin's permissionless innovation and avoids introducing SPOFs, all while providing novel features.\n\n4. **Multiple Asset\u2013Type Transfers** \u2013 The system supports multiple asset\u2013type transfer on the same blockchain, even within the same atomic transaction.\n\n5. **Privacy** \u2013 By extending earlier work on Confidential Transactions\u00a0 through Confidential Assets, the system supports nearly instant, trustless, atomic exchange of arbitrary goods, in a publicly verifiable but completely private way.\n\n6. **Implementation** \u2013 Liquid, an implementation of a Strong Federation, is presented with lessons learned from the use case of high speed inter-exchange transfers of bitcoin.\n\nThe rest of the paper is organized as follows: the next section discusses the general design of the solution to the problems identified above \u2013 Strong Federations. Next, in Section \u00a0 more in-depth, technical details are provided. Section\u00a0 is devoted to the applications of Strong Federations in different areas. Here, Liquid is presented, the first market implementation of the system. Strong Federations are very novel in many aspects, thus some time is spent discussing various innovations in Section\u00a0, to then move to a thorough evaluation of the security and comparison of the system in Section\u00a0. Finally, Section\u00a0 discusses methodologies to further improve them and Section\u00a0 presents conclusions.\n\n# Strong Federations as a General Solution\n\nAs mentioned in Section\u00a0, a consensus mechanism based on proof-of-work introduces the problem of latency. However, moving to a centralized system would create significant risks of its own. To combat these problems, this paper builds on a design introduced by Back et. al. called \"Federated Pegs\"\u00a0, a methodology for two-way movement of assets between Bitcoin and *sidechains*. Sidechains are parallel networks that allow parties to transfer assets between blockchains by providing explicit proofs of possession within transactions, as shown in Figure\u00a0.\n\n## Sidechains\n\nSidechains are blockchains that allow users to transfer assets to and from other blockchains. At a high level, these transfers work by locking the assets in a transaction on one chain, making them unusable there, and then creating a transaction on the sidechain that describes the locked asset. Effectively, this moves assets from a parent chain to a sidechain.\n\nThis works as follows:\n\n1. The user sends their asset to a special address that is designed to freeze the asset until the sidechain signals that asset is returned.\n\n2. Using the \"in\" channel of a federated peg, the user embeds information on the sidechain stating that the asset was frozen on the main chain and requests to use it on the sidechain.\n\n3. Equivalent assets are unlocked or created on the sidechain, so that the user can participate in an alternative exchange under the sidechain rules, which can differ from the parent chain.\n\n4. 
When the user wishes to move her asset, or a portion thereof, back via the \"out channel\", she embeds information in the sidechain describing an output on the main blockchain.\n\n5. The Strong Federation reaches consensus that the transaction occurred.\n\n6. After consensus is reached, the federated peg creates such an output, unfreezing the asset on the main blockchain and assigning it as indicated on the sidechain.\n\n## Improving Sidechains with Strong Federations\n\nBitcoin demonstrates one method of signing blocks: the use of a Dynamic Membership Multiparty Signature (DMMS) using a dynamic set of signers called *miners*. A dynamic set introduces the latency issues inherent to Bitcoin. A federated model offers another solution, with a fixed signer set, in which the DMMS is replaced with a traditional multisignature scheme. Reducing the number of participants needed to extend the blockchain increases the speed and scalability of the system, while validation by all parties ensures integrity of the transactions.\n\nA *Strong Federation* is a federated sidechain where the members of the federation serve as a protocol adapter between the main chain and the sidechain. One could say, essentially, that together they form a Byzantine-robust smart contract. In a Strong Federation the knowledge of private keys is sufficient for the \"right to spend\" without the permission of any third-party, and the system has a mechanism that allows settlement back to a parent chain in the case of a complete failure of the federation. Not only are the code updates open and auditable and rejectable in case of coercive behavior, but the state of the system also provides a consistently reliable log that maintains immutability of state. Most importantly: the members of the federation cannot directly control any users' money inside the system other than their own.\n\nThe network operators of a Strong Federation consist of two types of *functionaries*. Functionaries are entities that mechanically execute defined operations if specific conditions are met . To enhance security, certain operations are split between entities to limit the damage an attacker can cause. In a Strong Federation, functionaries have the power to control the transfer of assets between blockchains and to enforce the consensus rules of the sidechain. In the next section further details will be provided on why dividing those responsibilities is critical. The two types of functionaries are:\n\n1. *Blocksigners*, who sign blocks of transactions on the sidechain, defining its consensus history.\n\n2. *Watchmen*, who are responsible for moving the assets out of the sidechain by signing transactions on the main chain.\n\nThe two components can be independent. Blocksigners are required to produce the blockchain consensus and to advance the sidechain ledger, which they do by following the protocol described in the next section. Watchmen are only required to be online when assets are to be transferred between blockchains. As an extreme example, one could imagine a scheme where watchmen were only brought online once daily to settle a pre-approved batch of inbound and outbound transactions.\n\nThese two functions are performed by separate dedicated hardened boxes, configured by their owners with the secret key material required for their operation. The interaction between the elements of the network is presented in Figure\u00a0. 
Between blocksigners and watchmen, only the former are required to produce consensus, which they do by following the protocol described in the next section.\n\n# Technical Details\n\nSupporting Strong Federations on a technical level requires the development of two types of federation: the *Federated Peg* and *Federated Blocksigning*.\n\n## Federated Peg\n\nThe authors of \"Enabling Blockchain Innovations with Pegged Sidechains\"\u00a0 suggested a way to deploy federated sidechains without requiring any alterations to the consensus rules of Bitcoin's blockchain. In their methodology, a sidechain used a $k$-of-$n$ federation of mutually distrusting participants, called the functionaries, who validate and sign the blocks of the chain (blocksigners) and the pegs (watchmen) respectively.\n\nA Federated Peg is a mechanism that uses functionaries to move assets between two chains. The functionaries observe at least the two chains \u2013 the Bitcoin blockchain and the sidechain \u2013 to validate asset transfers between them. To meet the criteria of a Strong Federation, a set of geographically and jurisdictionally distributed servers is used, creating a compromise-resistant network of functionaries. This network retains a number of the beneficial properties of a fully decentralized security model.\n\nMembers of the Federated Peg each operate a secure server that runs Bitcoin and sidechain nodes along with software for creating and managing cross-chain transactions. Each server contains a hardware security module that manages cryptographic keys and signs with them. The module's job is primarily to guard the security of the network, and if a compromise is detected, to delete all of its keys, causing the network to freeze by design. If one or a few functionaries are attacked \u2013 even if their tamper-resistant hardware is totally compromised \u2013 the system is unaffected, as long as enough other functionaries are still intact. Successfully tampering with the federated peg system requires a compromise of at least the majority of functionaries, both blocksigners and watchmen. Even then, tampering is always detectable and usually immediately observable because the blockchain is replicated and validated on machines other than the functionaries. A compromise of a majority of blocksigners would be observable as soon as a non-conforming block was published. If the majority of watchmen remain secure, the value held by the sidechain can be redeemed on the parent blockchain.\n\n## Byzantine Robustness\n\nOne of the most important aspects of Bitcoin's mining scheme is that it is *Byzantine robust*, meaning that anything short of a majority of bad actors cannot rewrite history or censor transactions\u00a0. The design was created to be robust against even long-term attacks of sub-majority hash rate.\n\nBitcoin achieves this by allowing all miners to participate on equal footing and by simply declaring that the *valid* chain history with the majority of hashpower[^3] behind it is the true one. Would-be attackers who cannot achieve a majority are unable to rewrite history (except perhaps a few recent blocks and only with low probability) and will ultimately waste resources trying to do so. This incentivizes miners to join the honest majority, which increases the burden on other would-be attackers. 
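\n\nAs a purely illustrative toy (not Bitcoin's actual implementation), the fork-choice idea can be sketched as selecting, among valid candidate histories, the one with the most accumulated proof-of-work, where each block's work field stands for the proof-of-work it contributes:\n\n```python\ndef is_valid(chain):\n    # Placeholder for full consensus-rule validation (signatures,\n    # difficulty targets, no double spends, and so on).\n    return True\n\ndef best_chain(chains):\n    # Among valid candidate histories, the one with the most accumulated\n    # proof-of-work wins, so a minority of hash power cannot rewrite history.\n    valid = [chain for chain in chains if is_valid(chain)]\n    return max(valid, key=lambda chain: sum(block['work'] for block in chain))\n```\n\n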
However, as discussed in Section\u00a0, this setup leads to latency due to a network heartbeat on the order of tens of minutes and introduces a risk of reorganization even when all parties are behaving honestly.\n\n## Achieving Consensus in Strong Federations\n\nIt is critical that functionaries have their economic interests aligned with the correct functioning of the Federation. It would obviously be a mistake to rely on a random assortment of volunteers to support a commercial sidechain holding significant value. Beyond the incentive to attempt to extract any value contained on the sidechain, they would also have little incentive to ensure the reliability of the network. Federations are most secure when each participant has a similar amount of value held by the federation. This kind of arrangement is a common pattern in business\u00a0. Incentives can be aligned through the use of escrow, functionary allocation, or external legal constructs such as insurance policies and surety bonds.\n\n### Blocksigning in Strong Federations\n\nIn order for a Strong Federation to be low latency and eliminate the risk of reorganization from a given hostile minority, it replaces the dynamic miner set with a fixed signer set. As in Private Chains\u00a0, the validation of a script (which can change subject to fixed rules or be static) replaces the proof-of-work consensus rules. In a Strong Federation, the script implements a $k$-of-$n$ multisignature scheme. This mechanism requires blocks be signed by a certain *threshold* of signers; that is, by $k$-of-$n$ signers. As such, it can emulate the Byzantine robustness of Bitcoin: a minority of compromised signers will be unable to affect the system.\n\nFigure\u00a0 presents how the consensus is achieved in a Strong Federation. It is referred to as federated blocksigning and consists of several phases:\n\n- **Step 1:** Blocksigners propose candidate blocks in a round-robin fashion to all other signing participants.\n\n- **Step 2:** Each blocksigner signals their intent by pre-committing to sign the given candidate block.\n\n- **Step 3:** If threshold X is met, each blocksigner signs the block.\n\n- **Step 4:** If threshold Y (which may be different from X) is met, the block is accepted and sent to the network.\n\n- **Step 5:** The next block is then proposed by the next blocksigner in the round-robin.\n\nDue to the probabilistic generation of blocks in Bitcoin, there is a propensity for chain reorganizations in recent blocks\u00a0. Because a Strong Federation's block generation is not probabilistic and is based on a fixed set of signers, it can be made to never reorganize. This allows for a significant reduction in the wait time associated with confirming transactions.\n\nOf course, as with any blockchain-based protocol, one could imagine other ways of coordinating functionary signing. However, the proposed scheme improves the latency and liquidity of the existing Bitcoin consensus mechanism, while not introducing SPOF or higher trust requirements as discussed in Section\u00a0.\n\n### Security Improvements\n\nByzantine robustness provides protection against two general classes of attack vectors. In the first case,a majority of nodes could be compromised and manipulated by the attacker, breaking integrity of the system. In the second case, a critical portion of nodes could be isolated from the network, breaking availability.\n\nBlocksigning in a Strong Federation is robust against up to $2k - n - 1$ attackers. 
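\n\nAs a quick sanity check on these thresholds, the following sketch (purely illustrative) computes, for a $k$-of-$n$ configuration, how many Byzantine signers can be tolerated without a fork and how many signers can be offline before block production halts:\n\n```python\ndef federation_tolerances(k, n):\n    # k-of-n threshold signing: fork robustness and liveness tolerance.\n    fork_robustness = 2 * k - n - 1   # up to this many attackers cannot fork\n    liveness_tolerance = n - k        # blocks stop once n - k + 1 signers fail\n    return fork_robustness, liveness_tolerance\n\n# The examples from the text:\nprint(federation_tolerances(5, 8))   # (1, 3): 1-Byzantine robust\nprint(federation_tolerances(6, 8))   # (3, 2): 3-Byzantine robust\n```\n\n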
In other words, at least $2k - n$ Byzantine attackers are needed to cause conflicting blocks to be signed at the same height, forking the network. For instance, a 5-of-8 threshold would be *1-Byzantine robust*[^4], while 6-of-8 would be *3-Byzantine robust*.\n\nOn the other hand, if at least $n - k + 1$ signers fail to sign, blocks will not be produced. Thus, increasing the threshold $k$ provides stronger protection against forks, but reduces the resilience of the network against signers being unavailable. Section\u00a0 explains how the same strategy can be used for applying functionary updates, which is planned as future work.\n\n# Use Cases\n\nStrong Federations were developed as a technical solution to problems blockchain users face daily: transaction latency, commercial privacy, fungibility, and reliability. Many applications for blockchains require Strong Federations to avoid these issues, two of which are highlighted here.\n\n## International Exchange and Liquid\n\nBitcoin currently facilitates remittance and cross-border payment, but its performance is hampered by technical and market dynamics\u00a0. The high latency of the public Bitcoin network requires bitcoin to be tied up in multiple exchange and brokerage environments, while its limited privacy adds to the costs of operation. Due to market fragmentation, local currency trade in bitcoin can be subject to illiquidity. As a result, many commercial entities choose to operate distinct, higher-frequency methods of exchange\u00a0. These attempts to work around Bitcoin's inherent limitations introduce weaknesses due to centralization or other failings\u00a0.\n\nWe have developed a specific solution called Liquid, designed to make international exchanges more efficient by utilizing bitcoin. The solution is presented in Figure\u00a0, and it is the first implementation of a Strong Federation. As a Strong Federation, Liquid has novel security and trust assumptions, affording it much lower latency than Bitcoin's blockchain, with a trust model stronger than that of other, more centralized, systems (though nonetheless weaker than Bitcoin)\u00a0. Today, the implementation allows for one-minute blocks. It will be possible to reduce this time down to the network traversal time required to reach the pre-commit and agreement thresholds, as discussed in Section\u00a0. This trade-off is worthwhile in order to enable new behaviors, serving commercial needs that neither the Bitcoin blockchain nor centralized third parties can provide.\n\nLiquid is a Strong Federation where the functionaries are exchanges participating in the network, and an asset is some currency that is transferred from Alice to Bob. As shown in Figure\u00a0, when Alice wants to send money to Bob, she contacts her preferred exchange. The local node of that exchange finds the local node of another exchange within the Liquid Strong Federation that is willing to trade and able to move assets to Bob. The two exchanges negotiate the terms, namely the exchange rate and execution time, and notify Alice of the result. If she agrees, the assets are transferred to Bob. A very similar scheme could be carried out directly on the Bitcoin blockchain, but the transaction would have to be approved by the whole network, causing a substantial delay in settlement. Because Liquid operates on a sidechain, a multisignature scheme is used instead: if 8 out of 11 participants of the Strong Federation agree on the settlement, Bob receives his money.\n\nA decrease in *latency* in Liquid results in an increase in the speed of transaction finality.
This in turn reduces the risk of bitcoin valuation changes during transaction settlement time \u2013 a key component of successful arbitrage and remittance operations\u00a0. The remitter will eventually receive the initial sender's bitcoin, but will have mitigated a substantial portion of the downside volatility risk by executing closer to the time of sale.\n\nThanks to the decrease in transfer times reducing the cost of arbitrage, Liquid participant markets will function as if they were a unified market. In addition, because Liquid assets are available at multiple fiat on- and off-ramps with relatively little delay, a remitter can settle for fiat in two or more locations in different currencies at price parity. Essentially, Liquid lowers *capital constraints* relative to money held at varied end-points in the exchange cycle as a result of the network structure.\n\nBy moving the bitcoin-holding risk, intrinsic to the operation of exchange and brokerage businesses, from a SPOF introduced by a single institution to a federation of institutions, Liquid improves the underlying security of the funds held within the network. By increasing the security of funds normally subject to explicit counterparty risk, Liquid improves the underlying reliability of the entire Bitcoin market.\n\nImprovements in *privacy* come thanks to the adoption of Confidential Transactions, a specific addition to Strong Federations that are discussed further in Section\u00a0. This provides users of the system stronger commercial privacy guarantees.\n\nStrong Federations such as Liquid improve privacy, latency, and reliability without exposing users to the weaknesses introduced by third-party trust. By moving business processes to Liquid, users may improve their efficiency and capital-reserve requirements.\n\n## Other Financial Technology\n\nSignificant portions of current financial service offerings are dependent on trusted intermediaries (and shared legal infrastructure when this trust breaks down) or centralized systems for operation\u00a0. They have the potential to be supplanted by new, publicly verifiable consensus systems such as Bitcoin, which offer improvements to security and reliability\u00a0.\n\nAs an example, liquidity provisioning is the primary business model of Prime Brokerages and Investment Banks\u00a0. Fund managers commit their funds to a single location's custodianship under the premise of reducing costs associated with investment management and improving access to both investment opportunities and liquidity. Third-party broker-dealers then grant each participant access to the liquidity of their respective counterparties \u2013 a function of aggregation of capital under a single trusted third-party custodian\u00a0. This system offers investors a means of preferential access to liquidity by enabling customers to buy, sell, and hedge trades with their respective counterparties in a single location. These centralized systems provide convenience to market participants, but are not without risks. One realized example of these risks is that of the Eurosystem following the global financial crisis, in the wake of the financial default of the Lehman Brothers. The effort of the Eurosystem to liquidate assets collateralized by 33 complex securities took more than four years, and resulted in over EUR 1 billion in losses\u00a0.\n\nThe implicit centralization and dependent trust that arise from systems like these can be resolved via a Strong Federation. 
It can remove the element of trust when claiming ownership and prevent transactions of uncollateralized assets, while also allowing auditing by existing and new members of the system. Furthermore, ownership of assets can be proven and verified publicly.\n\n# Innovations\n\nIn this section, major highlights of the presented design are discussed including: improvements to Determinism, Latency, and Reliability; expansions of Privacy and Confidentiality; improvements to system integrity with Hardware Security; modifications to Native Assets; and Bitcoin wallet protections via Peg-out Authorization.\n\n## Determinism, Latency, and Reliability\n\nWhile Bitcoin's proof-of-work is a stochastic process, the Strong Federation scheme is deterministic, where each block is expected to be produced by a single party. Therefore *reorganizations cannot happen*, unlike in Bitcoin where they are an ordinary fact of life. In a Strong Federation, blocksigners need only to obtain consensus amongst themselves before extending history; since they are a small, well-defined set, the network heartbeat can be *significantly* faster than in Bitcoin. This means that users of Strong Federations can consider a single confirmation to indicate irreversibility; that confirmation can occur as quickly as information can be broadcast between the federation members and processed into a block. It also means that blocks will be produced reliably and on schedule, rather than as a stochastic process where the heartbeat is actually a mean time.\n\n## Privacy and Confidentiality\n\nThough many users presume that a blockchain inherently provides strong privacy, this has repeatedly been shown to be false\u00a0\u00a0\u00a0\u00a0\u00a0. The Liquid implementation of Strong Federations uses Confidential Transactions (CT)\u00a0 to cryptographically verify users' behavior without providing full transparency of transaction details. As a result, the transfer of assets within a Strong Federation is guaranteed to be private between counterparties, while verifiably fair to network participants.\n\nIn order to protect confidentiality, CT blinds the amounts of all outputs to avoid leaks of information about the transaction size to third parties. It is also possible to combine inputs and outputs using traditional Bitcoin privacy techniques, such as CoinJoin\u00a0. In typical application, such mechanisms are substantially weakened by the presence of public amounts\u00a0, which can be used to determine mappings between owners of inputs and outputs, but in Liquid, the transaction graph no longer exposes these correlations\u00a0.\n\nThe use of CT in Liquid is important for two main reasons: commercial usability and fungibility. When it comes to the former, most companies would not be able to operate if their internal ledgers and financial actions were entirely public, since private business relationships and trade secrets can be inferred from transaction records. When CT is introduced this is no longer a problem as the detailed information about the trades is hidden. It is also important to improve the fungibility, because otherwise the history of an asset can be traced through the public record. This can be problematic in the case of \"tainted money\", which the authorities in a given jurisdiction define as illegal or suspicious\u00a0. If an asset's history can be backtraced, then users of the network may find themselves obligated to ensure they are not receiving those assets. 
Such forensic work puts a large technical burden on users and operators of a network and may not even be possible across multiple jurisdictions whose definitions of taint are conflicting or ill-defined\u00a0. This is a potential danger for any type of system that enables the passing back and forth of value with a history, but one that can be corrected with improved fungibility.\n\nUnfortunately, CT comes at a technical cost: transactions are much larger[^5] and take correspondingly longer to verify. All transactions in Liquid use CT by default, making operation of the network computationally intensive. Mimblewimble\u00a0 introduces a scheme by which full security may be achieved without full historical chain data, and by which transactions within blocks can no longer be distinguished from each other. This gives stronger privacy than CT alone with better scaling properties than even Bitcoin without CT. The benefits to fungibility and privacy of such a system are readily apparent. Further research will be allocated towards investigating Mimblewimble as a means of confidentially transacting.\n\n## Hardware Security\n\nIn Strong Federations, the $k$-of-$n$ signing requirement requires full security of the hardware, which will be distributed across multiple unknown locations and conditions. The signing keys need to be stored on the devices and not on the server for a simple reason: otherwise, even if the application code was flawless and the userspace code was minimized, a networking stack vulnerability could be exploited in order to gain access to the host and then any keys. While efforts have been made over the years to segment memory and create boundaries through virtualization, memory protection, and other means, the industry has not yet been completely successful\u00a0. The best solution today is to use simplified interfaces and physical isolation; Liquid specifically creates a separate hardened device for key storage and signing in order to significantly reduce the number of avenues of attack.\n\nWhile it is true that public review of cryptographic algorithms and protocols improves the security of a system, the same cannot be said for public review of hardware designs. Indeed, any measure will eventually be defeated by an attacker with an infinite supply of sample hardware. However, if a piece of hardware requires expensive, highly specialized equipment and skills to examine, it reduces the set of people who might be interested in (and capable of) attacking it. This is even more true when a technique used to break a system is destructive, requiring multiple copies of any given hardware\u00a0.\n\nUnfortunately, the value of hardware obfuscation for security purposes holds only until the system is broken. After an attack is published, the only way to protect the hardware is to change its design. Thus, Strong Federation hardware includes a reactive system that, when under attack, either sends an alert or simply deletes the information that it determines is likely to be targeted. Traditionally, hardware security modules do this when they register a significant environmental change such as sudden heating or cooling, a temperature out of expected operating ranges, persistent loss of access to the internet, or other environmental fluctuations\u00a0.\n\n## Native Assets\n\nStrong Federations support accounting of other digital assets, in addition to bitcoin. These *native assets* can be issued by any user and are accounted for separately from the base bitcoin currency. 
A participant issues such assets by means of an asset-generating transaction, optionally setting the conditions by which additional issuance can take place in the future:\n\n1. The asset issuer decides on a policy for the asset it is generating, including out-of-band conditions for redemption.\n\n2. The issuer creates a transaction with one or more special *asset-generating inputs*, whose value is the full issuance of the asset. This transaction, and an asset's position in it, uniquely identifies the asset. Note that the initial funds can be sent to multiple different outputs.\n\n3. The asset-generating transaction is confirmed by a Strong Federation participant and the asset can now be transacted. The issuer distributes the asset as necessary to its customer base, using standard Strong Federation transactions.\n\n4. Customers wishing to redeem their asset tokens transfer their asset holdings back to the issuer in return for the out-of-band good or service represented. The issuer can then destroy the tokens (i.e. by sending them to an unspendable script like OP_RETURN).\n\nToday, users can only trade with one asset type at a time; however, the design allows for multiple assets to be involved in a single transaction. In such cases, consensus rules ensure that the accounting equation holds true for each individual asset grouping. This allows the exchange of assets to be trustless and conducted in a single transaction without any intermediary. For that to happen, two participants who wish to trade asset A for asset B jointly agree on an exchange rate out-of-band and produce a transaction with an A input owned by the first party and an A output owned by the second. The participants then add to the same transaction a B input owned by the second party and a B output owned by the first. The result is a single transaction with equal input and output amounts for each asset, which is valid only once both parties sign it; by signing, both parties finalize the transaction and execute the trade.\n\nNotably, this mechanism allows the exchange not only of currencies but of any other digital assets: data, goods, information. The protocol could be further improved with more advanced sighash mechanisms.\n\n## Peg-out Authorization\n\nWhen moving assets from any private sidechain with a fixed membership set but stronger privacy properties than Bitcoin, it is desirable that the destination Bitcoin addresses be provably under the control of some user of the sidechain. This prevents malicious or erroneous behavior on the sidechain (which can likely be resolved by the participants) from translating to theft on the wider Bitcoin network (which is irreversible).\n\nSince moving assets back to Bitcoin is mediated by a set of watchmen, who create the transactions on the Bitcoin side, they need a dynamic private whitelist of authorized keys. That is, the members of the sidechain, who have fixed signing keys, need to be able to prove control of some Bitcoin address without associating their own identity to it, only the fact that they belong to the group. We call such proofs *peg-out authorization proofs* and have accomplished this with the following design:\n\n1. **Setup.** Each participant $i$ chooses two public-private keypairs: $(P_i, p_i)$ and $(Q_i, q_i)$. Here $p_i$ is an \"online key\" and $q_i$ is the \"offline key\". The participant gives $P_i$ and $Q_i$ to the watchmen.\n\n2. 
**Authorization.** To authorize a key $W$ (which will correspond to a individually controlled Bitcoin address), a participant acts as follows.\n\n 1. She computes $$L_j = P_j + H(W + Q_j)(W + Q_j)$$ for every participant index $j$. Here $H$ is a random-oracle hash that maps group elements to scalars.\n\n 2. She knows the discrete logarithm of $L_i$ (since she knows the discrete logarithm of $P_i$ and chooses $W$ so she knows that of $W + Q_i$), and can therefore produce a ring signature over every $L_i$. She does so, signing the full list of online and offline keys as well as $W$.\n\n 3. She sends the resulting ring signature to the watchmen, or embeds it in the sidechain.\n\n3. **Transfer.** When the watchmen produce a transaction to execute transfers from the sidechain to Bitcoin, they ensure that every output of the transaction either (a) is owned by them or (b) has an authorization proof associated to its address.\n\nThe security of this scheme can be demonstrated with an intuitive argument. First, since the authorization proofs are ring signatures over a set of keys computed identically for every participant, they are zero-knowledge for which participant produced them\u00a0. Second, the equation of the keys $$L_j = P_j + H(W + Q_j)(W + Q_j)$$ is structured such that anyone signing with $L_i$ knows either:\n\n1. The discrete logarithms of $W$, along with $p_i$ and $q_i$; or\n\n2. $p_i$, but neither $q_i$ nor the discrete logarithm of $W$.\n\nIn other words, compromise of the online key $p_i$ allows an attacker to authorize \"garbage keys\" for which nobody knows the discrete logarithm. Only compromising both $p_i$ and $q_i$ (for the same $i$) will allow an attacker to authorize an arbitrary key.\n\nHowever, compromise of $q_i$ is difficult because the scheme is designed such that $q_i$ need not be online to authorize $W$, as only the sum $W + Q_i$ is used when signing. Later, when $i$ wants to actually use $W$, she uses $q_i$ to compute its discrete logarithm. This can be done offline and with more expensive security requirements.\n\n# Evaluation\n\nThe information moved through Strong Federations will be very sensitive. As a result, a thorough understanding of the potential security threats is crucial. This is particularly important when dealing with Bitcoin, where transactions are irrevocable. In other words, continued operation of the network is a secondary priority; few would choose to have their money move rapidly into the hands of a thief over a delayed return to their own pockets. As the aggregate value of assets inside a Strong Federation increases, the incentives for attackers grow, and it becomes crucial they cannot succeed when targeting any functionary nor the maintainer of the codebase. Thankfully, as participants in a Strong Federation scale up the value of assets flowing through the system, they will be naturally incentivized to take greater care of access to the federated signers under their control. Thus, the federated security model neatly aligns with the interests of its participants.\n\n## Comparison to Existing Solutions\n\nExisting proposals to form consensus for Bitcoin-like systems generally fall into two categories: those that attempt to preserve Bitcoin's decentralization while improving efficiency or throughput and those that adopt a different trust model altogether. In the first category are GHOST\u00a0, block DAGs\u00a0\u00a0, and Jute\u00a0. 
These schemes retain Bitcoin's model of blocks produced by a dynamic set of anonymous miners, and depend on complex and subtle game-theoretic assumptions to ensure consensus is maintained in a decentralized way. The second category includes schemes such as Stellar\u00a0 and Tendermint\u00a0, which require new participants to choose existing ones to trust. These examples have the failure risks associated with trusted parties, which when spread across complex network topologies lead to serious but difficult-to-analyze failure modes\u00a0.\n\nOur proposal works in the context of a fixed set of mutually distrusting but identifiable parties, and therefore supports a simple trust model: as long as a quorum of participants act honestly, the system continues to work.\n\nParallel to consensus systems are systems that seek to leverage existing consensus systems to obtain faster and cheaper transaction execution. The primary example of this is the Lightning network\u00a0, which allows parties to transact by interacting solely with each other, only falling back to the underlying blockchain during a setup phase or when one party fails to follow the protocol. We observe that since these systems work on top of existing blockchains, they complement new consensus systems, including the one described in this paper.\n\nA novel proposal has been presented recently by Eyal et al.\u00a0. Although not yet available on the market, their Bitcoin-NG scheme is a new blockchain protocol designed to scale. Based on the experiments they conduct it seems like their solution scales optimally, with bandwidth limited only by the capacity of the individual nodes and latency limited only by the propagation time of the network. However, there may be game-theoretic failings or denial-of-service vectors inherent to their design that have yet to be explicated.\n\n## Protection Mechanisms\n\nAn attacker must first communicate with a system to attack it, so the communication policies for a Strong Federation have been designed to isolate it from common attack vectors. Several different measures are taken to prevent untrustworthy parties from communicating with functionaries:\n\n- Functionary communications are restricted to hard-coded Tor Hidden Service addresses known to correspond to known-peer functionaries.\n\n- Inter-functionary traffic is authenticated using hard-coded public keys and per-functionary signing keys.\n\n- The use of Remote Procedure Calls (RPC) is restricted on functionary hardware and on Liquid wallet deployments to callers on the local system only.\n\nAbove and beyond, the key policy works to protect the network. While the blocksigners are designed with secret keys that are unrecoverable in any situation, the watchmen keys must be created with key recovery processes in mind. Loss of the blocksigner key would require a hard-fork of the Strong Federation's consensus protocol. This, while difficult, is possible and does not risk loss of funds. However, loss of sufficient watchmen keys would result in the loss of bitcoin and is unacceptable.\n\nAlthough the Strong Federation design is Byzantine robust, it is still very important that functionaries avoid compromise. Given tamper-evident sensors designed to detect attacks on functionaries, if an attack is determined to be in progress, it is important to inform other functionaries in the network that its integrity can no longer be guaranteed. 
In this case, the fallback is to shutdown the individual system, and in a worst case scenario, where the Byzantine robustness of the network is potentially jeopardized, the network itself should shutdown. This ensures a large safety margin against system degradation \u2013 assuring both the direct security of users' funds and the users' confidence in the system's continued correct operation.\n\n## Backup Withdrawal\n\nThe blockchain for the Liquid implementation of Strong Federations is publicly verifiable, and it should be possible, in principle, for holders of bitcoin in Liquid to move their coins back to Bitcoin even under conditions where the Liquid network has stalled (due to DoS or otherwise).\n\nThe most straightforward way of doing this would be for watchmen to provide time-locked Bitcoin transactions, returning the coins to their original owners. However, this updates the recipient-in-case-of-all-stall only at the rate that time-locked transactions are invalidated, which may be on the order of hours or days. The actual owners of coins on the Liquid chain will change many times in this interval, so this solution does not work. Bitcoin does not provide a way to prove ownership with higher resolution than this.\n\nHowever, it is possible to set a \"backup withdrawal address\" that is controlled by a majority of network participants, functionaries, and external auditors. This way, if Liquid stalls, it is possible for affected parties to collectively decide on appropriate action.\n\n## Availability and Denial of Service\n\nThere are two independent thresholds involved when signing blocks in a Strong Federation: the signing and precommit thresholds. The former is an unchangeable property of the network and may be set with resilience in mind. It may also be adjusted to a more advanced policy that supports backup blocksigners that are not normally online. The precommit threshold, on the other hand, is determined only by the signers themselves and may be set to a high level (even requiring unanimity of signers) and changed as network conditions require. This means that even if the network block signature rules in principle allow Byzantine attackers to cause forks, in practice malicious users are (at worst) limited to causing a denial of service to the network, provided that the blocksigners set a high enough precommit threshold.\n\nA software bug or hardware failure could lead to a breakdown in a single functionary such that it temporarily no longer functions. Such a participant would no longer be able to take part in the consensus protocol or be able to approve withdrawals to Bitcoin. Unless enough functionaries fail so that the signing threshold is not achievable, the network will be unaffected[^6]. In such a case, funds will be unable to move (either within the sidechain or back to Bitcoin) for the duration of the outage. Once the functionaries are restored to full operation, the network will continue operating, with no risk to funds.\n\n## Hardware Failure\n\nIf a blocksigner suffers hardware failure, and cryptographic keys are not recoverable, the entire network must agree to change signing rules to allow for a replacement blocksigner.\n\nA much more serious scenario involves the failure of a watchman, as its keys are used by the Bitcoin network and cannot be hard-forked out of the current Bitcoin signature set. If a single watchman fails, it can be replaced and the other watchmen will be able to move locked coins to ones protected by the new watchman's keys. 
However, if too many watchmen fail at once, and their keys are lost, bitcoin could become irretrievable. As mentioned in Section , this risk can be mitigated by means of a backup withdrawal mechanism.\n\nPrevention mechanisms include extraction and backup withdrawal of watchman key material, so that bitcoin can be recovered in the event of such a failure. The encryption of extracted keys ensures that they can only be seen by the original owner or an independent auditor. This prevents individual watchmen operators (or anyone with physical access to the watchmen) from extracting key material that could be used to operate outside of the sidechain.\n\n## Rewriting History\n\nIt is possible that blocksigners could attempt to rewrite history by forking a Strong Federation blockchain. Compared to Bitcoin, it is quite cheap to sign conflicting histories if one is in possession of a signing key.\n\nHowever, rewriting the chain would require compromising the keys held in secure storage on a majority of blocksigners. Such an attack is an unlikely scenario, as it would require determining the locations of several signers, which are spread across multiple countries in multiple continents, and either bypassing the tamper-resistant devices or else logically accessing keys through an exploit of the underlying software.\n\nFurther, such an attack is detectable, and a proof (consisting simply of the headers of the conflicting blocks) can be published by anybody and used to automatically stop network operation until the problem is fixed and compromised signers replaced.\n\nIf the network was forked in this way, it might be possible for active attackers to reverse their own spending transactions by submitting conflicting transactions to both sides of the fork. Therefore, any \"valid\" blocks that are not unique in height should be considered invalid.\n\n## Transaction Censorship\n\nBy compromising a threshold number of blocksigners, an attacker can potentially enforce selective transaction signing by not agreeing to sign any blocks that have offending transactions and not including them in their own proposed blocks. Such situations might occur due to a conflict between legitimate signers or the application of legal or physical force against them.\n\nThis type of censorship is not machine-detectable, although it may become apparent that specific blocksigners are being censored if they have many unsuccessful proposal rounds. It will be obvious to the affected network participants that something is happening, and in this case the Strong Federation may use the same mechanism to replace or remove the attacking signers that is used to resolve other attacks.\n\n## Confiscation of Locked Bitcoins\n\nIf enough watchmen collude, they can overcome the multisig threshold and confiscate all the bitcoin currently in the sidechain.\n\nThe resilience against such attacks can be improved by setting a high signing threshold on the locked bitcoins. This can exclude all but the most extreme collusion scenarios. However, this weakens resilience against failures of the watchmen whose key material is lost. The cost-benefit analysis will have to be done as federated signing technology matures.\n\n# Future Research\n\nWhile Strong Federations introduce new technology to solve a variety of longstanding problems, these innovations are far from the end of the road. 
The ultimate design goal is to have a widely distributed network in which the operators are physically unable to interfere or interact with application-layer processes in any way, except possibly by entirely ceasing operation, with backup plans to retrieve the funds to the parent chain.\n\n## Further Hardening of Functionaries\n\nMore research should be done to ensure that functionaries cannot be physically tampered with, and that network interactions are legitimate and auditable. Methods for future hardening could include specific design improvements or further cryptographic arrangements. In a Strong Federation, compromised functionaries are unable to steal funds, reverse transactions, or influence other users of the system in any way. However, enough malicious functionaries can always stall the network by refusing to cooperate with other functionaries or by shutting down completely. This could freeze funds until an automatic withdrawal mechanism starts.\n\nAs such, it would be beneficial to research possibilities for creating incentive structures and methods to encourage functionary nodes to remain online under attack. This could be done, for example, by requiring that they periodically sign time-locked transactions. These incentives could prevent certain denial-of-service attacks.\n\n## Enlightening Liquid\n\nThe privacy and speed of a Strong Federation could be further improved by combining it with Lightning\u00a0. Just as with Bitcoin's network, the throughput of initial systems built with this architecture is limited on purpose, as the transactions are published in blocks that must be made visible to all participants in the network. This threshold is set by the need for everyone to see and validate each operation. Even with a private network that mandates powerful hardware, this is a serious drawback.\n\nWith Lightning, individual transactions only need to be validated by the participating parties\u00a0. This dramatically reduces the verification load for all participants. Because end-to-end network speed is the only limiting factor\u00a0, it also greatly reduces the effects of network latency.\n\nFurthermore, nodes in a Strong Federation could route payments via Lightning, a network of bidirectional payment channel smart contracts. This may allow for even more efficient entry and exit from the Liquid network. Finally, Lightning can replace inter-chain atomic swap smart contracts and probably hybrid multi-chain transitive trades\u00a0 without having the limitation of a single DMMS chain.\n\n## Confidential Assets\n\nConfidential Transactions (CT) hide the *amounts* but not the *types* of transacted assets, so its privacy is not as strong as it could be. However, CT could be extended to also hide the asset type. For any transaction it would be impossible to determine which assets were transacted or in what amounts, except by the parties to the transaction. Called *Confidential Assets* (CA), this technology improves user privacy and allows transactions of unrelated asset classes to be privately spent in a single transaction. However, the privacy given to assets is qualitatively different than that of CT.\n\nConsider a transaction with inputs of asset types A and B. All observers know that the outputs have types A and B, but they are unable to determine which outputs have what types (or how they are split up or indeed anything about their amounts). Therefore all outputs of this transaction will have type \"maybe A, maybe B\" from the perspective of an observer. 
Suppose then that an output of type A, which is a \"maybe A, maybe B\" to those not party to the transaction, is later spent in a transaction with an asset type C. The outputs of this transaction would then be \"maybe A, maybe B, maybe C\" to outside observers, \"maybe A, maybe C\" to those party to the first transaction, and known to those party to the second.\n\nAs transactions occur, outputs become increasingly ambiguous as to their type, except to individual transactors, who know the true types of the outputs they own. If issuing transactions always have multiple asset\u2013types, then non-participant observers never learn the true types of any outputs.\n\n## Byzantine-Robust Upgrade Paths\n\nMost hardening approaches rely on a central, trusted third-party who can provide upgrades: operating systems and other critical software wait for signed software packages, generated in locked down build labs, then hosts retrieve these packages, verify signatures, and apply them, often automatically. This would undermine the threat model of a Strong Federation, as any SPOF can be compromised or coerced to comply. All aspects of the change control system must instead be defensibly Byzantine secure. In any large system, one must assume some part of it may be in a state of failure or attack at any point in time. This means that what can be a simple process for a central authority becomes somewhat more complex.\n\nUnfortunately, creating an agile network, or a system that is upgradable, requires a security tradeoff. An ideal balance is hard to strike: as a network's independence grows, the cost and difficulty in upgrading also increases. As such, it is important that all changes in the code should be opt-in for all parties and the process should be consensus ($k$-of-$n$) driven across the functionary set. These changes should also be fully auditable and transparent prior to application.\n\nUltimately, the processes of maintenance, of new member additions, or of strict improvements to the network must also be Byzantine secure for the whole system to be Byzantine secure. For Bitcoin, this is achieved by a long-tail upstream path which is an audited and open-source procedure, and ultimately the consensus rules each user decides to validate are self-determined (i.e., there may be permanent chain splits in case of controversial changes).\n\nFor Strong Federations, this will be achieved through the design and implementation of an upgrade procedure that enables iterative improvement to the system without enabling attack surfaces by emulating Bitcoin's soft-fork upgrade path. This is presented in Figure\u00a0, and follows the steps:\n\n1. An upstream software provider (USP) writes software updates for the functionary network and provides those updates to the functionaries for implementation.\n\n2. An external security auditor may be used to review the software update and documentation for correctness, verifying the accuracy of the documentation and\/or the codebase itself.\n\n3. Each functionary verifies the signatures from the USP and possibly the third-party auditor, and may also review or audit the updates if it wishes.\n\n4. Each functionary signs the update on the server and returns the resulting signature to the USP.\n\n5. Once a supermajority of functionaries have signed the update, the USP combines their signatures and the update image into a single package. 
This file, consisting of the update image, documentation, and a supermajority of functionary signatures, is then distributed to each functionary.\n\n6. Each functionary receives the USP and supermajority signatures on their server.\n\n7. Each functionary verifies the package contents and applies the update.\n\nNote that this situation assumes honest participants. There are scenarios in which, for instance, a single group of collaborating malicious functionaries can collectively reject any given upgrade path. Methods of combating this scenario will be further investigated.\n\n# Conclusion\n\nThe popularity of Bitcoin shows that permissionless proof-of-work is an effective mechanism for developing an infrastructure, with hundreds of millions of dollars\u00a0 across dozens of companies being invested in new innovations spanning chip and network design, datacenter management, and cooling systems\u00a0. The value of the security offered by this conglomerate of resources is immense. There is, however, a drawback of the proof-of-work underlying Bitcoin\u00a0: the addition of latency (the block time) to establish widely distributed checkpoints for the shared, current state of the ledger.\n\nThis paper introduces the Strong Federation: a federated consensus mechanism which significantly mitigates a number of real-world systemic risks when used in conjunction with proof-of-work. The solution is resilient against broad categories of attacks via specific implementation decisions and minimization of attack surfaces. Strong Federations improve blockchain technology by leveraging sidechain technology. Furthermore, market enhancements utilizing Confidential Transactions and Native Assets are proposed.\n\nThis paper proposes a methodology that utilizes hardware security modules (HSMs) for enforcing consensus. Currently HSMs have limited ability to verify that their block signatures are only used on valid histories that do not conflict with past signatures. This arises both because of the performance limitations in secure hardware and because anything built into an HSM becomes unchangeable, making complex rule sets difficult *and risky* to deploy. Improved verification requires HSMs to support an upgrade path that is sufficiently capable while being hardened against non-authorized attempts at upgrading. Alternatively, every software deployment may imply a new hardware HSM deployment, but that's not cost efficient.\n\nThe first working implementation of a Strong Federation is Liquid\u00a0\u2013\u00a0a Bitcoin exchange and brokerage multi-signature sidechain that bypasses Bitcoin's inherent limitations while leveraging its security properties. In Liquid, Bitcoin's proof-of-work is replaced with a $k$-of-$n$ multisignature scheme. In this model, consensus history is a blockchain where every block is signed by the majority of a deterministic, globally distributed set of functionaries running on hardened platforms, a methodology that directly aligns incentives for the participants.\n\nStrong Federations will be useful in many general-purpose industries \u2013 especially those that seek to represent and exchange their assets digitally and must do so securely and privately without a single party that controls the custodianship, execution, and settling of transactions.\n\n# Acknowledgements\n\nWe thank Matt Corallo and Patrick Strateman for their substantial commentary and contribution in the formation of the ideas and process behind this paper. 
We'd also like to thank Eric Martindale, Jonas Nick, Greg Sanders, and Kat Walsh for their extensive review. Finally, we thank Kiara Robles for her excellent figures and diagrams.\n\n[^1]: The capitalized \"Bitcoin\" is used to talk about the technology and the engine, while the lowercase \"bitcoin\" is used to refer to the currency.\n\n[^2]: In most traditional systems, the speed of transaction is achieved by instant execution and delayed settlement.\n\n[^3]: The hashpower, or hash rate, is the measuring unit of the processing power used to secure the Bitcoin network\n\n[^4]: If \"Byzantine failures\" in a network are caused by nodes that operate incorrectly by corrupting, forging, delaying, or sending conflicting messages to other nodes, then Byzantine robustness is defined as a network exhibiting correct behavior while a threshold of arbitrarily malfunctioning nodes (nodes with Byzantine failures) participate in the network. \u00a0\n\n[^5]: The range of values CT can support include proofs that are often order-of-magnitude larger size than ordinary Bitcoin outputs and can be made larger depending on user requirements.\n\n[^6]: Downtime for a functionary does cause degradation in throughput performance as that signer's turn will have to be \"missed\" each round.","meta":{"dup_signals":{"dup_doc_count":11,"dup_dump_count":2,"dup_details":{"curated_sources":2,"unknown":9}},"filename":"out\/1612.05491_extract_bare_conf.tex.md"},"subset":"arxiv"} +{"text":"abstract: Astronomy and science are fields in which specific groups remain under-represented despite multiple studies that investigate this issue and propose solutions. In this article, we analyze the demographics and social behavior of the exoplanet direct imaging community. Our focus is on identifying possible under-representation among this group, and quantifying inappropriate social behaviors. During the Spirit of Lyot conference 2019 (Tokyo, Japan), we conducted a survey that gathered a participation rate of 53%. We analyzed the data collected under the prisms of gender balance and seniority representation. The proportions of women and of non-binary persons reveal a more diverse community in comparison to the other scientific groups (e.g. the IAU members), but still far from a balanced representation of all genders. Early-career scientists appear to have a lower visibility in the field than permanent researchers, with PhD students being under-represented at international conferences, and postdocs being excluded from conference Science Organizing Committees. Regarding social relations, the results are alarming, in particular when it comes to self-censoring of women or to unprofessional behavior, which was experienced by 54% of this community (gender-biased behavior: 29%; oral interruption: 33%; inappropriate behavior: 33%), and in particular by women. We recommend the community to become pro-active to build a safe environment and to continue its inclusion efforts. One aspect could be to systematically include socio-demographic surveys in conference registration forms to monitor the evolution of the community, in particular at larger scales. 
To do so, the survey questions are available on GitHub.\nauthor: Lucie Leboulleux; \u00c9lodie Choquet; Elsa Huby; Garima Singh; Faustine Cantalloube\nbibliography: bib.bib\ntitle: A socio-demographic study of the exoplanet direct imaging community\n\n# Introduction\n\nScience, Technology, Engineering and Mathematics (STEM) are fields traditionally affected by low workforce diversity, despite inclusion being repeatedly pointed out as necessary to increase the performance and the quality of a workplace . In particular, the gender gap has been studied from various angles: access to permanent positions , responsibilities , proposal acceptance rate , conference attendance , citations , etc. \u2013 and on various scales: in general science and technology , in astronomy , at the scale of a country , of an instrument , of an institute , of a sub-field of astronomy , etc. Despite these multiple studies, gender imbalance is still significant and an ongoing issue. In order to stimulate changes of behavior and improve the situation, under-representation and biased behaviors need to be recognized as an issue, openly discussed, and monitored within the STEM community rather than minimized or disregarded in comparison to scientific questions. To do so, more studies are needed to monitor the evolution of the demographics and of the behaviors, and to extend the awareness to other minorities under-represented in science.\n\nIn astronomy, several recent studies focused on factors impacting the visibility of women and having a direct effect on their professional evolution and\/or recognition. In particular, provided a detailed overview of the reasons causing women to leave astronomy and more specifically the field of adaptive optics. The authors pointed out numerous factors, including social misconduct towards women and the impostor syndrome, to which women in this field are disproportionately subject. They also formulated a number of propositions to counterbalance this effect in the future. Other studies have focused on gender-based biases within selection committees that affect the visibility of women in astronomy, such as conference speaker selection, for instance at the American Astronomical Society (AAS) meeting 223 or at the 2014 to 2016 American Geophysical Union (AGU) Fall Meetings , or observing program selection, for example the Hubble Space Telescope and the ESO time allocations. These biases observed in astronomy are similar to what is seen in other fields in Science: in microbiology for instance, a study by also suggests that the gender balance within Science Organizing Committees (SOC) and conference conveners directly impacts the distribution of talks per gender at conferences, which has also been observed by in six disciplines (biology, bio-engineering, political science, history, psychology, and sociology).\n\nOther studies specifically addressed the impact of seniority on the gender balance in astronomy and pointed out important dependencies between these two aspects on demographic and behavior questions. For instance, showed that the gender ratio evolves with age or career status, suggesting that some gender-based results are degenerate with the career level of the probed population. A few studies focused on disentangling career stage from gender.
For instance, showed that during the Canadian time allocation process, gender is the only significant discrimination parameter.\n\nThese numerous studies illustrate the importance of staying alert to inequalities towards women and minorities in Science in general and in Astronomy in particular. Capturing the demographics of a community is the first step to: 1) identify under-representation issues, 2) develop solutions to improve the inclusion of all groups, and 3) set a reference point to monitor the (hopefully positive) evolution of this community. For these reasons and to complement similar studies in other sub-fields of astronomy, we probed the attendees of the Spirit of Lyot 2019 conference held in Tokyo, Japan, and obtained the first demographic snapshot of the Exoplanet Direct Imaging community.\n\nThe Spirit of Lyot (SOL) conference is a major international conference gathering the community studying extrasolar systems with high-contrast imaging instruments. Its main motivation is to bring together researchers with different expertise, ranging from observation to instrumental research and development, working towards the same scientific objective: the detection and characterization of exoplanetary systems. This conference brings together a large fraction of this community on a 4-year basis. The fourth edition of the SOL conference was held in Tokyo in October 2019, where around 200 researchers participated. Here, we report the outcome of the survey that was shared with all the attendees of the conference, thus providing a large and representative sample of the field of Direct Imaging of Exoplanets.\n\nIn section , we describe the survey proposed at the SOL conference and the methodology used to analyze the data. Section presents the general demographic overview of the participants. Sections and describe and analyze the results of the survey as a function of the gender and of the career position of the participants, respectively. The conclusion section summarizes the outcomes of our study, identifies its main limitations, and proposes solutions to overcome them.\n\n# Methodology\n\nThe results presented in this report were obtained from a survey conducted in October 2019 at the Spirit of Lyot conference in Tokyo, Japan. The survey was initiated during an unofficial splinter meeting which included around 12 female and non-binary participants interested in gender studies. Discussions during this meeting made it possible to identify broad categories as well as specific points of interest to be probed.\n\nThe complete list of questions asked in the survey is presented in appendix\u00a0. An improved, open-source version is available on GitHub: . Four main topics were addressed: the general demographics in the field, the visibility and the ability of the participants to self-promote during the SOL conference, their visibility and their recognition in the field in general, and the occurrence of unprofessional behaviors in this community. While most of the questions were objective and factual queries, a few others (questions 12 to 15, 17, 19, and 20) focused on the subjectivity or on the perception of the participants. We should point out that none of the authors has a background in social science, and the formulation and topics addressed by the questions may benefit from researchers in social science to guarantee additional neutrality.
We encourage the community, and in particular experts in demographic studies, to either commit edits to the form in the GitHub repository or directly contact the authors to improve the survey. We further discuss the implications in Sec. and , respectively.\n\nThe survey was opened on October 24th 2019, on the fourth day of the conference, and sent to all of the conference participants. It was advertised by the SOC between conference sessions and by the Local Organizing Committee (LOC) on the conference email list. The survey was open for three weeks and was closed on November 14th. A reminder email was sent to the conference participants on November 4th, at the midpoint of the survey open period. All the answers were collected on a voluntary basis, which may bias the accuracy of the results as some groups may be more responsive than others . These limitations are discussed specifically in sections and .\n\nIn the following sections, we analyze the answer ratios of each question, first for the participants of the survey as a whole, then for specific groups among the participants. We computed uncertainties on these ratios at the $66\\%$ confidence level following the study by , with uncertainties given by $\\sigma = \\sqrt{\\frac{M(N-M)}{N}}$, where $N$ is the total number of people in the study group and $M$ the number of people of the specific category analyzed in this study group. For instance, $N$ could be the number of women while $M$ could be the number of people having answered \"yes\" to a specific question in the women group.\n\n# General demographics\n\nWe collected precisely 100 answers to the survey out of a total of 190 conference participants. This gives a participation rate of 53%, a value high enough to provide statistical results representing the SOL 2019 conference participants.\n\nIn Fig.\u00a0 we present the main demographic characteristics obtained from these 100 answers.\n\n```latex\n\\begin{figure*}%[hp!]\n \\begin{center}\n \\begin{tabular}{cc}\n \\includegraphics[height=4.5cm]{Gender_2.png} & \\includegraphics[height=4.5cm]{Status_2.png} \\\\\n \\includegraphics[height=5.25cm]{Expertise_2.png} & \\includegraphics[height=5.25cm]{Country_2.png}\n \\end{tabular}\n \\end{center}\n \\caption\n { \\label{fig:General} \nOverall demographics of the participants of the Spirit of Lyot 2019 conference: these graphics show the ratios of the participants to the survey as a function of their gender, career level, field of expertise, and affiliation country.}\n \\end{figure*}\n```\n\nThe gender balance among the survey participants is as follows: 69% defined themselves as men, 29% as women, and 2% as non-binary people. These proportions confirm the under-representation of women in this field and are comparable to the ones in the other fields presented in the introduction (e.g. 27% of women in the field of Adaptive Optics, ). Compared to the whole field of Astronomy, this proportion of women in the field of Exoplanet Direct Imaging is lower than the one in the AAS , but higher than the fraction of women in the IAU (18%, data from January 2018) and equivalent to the one attending the AGU Fall Meetings . The presence of non-binary people is also encouraging, but it can hardly be compared to any reference percentage since studies including them are unfortunately too rare and should be developed; recommendations for such studies have been proposed .
Although encouraging, the representation of women and non-binary people at the SOL conference shows that significant efforts are still needed to reach an acceptable gender balance in this community.\n\nIn terms of career level distribution, 24% of the participants were PhD students, 27% post-doctorate researchers, 47% faculty researchers, and 2 people did not answer this question. Participation in the conference was thus 51% for young scientists and 47% for permanent researchers. If we assume an average ratio of two non-permanent positions per permanent position in the community, as deduced from observations in different institutes and accounting for geographical disparities (higher ratio in the USA than in European countries), we notice a significant under-representation of young-career scientists at the 2019 SOL conference. In comparison, the representation of PhD students at the AGU Fall meetings was 29% between 2014 and 2016, a ratio 5 points higher than at the Spirit of Lyot 2019 conference . This indicates a community which favors self-promotion (faculty researchers are in charge of managing travel budgets) over the promotion of young scientists in the team. This is perceived as a negative point for this community, given that PhD students and postdocs strongly rely on international conferences to progress in their career and gain visibility.\n\nThe expertise of the conference attendees was fairly spread over three main domains: instrumentation (39%), observations (33%), and a combined observational and instrumental expertise (23%). This reflects the rationale of the Spirit of Lyot conference series, which specifically aims at bringing together these fields of expertise related to the direct imaging of exoplanetary systems, a field where astrophysical results depend on complex instruments. In addition, a low but noticeable fraction of the attendees claimed expertise in theory (5%), indicating that the field also requires theoretical work to interpret its observations. Monitoring the progress of the fraction of theoreticians over time may indicate an interesting evolution of the field toward more fundamental research about the formation and evolution of exoplanets and circumstellar disks.\n\nFinally, the participants of the 2019 SOL conference came mostly from the USA (45%) and from Europe (36%, mainly France and the Netherlands at 15% and 10%, respectively). The other continents were much less represented, with only 9% of the participants coming from South America (exclusively Chile), 8% from Asia (mainly Japan, 7%), and no participant from Africa or Oceania. This is a negative sign in terms of the geographic representation in this field, in particular considering that several major instruments used by this community are installed in South America and that the conference was held in Asia. Among the possible reasons for these geographic disparities are financial constraints (different travel budgets), the importance of the field in each country (also linked to financial constraints through hiring resources), and interest in the conference within each country.\n\nIn Tables and , we show the local distributions of gender and career level, respectively, among the participants as a function of their affiliation country. We excluded the countries with only one answer to the survey from this analysis (Belgium, China, Sweden).
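As an illustration of how the percentages and uncertainties reported in the tables below can be derived from the raw answers, the following sketch applies the $66\\%$-confidence formula given above to a set of placeholder counts (the numbers are hypothetical, not the actual survey data):\n\n```python\n# Illustrative sketch: category ratios with 66%-confidence uncertainties,\n# sigma = sqrt(M * (N - M) / N), expressed here as a percentage of N.\n# The counts below are placeholders, not the actual survey answers.\nfrom math import sqrt\n\ndef ratio_with_uncertainty(M, N):\n    # Return (percentage, uncertainty in percentage points) for M out of N answers.\n    if N == 0:\n        return 0.0, 0.0\n    return 100.0 * M / N, 100.0 * sqrt(M * (N - M) / N) / N\n\nanswers = {'CountryA': {'Female': 4, 'Male': 6}, 'CountryB': {'Female': 1, 'Male': 4}}\nfor country, counts in answers.items():\n    N = sum(counts.values())\n    for gender, M in counts.items():\n        pct, err = ratio_with_uncertainty(M, N)\n        print(f'{country} {gender}: {pct:.0f} +/- {err:.0f} %')\n```\n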
Although some ratios are affected by small sample statistics and should be used carefully, significant differences appear between some countries in terms of gender and career category representation. In particular, the three most represented countries, with more than 10% of the participants each, were the USA, France, and the Netherlands. We see important differences between them: French participants were more balanced in gender than those of the other two countries, with 40% of female scientists, but more strongly favored the participation of permanent researchers over young scientists (only 40% of young scientists). Conversely, Dutch participants had a large majority of male participants (only 20% of women), but promoted their students and postdocs more strongly (80% of young scientists) over the faculties. The US participants were more balanced on these categories, and included the only non-binary scientists who answered the survey, thus showing a more diverse environment. However, their female and young-career scientists remain under-represented, with only 24% of women, and 47% of PhD students and postdocs.\n\n| Country | Female (%) | Male (%) | Non binary (%) |\n|:---------------|----------:|----------:|-----------:|\n| USA | $24\\pm6$ | $71\\pm7$ | $4\\pm3$ |\n| France | $40\\pm13$ | $60\\pm13$ | $0\\pm0$ |\n| Netherlands | $20\\pm13$ | $80\\pm13$ | $0\\pm0$ |\n| Chile | $33\\pm16$ | $67\\pm16$ | $0\\pm0$ |\n| Japan | $43\\pm19$ | $57\\pm19$ | $0\\pm0$ |\n| Switzerland | $50\\pm25$ | $50\\pm25$ | $0\\pm0$ |\n| Germany | $33\\pm27$ | $67\\pm27$ | $0\\pm0$ |\n| Canada | $0\\pm0$ | $100\\pm0$ | $0\\pm0$ |\n| United Kingdom | $50\\pm35$ | $50\\pm35$ | $0\\pm0$ |\n\nGender distribution among the participants for each country with more than one respondent to the survey.\n\n| Country | Faculty (%) | Postdoc (%) | PhD Student (%) |\n|:---------------|----------:|----------:|------------:|\n| USA | $49\\pm7$ | $29\\pm7$ | $18\\pm6$ |\n| France | $60\\pm13$ | $20\\pm10$ | $20\\pm10$ |\n| Netherlands | $20\\pm13$ | $30\\pm14$ | $50\\pm16$ |\n| Chile | $67\\pm16$ | $11\\pm10$ | $22\\pm14$ |\n| Japan | $57\\pm19$ | $14\\pm13$ | $29\\pm17$ |\n| Switzerland | $0\\pm0$ | $100\\pm0$ | $0\\pm0$ |\n| Germany | $33\\pm27$ | $33\\pm27$ | $33\\pm27$ |\n| Canada | $50\\pm35$ | $0\\pm0$ | $50\\pm35$ |\n| United Kingdom | $50\\pm35$ | $0\\pm0$ | $50\\pm35$ |\n\nCareer level distribution among the participants for each country with more than one respondent to the survey.\n\n# Gender-based analysis\n\n## Involvement of the participants\n\nIn order to analyze the interest of the conference participants in the survey and to estimate the accuracy of the results per gender category, we monitored the evolution of the respondents of each gender over the three weeks during which the survey was open. The results are plotted in Fig.\u00a0, with the number of participants per gender with time on the left and the fraction of each gender category with time on the right.\n\nIn Fig.\u00a0, left, we observe that the numbers of female and non-binary participants approach their final values very early on during the survey period (respectively within a few days and within a week of the opening date) compared to the male participation, then show a near-flat slope. Conversely, the slope of the male participation near the end of the survey period (excluding the last 5 days when no new answer was received) is steeper. 
This different slope indicates that the survey is more likely missing answers from male participants than from female and non-binary participants, and suggests a different level of interest in gender studies between these categories.\n\nSimilarly, assuming the three gender groups had a similar interest in the survey, they would have answered at the same rate and their fractions would have quickly stabilized around their final values in Fig.\u00a0, right, i.e. $69\\%$ for men, $29\\%$ for women, and $2\\%$ for non-binary people. If we exclude the last 5 days when no answer was received, we see that this expected stabilization never occurs and that the final ratios evolve until the last answer. This trend is even clearer when compared to the same analysis performed between the career level groups in Fig.\u00a0, right.\n\nFirst, we conclude that we may have an over-representation of women and an under-representation of men in this study compared to the total participants of the 2019 SOL conference. Second, it shows that women answered on average earlier than men, indicating a higher interest and involvement in the problems addressed in this survey. Finally, we notice that the reminder email sent on November 4th (vertical grey line of Fig.\u00a0) had a significant impact on the number of answers (+23), and mainly on the male participants (+20 answers).\n\nBecause of the low number of non-binary participants in the survey, the results for this sub-group suffer from large uncertainties, and reporting them would not guarantee their anonymity in the next sections of the gender analysis. We thus chose to limit the rest of the analysis to the male and female genders only. We hope that conducting a similar study on a larger sample in the future will make it possible to report the responses of marginalized genders while preserving the anonymity of the respondents.\n\n```latex\n\\begin{figure*}\n \\begin{center}\n \\begin{tabular}{cc}\n \\includegraphics[height=5cm]{Gender_Time2bis.png} &\n \\includegraphics[height=5cm]{Gender_Timebis.png}\n \\end{tabular}\n \\end{center}\n \\caption \n { \\label{fig:RatiosvsTime} \nEvolution of the number of answers per gender with time (left) and of the proportion of each gender group with time (right). The vertical grey lines indicate the date of the reminder email. We can observe that women tend to answer the survey faster than men, and the slight non-zero slopes on the right plot at the closure of the survey indicate an under-representation of male attendees in the survey and an over-representation of female attendees. The participation of non-binary attendees reaches a plateau well before the closure of the survey.}\n \\end{figure*}\n```\n\n## Distribution of career levels per gender\n\nDifferences in the distribution of men and women between the main career stages have been observed in many scientific communities, showing that women are more often graduate students than permanent researchers in comparison to male scientists (e.g. in the field of Adaptive Optics, or among AGU Fall Meeting attendees, ). Numerous factors have been studied in , all leading to the conclusion that there is a leakage of women at each career step in the field of astronomy, ultimately leading to fewer women at higher positions. For the specific case of France, inequalities in access to permanent positions are evident , with a success rate for permanent positions twice as high for men as for women. 
More generally, female scientists are less promoted and less funded than their male colleagues, generating a gap in the career distribution between the two genders . In this context, studying the career distribution per gender in our survey is of particular interest.\n\nIn Fig.\u00a0, we compare the professional positions of the male and female participants to the 2019 SOL conference survey. We see that the proportion of women with non-permanent positions is slightly higher than for men, with $31\\% \\pm 9\\%$ of female postdocs versus $25\\% \\pm 5\\%$ of male postdocs, and $45\\% \\pm 9\\%$ of female faculties versus $51\\% \\pm 6\\%$ of male faculties. It thus seems that women have more difficulty accessing permanent positions than their male colleagues. However, the large uncertainties do not allow us to draw a solid conclusion here. A similar phenomenon is commonly observed in other scientific communities: the fraction of women leaving academia accumulates along the classical research path, from graduation to permanent position, in contrast to their male colleagues (the *leaky pipeline* phenomenon) . In Fig.\u00a0, we also observe that both gender groups have the same ratio of PhD students ($24\\%$). This indicates that the general issue of women's under-representation starts at the beginning of their career, with few female students admitted into a PhD program in this field.\n\nAs discussed in Sec. , the extrapolation of the gender-based results to the whole Exoplanet Direct Imaging community may be limited by the likely under-representation of men in the survey answers. Yet, we show in Sec.\u00a0 that the three career groups participated in the survey at a similar rate, suggesting that the career level distributions are accurate and representative of this community.\n\n## Exposure and visibility at the Spirit of Lyot 2019 conference\n\n```latex\n\\begin{figure*}\n \\begin{center}\n \\begin{tabular}{cc}\n \\includegraphics[height=4.5cm]{Gender_Asked_questions_to_a_speaker_at_the_end_of_their_talk.png} &\n \\includegraphics[height=4.5cm]{Gender_Poster_pop_talk.png}\n \\end{tabular}\n \\end{center}\n \\caption\n { \\label{fig:GenderTalk} \nProportions of women and of men who asked questions at the end of a talk (left) and proportions of women and of men who asked for a poster pop talk (right) at the SOL conference.}\n \\end{figure*}\n```\n\nTo analyze the exposure of each gender group at the 2019 SOL conference, several questions in the survey specifically asked whether the participants requested a talk when they registered for the conference (question 7), and whether they were actually attributed a talk by the SOC (question 8).\n\nFrom the answers, we derive that within the speakers, $67\\%$ were men, $33\\%$ were women, and none were non-binary people. The ratios for men and women are close to the fractions of female and male participants at the conference and indicate that the conference program was representative of the binary attendees. It is regrettable, however, that no non-binary person obtained an oral presentation.\n\nThe results show that men and women requested contributed talks at equal rates ($78\\% \\pm 5\\%$ for men vs. $76\\% \\pm 8\\%$ for women) and that women were slightly more successful than men at obtaining one ($41\\% \\pm 6\\%$ for men vs $48\\% \\pm 9\\%$ for women), although the difference is not significant given the uncertainties, as illustrated by the simple check below. 
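\n\nAs a quick sanity check on this statement (a minimal sketch under the assumption, not detailed in the paper, that the two quoted uncertainties are independent and can be combined in quadrature), the difference between the two success rates can be compared with its propagated uncertainty:\n\n```python\nimport math\n\n# Hypothetical check, not from the paper: difference between the talk-success\n# rates of women and men quoted above, with the two quoted uncertainties\n# (assumed independent) combined in quadrature.\nrate_men, err_men = 41.0, 6.0        # percent\nrate_women, err_women = 48.0, 9.0    # percent\n\ndiff = rate_women - rate_men               # 7 points\nerr_diff = math.hypot(err_men, err_women)  # about 10.8 points\nprint(diff, err_diff)                      # the difference is smaller than its uncertainty\n```\n\nThe 7-point difference is smaller than its roughly 11-point combined uncertainty, consistent with the statement that the effect is not significant.\n\n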
As a comparison, reports that at larger scales (AGU Fall Meetings from 2014 to 2016), women are given fewer opportunities than men to give oral presentations. As a reason for this observation, they explain that women are predominantly PhD students compared to men, and that students represent the least invited career group. In addition, they notice that conveners are mostly men, who seem less likely to give speaking opportunities to women.\n\nThe following results about self-confidence and self-promotion at the SOL conference are also derived from specific questions in the survey. Figure\u00a0 (left) indicates that male attendees asked questions after the talks significantly more frequently than women ($50\\%$ of the male participants vs $31\\%$ of the female participants). This suggests that a larger fraction of women in this community are subject to self-censorship than men and did not consider the conference environment comfortable or welcoming enough to put themselves forward. This behavior has also been observed and studied at other conferences such as the AAS meeting 223 and the 2014 UK National Astronomy Meeting and on larger samples . In particular, the survey of the AAS meeting 223 showed that the sessions chaired by women had a higher number of questions asked by women.\n\nFinally, a similar trend is observed for the poster-pop presenters: $31\\%$ of the male participants vs $14\\%$ of the female participants asked to advertise a poster on stage (Fig.\u00a0, right). Poster pop presentations were however open to all participants without selection by the SOC. This second point thus raises questions about how women value their own work and about their confidence in front of an audience. We received explanations from 8 women for not volunteering to present their poster on stage, and sorted them into four categories: lack of information about the opportunity (50%), lack of confidence (25%), lack of interest in poster pops (12.5%), and lack of time to prepare the poster pop (12.5%). A similar analysis of the 13 explanations provided by male attendees also shows an important lack of information (31%), but much more frequently a lack of interest (31%) or a lack of time (23%) for the exercise. Only one man suggested a lack of confidence, and one mentioned difficulties in having the poster presentation approved at the institutional level.\n\n## Visibility and recognition by peers\n\n```latex\n\\begin{figure*}\n \\begin{center}\n \\begin{tabular}{cc}\n \\includegraphics[height=4.5cm]{Gender_Attended_an_international_conference_in_2018.png} &\n \\includegraphics[height=4.5cm]{Gender_Invited_to_join_a_SOC_of_an_international_conference_in_2018.png}\n \\end{tabular}\n \\end{center}\n \\caption\n { \\label{fig:GenderConf} \nProportions of women and of men who attended an international conference in 2018 (left) and proportions of women and men who were invited to join the SOC of an international conference in 2018 (right).}\n \\end{figure*}\n```\n\nIn this section, we extend the question of visibility and recognition from the 2019 SOL conference to the daily professional environment. To address this topic, we focused on the general access to conferences, the participation in SOCs, and the inclusion in peer-reviewed publications as co-authors.\n\nIn terms of visibility and exposure to the community, $72\\%$ of the male participants and $62\\%$ of the female participants attended an international conference in 2018 (Fig.\u00a0, left). 
This suggests that women may have more difficulty accessing conferences than men, although the sample is too small to guarantee the significance of this result. Given the importance of exposure at conferences for being recognized by peers, advertising projects, and promoting one's career, the trend observed here in the Exoplanet Direct Imaging community should not be neglected and would need to be monitored and confirmed with a larger sample .\n\nIn terms of recognition within the field, the survey showed that in 2018, slightly more women than men were invited to join the SOC of an international conference ($21\\%$ of the female participants vs. $14\\%$ of the male participants, see Fig.\u00a0 right). The uncertainties are however also too large to confirm this trend at a significant level. If confirmed on a larger sample, it may indicate an effort in this community towards a better representation of women at conferences. However, $50\\%$ of the women having participated in a SOC in 2018 perceived that they were invited to fulfill a gender quota, and one of them specified that it was explicitly communicated to her. We note that the formulation of the survey question asking about the impression of filling a gender quota in a SOC may have been ambiguous about the time period considered and about the type of SOC (e.g. international conference vs. institutional committees, etc.), because two additional women answered affirmatively while indicating that they had not been invited to the SOC of an international conference in 2018. This ambiguity can be removed in future surveys by specifying that question 17 refers to the same SOC as question 16. In any case, these numbers raise questions about the real motivation for increasing the representation of women in SOCs and about the recognition of their scientific expertise within this community.\n\n```latex\n\\begin{figure*}\n \\begin{center}\n \\begin{tabular}{cc}\n \\includegraphics[height=4.5cm]{Gender_Unfairly_absent_for_a_list_of_co-authors.png} &\n \\includegraphics[height=4.5cm]{Gender_Unfairly_present_in_a_list_of_co-authors.png}\n \\end{tabular}\n \\end{center}\n \\caption \n { \\label{fig:GenderAuthor} \nProportions of women and of men who felt unfairly absent (left) and unfairly present (right) in a publication co-author list. }\n \\end{figure*}\n```\n\nFinally, the other aspect of recognition by peers probed in the survey concerned inclusion in publications as a co-author. Figure\u00a0 shows no significant difference between the fractions of men and of women who felt unfairly excluded from a publication. We note that, independently of the gender considerations, a large fraction of people consider that they have been excluded from author lists ($\\sim 30\\%$ in total), which may indicate a broader issue. Likewise, we see no significant difference between the fractions of men and of women who felt unfairly present in the lists of co-authors ($\\sim 18\\%$). These two results suggest that there is no gender-based discrimination in the inclusion in publication author lists within the Exoplanet Direct Imaging community. However, we note that, unlike the other questions of the survey, these two questions called on a subjective feeling and the responses were dependent on the sensitivity of the participant. 
This point is discussed further in the conclusion.\n\n## Unprofessional behaviors per gender\n\nIn this section, we analyzed the feedback to the survey questions probing the occurrence of and sensitivity to unprofessional behaviors in the Exoplanet Direct Imaging community. Similar to the previous section, some of these questions called on subjective interpretations by the respondents.\n\nFirst of all, the survey asked whether the participants had ever been interrupted while talking or prevented from talking at a professional event. Independently of the gender considerations, the total fraction of people who suffered from this behavior is 33%. This reflects poor attitudes and a lack of awareness about respecting boundaries among peers in the Exoplanet Direct Imaging community. In addition, the gender-based results revealed that $23\\pm5\\%$ of men versus $59\\pm9\\%$ of women consider they have faced this issue (Fig.\u00a0). This shows a significant gender bias on this problem, with women being 2.6 times more frequently interrupted than men. This result is particularly appalling knowing that such conduct can impact one's self-confidence, leadership, and credibility, all indirectly linked to one's visibility, recognition, and thus career evolution. We recommend that the community be more respectful in this regard and be watchful against disrespectful interruptions, in particular of women.\n\n```latex\n\\begin{figure*}\n \\begin{center}\n \\begin{tabular}{cc}\n \\includegraphics[height=5.2cm]{Gender_Experienced_at_work_or_at_a_conference_a_situation_of_inappropriate_behavior_V2.png} &\n \\includegraphics[height=5.2cm]{Gender_Noticed_at_work_or_at_a_conference_a_situation_of_inappropriate_behavior_V2.png} \\\\\n \\includegraphics[height=5.2cm]{Gender_Noticed_at_work_or_at_a_conference_that_a_person_was_behaving_differently_to_you_because_of_your_gender_V2.png} &\n \\includegraphics[height=5.2cm]{Gender_Noticed_at_work_or_at_a_conference_that_a_person_was_behaving_differently_to_somebody_because_of_their_gender_V2.png}\n \\end{tabular}\n \\end{center}\n \\caption\n { \\label{fig:GenderInappropriate} \nProportions of women and of men who experienced an inappropriate behavior (top left). Fractions of women and men who noticed a situation of inappropriate behavior (top right). Fractions of women and men who experienced a difference of behavior towards themselves due to their gender (bottom left). Fractions of women and men who noticed a gender-based difference of behavior towards somebody (bottom right).}\n \\end{figure*}\n```\n\nFurthermore, $33\\%$ of the participants have already been victims of inappropriate behavior in this community. This rate is alarming as it shows a generalized and frequent behavior, revealing an unsafe work environment. It is also significantly unbalanced between men ($25\\%$ of male victims) and women ($52\\%$ of female victims), with women experiencing inappropriate behaviors 2.1 times more frequently than men, as shown in Fig.\u00a0 (top left). This implies that significant changes of conduct are urgently needed in the field of high-contrast imaging to make it a safe space for everybody, and for women in particular.\n\nComplementary to these distressing results, we also observe that 48% of the participants have already noticed a situation of inappropriate behavior, women slightly more frequently than men ($55\\%$ vs. $45\\%$, respectively, see Fig.\u00a0, top right). 
These values are somewhat encouraging as they indicate that people of both genders are aware of inappropriate behaviors and are capable of identifying such situations.\n\nFinally, we obtain from the survey that a majority of the female participants (69%) have experienced being treated differently because of their gender (see Fig.\u00a0, bottom left). In comparison, our data shows that this experience has happened to a much smaller fraction of the male participants ($13\\%$). Such a gender-based difference in behavior has no place in a professional environment because it affects the credibility, leadership, and confidence of a person and influences their professional abilities. Thus we call for a significant change of conduct within the Exoplanet Direct Imaging community. Knowing that such gender-based changes of behavior may be unintentional, we encourage the community to become more aware of its own prejudices and unintentional biases, and to be attentive to its own actions.\n\nIt is encouraging to note that 57% of the participants have already noticed such gender-based differences of behavior. However, Fig.\u00a0 (bottom right) shows that women are significantly more vigilant about such behaviors than men ($76\\%$ vs. $51\\%$, respectively).\n\nOverall, the results from this particular analysis draw a rather unsafe picture of the High-Contrast Imaging community. Combining the three types of unprofessional behaviors probed in the survey (questions 12, 14, 18), 54% of the community have experienced at least one of these situations (gender-biased behavior: 29%; oral interruption: 33%; inappropriate behavior: 33%). For women, the proportion of victims is 80% (gender-biased behavior: 69%; oral interruption: 59%; inappropriate behavior: 52%). Such events occur nearly twice as frequently for women as for men, who are affected at a rate of 43% by unprofessional behaviors.\n\nThese rates are alarmingly high and reveal an unsafe environment. Inappropriate behavior needs to be considered as a general issue and should be addressed urgently in our field. We observe that a large fraction of the community (around 50%) generally notices inappropriate behaviors or gender-based differences. According to , inappropriate behavior is one of the major problems causing women to leave academic research. The results described for the Exoplanet Direct Imaging community should thus be monitored and improved, both to build a safer workplace for everybody and to push for a better gender representation.\n\n# Results based on the career level\n\nIn this section, we analyze the results of the survey as a function of the professional position of the participants: faculty member (or similar types of permanent position), postdoctoral researcher, or PhD student. About half of the participants were faculty members (47%), and the other participants were almost equally split between postdoctoral researchers (27%) and PhD students (24%), see Fig.\u00a0. Two individuals in our data do not fit into these three predefined categories and answered \"Other\" in the career level options.\n\n## Involvement in the survey\n\nSimilar to the gender-based analysis, we analyzed the rate at which participants from the different career groups responded to the survey (Fig.\u00a0). First, we observe that the relative fractions between the three main career categories have all stabilized around their final values within the first two days of the survey period. 
Unlike the gender analysis, in which a bias against the survey was identified for the male group, the present analysis shows that all three career groups had a similar interest in the survey, as their answers were received at the same rate. It also indicates that the survey accurately captured the actual ratios of each career group attending the Spirit of Lyot conference, despite the 53% participation rate in the survey.\n\n## Gender balance per professional category\n\nThis section and the following ones focus only on the three major categories (Faculties, Postdocs, PhD students) and do not include the two \"Other\" entries, as they are not statistically significant in this sample.\n\nFigure\u00a0 shows the gender balance in each professional category. It shows the same results as Fig.\u00a0 but from a different perspective. Here, we see that gender imbalance exists at the three main career steps in the Exoplanet Direct Imaging community. Furthermore, the figure also illustrates the so-called *leaky pipeline* process in the community. At first glance, we observe a rather constant proportion of women at each career step (29% of female PhD students, 33% of female postdocs, 28% of female faculties). This tentatively indicates that there is no obvious inequality in the recruiting process within this community. However, the large error bars (7-9%) are comparable to the ratio fluctuations between the career groups, which prevents us from drawing a firm conclusion for this field.\n\nIn addition, the low 29% ratio of female PhD students is not encouraging for progress towards a better gender balance in the foreseeable future. Finally, we note that the non-binary individuals are equally spread between the younger PhD student and postdoc populations. This indicates that the Exoplanet Direct Imaging community is slowly starting to be more inclusive and diverse. Monitoring the evolution of these demographics in the next 5 years is necessary to assess whether these individuals are offered equal opportunities to obtain faculty positions.\n\n## Expertise per professional level\n\nFigure\u00a0 shows the spread in expertise in each career group. We notice that the expertise is equally spread between \"Instrumentation\", \"Observation\", and \"Observation and instrumentation\" within the Faculty and Postdoc populations. PhD Students tend to have a single expertise (Instrumentation 50%, Observations 33%, Theory 8%), but rarely a combination of several (8% total). This reveals an (implicit) policy in this community of developing a strong expertise in a specific field at the PhD level and of diversifying one's skills with additional expertise at the postdoc level.\n\nWe notice that there are significantly more PhD students working in instrumentation (50%) compared to the senior groups (30-36%). This suggests that instrumentation is very attractive to students entering the field and is a good entry point for starting a career in astronomy. It also indicates that young doctors who specialized in instrumentation are more likely to broaden their expertise to observational skills than those with other expertise. 
This trend may be explained in several ways: 1) there is a loss of interest in instrumentation at the postdoc level, possibly due to limited job opportunities in astronomical instrumentation, or to ample opportunities in private companies compared to the other expertises; or 2) young doctors trained in instrumentation have more opportunities to develop observational skills, for instance after having commissioned an instrument or an instrument sub-system.\n\n## Visibility and recognition by peers per career level\n\nFigure\u00a0 (left) shows the exposure given to each career category in international conferences. Figure\u00a0 (right) presents the recognition level received by each career category within the community through invitations to join the SOC of an international conference.\n\nOur data shows that postdocs receive the most exposure and visibility at international conferences, with 89% of them having attended at least one conference in 2018. Counting their participation in the 2019 SOL conference, this shows that the large majority of postdocs attend an international conference at least once per year. This is a positive result: as postdoctoral positions are based on short-term contracts, postdocs are required to actively advertise their work and expand their network in search of a new position. Thus, this high visibility ratio demonstrates a healthy community which encourages and supports the postdocs in their career development.\n\nHowever, only 46% of the PhD student participants attended an international conference in 2018. We note that some of the PhD students attending the SOL conference may not yet have started their PhD program in 2018, so this percentage has to be interpreted with care. Nevertheless, combined with the low attendance of PhD students (24%) as compared to the permanent researchers (47%), it strengthens the previous analysis that this community does not promote its PhD students enough and prevents a fraction of them from attending conferences. This is yet another downside of the community, knowing that these events are critical for young researchers to promote their work and develop their professional network. This behavior is likely to make it more difficult for PhD students to find a postdoc position in the field.\n\nIn comparison, faculties have ample opportunities to regularly attend international conferences. In addition to attending the 2019 SOL conference, 70% of the faculty participants attended an international conference in 2018. Combined with the fact that they were the largest population at the SOL conference (47% of the participants), this also strengthens the analysis that faculties in this field expose and promote themselves much more often than their PhD students.\n\nFinally, Fig.\u00a0, right, shows that only permanent researchers were invited to join the SOC of an international conference in 2018. Similarly, the SOC of the 2019 Spirit of Lyot conference was exclusively composed of faculty members. Although it can be argued that managing the scientific organization of conferences requires some level of professional experience, the exclusion of postdocs from SOCs demonstrates again a significant bias against young-career researchers in this community. 
We recommend that the Exoplanet Imaging community be more inclusive of young scientists in the future, by supporting PhD students in attending international conferences and by including postdocs in their scientific organization.\n\n## Inappropriate behavior\n\nFigure\u00a0 shows how the different professional categories are exposed to inappropriate behaviors. On the left, we show the fraction of each category that has experienced a situation of inappropriate behavior, and on the right the fraction that has noticed such behaviors.\n\nIn the left panel, we notice a significant difference between the professional categories in terms of experiencing an inappropriate behavior: the majority of postdoc participants have been victims of inappropriate behavior (56%), about twice as often as the permanent astronomers (28%) and PhD students (21%). Although all these ratios are high and concerning, the particularly high rate at the postdoc level is notably alarming given the vulnerability of postdocs to professional insecurity, which is already a source of anxiety.\n\nIn the right panel, we notice that about half of the faculty community and a majority of postdocs have noticed a situation of inappropriate behavior at work or at a conference. On the one hand, this indicates that such behaviors happen relatively often in this community, which is a concern. On the other hand, it also indicates that the community is aware of and attentive to such situations, with the postdoc community being the most vigilant, 63% of them having noticed inappropriate behaviors in the past. The PhD student community, however, seems relatively protected from exposure to inappropriate behaviors, with 29% of them having noticed such a situation.\n\nIt is particularly interesting to cross-correlate the occurrence of inappropriate behaviors between the gender and career groups, to identify more precisely the social groups which are most exposed to such behaviors. In Table\u00a0, we show the fractions of people having experienced inappropriate behaviors as a function of both their gender and career level. The female and non-binary postdoc group appears to be the most exposed to inappropriate behaviors, with 80% of them having experienced such conduct. In comparison, male PhD students and male faculties are the most protected groups (19% and 21%, respectively). This is concerning, and immediate steps should be taken to make this community safer, more diverse, and more inclusive, given that this well-known problem forces under-represented scientists (women and non-binary people) to quit the field . The fact that it happens predominantly at the postdoctoral stage, which is the most insecure in one's career, is particularly worrying as it discourages them from continuing in the field and thus contributes to a vicious circle working against a representative, gender-balanced community.\n\n| | Female & Non binary | Male |\n|:----------------|:-------------------:|:----:|\n| PhD students | 25 % | 19 % |\n| Postdocs | 80 % | 41 % |\n| Faculty members | 46 % | 21 % |\n\nProportions of victims of inappropriate behaviors within each gender and career group. To preserve the anonymity of the non-binary participants despite their small sample, their answers are combined with those of women.\n\n## Inclusion in publications\n\nIn Fig.\u00a0, we show the fraction of people who mentioned being unfairly absent from the author list of a publication within the different professional categories. 
Here too, we notice that the postdoc community most often declares having been unfairly left out of publication author lists (41% of them), compared to the faculty (28%) and PhD student (25%) communities. This shared anxiety among the postdoc community may be enhanced by the insecurity of their short-term positions and the need for a high publication rate to obtain a faculty position.\n\nWe do not report significant differences between these communities regarding the feeling of having been unfairly *present* in author lists (17% over all the participants regardless of their professional category).\n\n# Discussion and conclusions\n\nIn this paper, we presented an overview of the community working on the direct imaging of exoplanetary systems, based on a survey conducted at the Spirit of Lyot 2019 conference in Tokyo, Japan. The questions of the survey focused on several aspects: the general demographics, representation and self-confidence markers during this conference, the equity of exposure and recognition in the field through authorship, SOC invitations and access to conferences, and the occurrence of inappropriate behaviors in the workplace. The survey collected 100 answers, providing a 53% participation rate among the conference attendees. In addition to the overall study, the results were analyzed within two main categories: as a function of the gender and of the career level of the participants.\n\nFrom the global demographics analysis, we extract three main results:\n\n- Women are under-represented in the community, with a 29% representation rate.\n\n- Young-career scientists (PhD students and postdocs) were under-represented at this conference, with a participation rate of 51% compared to a representation in the field estimated between 66% and 75%.\n\n- Significant representation disparities exist between the different countries present at the conference.\n\nFrom the gender-based analysis, we gathered several key results. The principal positive results are the following:\n\n- The proportions of PhD students, postdocs, and permanent researchers do not significantly vary between men and women. Owing to the large uncertainties, we cannot draw any conclusion about recruiting discrimination in the field.\n\n- At the Spirit of Lyot conference, the fraction of female contributing speakers (33%) was comparable to the participation of women in the conference (29%), showing an unbiased representation effort from the SOC.\n\n- Generally, in this field, a slightly higher fraction of women than men are invited to the SOCs of international conferences ($21\\%$ of the female participants vs $14\\%$ of the male participants), which shows an effort towards a better gender representation. However, a large fraction of these women (50% of them) felt they were included in SOCs in order to fill a gender quota, which questions the ability of the community to recognize them for their expertise.\n\nHowever, several negative results are also reported:\n\n- Women were more subject to self-censorship than men at the conference, which was deduced from two markers: significantly fewer of them asked questions after the talks ($31\\%$ of them vs $50\\%$ of the male participants), and significantly fewer of them advertised their poster on stage ($14\\%$ of them vs $31\\%$ of the male participants). 
The former behavior was also observed in other studies .\n\n- The most striking and alarming result is that 80% of women reported having experienced unprofessional behaviors, ranging from gender-biased behaviors (69% of them), to interruptions while they were speaking (59% of them), to inappropriate behaviors (52% of them). These rates are nearly twice as high as for men (43% of whom experienced some type of unprofessional behavior). These behaviors contribute to creating an unsafe working environment and to undermining women's confidence, credibility, and leadership.\n\nOn an encouraging note, a majority of the participants, of all genders (although more often women), have already noticed such unwanted behaviors, which shows that the community is overall aware of these issues and can be in a position to intervene and stop such situations from happening.\n\nFrom our career-based analysis, we mainly found that:\n\n- PhD students from this field are under-represented at conferences. This is seen both in their low participation in the Spirit of Lyot 2019 conference (24%, 5 points lower than for instance at the AGU meetings) and in the low fraction of them attending an international conference in 2018 (46%). In comparison, faculties seem to attend conferences much more easily, with a 47% fraction at the Spirit of Lyot conference and with 70% of them attending an international conference in 2018. This may create difficulties for students in this community in finding a postdoctoral position at the end of their thesis.\n\n- Postdocs in this field are excluded from the SOCs of international conferences. This strengthens the analysis that this community has a bias against young-career scientists.\n\n- A majority of postdocs have already experienced inappropriate behaviors (56%), significantly more than the PhD students and the faculties (around 25% each). In particular, women and non-binary postdocs are predominantly victims of such situations (80% of them). In comparison, 41% of the male postdocs have encountered inappropriate situations. This is particularly alarming given that the postdoc position is the most insecure type of position: it may discourage early-career researchers, especially female and non-binary scientists, from continuing to work in this field.\n\nThis survey can be used as a reference point for future similar studies in order to monitor the evolution of the demographics and social behaviors in the field of direct imaging of exoplanetary systems. We also point out the systematic behaviors which need to be addressed so that the quality of the work environment can be improved. With the prospect of developing a safer and better-represented community, we formulate a number of recommendations:\n\n1. Be proactive in preventing all sorts of professionally inappropriate behaviors, ranging from micro-aggressions (interruptions, biased comments, ...) to serious aggressions (harassment, intimidation, ...).\n\n2. Be more inclusive of under-represented genders, in particular at the PhD student recruiting level.\n\n3. In order to better promote young-career scientists, provide more support and encouragement to PhD students to attend international conferences, and be more inclusive of postdocs when forming the SOCs of conferences.\n\n4. 
Systematically implement a similar socio-demographic survey in the registration form of future conferences and workshops.\n\nThe latter point would enable future studies to further monitor the evolution of gender representation and of the different biases over time, and to refine the solutions to counterbalance these issues. Furthermore, it would increase the sample size and provide more precise results. It would also raise awareness and give more visibility to the experience of minorities, including non-binary people. In particular, it has been noted that many of the gender studies in science remain biased against non-binary people, and future studies should follow the recommendations of and and . In order to help increase the number of such studies, an improved version of the proposed survey is now available as open source on GitHub at this link: . We encourage different communities to use it as a template to homogenize socio-demographic studies and allow long-term monitoring. We also invite the users of this survey to provide comments and feedback to improve its completeness. We are particularly interested in inputs and suggestions from researchers with a social science expertise.\n\nThis study adds to an increasing number of publications studying the community of researchers itself, instead of their research. While analyzing the results, we also listed improvements that could make this survey more precise in some aspects: 1) the questions about the inappropriate behaviors could be more specific about the different types of behaviors targeted in the study. These questions could even be repeated to probe the occurrence of a range of behaviors, from unwanted (e.g. gender-biased comments in a professional discussion) to inappropriate (e.g. sexual comments in a professional environment) to serious offences (e.g. aggression, harassment). 2) Some questions could be more specific about the period considered (e.g. questions 12\u201315, 17, 19, 20), which would remove ambiguities and make some career-based results more accurate. 3) Additional topics could be probed with this survey, such as the number of citations (see a very interesting study of gender-based differences on this topic by ), the occurrence of solicitations by scientific journals to review manuscripts , and the responsibilities in large projects . More generally, social scientists are the experts for such studies and should be involved to guarantee that the survey questions are not phrased in a partial or suggestive way, but are expressed explicitly and neutrally. In addition, studies also acknowledge the experience of other under-represented groups in astronomy, such as LGBT or African American and Hispanic researchers .\n\n# Survey sent to all participants of the Spirit of Lyot 2019 conference\n\nThe survey contained the following questions:\n\n) Choose the option that best describes your gender (Female \/ Male \/ Non binary \/ Other \/ Prefer not to disclose)\n\n) Choose the option that best describes your gender identity (Cisgender \/ Transgender \/ Other \/ Prefer not to disclose)\n\n) What country do you currently live in?\n\n) What kind of position do you occupy? (Intern \/ PhD student \/ Postdoc \/ Faculty)\n\n) Did you attend an international conference in 2018?\n\n) What is your expertise? 
(Instrumentation \/ Observation \/ Theory)\n\n) At this conference (Lyot 2019): Did you ask for a talk?\n\n) At this conference (Lyot 2019): Did you get a talk?\n\n) At this conference (Lyot 2019): Did you ask for a poster pop talk?\n\n) At this conference (Lyot 2019): If not, why?\n\n) At this conference (Lyot 2019): Did you ask questions to a speaker at the end of their talk?\n\n) HCI in general: Have you ever experienced, at work or at a conference, a situation of inappropriate behavior?\n\n) HCI in general: Have you ever noticed, at work or at a conference, a situation of inappropriate behavior?\n\n) HCI in general: Have you ever noticed, at work or at a conference, that a person was behaving differently to you because of your gender?\n\n) HCI in general: Have you ever noticed, at work or at a conference, that a person was behaving differently to somebody because of their gender?\n\n) HCI in general: Have you been invited to join a SOC of an international conference in 2018?\n\n) HCI in general: Have you ever been invited to a SOC in order to fulfill a ratio of minority?\n\n) HCI in general: Have you ever been cut off while talking or prevented from talking at a professional event?\n\n) HCI in general: Have you ever been unfairly absent from a list of co-authors?\n\n) HCI in general: Have you ever been unfairly present in a list of co-authors?\n\n) Are there any comments you would like to share with us?\n\nThis work was co-authored by women and non-binary people who met at the Spirit of Lyot 2019 conference in Tokyo, Japan. It received full help and support from the LOC and the SOC of the conference, and the participation rate of $53\\%$ indicates a real involvement of the conference audience. We sincerely thank these three entities and particularly the SOC and LOC, who supported this initiative and helped advertise it among the conference attendees. The authors are sincerely grateful to the referee for their constructive comments, which improved the quality of the paper and strengthened its conclusions. LL has received support from IRIS Origines et Conditions d'Apparition de la Vie (OCAV) of PSL Idex under the program Investissements d'Avenir with the reference ANR-10-IDEX-0001-02 PSL. GS would like to acknowledge the funding received from the European Union's Horizon 2020 research and innovation programme under the Marie Sk\u0142odowska-Curie grant agreement No 798909.","meta":{"dup_signals":{"dup_doc_count":14,"dup_dump_count":3,"dup_details":{"curated_sources":2,"2024-26":1,"unknown":11}},"filename":"out\/2012.11494_extract_main.tex.md"},"subset":"arxiv"} +{"text":"abstract: Most emergent properties of the materials discovered since the 1980s are related to the existence of electron-electron interactions which are large with respect to the kinetic energies and could not be thoroughly studied before. The occurrence of metal insulator transitions and of exotic magnetic and\/or superconducting properties in many new compounds has stimulated a large series of experimental and theoretical developments to grasp their physical significance. 
We present here a simple introduction to the elementary aspects of the physics of electron-electron interactions, which could be a starting point for typical undergraduate students.\nauthor: Henri Alloul [^1]\ntitle: **Strongly Correlated Electrons in Solids**\n\n# Introduction\n\nThe study of the electronic properties of solids, done within an independent electron approximation since World War II, has been essential for the understanding of the occurrence of semiconductors. This understanding was at the origin of the information technologies which expanded rapidly after the war. But during that period, a myriad of new materials with increasing complexity have been discovered as well. These materials were found to display unexpected novel electronic properties. Many such properties are not explained by the independent electron approximations, require new conceptual developments, and will certainly lead in the future to specific promising applications. Most of these emergent properties are linked with magnetic responses due to the strong electron-electron interactions in these complex new materials.\n\nWe will briefly discuss how these electronic interactions yield original states of electronic matter. A variety of experimental and theoretical techniques have been developed which permit a detailed investigation of their unexpected properties.\n\nThis article will be organised as follows. Electronic properties of solids were, in the first half of the twentieth century, considered mostly in the frame of an independent electron approximation with spin degeneracy. The resulting electronic band structure of metals, which will be briefly recalled in section , is such that each electronic level can be doubly occupied. In such an approach one expects metals or insulators with no significant magnetic properties.\n\nIn order to explain why some solids display magnetic properties, one must reassess the underlying approximations that led to the band theory, and especially the averaging approach to the Coulomb interactions between electrons. In section we shall show that one has to take into account the strong local Coulomb repulsion on atomic orbitals, which permits magnetic atomic states and magnetic insulators in the solid state.\n\nWe shall then specifically discuss in section the superconducting state, an original correlated electronic state which occurs in most metals at low temperature. This is a macroscopic quantum electronic state which results from an indirect attractive electron-electron interaction induced by the interplay between the electrons and the atomic vibrations, in classical metals in which the electron states do not otherwise interact at high temperatures.\n\nWe shall then consider in section how electronic correlations yield materials with properties which are in an intermediate regime between independent, delocalised electrons and local states. Those intermediate electronic states are at the basis of the correlated electron physics. They often display exotic superconducting states with unexpectedly high transition temperatures, can undergo charge ordering or metal-insulator transitions, and can host exotic magnetic states considered as spin liquids. Such original states, which are far from being fully understood at this time, will be introduced in dedicated Scholarpedia articles.\n\n# The basics of the electronic band structure of solids\n\nIsolated atoms display discrete, narrow electronic levels. 
In the solid state, electrons can delocalise between sites due to the overlap of the electronic orbitals of neighboring atoms. The transfer integrals $t$ between orbitals of neighboring atoms lead to a broadening of the atomic levels into electronic bands which characterize the actual band structure of a given material. The width of these energy bands is typically determined by $zt$, where $z$ is the number of neighboring atoms surrounding a given site. In such an independent electron approach the available electrons in the material fill the energy levels in increasing energy order. This yields insulators when filled and empty bands are separated by finite gaps, and metals if there are partially filled energy bands up to an energy level which defines the Fermi energy, as shown in Fig. . In such an approach, solids with an odd number of electrons per unit cell are expected to be metals, as they should necessarily display partially filled bands in which electrons can delocalise at a moderate energy cost. Among the insulators, one then distinguishes the cases where the energy gap is small compared to the thermal energy $k_BT$. In that case electrons can be excited thermally at temperature $\\sim T$ into the first empty band (the conduction band) and leave holes in the last occupied band (the valence band); this is the case of semiconductors. Among those, graphene has been highlighted recently: in that case the gap vanishes and the conduction and valence bands touch each other at a single energy point, the Dirac point, which then corresponds to the Fermi energy.\n\nIn those cases the band theory for the electronic states applies rather well and explains most of the electronic properties of these metals, insulators, semiconductors or Dirac point metals. In all those cases the independent electron approach yields a weak paramagnetism, as these descriptions do not lift the spin degeneracy of the electronic states. Band theory describes these materials well because the $k$-space construction lifts the site degeneracy of the atomic states by building Bloch states which have different energies and well-defined properties under translation.\n\n# The origin of atomic magnetism and Mott insulators\n\nIf $t$ is small, then one expects very narrow bands and localized electronic states, as the case $t=0$ corresponds to strictly isolated atomic states. In that case electronic interactions can no longer be treated as an average, as done in band theories, and do give rise to local moments and magnetism, as we shall see hereafter.\n\n## Mott insulators\n\nLet us begin by considering the case of an isolated atom (on the left in Fig. ). In this context, in band theory, it is assumed that the energy brought to the system by an extra electron would be $\\epsilon_0$, and that a second electron on the same atom would also bring $\\epsilon_0$, so that the total energy would be $2\\epsilon_0$ for a doubly negatively charged ion. But this is obviously not very realistic, owing to Coulomb repulsion. Apart from its 'orbital' energy $\\epsilon_0$, the second electron will also be subject to the Coulomb repulsion of the first electron, and its energy will thus be higher than $\\epsilon_0$ by an amount usually denoted by $U$, which represents the Coulomb repulsion between the first and second electrons added to an initially neutral atom. The total energy of the doubly negative ion is thus $2\\epsilon_0 + U$. 
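\n\nTo make this energy counting explicit, here is a minimal numerical sketch (with arbitrary illustrative values of $\\epsilon_0$ and $U$ that are not taken from the text): the two electron addition energies of a single orbital are $\\epsilon_0$ and $\\epsilon_0 + U$, and their difference $U$ sets the scale of the gap between the lower and upper Hubbard bands introduced just below.\n\n```python\n# Minimal illustration (values are arbitrary, not from the text): total energies\n# of a single orbital holding 0, 1 or 2 electrons, with orbital energy eps0 and\n# on-site Coulomb repulsion U.\neps0, U = -2.0, 5.0   # in eV, purely illustrative numbers\n\nE = {0: 0.0, 1: eps0, 2: 2 * eps0 + U}\nfirst_addition = E[1] - E[0]    # eps0\nsecond_addition = E[2] - E[1]   # eps0 + U: the second electron costs U more\nprint(first_addition, second_addition, second_addition - first_addition)  # -2.0 3.0 5.0\n```\n\n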
Note that $U$ can vary considerably depending on the atom (from about 1 eV to more than 10 eV).\n\nIf we now consider this ion in a crystal, the hopping integrals between nearest neighbors will broaden the discrete atomic levels into bands of width $W=zt$. To begin, we consider the limiting case of a hopping integral small compared with $U$. We find ourselves in a situation corresponding to the middle of Fig. . There are two allowed energy bands called the upper and lower Hubbard bands, separated by a band gap. This gives the impression that we have a typical insulator (or semiconductor). But this is not in fact correct. Each Hubbard band contains only one one-electron state per atom, so that, in a solid comprising $N_n$ atoms, the lower band of the middle column can contain up to $N_n$ electrons, rather than up to $2N_n$ electrons, as would be the case in the context of the independent electron band theory. In particular, if there is now one electron per atom (or more generally an odd number of electrons per primitive cell), the lower band will be completely filled and the upper band completely empty. We will thus have an insulator with an odd number of electrons per primitive cell, as a consequence of the interactions $U$ between electrons. The very existence of such an insulator (usually called a Mott-Hubbard insulator in recognition of the two British scientists who first studied them in the 1960s) is thus a consequence of the Coulomb interaction between electrons. As we shall see later, important examples of Mott-Hubbard insulators are undoped cuprates in which the $Cu^{2+}$ ions are in a $3d^9$ state.\n\n## Magnetism of Mott Insulators\n\nWhile usual band insulators should be nonmagnetic (or more precisely, slightly diamagnetic), the expectations are very different for a Mott-Hubbard insulator. If we begin by considering the limiting case of very small hopping integrals, we end up with isolated atoms. The electron in the level $\\epsilon_0$ can then have spin up or spin down, behaving like an isolated spin $1\/2$. In the solid, these spins taken together will give rise to Curie paramagnetism with a spin susceptibility $\\sim 1\/T$, that is, a paramagnetic insulator susceptibility that contradicts band theory. If one takes into account the finite value of the hopping integral $t$, it can be shown that, at low enough temperatures, the spins on neighboring atomic sites tend to arrange themselves in opposite directions, that is, antiferromagnetic coupling dominates.\n\nThe main conclusion to be drawn here is that, going beyond the possibilities offered by band theory (paramagnetic metals and diamagnetic insulators), the presence of Coulomb interactions between electrons, if they are strong enough, can give rise to an insulating state with a variety of magnetic properties, such as Curie paramagnetism, antiferromagnetism (but also ferromagnetism), and so on, as will be shown later on.\n\nThe Hubbard model, which replaces the true Coulomb potential $V(r)\\sim 1\/r$ by a repulsion which only acts if the two electrons are located on the same atom, is clearly a drastic simplification of the actual physical situation. However, it is rather naturally justified in the context of the theory of magnetic phenomena. Experiments show that there are not only magnetic insulators of spin $1\/2$ (in fact these are in the minority), but that in most cases the spin per atom is much higher. 
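\n\nBefore turning to these higher-spin cases, the single-orbital Hubbard model just described can be made concrete on the smallest possible example (a minimal sketch with illustrative parameters, not taken from the article): two sites sharing two electrons. For $t \\ll U$ the ground state is a spin singlet lying below the lowest triplet state by roughly $4t^2\/U$, which is precisely the antiferromagnetic coupling between neighboring spins mentioned above.\n\n```python\nimport numpy as np\n\n# Two-site, single-orbital Hubbard model at half filling (illustrative sketch).\n# Sz = 0 basis: one electron on each site (both spin arrangements) followed by\n# the two doubly occupied configurations; energies are measured from 2*eps0, so\n# only t and U appear. The exact spectrum of this block is\n# {(U - sqrt(U^2 + 16 t^2))/2, 0, U, (U + sqrt(U^2 + 16 t^2))/2}.\ndef two_site_hubbard_levels(t, U):\n    H = np.array([[0.0, 0.0,  -t,  -t],\n                  [0.0, 0.0,   t,   t],\n                  [ -t,   t,   U, 0.0],\n                  [ -t,   t, 0.0,   U]])\n    return np.linalg.eigvalsh(H)\n\nt, U = 0.1, 4.0                       # hopping much smaller than the repulsion\nlevels = two_site_hubbard_levels(t, U)\nE_singlet = levels[0]                 # ground state: spin singlet\nE_triplet = 0.0                       # the Sz = 0 triplet component stays at 0\nprint(E_triplet - E_singlet)          # singlet-triplet splitting, ~ 0.00998\nprint(4 * t**2 / U)                   # perturbative estimate 4 t^2 / U = 0.01\n```\n\nThe smallness of this energy scale compared with $U$ is why the antiferromagnetic correlations only show up at low enough temperatures, as stated above.\n\n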
This is due to the fact that, in almost all cases, the atomic orbitals involved are not $s$ levels (hence non-degenerate), but $d$- or $f$-type (hence five- or seven-fold degenerate).\n\nIn such a situation one has to take in more detail the local repulsive Coulomb interaction between electrons on the orbitals of such poly-electronic atoms. Though the Coulomb interaction is purely electrostatic, it differentiates the energy levels of the atomic orbitals, depending of their orbital symmetry and disfavors then double occupancy of some of them. These electronic interactions when combined with the Pauli principle are responsible for the local moment magnetism of isolated atoms. In this situation, the angular momentum of each atom in the Mott-Hubbard insulating state is determined by the electronic filling of the atomic levels through specific rules named Hund's rules.\n\nThe interactions between those local moments in ordered solids are responsible for the various long range ordered magnetic states (ferromagnetic or antiferromagnetic) or their absence thereof in the case where ordering is prohibited by geometric frustration effects, as will be illustrated later on.\n\n# Superconductivity and electron-phonon interaction\n\nSo far we have seen that the original properties of electronic matter are mostly governed by the magnitude of the Coulomb repulsion between electrons which is essential in the magnetic properties and in promoting localized electronic states rather than extended states. But, although we mentioned it at many places already, we did not consider so far one of the most important correlated electronic states which has been studied at length during most of the last century, that is superconductivity. This electronic state of matter is by no way an independent electron case, as the basic feature of this state is an electronic organisation which emphasizes pairs of electrons, the Cooper pairs. This has been highlighted in classical metals by the development of the Bardeen Cooper Schrieffer (BCS) theory which states that in the presence of an attractive interaction between electrons, no matter how weak it may be, the electron gas becomes unstable.\n\nThe beauty of this unexpected physical situation is that the lower energy condensed electronic state is a quantum state of electronic matter in which the correlations between electrons extend on macroscopic distances.The mystery which prevented the actual understanding of superconductivity during the first half of the 20th century concerned the actual possibility of such an attractive interaction between electrons. This has only been understood when it had been noticed that the electrons do attract the ions of the atomic background, and that their displacements (the phonons) being slow due to the large ionic masses provide a memory effect which mediates an attractive interaction between electrons. If that electron-phonon interaction dominates the electronic Coulomb repulsion, then the net attractive interaction favors the pairing of electrons which is qualitatively depicted in Fig.. The pairing of electrons results in the many body electronic states which is the basis of the electronic properties of the superconductors. One of the main unexpected behaviors which could be explained by the BCS theory is the existence of a gap between the electronic superconducting ground and excited states. 
The occurrence of such a gap has been initially ascertained by NMR experiments.\n\n# From Mott Insulators to Metallic Magnetism and Superconductivity\n\nWe have examined so far two completely different limiting descriptions of electronic states in a solid. In the band structure approach we have described the case of electrons considered as independent, their interactions being restricted to an averaged potential. The delocalisation of these electrons between the atomic sites driven by the transfer integrals may yield metallic states. In contrast we have considered the specific situation for which electrons localized on ionic states lead to local atomic magnetic moments. Those arise when the Pauli principle and on site inter-electronic Coulomb repulsion are taken into account properly. We have assumed implicitly that these electrons do not delocalise when the transfer integrals between electrons on neighboring ions are small enough in such solids. This then corresponds to an insulating magnetic state quite different from the band insulating states considered so far in the independent electron band approach.\n\nThe actual situation in real materials does indeed sometimes correspond to these limiting cases, but a wide variety of solids correspond to intermediate situations, like that of ferromagnetic metals such as $Fe$ or $Ni$. But the correlated electron physics is now rich with examples of such intermediate cases which are quite important both for the fundamental questions raised and for the applications of the novel physical effects which come into play.\n\n## From Mott insulators to metal insulator transitions\n\nIn a Mott-Hubbard insulator, if we increase $t$ (or if we consider compounds with lower values of $U$), for a certain critical value of $t\/U$, the upper and lower Hubbard bands begin to overlap (see Fig. right), causing the band gap to disappear and leading to a metallic state. Such an increase in $t$ can be produced by bringing the atoms closer together. This was first achieved in the case of doped semiconductors by increasing the donor concentration, e.g., by increasing the concentration of phosphorus in silicon. This causes the hydrogen-like orbitals of $P$ to move much closer together and increases the hopping integrals, while remaining in a configuration corresponding to one electron per donor atom. A simpler way to achieve this situation directly without changing the number of electrons in a material is to apply an external pressure. This increases the hopping integrals t by bringing the atoms closer together, provided that the material is compressible. In the metallic state thereby induced, one then observes magnetic and thermodynamic properties which require taking into account the existence of the strong coulomb repulsion $U$. As for the Mott-Hubbard insulator, let us point out that it looks at first glance like a band insulator, the only difference being that here each Hubbard band contains only Nn states rather than $2N_n$ states in the case of the band theory of section .\n\n## Doping a Mott insulator: the cuprate problem\n\nChemical treatment may be envisaged to change the number of electrons in a Mott insulator. For example, it can be doped with holes, reducing the number of electrons in the lower Hubbard band to a number $N_{e}$ smaller than $N_n$. 
This is exemplified by the case of cuprates such as $YBa_2Cu_3O_6$ or $La_2CuO_4$ which are antiferromagnetic Mott insulators.In the latter, the $Cu$ are in a $3d^9$ state with spin $1\/2$, which order antiferromagnetically below 340K. By chemical exchange of a fraction $x$ of $La^{3+}$ by $Sr^{2+}$ one can typically reduce the number of $Cu$ electrons to become $N_e = (1-x)N_n$. This reduction of the number of electrons in the lower Hubbard band suggests that the doped Mott-Hubbard insulator is expected to be a metal. Experimental investigations carried out on the cuprates, and also on certain other classes of doped Mott insulators, have shown that doping gradually reduces the N\u00e8el temperature of the antiferromagnetic state. This $AF$ state is completely suppressed for a low level of doping, of the order of $x\\approx 0.05$, as can be seen in the phase diagram for $La_{2-x}Sr_xCuO_4$ displayed in Fig..\n\n## Superconductivity in correlated electronic systems\n\nThe importance of the cuprates in the physics of correlated systems has resulted from the discovery that when the $AF$ is suppressed by hole doping, the doped metallic state which results has a $SC$ ground state and displays strange metallic and magnetic properties. The most surprising feature has been the fact that the superconductivity discovered in these materials has the highest critical temperatures $T_c$ found so far in any superconducting material, and exceeds any $T_c$ which could be expected within the BCS approach known to apply in classical metallic states. This has immediately led to the idea that $SC$ in the cuprates has an exotic origin linked with electron-electron interactions rather than the classical electron-phonon driven superconductivity which prevails in classical metals. An important observation in the cuprates has been the fact that the phase diagram with increasing hole doping displays a dome shaped $SC$ regime, that is $SC$ disappears for dopings beyond about 0.3.\n\nWhile the cuprates are certainly exotic superconductors, let us state that many other materials have been shown to display situations where magnetism and $SC$ are proximate to each-other in phase diagrams. In pnictides those are sometimes spanned by doping as in the cuprates, but in other families of compounds the phase diagrams are spanned by pressure control of the overlap integrals as for organic, heavy fermions or $Cs_3C_{60}$ compounds. The author shall present many examples of such families in the Scholarpedia article NMR in strongly correlated materials\\[.\n\n# Experimental techniques\n\nSuch original states have been revealed initially by experimental techniques which were quite adapted at the time of the discovery of the cuprates to studies of their electronic properties. Among those, Nuclear Magnetic Resonance (NMR) is a technique which is quite essential as it permits local measurements in the materials. This gives precious information which goes beyond the first indications given by the macroscopic magnetic measurements as they permit one to differentiate the properties of the materials which can be attributed to specific phases or sites in the structure. Also, as usual for magnetic materials, inelastic and elastic Neutron scattering techniques reveal the occurrence of magnetic responses and of their k-dependence.\n\nSignificant effort has been invested to improve the quality of single crystals which are essential for the studies of the transport properties in these exotic metals and $SC$. 
Static or pulsed high magnetic fields sufficiently large to suppress the superconducting state have been achieved, though this is not yet possible for samples with high optimal $T_c$.\n\nOther new specific techniques for studies of surfaces of 2D compounds have been developed during the last decades. Angle-Resolved Photoemission Spectroscopy (ARPES) uses X-rays generated by synchrotrons to perform k-space resolved spectroscopy of the occupied electron states. This permits determination of the band structures of these correlated electron materials. Deviations with respect to simple band calculations permit determination of the incidence and strength of the electronic correlations. Also, Scanning Tunneling Microscopy experiments reveal spatial inhomogeneities of the gaps and of the electronic structures at surfaces in these materials. Some experimental groups have developed Fourier transformations at a level of refinement which allowed them to reproduce some of the ARPES spectral information. The existence of charge density wave transitions is also detected by Resonant Inelastic X-ray Scattering (RIXS) or Resonant Elastic X-ray Scattering (REXS).\n\nMany of these novel techniques have been improved by recent technical developments, but their input on the physics of correlated electron systems is still far from being fully understood at this time and will be introduced in dedicated Scholarpedia articles\\[\\].\n\n# References\n\n1. Alloul H., \"Introduction to the Physics of Electrons in Solids\", Graduate Texts in Physics, Springer\u2013Verlag (Heidelberg) (2011), ISBN 978-3-642-13564-4, DOI:10.1007\/978-3-642-13565-1\n\n2. Ashcroft Neil W., Mermin N. David, \"Solid State Physics\", Saunders College (1976), ISBN 0030493463, 9780030493461\n\n3. Kittel C., \"Introduction to Solid State Physics\", 8th Edition, Wiley (2005), ISBN 047141526X, 9780471415268\n\n4. Mott N., \"Metal-Insulator Transitions\", Taylor & Francis (1974), ISBN 0850660793, 9780850660791\n\n5. Tinkham M., \"Introduction to Superconductivity\", Dover Publications (1996), ISBN 0486134725, 9780486134727\n\n# See Also\n\n1. \"NMR in strongly correlated materials\", H. Alloul, Scholarpedia, 10(1):30632 (2015)\n\n2. \"Bardeen-Cooper-Schrieffer theory\", Leon Cooper and Dimitri Feldman (2009), Scholarpedia, 4(1):6439.\n\n[^1]: Henri Alloul (2014), *Strongly Correlated Electrons in Solids*, Scholarpedia, 9(7):32067.","meta":{"dup_signals":{"dup_doc_count":44,"dup_dump_count":39,"dup_details":{"curated_sources":4,"2023-40":1,"2023-23":1,"2023-06":1,"2022-49":1,"2022-27":1,"2022-21":1,"2021-43":1,"2021-17":1,"2021-04":2,"2020-45":1,"2020-40":1,"2020-24":1,"2020-10":1,"2020-05":1,"2019-47":1,"2019-43":1,"2019-39":1,"2019-35":1,"2019-30":1,"2019-26":1,"2019-22":1,"2019-18":1,"2019-13":1,"2019-09":1,"2019-04":1,"2018-51":1,"2018-43":1,"2018-39":1,"2018-30":1,"2018-26":1,"2018-17":1,"2018-13":1,"2018-05":1,"2017-47":1,"2023-50":1,"2024-22":2,"2024-10":1,"2024-26":1}},"filename":"out\/1504.05855_extract_Henri_Alloul_Strongly_Correlated_Electrons_in_Solids.tex.md"},"subset":"arxiv"}
+{"text":"abstract: Inferring the coupling structure of complex systems from time series data by means of statistical and information-theoretic techniques is, in general, a challenging problem in applied science. The reliability of statistical inferences requires the construction of suitable information-theoretic measures that take into account both direct and indirect influences, manifest in the form of information flows, between the components within the system. 
In this work, we present an application of the optimal causation entropy (oCSE) principle to identify the coupling structure of a synthetic biological system, the repressilator. Specifically, when the system reaches an equilibrium state, we use a stochastic perturbation approach to extract time series data that approximate a linear stochastic process. Then, we present and jointly apply the aggregative discovery and progressive removal algorithms based on the oCSE principle to infer the coupling structure of the system from the measured data. Finally, we show that the success rate of our coupling inferences not only improves with the amount of available data, but it also increases with a higher frequency of sampling and is especially immune to false positives.\nauthor: Jie Sun; Carlo Cafaro; Erik M. Bollt\ntitle: Identifying Coupling Structure in Complex Systems through the Optimal Causation Entropy Principle\n\n# Introduction\n\nDeducing equations of dynamics from empirical observations is fundamental in science. In real-world experiments, we gather data of the state of a system. Then, to achieve the comprehension of the mechanisms behind the system dynamics, we often need to reconstruct the underlying dynamical equations from the measured data. For example, the laws of celestial mechanics were deduced based on observations of planet trajectories\u00a0; the forms of chemical equations were inferred upon empirical reaction relations and kinetics\u00a0; the principles of economics were uncovered through market data analysis\u00a0. Despite such important accomplishments, the general problem of identifying dynamical equations from data is a challenging one. Early efforts, such as the one presented in , utilized embedding theory and relative entropy to reconstruct the deterministic component of a low-dimensional dynamical system.\n\nLater, more systematic methods were developed to design optimal models from a set of basis functions, where model quality was quantified in various ways, such as the Euclidean norm of the error\u00a0, the length of the prediction time window (chaotic shadowing)\u00a0 or the sparsity in the model terms (compressive sensing)\u00a0. Each method achieved success in a range of systems, but none is generally applicable. This is, in fact, not surprising in light of the recent work that showed that the identification of exact dynamical equations from data is NP hard and, therefore, unlikely to be solved efficiently\u00a0.\n\nFortunately, in many applications, the problem we face is not necessarily the extraction of exact equations, but rather, uncovering the cause-and-effect relationships (*i.e.*, direct coupling structure) among the components within a complex system. For example, in medical diagnosis, the primary goal is to identify the causes of a disease and\/or the roots of a disorder, so as to prescribe effective treatments. In structural health monitoring, the main objective is to locate the defects that could cause abrupt changes of the connectivity structure and adverse the performance of the system. Due to its wide range of applications, the problem of inferring causal relationships from observational data has attracted broad attention over the past few decades\u00a0.\n\nAmong the various notions of causality, we adopt the one originally proposed by Granger, which relies on two basic principles\u00a0:\n\n1. The cause should occur before the effect (caused);\n\n2. 
The causal process should carry information (unavailable in other processes) about the effect.\n\nA causal relationship needs to satisfy both requirements. See Figure\u00a0 as a schematic illustration.\n\nThe classical Granger causality is limited to coupled linear systems, while most recently developed methods based on information-theoretic measures are applicable to virtually any model, although their effectiveness relies on the abundance of data. Notably, transfer entropy (TE), a particular type of conditional mutual information, was introduced to quantify the (asymmetric) information flow between pairs of components within a system\u00a0 and further developed to detect the directionality of coupling in systems with two components\u00a0. The application of these pairwise inference methods to identify direct coupling in systems with more than two components warrants caution: the inferred couplings are often incorrect regardless of the amount of available data\u00a0. In fact, Granger's second principle implies that a true causal relationship should remain effective upon the removal of all other possible causes. As a consequence, an inference method cannot correctly identify the direct coupling between two components in a system without appropriate conditioning on the remaining components\u00a0.\n\nInferring causal relationships in large-scale complex systems is challenging. For a candidate causal relationship, one needs to effectively determine whether the cause and effect is real or is due to the\u00a0presence of other variables in the system. A common approach is to test the relative independence between the potential cause and effect conditioned on all other variables, as demonstrated in linear models\u00a0. Although analytically rigorous, this approach requires the estimation of the related inference measures for high dimensional random variables from (often limited available) data, and therefore, suffers the curse of dimensionality\u00a0. Many heuristic alternatives have been proposed to address this issue. The essence of these alternative approaches is to repeatedly measure the relative independence between the cause and effect conditioned on a combination of the other variables, often starting with subsets of only a few variables\u00a0. As soon as one such combination renders the proposed cause and effect independent, the proposed relationship is rejected, and there is no need to condition on subsets with more variables. The advantage of such an approach is that it reduces the dimensionality of the sample space if the proposed relationship is rejected at some point. However, if the relationship is not rejected, then the algorithm will continue, potentially having to enumerate over all subsets of the other variables. In this scenario, regardless of the dimensionality of the sample space, the combinatorial search itself is often computationally infeasible for large or even moderate-size systems.\n\nIn our recent work in , we introduced the concept of causation entropy (CSE), proposed and proved the optimal causation entropy (oCSE) principle and presented efficient algorithms to infer causal relationships within large complex systems. In particular, we showed in that by a combination of the aggregate discovery and progressive removal of variables, all causal relationships can be correctly inferred in an computational feasible and data-efficient manner. 
We repeat for emphasis that without properly accounting for multiple interactions and conditioning accordingly, erroneous or irrelevant causal influences may be inferred, and specifically, any pairwise-only method will inherit the problem that many false-positive connections will arise. The design of oCSE specifically addresses this\u00a0issue.\n\nIn this paper, we focus on the problem of inferring the coupling structure in synthetic biological systems. When the system reaches an equilibrium state, we employ random perturbations to extract time series that approximate a linear Gaussian stochastic process. Then, we apply the oCSE principle to infer the system's coupling structure from the measured data. Finally, we show that the success rate of causal inferences not only improves with the amount of available data, but it also increases with a higher frequency of sampling.\n\n# Inference of Direct Coupling Structure through Optimal Causation Entropy\n\nTo infer causal structures in complex systems, we clearly need to specify the mathematical assumptions under which the task is to be accomplished. Accurate mathematical modeling of complex systems demands taking into account the coupling of neglected degrees of freedom or, more generally, the fluctuations of external fields that describe the environment interacting with the system itself\u00a0. This requirement can be addressed in a phenomenological manner by adding noise to deterministic dynamical models. The addition of noise, in turn, generally leads to the stochastic process formalism used in the modeling of natural phenomena\u00a0. We study the system in a probabilistic framework. Suppose that the system contains $n$ components, $\\mathcal{V}=\\{1,2,\\dots,n\\}$. Each component $i$ is assumed to be influenced by a unique set of components, denoted by $N_i$. Let: $$X_t=[X^{(1)}_t,X^{(2)}_t,\\dots,X^{(n)}_t]\\vspace{-3pt}$$ be a random variable that describes the state of the system at time $t$. For a subset $K=\\{k_1,k_2,\\dots,k_q\\}\\subset\\mathcal {V}$, we define: $$X^{(K)}_t\\equiv[X^{(k_1)}_t,X^{(k_2)}_t,\\dots,X^{(k_q)}_t].$$\n\n## Markov Conditions\n\nWe assume that the system undergoes a stochastic process with the following Markov conditions\u00a0. $$\\label{eq:processconds}\n\\begin{cases}\n\\mbox{{(i)} {Temporally Markov}:~}\\\\\n\\quad\\quad\\quad\\quad p(X_{t}|X_{t-1},X_{t-2},\\dots) = p(X_{t}|X_{t-1})= p(X_{t'}|X_{t'-1})~\\mbox{for any $t$ and $t'$.}\\\\\n\\mbox{{(ii)} {Spatially Markov}:~}\\\\\n\\quad\\quad\\quad\\quad p(X^{(i)}_{t}|X_{t-1})=p(X^{(i)}_t|X^{(N_i)}_{t-1})~\\mbox{for any $i$.}\\\\\n\\mbox{{(iii)} {Faithfully Markov}:~}\\\\\n{\n\\quad\\quad\\quad\\quad p(X^{(i)}_{t}|X^{(K)}_{t-1})\\neq p(X^{(i)}_t|X^{(L)}_{t-1})~\\mbox{whenever\n$(K\\cap N_i)\\neq(L\\cap N_i)$.}}\n\\end{cases}$$ Here, $p(\\cdot|\\cdot)$ denotes conditional probability. The relationship between two probability density functions $p_1$ and $p_2$ is denoted as $``p_1=p_2\"$ iff they equal almost everywhere, and $``p_1\\neq p_2\"$ iff there is a set of positive measure on which the two functions do not equal. 
Note that the Markov conditions stated in Equation\u00a0 ensure that for each component $i$, there is a unique set of components $N_i$ that renders the rest of the system irrelevant in making inference about $X^{(i)}$ and each individual component in $N_i$ presents an observable cause regardless of the presence or absence of the other components.\n\nAlthough several complex systems can be properly modeled in terms of Markov processes\u00a0, we cannot avoid recalling that, after all, non-Markov is the rule and Markov is only the exception in nature\u00a0. Therefore, it is important to develop a theoretical framework suitable for identifying coupling structures in complex systems driven by correlated fluctuations where memory effects cannot be neglected. It is in fact possible to relax the assumptions in Equation\u00a0 to a stochastic process with finite or vanishing memory, as discussed in the last section of the paper.\n\nThe problem of inferring direct (or causal) couplings can be stated as follows. Given time series data $\\{x^{(i)}_t\\}$ ($i=1,2,\\dots,n;t=1,2,\\dots,T$) drawn from a stochastic process that fulfills the Markov conditions in Equation\u00a0, the goal is to uncover the set of causal components $N_i$ for each $i$. We solve this problem by means of algorithms that implement the oCSE principle.\n\n## Causation Entropy\n\nWe follow Shannon's definition of entropy to quantify the uncertainty of a random variable. In particular, the entropy of a continuous random variable $X$ is defined as\u00a0: $$\\label{eq:hX}\n h(X) \\equiv -\\int p(x)\\log p(x)dx,$$ where $p(x)$ is the probability density function of $X$. The conditional entropy of $X$ given $Y$ is defined as\u00a0: $$\\label{eq:hXY}\n h(X|Y)\\equiv-\\int p(x,y)\\log p(x|y)dxdy.\\vspace{3pt}$$ Equations\u00a0 and\u00a0 are valid for both univariate and multivariate random variables. Consider a stochastic process. Let $I$, $J$ and $K$ be arbitrary sets of components within the system. We propose to quantify the influence of $J$ on $I$ conditioned upon $K$ via the CSE: $$\\label{eq:defcse}\n C_{J\\rightarrow I|K} = \\lim_{t\\rightarrow\\infty}[h(X^{(I)}_{t+1}|X^{(K)}_t) - h(X^{(I)}_{t+1}|X^{(K)}_t,X^{(J)}_t)],$$ provided that the limit exists\u00a0. Since, in general, $H(X|Y)-H(X|Y,Z)=I(X;Z|Y)$, where the latter defines the conditional mutual information between $X$ and $Z$ conditioned on $Y$, it follows from the nonnegativity of conditional mutual information that CSE is nonnegative. When $K=\\varnothing$, we omit the conditioning part and simply use the notation $C_{J\\rightarrow I}$. Notice that if $K=I$, CSE reduces to TE. On\u00a0the other hand, CSE generalizes TE by allowing $K$ to be an arbitrary set of components.\n\n## Optimal Causation Entropy Principle\n\nIn our recent work\u00a0, we revealed the equivalence between the problem of identifying the causal components and the optimization of CSE. In particular, we proved that for an arbitrary component $i$, its\u00a0unique set of causal components $N_i$ is the *minimal set of components that maximizes CSE*. 
Given the collection of sets (of components) with maximal CSE with respect to component $i$, $$\\label{eq:optimalcse1}\n \\mathcal{K}_i=\\{K\\subset\\mathcal{V}|C_{K\\rightarrow i}\\geq C_{K'\\rightarrow i}~\\mbox{for any}~K'\\subset\\mathcal{V} \\},\\vspace{-3pt}$$``{=html} it was shown that $N_i$ is the unique set in $\\mathcal{K}_i$ with minimal cardinality, *i.e.*, $$\\label{eq:optimalcse2}\n N_i = \\cap_{K\\in\\mathcal{K}_i}K = {\\operatorname{argmin}}_{K\\in\\mathcal{K}_i}|K|.$$ We refer to this minimax principle as the oCSE principle\u00a0.\n\n## Computational Causality Inference\n\nBased on the oCSE principle, we developed two algorithms whose joint sequential application allows the inference of causal relationships within a system\u00a0. The goal of these algorithms is to effectively and efficiently identify for a given component $i$ the direct causal set of components $N_i$. The first algorithm aggregatively identifies the potential causal components of $i$. The outcome is a set of components $M_i$ that includes $N_i$ as its subset, with possibly additional components. These additional components are then progressively removed by applying the second algorithm.\n\nFor a given component $i$, the first algorithm, referred to as aggregative discovery, starts by selecting a component $k_1$ that maximizes CSE, *i.e.*, $$k_1={\\operatorname{argmax}}_{k\\in\\mathcal{V}}C_{k\\rightarrow i}.$$ Then, at each step $j$ ($j=1,2,\\dots$), a new component $k_{j+1}$ is identified among the rest of the components to maximize the CSE conditioned on the previously selected components: $$k_{j+1}=\\underset{k\\in\\mathcal{V}\/\\{k_1,k_2,\\dots,k_j\\}}{\\operatorname{argmax}}C_{k\\rightarrow i|\\{k_1,k_2,\\dots,k_j\\}}.\n%k_{j+1}={\\operatorname{argmax}}_{k\\in\\mathcal{V}-\\{k_1,k_2,\\dots,k_j\\}}C_{k\\rightarrow i|\\{k_1,k_2,\\dots,k_j\\}}.$$ Recall that CSE is nonnegative. The above iterative process is terminated when the corresponding maximum CSE equals zero, *i.e.*, when: $$\\label{eq:maxcsezero}\n \\underset{k\\in\\mathcal{V}\/\\{k_1,k_2,\\dots,k_j\\}}\\max C_{k\\rightarrow i|\\{k_1,k_2,\\dots,k_j\\}} = 0,$$ and the outcome is the set of components $M_i=\\{k_1,k_2,\\dots,k_j\\}\\supset N_i$.\n\nNext, to remove non-causal components (including indirect and spurious ones) that are in $M_i$, but not in $N_i$, we employ the second algorithm, referred to as progressive removal. A component $k_j$ in $M_i$ is removed when: $$\\label{eq:csezero}\n C_{k_j\\rightarrow i|M_i\/\\{k_j\\}}=0,$$ and $M_i$ is updated accordingly[^1]. After removing all such components, the resulting set is identified as $N_i$.\n\n## Estimation of CSE in Practice\n\nIn practice, causation entropy $C_{J\\rightarrow I|K}$ needs to be *estimated* from data. We define the Gaussian estimator of causation entropy as: $$\\label{eq:cseest}\n \\widehat{C}^{\\mbox{\\scriptsize (Gaussian)}}_{J\\rightarrow I|K} \\equiv \\frac{1}{2}\\log\n \\left(\\frac{\\operatorname{det}\\left[\\widehat\\Phi(0)_{II}-\\widehat\\Phi(1)_{IK}\\widehat\\Phi(0)_{KK}^{-1}\\widehat\\Phi(1)_{IK}^\\top\\right]}\n {\\operatorname{det}\\left[\\widehat\\Phi(0)_{II}-\\widehat\\Phi(1)_{I,K\\cup J}\\widehat\\Phi(0)_{K\\cup J,K\\cup J}^{-1}\\widehat\\Phi(1)_{I,K\\cup J}^\\top\\right]}\\right).$$ Here: $$\\Phi(\\tau)_{IJ}\\equiv\\operatornamewithlimits{Cov}(X^{(I)}_{t+\\tau},X^{(J)}_t)\\vspace{3pt}$$ denotes a covariance matrix where $\\tau\\in\\{0,1\\}$, and $\\widehat\\Phi(\\tau)_{IJ}$ denotes the corresponding sample covariance matrix estimated from the data\u00a0. 
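To make the two preceding subsections concrete, the following is a minimal sketch, not the authors' code, of the Gaussian estimator in the equation above together with the aggregative discovery and progressive removal steps. It assumes a time series stored as a `numpy` array of shape $(T,n)$, uses a fixed threshold `eps` as a stand-in for the permutation test described next, and all function and variable names are illustrative.

```python
import numpy as np

def lagged_cov(x):
    """Sample covariances Phi(0) = Cov(X_t, X_t) and Phi(1) = Cov(X_{t+1}, X_t)
    from a time series x of shape (T, n)."""
    xc = x - x.mean(axis=0)
    T = xc.shape[0]
    phi0 = xc.T @ xc / (T - 1)
    phi1 = xc[1:].T @ xc[:-1] / (T - 2)
    return phi0, phi1

def cse_gaussian(phi0, phi1, I, J, K):
    """Gaussian estimate of the causation entropy C_{J -> I | K};
    I, J, K are lists of component indices."""
    I, J, K = list(I), list(J), list(K)
    KJ = K + [j for j in J if j not in K]
    def residual_cov(cond):
        # Covariance of X^{(I)}_{t+1} after conditioning on X^{(cond)}_t.
        if not cond:
            return phi0[np.ix_(I, I)]
        a = phi1[np.ix_(I, cond)]
        b = phi0[np.ix_(cond, cond)]
        return phi0[np.ix_(I, I)] - a @ np.linalg.solve(b, a.T)
    return 0.5 * np.log(np.linalg.det(residual_cov(K))
                        / np.linalg.det(residual_cov(KJ)))

def infer_parents(x, i, eps=1e-3):
    """oCSE-style inference of N_i from data x of shape (T, n):
    aggregative discovery followed by progressive removal.  The fixed
    threshold eps stands in for the permutation test of the paper."""
    phi0, phi1 = lagged_cov(x)
    n = x.shape[1]
    M = []
    while True:  # aggregative discovery (candidates include i itself)
        rest = [k for k in range(n) if k not in M]
        if not rest:
            break
        gains = [cse_gaussian(phi0, phi1, [i], [k], M) for k in rest]
        if max(gains) <= eps:
            break
        M.append(rest[int(np.argmax(gains))])
    for k in list(M):  # progressive removal of indirect/spurious components
        if cse_gaussian(phi0, phi1, [i], [k], [m for m in M if m != k]) <= eps:
            M.remove(k)
    return M
```
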
The estimation $\\widehat{C}^{\\mbox{\\scriptsize (Gaussian)}}_{J\\rightarrow I|K}\\approx C_{J\\rightarrow I|K}$ if: (i) the underlying random variables are Gaussians; and (ii) the amount of data is sufficient for the relevant sample covariances to be close to their true values. When the underlying random variables are non-Gaussian, an efficient estimator for CSE is yet to be fully developed. As previously pointed out, binning-related estimators, although conceptually simple, are generally inefficient, unless the sample space is low-dimensional\u00a0. To gain efficiency and to minimize bias for a large system where the sample space is high-dimensional, an\u00a0idea would be to build upon the $k$-nearest neighbor estimators of entropic measures\u00a0.\n\nRegardless of the method that is being adopted for the estimation of $C_{J\\rightarrow I|K}$, the estimated value is unlikely to be exactly zero due to limited data and numerical precision. In practice, it is necessary to decide whether or not the estimated quantity should be regarded as zero. Such a decision impacts the termination criterion Equation\u00a0 in the aggregative discovery algorithm and determines which components need to be removed based on Equation\u00a0 in the progressive removal algorithm. As discussed in , such a decision problem can be addressed via a nonparametric statistical test, called the permutation test, as described below.\n\nFor given time series data $\\{x^{(i)}_t\\}$ ($i=1,2,\\dots,n;~t=1,2,\\dots,T$), let $\\widehat{C}_{J\\rightarrow I|K}$ denote the estimated value of $C_{J\\rightarrow I|K}$. Based on the set of components $J$ and a permutation function $\\pi$ on the set of integers from one to $T$, the corresponding permuted time series $\\{y^{(i)}_t\\}$ is defined as: $$y^{(i)}_t=\n \\begin{cases}\n x^{(i)}_{\\pi(t)} & \\mbox{if $i\\in J$,}\\\\\n x^{(i)}_t & \\mbox{if $i\\notin J$}.\n \\end{cases}$$ To apply the permutation test, we generate a number of randomly permuted time series (the number will be denoted by $r$). We then compute the causation entropy from $J$ to $I$ conditioned on $K$ for each permuted time series to obtain $r$ values of the estimates, which are consequently used to construct an empirical cumulative distribution $F$ as: $$F(C)\\equiv\\frac{1}{r}\\big|\\{\\widehat{C}^{(s)}_{J\\rightarrow I|K}:\\widehat{C}^{(s)}_{J\\rightarrow I|K}\\leq C,~1\\leq s\\leq r\\}\\big|,$$ where $\\widehat{C}^{(s)}_{J\\rightarrow I|K}$ are estimates from the permuted time series with $1\\leq s\\leq r$ and $|\\cdot|$ denotes the cardinality of a set. Finally, with the null hypothesis that $C_{J\\rightarrow I|K}=0$, we regard $C_{J\\rightarrow I|K}$ as strictly positive if and only if: $$F(\\widehat{C}_{J\\rightarrow I|K})>\\theta,\\vspace{-3pt}$$ where $0<(1-\\theta)<1$ is the prescribed significance level. In other words, the null hypothesis is rejected at level $(1-\\theta)$ if the above inequality holds. Therefore, the permutation test relies on two input parameters: (i) the number of random permutations $r$; and (ii) the significance level $(1-\\theta)$. In practice, the increase of $r$ improves the accuracy of the empirical distribution at the expense of additional computational cost. A reasonable balance is often achieved when $10^{3}\\lesssim r\\lesssim10^4$\u00a0. 
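A minimal sketch of this permutation test, under the same assumptions as the previous snippet (illustrative names, `numpy` only, reusing `lagged_cov` and `cse_gaussian` from above):

```python
import numpy as np

def permutation_test(x, I, J, K, r=2000, theta=0.99, rng=None):
    """Decide whether the estimated C_{J -> I | K} should be treated as
    strictly positive.  Reuses lagged_cov and cse_gaussian from the
    previous sketch; I, J, K are lists of component indices."""
    rng = np.random.default_rng() if rng is None else rng
    phi0, phi1 = lagged_cov(x)
    c_hat = cse_gaussian(phi0, phi1, I, J, K)
    null = np.empty(r)
    for s in range(r):
        y = x.copy()
        y[:, J] = x[rng.permutation(x.shape[0])][:, J]  # permute only columns in J
        p0, p1 = lagged_cov(y)
        null[s] = cse_gaussian(p0, p1, I, J, K)
    # Empirical CDF F evaluated at the unpermuted estimate; reject C = 0
    # at significance level (1 - theta) if it exceeds theta.
    return np.mean(null <= c_hat) > theta
```
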
The value of $\\theta$ sets a lower bound on the false positive ratio and should be chosen to be close to one (for example, $\\theta=99\\%$)\u00a0.\n\n# Extracting Stochastic Time Series from Deterministic Orbits\n\nBiological systems can exhibit both regular and chaotic features\u00a0. For instance, a healthy cardiac rhythm is characterized by a chaotic time series, whereas a pathological rhythm often exhibits regular dynamics\u00a0. Therefore, a decrease of the chaoticity of the cardiac rhythm is an alarming clinical signature. A similar connection between the regularity of the time series and pathology is observed in spontaneous neuronal bursting in the brain\u00a0. When a system settles into a periodic or equilibrium state, it becomes nearly impossible to infer the coupling structure among the variables, as the system is not generating any information to be utilized for inference. To overcome this difficulty, we propose to apply small stochastic perturbations to the system while in an equilibrium state (this is equivalent to adding dynamic noise to a system to facilitate coupling inference, as shown in ). Then, we measure the system's response over short time intervals. Finally, we follow the oCSE principle and apply the aggregative discovery and progressive removal algorithms to the measured data to infer the couplings among the variables.\n\n## Dynamical System and Equilibrium States\n\nConsider a continuous dynamical system: $$\\label{eq:sys1}\n dx\/dt = f(x),$$ where $x=[x_1,x_2,\\dots,x_n]^\\top\\in\\mathbb{R}^n$ is the $n$-dimensional state variable, the symbol $``\\top\"$ denotes transpose, and $f=[f_1,f_2,\\dots,f_n]^\\top:\\mathbb{R}^n\\rightarrow\\mathbb{R}^n$ is a smooth vector field, which models the dynamic rules of the system. A trajectory of the system\u00a0 is a solution $x(t)$ to the differential equation, Equation\u00a0, with a given initial condition $x(0)=x_0$.\n\nAn equilibrium of the system is a state $x^*$, such that $f(x^*)=0$. When a system reaches an equilibrium, the time evolution of the state ceases. An equilibrium $x^*$ is called stable if nearby trajectories approach $x^*$ forward in time, *i.e.*, there exists a constant $\\rho>0$, such that $x(t)\\xrightarrow{t\\rightarrow\\infty}x^*$ whenever $\\|x_0-x^*\\|<\\rho$, where $\\|\\cdot\\|$ denotes the standard Euclidean norm. Otherwise, the equilibrium is called unstable.\n\n## Response of the System to Stochastic Perturbations\n\nTo gain information about the coupling structure of a system, it is necessary to apply external perturbations to \"knock\" the system out of an equilibrium state and observe how it responds to these perturbations. Suppose that we apply and record a sequence of random perturbations to the system in such a manner that before each perturbation, the system is given sufficient time to evolve back to its stable equilibrium. In addition, the response of the system is measured shortly after each perturbation, but before the system reaches the equilibrium again. Denote the stable equilibrium of interest as $x^*$; we\u00a0propose to repeatedly apply the following steps.\n\nStep 1. Allow the system to (spontaneously) reach $x^*$.\n\nStep 2. At time $t$, apply and record a random perturbation $\\xi$ to the system, *i.e.*, $x(t)=x^*+\\xi$.\n\nStep 3. 
At time $t+\\Delta{t}$, measure the rate of response, defined as $\\eta=[x(t+\\Delta{t})-x^*-\\xi]\/\\Delta{t}$.\n\nRepeated application of these steps $L$ times results in a collection of perturbations, denoted as $\\{\\xi_\\ell\\}$, and rates of response, denoted as $\\{\\eta_\\ell\\}$, where $\\ell=1,2,\\dots,L$. Here, each perturbation $\\xi_\\ell$ is assumed to be drawn independently from the same multivariate Gaussian distribution with zero mean and covariance matrix $\\sigma^2\\mathbb{I}$. For a given equilibrium $x^*$, in addition to $L$\u2014the number of times the perturbations are applied\u2014there are two more parameters in the stochastic perturbation process: the sample frequency defined as $1\/\\Delta{t}$ and the variance of perturbation defined as $\\sigma^2$. To ensure that the perturbation approximates the linearized dynamics of the system, we require that $1\/\\Delta{t}\\gg 1$ and $\\sigma\\ll 1$. The influence of these parameters will be studied in the next section with a concrete example.\n\nWe remark here that the choice of Gaussian distribution is a practical rather than a conceptual one. In theory, any multivariate distribution can be used to generate the perturbation vector $\\xi_\\ell$ as long as the component-wise distributions of $\\xi^{(i)}_\\ell$ and $\\xi^{(j)}_k$ are identical and independent whenever $i\\neq{j}$ or $\\ell\\neq k$. In practice, since the effectiveness of any information theoretic method (including the one proposed here) depends on the estimation of entropies, choosing the perturbations to be Gaussian greatly improves the reliability of estimation and, thus, the accuracy of the inference. This is because the entropy of Gaussian variables depends only on covariances, rendering its estimation relatively straightforward\u00a0.\n\nNote that each perturbation $\\xi_\\ell$ and its response $\\eta_\\ell$ are related through the underlying nonlinear differential equation $dx\/dt=f(x)$, where the nonlinearity is encoded in the function $f(x)$, which is assumed to be unknown. For an equilibrium $x^*$, the dynamics of nearby trajectories can be approximated by its linearized system, as follows. Consider a state $x\\approx x^*$ and define the new variable $\\delta{x}=x-x^*$. To the first order, the time evolution of $\\delta x$ can be approximated by the following linear system: $$\\label{eq:vareq}\n d(\\delta{x})\/dt = Df(x^*)\\delta{x},$$ where $Df(\\cdot)$ is the $n$-by-$n$ Jacobian matrix of $f$, defined as $[Df(x)]_{ij}=\\partial f_i\/\\partial x_j$. A sufficient condition for $x^*$ to be stable is that all eigenvalues of $Df(x^*)$ must have negative real parts. From Equation\u00a0, with the additional assumption that $\\Delta{t}\\ll 1$, the relationship between the perturbation and response is approximated by the following equation: $$\\label{eq:linear1}\n \\eta_\\ell= Df(x^*)\\xi_\\ell.$$ Note that since $\\xi_\\ell$ is a multivariate normal random variable and $Df(x^*)$ is a constant matrix, the variable $\\eta_\\ell$ is also (approximately) a multivariate normal random variable. Equation\u00a0 therefore represents a drive-response type of Gaussian process.\n\n# Application to Synthetic Biology\n\n## The Repressilator\n\nCellular dynamics is centrally important in biology\u00a0. To describe and, to a certain extent, to\u00a0understand what happens in cells, fluctuating chemical concentrations and reaction rates need to be measured experimentally. 
However, constructing dynamic models that accurately reproduce the observed phenomena in cells is quite challenging. An alternative approach consists in engineering synthetic biological systems that follow prescribed dynamic rules. An important example is the so-called repressilator (or repression-driven oscillator) presented in . The repressilator is based upon three transcriptional repressors inserted into the *E. coli* bacteria with a plasmid. The three repressors, $lacl$, $tetR$ and $cl$, are related as follows: $lacl$ inhibits the transcription of the gene coding for $tetR$; $tetR$ inhibits the transcription of the gene coding for $cl$; $cl$ inhibits the transcription of the gene coding for $lacl$. In the absence of inhibition, each of the three proteins reaches a steady-state concentration resulting from a balance between its production and degradation rates within the bacterium. In the presence of cross-inhibition by the other two repressors, this network architecture potentially allows oscillations and other interesting dynamical behaviors and serves as a prototype of modeling the quorum sensing among bacteria species\u00a0.\n\nThe repressilator dynamics can be modeled by a system of coupled differential equations, which describe the rates of change for the concentration $p_{i}$ of each protein repressor and the concentration $m_{i}$ of its associated mRNA in their network, as: \u00a0 $$\\begin{aligned}\n\\label{eq:repressilator}\n\\begin{cases}\n\\dot{m}_1 = -m_1 + \\dfrac{\\alpha}{(1+p^n_3)}+\\alpha_0\\\\\n\\dot{m}_2 = -m_2 + \\dfrac{\\alpha}{(1+p^n_1)}+\\alpha_0\\\\\n\\dot{m}_3 = -m_3 + \\dfrac{\\alpha}{(1+p^n_2)}+\\alpha_0\\\\\n\\dot{p}_1 = -\\beta(p_1-m_1)\\\\\n\\dot{p}_2 = -\\beta(p_2-m_2)\\\\\n\\dot{p}_3 = -\\beta(p_3-m_3)\n\\end{cases}\n\\end{aligned}$$ where $m_1$ ($p_1$), $m_2$ ($p_2$) and $m_3$ ($p_3$) represent the mRNA (protein) concentration of the genes $lacl$, $tetR$ and $cl$, respectively. See Figure\u00a0a for a schematic representation of the system. Each ODE in Equation\u00a0 consists of positive terms modeling the production rate and a negative term representing degradation. There are four parameters in the ODEs in Equation\u00a0, namely: $\\beta$ is the ratio of the protein decay rate to the mRNA decay rate; $n$ is the so-called Hill coefficient and describes the cooperativity of the binding of repressor to promoter; $\\alpha _{0}$, the leakiness of the promoter, is the rate of transcription of mRNA in the presence of saturating concentration of the repressor; $\\alpha _{0}+\\alpha$ is the additional rate of transcription of mRNA in the absence of the inhibitor. Note that units of time and concentration in Equation\u00a0 have been rescaled in order to make these equations non-dimensional\u00a0.\n\nAs shown in , there is an extended region in the parameter space for which the system described in Equation\u00a0 exhibits a single stable equilibrium. 
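As an illustration of how the stochastic perturbation protocol of the previous section can be applied to this model, the sketch below integrates the repressilator equations with `scipy`, lets the trajectory settle to the stable equilibrium, and then collects the perturbation and response samples $\{\xi_\ell\}$ and $\{\eta_\ell\}$. The parameter values are those used in the numerical experiments reported below; everything else (function names, tolerances, the initial condition) is an illustrative assumption rather than the authors' implementation.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Repressilator parameters; these are the values used in the numerical
# experiments described below (n = 2, alpha_0 = 0, alpha = 10, beta = 100).
n_hill, alpha0, alpha, beta = 2, 0.0, 10.0, 100.0

def repressilator(t, x):
    m1, m2, m3, p1, p2, p3 = x
    return [-m1 + alpha / (1.0 + p3**n_hill) + alpha0,
            -m2 + alpha / (1.0 + p1**n_hill) + alpha0,
            -m3 + alpha / (1.0 + p2**n_hill) + alpha0,
            -beta * (p1 - m1),
            -beta * (p2 - m2),
            -beta * (p3 - m3)]

def perturb_and_respond(x_star, L=100, dt=1e-3, sigma=1e-2, seed=0):
    """Steps 1-3 of the perturbation protocol: from the stable equilibrium
    x_star, apply a Gaussian perturbation xi (variance sigma**2 = 1e-4) and
    record the rate of response eta = (x(t + dt) - x_star - xi) / dt."""
    rng = np.random.default_rng(seed)
    xi = rng.normal(0.0, sigma, size=(L, len(x_star)))
    eta = np.empty_like(xi)
    for ell in range(L):
        sol = solve_ivp(repressilator, (0.0, dt), x_star + xi[ell],
                        rtol=1e-9, atol=1e-12)
        eta[ell] = (sol.y[:, -1] - x_star - xi[ell]) / dt
    return xi, eta

# Step 1: let the unperturbed system settle; for these parameters the
# equilibrium is approximately (2, 2, 2, 2, 2, 2).
sol = solve_ivp(repressilator, (0.0, 200.0), [1.0, 0.0, 2.0, 0.0, 1.0, 0.0],
                rtol=1e-10, atol=1e-12)
x_star = sol.y[:, -1]
xi, eta = perturb_and_respond(x_star)
```

The pairs collected in `xi` and `eta` play the role of the perturbation and response variables that are mapped to $X^{(i)}_t$ and fed to the oCSE algorithms sketched earlier.
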
The Jacobian matrix at the equilibrium is: $$Df =\n\\begin{pmatrix}\n-1 & 0 & 0 & 0 & 0 & -\\frac{\\alpha np_3^{n-1}}{(1+p^n_3)^2} \\\\\n0 & -1 & 0 & -\\frac{\\alpha np_1^{n-1}}{(1+p^n_1)^2} & 0 & 0 \\\\\n0 & 0 & -1 & 0 & -\\frac{\\alpha np_2^{n-1}}{(1+p^n_2)^2} & 0 \\\\\n\\beta & 0 & 0 & -\\beta & 0 & 0 \\\\\n0 & \\beta & 0 & 0 & -\\beta & 0 \\\\\n0 & 0 & \\beta & 0 & 0 & -\\beta\\vspace{-3pt}\n \\end{pmatrix}.$$ \n**Problem statement:** The goal of coupling inference is to identify the location of the nonzero entries of the Jacobian through time series generated by the system near equilibrium.\n\n## Inference of Coupling Structure via the Repressilator Dynamics\n\nWe consider the repressilator dynamics as modeled by Equation\u00a0 with the parameters $n=2$, $\\alpha_0=0$, $\\alpha=10$ and $\\beta=100$. Under this setting, the system exhibits a single stable equilibrium\u00a0 at which $m_1=m_2=m_3=p_1=p_2=p_3=2$. Typical time series are shown in Figure\u00a0b. After the system settles at the equilibrium, we apply the stochastic perturbations as described in Section\u00a0 and obtain a time series of the perturbations $\\{\\xi_\\ell\\}$, as well as responses $\\{\\eta_\\ell\\}$. The goal is to identify, for each component (the mRNA or protein of a gene) in the system, the set of components that determine its dynamics (*i.e.*, the variables that appear on the right-hand side of each relation in Equation\u00a0).\n\nTo apply our algorithms according to the oCSE principle, we define the set of random variables $X^{(i)}_t$ ($i=1,2,\\dots,12; t=1,2,\\dots$) as: $$\\label{eq:maptoX}\nX^{(i)}_t=\n\\begin{cases}\n\\xi^{(i)}_t & \\mbox{if $1\\leq i\\leq 6$},\\\\\n\\eta^{(i-6)}_{t-1} & \\mbox{if $7\\leq i\\leq 12$}.\n\\end{cases}$$ The approximate relationship between the perturbation and response as in Equation\u00a0 can be expressed\u00a0as: $$\\label{eq:linear2}\n X^{(I)}_{t+1} = AX^{(J)}_t,\\vspace{3pt}$$ where $I=\\{7,8,\\dots,12\\}$, $J=\\{1,2,\\dots,6\\}$ and $A=Df(x^*)$. Since this equation defines a stochastic process that satisfies the Markov assumptions in Equation\u00a0, the oCSE principle applies. This implies that the direct couplings can be correctly inferred, at least in principle, by jointly applying the aggregative discovery and progressive removal algorithms.\n\nSince the perturbations $\\{\\xi_\\ell\\}$ and responses $\\{\\eta_\\ell\\}$ depend on the number of samples $L$, the rate of perturbation $1\/\\Delta{t}$ and the variance of perturbation $\\sigma^2$, so is the accuracy of the inference. Next, we explore the change in performance of our approach by varying these parameters. We use two quantities to measure the accuracy of the inferred coupling structure, namely, the false positive ratio $\\varepsilon_{+}$ and the false negative ratio $\\varepsilon_{-}$. 
Since our goal is to infer the structure rather than the weights of the couplings, we focus on the structure of the Jacobian matrix $Df(x^*)$, encoded in the binary matrix $B$, where: $$[B]_{ij}=\n\\begin{cases}\n1, & \\mbox{if $[Df(x^*)]_{ij}\\neq 0$},\\\\\n0, & \\mbox{otherwise}.\n\\end{cases}$$ On the other hand, applying the oCSE principle, the inferred direct coupling structure gives rise to the estimated binary matrix $\\widehat{B}$, where: $$[\\widehat{B}]_{ij}=\n\\begin{cases}\n1, & \\mbox{if $j$ is a direct causal component of $i$},\\\\\n0, & \\mbox{otherwise}.\\vspace{-6pt}\n\\end{cases}$$ Given matrices $B$ and $\\widehat{B}$, the false positive and false negative ratios are defined, respectively, as: $$\\label{eq:inferror}\n\\begin{cases}\n\\varepsilon_{+}\\equiv \\dfrac{\\mbox{number of $(i,j)$ pairs with $\\widehat{B}_{ij}=1$ and $B_{ij}=0$}}{\\mbox{number of $(i,j)$ pairs with $B_{ij}=0$}},\\vspace{0.1in}\\\\\n\\varepsilon_{-}\\equiv \\dfrac{\\mbox{number of $(i,j)$ pairs with $\\widehat{B}_{ij}=0$ and $B_{ij}=1$}}\n{\\mbox{number of $(i,j)$ pairs with $B_{ij}=1$}}.\n\\end{cases}$$ It follows that $0\\leq\\varepsilon_{+},\\varepsilon_{-}\\leq1$ and $\\varepsilon_{+}=\\varepsilon_{-}=0$ when exact (error-free) inference is achieved.\n\nFigure\u00a0a,b shows that both the false positives and false negatives converge as the number of samples increases. However, they converge to zero only if the rate of perturbation is sufficiently high. Figure\u00a0c,d supports this observation and, in addition, shows that in the high rate of perturbation regime, exact inference is achieved with a sufficient number of samples (in this case, $L\\sim 100$). The combined effects of $L$ and $1\/\\Delta{t}$ are shown in Figure\u00a0. In all simulations, the variation of the perturbation is set to be $\\sigma^2=10^{-4}$ and is found to have little effect on the resulting inference, provided that it is sufficiently small to keep the linearization in Equations\u00a0 and valid.\n\n# Discussion and Conclusions\n\n## Results Summary\n\nIn this paper, we considered the challenging problem of inferring the causal structure of complex systems from limited available data (enjoy Figure\u00a0 for a cartoon depiction of the concept of causality during a soccer game). Specifically, we presented an application of the so-called oCSE principle to identify the coupling structure of a synthetic biological system, the repressilator. First, we briefly reviewed the main points of the oCSE principle (Equations\u00a0 and ), which states that for an arbitrary component $i$ of a complex system, its unique set of causal components $N_{i}$ is the minimal set of components that maximizes CSE (causation entropy, defined in Equation\u00a0 is a generalization of transfer entropy). We strengthen in this work our claim that CSE is a suitable information-theoretic measure for reliable statistical inferences, since it takes into account both direct and indirect influences that appear in the form of information flows between nodes of networks underlying complex systems. We also devoted some attention to the implementation of the oCSE principle. This task is accomplished by means of the joint sequential application of two algorithms, aggregative discovery and progressive removal, respectively. 
Second, having introduced the main theoretical and computational frameworks, we used a stochastic perturbation approach to extract time series data approximating a Gaussian process when the model system\u2014the repressilator (see Equation\u00a0)\u2014reaches an equilibrium configuration. We then applied the above-mentioned algorithms implementing the oCSE principle in order to infer the coupling structure of the model system from the observed data. Finally, we numerically showed that the success rate of our causal entropic inferences not only improves with the amount of available measured data (Figure\u00a0a), but it also increases with a higher frequency of sampling (Figure\u00a0b).\n\nOne especially important feature of our oCSE-based causality inference approach is that it is immune (in principle) to false positives. When data is sufficient, false positives are eliminated by sequential joint application of the aggregative discovery and progressive removal algorithms, as well as by raising the threshold $\\theta$ used in the permutation test (see for a more detailed investigation of this point). In contrast, any pairwise causality measure without additional appropriate conditioning will in principle be susceptible to false positives, resulting in too many connections and sometimes even implying that everything causes everything. Such a limitation is intrinsic and cannot be overcome merely by gathering more data\u00a0. On the other hand, false negatives are less common in practice and are usually caused by ineffective statistical estimation or insufficient data.\n\n## oCSE for General Stochastic Processes\n\nWe presented the oCSE principle for stochastic processes under the Markov assumptions stated in Equation\u00a0. From an inference point of view, each and every one of these assumptions has a well-defined interpretation. For instance, the temporal Markov condition implies the stationarity of causal dependences between nodes. The loss of temporal stationarity could be addressed by partitioning the time series data into stationary segments and then performing time-dependent inferences. Clearly, such inferences would require extra care. Furthermore, the loss of the spatially and\/or faithfully Markov condition would imply that the set of nodes that directly influence any given node of the network describing the complex system is no longer minimal and unique. Causality inference in this case becomes an ill-posed problem. These issues will be formally addressed in a forthcoming work.\n\nNote that a finite $k$-th order Markov process $\\{X_t\\}$ can always be converted into a first-order (memoryless) one $\\{Z_t\\}$ by lifting, or in other words, defining new random variables $Z_t=(X_{t-k+1},X_{t-k+2},\\dots,X_t)$\u00a0. In this regard, the oCSE principle extends naturally to an arbitrary finite-order Markov process. On the other hand, for a general stationary stochastic process that is not necessarily Markov (*i.e.*, considering a process with potentially infinite memory), there might exist an infinite number of components from the past that causally influence the current state of a given component. However, under an assumption of vanishing (or fading) memory, such influences decay rapidly as a function of the time lag, and consequently, the process itself can be approximated by a finite-order Markov chain\u00a0. 
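As a small illustration of the lifting construction mentioned above (again an illustrative sketch, not code from the paper), a $k$-th order series can be turned into a first-order one by stacking delayed copies of the state:

```python
import numpy as np

def lift(x, k):
    """Lift a k-th order Markov series x of shape (T, n) to a first-order
    one: row r of the output is (X_r, X_{r+1}, ..., X_{r+k-1}), i.e. the
    stacked state Z_t = (X_{t-k+1}, ..., X_t) for t = k-1, ..., T-1."""
    T, n = x.shape
    return np.hstack([x[j:T - k + 1 + j] for j in range(k)])
```
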
We plan to leave such generalizations for forthcoming investigations.\n\n**Acknowledgments** We thank Samuel Stanton from the Army Research Office (ARO) Complex Dynamics and Systems Program for his ongoing and continuous support. This work was funded by ARO Grant No. W911NF-12-1-0276.\n\n**Author Contributions** Jie Sun and Erik M. Bollt designed and supervised the research. Jie Sun and Carlo Cafaro performed the analytical calculations and the numerical simulations. All authors contributed to the writing of the paper.\n\n**Conflicts of Interest** The authors declare no conflict of interest.\n\n[^1]: An alternative way of removing non-causal components is to test conditioning on subsets of $M_i$ with increasing cardinality, potentially reducing the dimensionality of sample space at the expense of increased computational burden. This is similar to the PC-algorithm originally proposed in Ref.\u00a0 and successfully employed in Ref.\u00a0, with the key difference that here the enumeration of conditioning subsets only needs to be performed for $M_i$ instead of for the entire system $\\mathcal{V}$.","meta":{"dup_signals":{"dup_doc_count":37,"dup_dump_count":34,"dup_details":{"curated_sources":2,"2018-17":1,"2018-09":1,"2017-47":1,"2017-39":1,"2017-30":1,"2017-22":1,"2017-17":1,"2017-09":1,"2017-04":1,"2016-50":1,"2016-44":1,"2016-40":1,"2016-36":1,"2016-30":1,"2016-26":1,"2016-22":1,"2016-18":1,"2016-07":1,"2015-48":1,"2015-35":1,"2015-32":1,"2015-27":1,"2015-22":1,"2015-14":1,"2014-52":1,"2014-49":2,"2014-42":1,"2014-41":2,"2020-45":1,"2015-18":1,"2015-11":1,"2015-06":1,"2017-13":1}},"filename":"out\/1411.5350_extract_CSE_Entropy_arXiv.tex.md"},"subset":"arxiv"} +{"text":"abstract: These pages provide you with an example of the layout and style for 100% reproduction which we wish you to adopt during the preparation of your paper. This is the output from the LaTeX document class you requested.\naddress: Mathematics and Computer Science Section, Elsevier Science B.V., \nP.O. Box 103, 1000 AC Amsterdam, The Netherlands; Economics Department, University of Winchester, \n2 Finch Road, Winchester, Hampshire P3L T19, United Kingdom\nauthor: P. de Groot \n[^1], R. de Maas[^2], X.-Y. Wang \nand A. Sheffield[^3]\ntitle: Elsevier instructions for the preparation of a 2-column format camera-ready paper in LaTeX\n\n# FORMAT\n\nText should be produced within the dimensions shown on these pages: each column 7.5 cm wide with 1 cm middle margin, total width of 16 cm and a maximum length of 19.5 cm on first pages and 21 cm on second and following pages. The LaTeX document class uses the maximum stipulated length apart from the following two exceptions (i) LaTeX does not begin a new section directly at the bottom of a page, but transfers the heading to the top of the next page; (ii) LaTeX never (well, hardly ever) exceeds the length of the text area in order to complete a section of text or a paragraph. Here are some references: .\n\n## Spacing\n\nWe normally recommend the use of 1.0 (single) line spacing. However, when typing complicated mathematical text LaTeX automatically increases the space between text lines in order to prevent sub- and superscript fonts overlapping one another and making your printed matter illegible.\n\n## Fonts\n\nThese instructions have been produced using a 10 point Computer Modern Roman. Other recommended fonts are 10 point Times Roman, New Century Schoolbook, Bookman Light and Palatino.\n\n# PRINTOUT\n\nThe most suitable printer is a laser or an inkjet printer. 
A dot matrix printer should only be used if it possesses an 18 or 24 pin printhead (\"letter-quality\").\n\nThe printout submitted should be an original; a photocopy is not acceptable. Please make use of good quality plain white A4 (or US Letter) paper size. *The dimensions shown here should be strictly adhered to: do not make changes to these dimensions, which are determined by the document class*. The document class leaves at least 3\u00a0cm at the top of the page before the head, which contains the page number.\n\nPrinters sometimes produce text which contains light and dark streaks, or has considerable lighting variation either between left-hand and right-hand margins or between text heads and bottoms. To achieve optimal reproduction quality, the contrast of text lettering must be uniform, sharp and dark over the whole page and throughout the article.\n\nIf corrections are made to the text, print completely new replacement pages. The contrast on these pages should be consistent with the rest of the paper as should text dimensions and font sizes.\n\n# TABLES AND ILLUSTRATIONS\n\nTables should be made with LaTeX; illustrations should be originals or sharp prints. They should be arranged throughout the text and preferably be included *on the same page as they are first discussed*. They should have a self-contained caption and be positioned in flush-left alignment with the text margin within the column. If they do not fit into one column they may be placed across both columns (using `\\begin{table*}` or `\\begin{figure*}` so that they appear at the top of a page).\n\n## Tables\n\nTables should be presented in the form shown in Table\u00a0. Their layout should be consistent throughout.\n\n```latex\n\\begin{table*}[htb]\\caption{The next-to-leading order (NLO) results\n{\\em without} the pion field.}\n\\label{table:1}\n\\newcommand{\\m}{\\hphantom{$-$}}\n\\newcommand{\\cc}[1]{\\multicolumn{1}{c}{#1}}\n\\renewcommand{\\tabcolsep}{2pc} % enlarge column spacing\n\\renewcommand{\\arraystretch}{1.2} % enlarge line spacing\n\\begin{tabular}{@{}lllll}\n\\hline\n$\\Lambda$ (MeV) & \\multicolumn{1}{c}{$140$} & \\multicolumn{1}{c}{$150$} & \\multicolumn{1}{c}{$175$} & \\multicolumn{1}{c}{$200$} \\\\\n\\hline\n$r_d$ (fm) & \\hphantom{$-$}1.973 & \\hphantom{$-$}1.972 & \\hphantom{$-$}1.974 & \\hphantom{$-$}1.978 \\\\\n$Q_d$ ($\\mbox{fm}^2$) & \\hphantom{$-$}0.259 & \\hphantom{$-$}0.268 & \\hphantom{$-$}0.287 & \\hphantom{$-$}0.302 \\\\\n$P_D$ (\\%) & \\hphantom{$-$}2.32 & \\hphantom{$-$}2.83 & \\hphantom{$-$}4.34 & \\hphantom{$-$}6.14 \\\\\n$\\mu_d$ & \\hphantom{$-$}0.867 & \\hphantom{$-$}0.864 & \\hphantom{$-$}0.855 & \\hphantom{$-$}0.845 \\\\\n$\\mathcal{M}_{\\mathrm{M1}}$ (fm) & \\hphantom{$-$}3.995 & \\hphantom{$-$}3.989 & \\hphantom{$-$}3.973 & \\hphantom{$-$}3.955 \\\\\n$\\mathcal{M}_{\\mathrm{GT}}$ (fm) & \\hphantom{$-$}4.887 & \\hphantom{$-$}4.881 & \\hphantom{$-$}4.864 & \\hphantom{$-$}4.846 \\\\\n$\\delta_{\\mathrm{1B}}^{\\mathrm{VP}}$ (\\%) \n & $-0.45$ & $-0.45$ & $-0.45$ & $-0.45$ \\\\\n$\\delta_{\\mathrm{1B}}^{\\mathrm{C2:C}}$ (\\%) \n & \\hphantom{$-$}0.03 & \\hphantom{$-$}0.03 & \\hphantom{$-$}0.03 & \\hphantom{$-$}0.03 \\\\\n$\\delta_{\\mathrm{1B}}^{\\mathrm{C2:N}}$ (\\%) \n & $-0.19$ & $-0.19$ & $-0.18$ & $-0.15$ \\\\\n\\hline\n\\end{tabular}\\\\[2pt]\nThe experimental values are given in ref. 
\\cite{Eato75}.\n\\end{table*}\n```\n```latex\n\\begin{sidewaystable}\n\\caption{The next-to-leading order (NLO) results\n{\\em without} the pion field.}\n\\label{table:2}\n\\newcommand{\\m}{\\hphantom{$-$}}\n\\newcommand{\\cc}[1]{\\multicolumn{1}{c}{#1}}\n\\renewcommand{\\arraystretch}{1.2} % enlarge line spacing\n\\begin{tabular*}{\\textheight}{@{\\extracolsep{\\fill}}lllllllllllll}\n\\hline\n& $\\Lambda$ (MeV) & \\multicolumn{1}{c}{$140$} & \\multicolumn{1}{c}{$150$} & \\multicolumn{1}{c}{$175$} & \\multicolumn{1}{c}{$200$} & \\multicolumn{1}{c}{$225$} & \\multicolumn{1}{c}{$250$} &\n\\multicolumn{1}{c}{Exp.} & \\multicolumn{1}{c}{$v_{18}$~\\cite{v18}} & \\\\\n\\hline\n%b\n & $r_d$ (fm) & \\hphantom{$-$}1.973 & \\hphantom{$-$}1.972 & \\hphantom{$-$}1.974 & \\hphantom{$-$}1.978 & \\hphantom{$-$}1.983 & \\hphantom{$-$}1.987 & 1.966(7) & \\hphantom{$-$}1.967 & \\\\[2pt]\n & $Q_d$ ($\\mbox{fm}^2$) & \\hphantom{$-$}0.259 & \\hphantom{$-$}0.268 & \\hphantom{$-$}0.287 & \\hphantom{$-$}0.302 & \\hphantom{$-$}0.312 & \\hphantom{$-$}0.319 & 0.286 & \\hphantom{$-$}0.270 & \\\\[2pt]\n & $P_D$ (\\%) & \\hphantom{$-$}2.32 & \\hphantom{$-$}2.83 & \\hphantom{$-$}4.34 & \\hphantom{$-$}6.14 & \\hphantom{$-$}8.09 & \\hphantom{$-$}9.90 & $-$ & \\hphantom{$-$}5.76 & \\\\[2pt]\n & $\\mu_d$ & \\hphantom{$-$}0.867 & \\hphantom{$-$}0.864 & \\hphantom{$-$}0.855 & \\hphantom{$-$}0.845 & \\hphantom{$-$}0.834 & \\hphantom{$-$}0.823 & 0.8574 & \\hphantom{$-$}0.847 & \\\\[5pt]\n & $\\mathcal{M}_{\\mathrm{M1}}$ (fm) & \\hphantom{$-$}3.995 & \\hphantom{$-$}3.989 & \\hphantom{$-$}3.973 & \\hphantom{$-$}3.955 & \\hphantom{$-$}3.936 & \\hphantom{$-$}3.918 & $-$ & \\hphantom{$-$}3.979 & \\\\[5pt]\n & $\\mathcal{M}_{\\mathrm{GT}}$ (fm) & \\hphantom{$-$}4.887 & \\hphantom{$-$}4.881 & \\hphantom{$-$}4.864 & \\hphantom{$-$}4.846 & \\hphantom{$-$}4.827 & \\hphantom{$-$}4.810 & $-$ & \\hphantom{$-$}4.859 & \\\\[2pt]\n & $\\delta_{\\mathrm{1B}}^{\\mathrm{VP}}$ (\\%) & $-0.45$ & $-0.45$ & $-0.45$ & $-0.45$ & $-0.45$ & $-0.44$ & $-$ & $-0.45$ & \\\\[2pt]\n & $\\delta_{\\mathrm{1B}}^{\\mathrm{C2:C}}$ (\\%) & \\hphantom{$-$}0.03 & \\hphantom{$-$}0.03 & \\hphantom{$-$}0.03 & \\hphantom{$-$}0.03 & \\hphantom{$-$}0.03 & \\hphantom{$-$}0.03 & $-$ & \\hphantom{$-$}0.03 & \\\\[2pt]\n & $\\delta_{\\mathrm{1B}}^{\\mathrm{C2:N}}$ (\\%) & $-0.19$ & $-0.19$ & $-0.18$ & $-0.15$ & $-0.12$ & $-0.10$ & $-$ & $-0.21$ & \\\\\n\\hline\n\\end{tabular*}\\\\[2pt]\nThe experimental values are given in ref. \\cite{Eato75}.\n\\end{sidewaystable}\n```\n\nHorizontal lines should be placed above and below table headings, above the subheadings and at the end of the table above any notes. Vertical lines should be avoided.\n\nIf a table is too long to fit onto one page, the table number and headings should be repeated above the continuation of the table. For this you have to reset the table counter with `\\addtocounter{table}{-1}`. Alternatively, the table can be turned by $90^\\circ$ ('landscape mode') and spread over two consecutive pages (first an even-numbered, then an odd-numbered one) created by means of `\\begin{table}[h]` without a caption. To do this, you prepare the table as a separate LaTeX document and attach the tables to the empty pages with a few spots of suitable glue.\n\n## Useful table packages\n\nModern LaTeX comes with several packages for tables that provide additional functionality. Below we mention a few. See the documentation of the individual packages for more details. 
The packages can be found in LaTeX's `tools` directory.\n\n`array`\n\n: Various extensions to LaTeX's `array` and `tabular` environments.\n\n`longtable`\n\n: Automatically break tables over several pages. Put the table in the `longtable` environment instead of the `table` environment.\n\n`dcolumn`\n\n: Define your own type of column. Among others, this is one way to obtain alignment on the decimal point.\n\n`tabularx`\n\n: Smart column width calculation within a specified table width.\n\n`rotating`\n\n: Print a page with a wide table or figure in landscape orientation using the `sidewaystable` or `sidewaysfigure` environments, and many other rotating tricks. Use the package with the `figuresright` option to make all tables and figures rotate in clockwise. Use the starred form of the `sideways` environments to obtain full-width tables or figures in a two-column article.\n\n## Line drawings\n\nLine drawings may consist of laser-printed graphics or professionally drawn figures attached to the manuscript page. All figures should be clearly displayed by leaving at least one line of spacing above and below them. When placing a figure at the top of a page, the top of the figure should align with the bottom of the first text line of the other column.\n\nDo not use too light or too dark shading in your figures; too dark a shading may become too dense while a very light shading made of tiny points may fade away during reproduction.\n\nAll notations and lettering should be no less than 2\u2006mm high. The use of heavy black, bold lettering should be avoided as this will look unpleasantly dark when printed.\n\n## PostScript figures\n\nInstead of providing separate drawings or prints of the figures you may also use PostScript files which are included into your LaTeX file and printed together with the text. Use one of the packages from LaTeX's `graphics` directory: `graphics`, `graphicx` or `epsfig`, with the `\\usepackage` command, and then use the appropriate commands (`\\includegraphics` or `\\epsfig`) to include your PostScript file.\n\nThe simplest command is: `\\includegraphics{file}`, which inserts the PostScript file `file` at its own size. The starred version of this command: `\\includegraphics*{file}`, does the same, but clips the figure to its bounding box.\n\nWith the `graphicx` package one may specify a series of options as a key\u2013value list, e.g.:\n\n| |\n|:----------------------------------------------|\n| `\\includegraphics[width=15pc]{file}` |\n| `\\includegraphics[height=5pc]{file}` |\n| `\\includegraphics[scale=0.6]{file}` |\n| `\\includegraphics[angle=90,width=20pc]{file}` |\n\nSee the file `grfguide`, section \"Including Graphics Files\", of the `graphics` distribution for all options and a detailed description.\n\nThe `epsfig` package mimicks the commands familiar from the package with the same name in LaTeX2.09. A PostScript file `file` is included with the command `\\psfig{file=file}`.\n\nGrey-scale and colour photographs cannot be included in this way, since reproduction from the printed CRC article would give insufficient typographical quality. See the following subsections.\n\n## Black and white photographs\n\nPhotographs must always be sharp originals (*not screened versions*) and rich in contrast. They will undergo the same reduction as the text and should be pasted on your page in the same way as line drawings.\n\n## Colour photographs\n\nSharp originals (*not transparencies or slides*) should be submitted close to the size expected in publication. 
Charges for the processing and printing of colour will be passed on to the author(s) of the paper. As costs involved are per page, care should be taken in the selection of size and shape so that two or more illustrations may be fitted together on one page. Please contact the Author Support Department at Elsevier (E-mail: `firstname.lastname@example.com`) for a price quotation and layout instructions before producing your paper in its final form.\n\n# EQUATIONS\n\nEquations should be flush-left with the text margin; LaTeX ensures that the equation is preceded and followed by one line of white space. LaTeX provides the document class option `fleqn` to get the flush-left effect.\n\n$$H_{\\alpha\\beta}(\\omega) = E_\\alpha^{(0)}(\\omega) \\delta_{\\alpha\\beta} +\n \\langle \\alpha | W_\\pi | \\beta \\rangle$$\n\nYou need not put in equation numbers, since this is taken care of automatically. The equation numbers are always consecutive and are printed in parentheses flush with the right-hand margin of the text and level with the last line of the equation. For multi-line equations, use the `eqnarray` environment.\n\nFor complex mathematics, use the A-.1667em.5ex-.125emSmath package. This package sets the math indentation to a positive value. To keep the equations flush left, either load the `espcrc` package *after* the A-.1667em.5ex-.125emSmath package or set the command `\\mathindent=0pt` in the preamble of your article.\n\nReferences should be collected at the end of your paper. Do not begin them on a new page unless this is absolutely necessary. They should be prepared according to the sequential numeric system making sure that all material mentioned is generally available to the reader. Use `\\cite` to refer to the entries in the bibliography so that your accumulated list corresponds to the citations made in the text body.\n\nAbove we have listed some references according to the sequential numeric system .\n\n[^1]: Footnotes should appear on the first page only to indicate your present address (if different from your normal address), research grant, sponsoring agency, etc. These are obtained with the `'134thanks` command.\n\n[^2]: For following authors with the same address use the `'134addressmark` command.\n\n[^3]: To reuse an addressmark later on, label the address with an optional argument to the `'134address` command, e.g. `'134 address[MCSD]`, and repeat the label as the optional argument to the `'134addressmark` command, e.g. `'134addressmark[MCSD]`.","meta":{"dup_signals":{"dup_doc_count":28,"dup_dump_count":2,"dup_details":{"curated_sources":14,"unknown":14}},"filename":"out\/hep-lat0202011_extract_espcrc2.tex.md"},"subset":"arxiv"} +{"text":"abstract: We have developed a unified format for *phylogenetic placements*, that is, mappings of environmental sequence data (e.g. short reads) into a phylogenetic tree. We are motivated to do so by the growing number of tools for computing and post-processing phylogenetic placements, and the lack of an established standard for storing them. The format is lightweight, versatile, extensible, and is based on the JSON format which can be parsed by most modern programming languages. Our format is already implemented in several tools for computing and post-processing parsimony- and likelihood-based phylogenetic placements, and has worked well in practice. 
We believe that establishing a standard format for analyzing read placements at this early stage will lead to a more efficient development of powerful and portable post-analysis tools for the growing applications of phylogenetic placement.\nauthor: Frederick A Matsen; Noah G Hoffman; Aaron Gallagher; Alexandros Stamatakis\nbibliography: placefmt.bib\ntitle: A format for phylogenetic placements\n\n# Introduction\n\n\"Phylogenetic placement\" has become popular in the last several years as a way to gain an evolutionary understanding of a large collection of sequences. The input to a phylogenetic placement algorithm consists of a reference tree, a corresponding reference multiple sequence alignment, and a collection of query sequences. The output of a phylogenetic placement algorithm is a set of assignments of the query sequences to branches of the tree; there is at least one such assignment for each query. A query can be assigned to more than one branch on the reference tree to express placement uncertainty for that query sequence.\n\nPhylogenetic placement methods circumvent several problems associated with applying traditional phylogenetic algorithms to large, environmentally-derived sequence data. The computational burden is decreased compared to constructing a tree containing reference and query sequences *de novo*, resulting in algorithms that can place thousands to tens of thousands of query sequences per hour *and* per processing core into a reference phylogeny with one thousand taxa. Because computation is performed on each query sequence individually and independently, placement algorithms are also straightforward to parallelize. The relationships between the query sequences are not investigated. Hence, the size of the search space is reduced from an exponential to just a linear number of phylogenetic hypotheses. Moreover, short and\/or non-overlapping query sequences pose less of a problem, as query sequences are compared to the full-length reference sequences. Visualization of samples and comparison between samples are facilitated by the assumption of a fixed reference tree, that can be drawn in a way which highlights the location and distribution of reads.\n\nThe advent of high-throughput sequencing has motivated a growing interest in phylogenetic placement. The basic idea is as old as computational phylogenetics although these insertions historically have been considered as just the first step towards full *de novo* tree reconstruction. Recent implementations have focused on algorithms for likelihood-based placement, such as , with more efficient recent implementations . These tools are being incorporated into popular workflows for microbial ecology, such as QIIME and the next version of AMPHORA . Comparative methods are being developed and implemented in software , and work is underway to extend a tree viewer to visualize placements. Dedicated algorithms to align reads with respect to reference alignments for subsequent phylogenetic placement are also being developed .\n\nBecause of this expansion of activity, standards are needed. The original versions of pplacer\u00a0 and EPA\u00a0 each implemented their own idiosyncratic tabular file formats. These ad-hoc formats kept post-analysis tools from being interoperable and hindered tool comparison.\n\nIn this letter, we describe a lightweight file format that will ensure consistency between tools. 
Because it adopts JSON (JavaScript Object Notation) , a widely used data interchange standard, and extends the widely used Newick format for phylogenetic trees, it is straight-forward to parse using existing tools. It can be used with likelihood, posterior probability, and parsimony-based placements, can associate an arbitrary number of sequence names associated with a placement, and can store a generalization of a name list called a *named multiplicity* as described below. Basic operations such as subsetting arbitrary collections of placements and merging these lists are easily done. The format can be extended to incorporate additional information, such as taxonomic assignments.\n\nAlthough we have made our best efforts to ensure that the format is sufficiently extensible without changing the specification, it may be necessary to change it in the future. For that reason, the authoritative version of the file format will be maintained on the server as an online preprint of the same name. The version described in this document is version 3 of the file format.\n\n# Concepts\n\nWe first establish terminology in order to describe the placement format. As described above, phylogenetic placement is performed by inserting a collection of query sequences onto a fixed reference tree in order to optimize a given criterion. Specifically, for a given set of query sequences the objective is to find an attachment of each query sequence to the tree that maximizes likelihood or minimizes the parsimony score for the the reference tree with that (and only that) query sequence attached. Because each query sequence is placed individually on the tree, the run time complexity is of order the product of the number of reference sequences, the number of query sequences, and the number of sites in the alignment.\n\nThere may be more than one good or likely location for a query sequence, and it is important to record this uncertainty. Uncertainty may be expressed in terms of placement locations that have equal parsimony scores, in terms of *likelihood weight ratio* (the ratio of likelihoods of the various placements), or in terms of posterior probability. Because a given query sequence thus can be considered to have a collection of placements with varying certainties, we use the word *pquery* for \"placed query\" to denote the collection of placements of a single query sequence.\n\nIt is also common to obtain several identical sequence reads from deep-sequencing studies. Furthermore, closely related sequences may exhibit such similar placement results that a user may wish to group them together for ease of analysis. For this reason, we allow more than one sequence name to be associated with a given pquery.\n\nUsers may simply wish to keep the number of sequences associated with a given pquery instead of the complete collection of names. More generally, they may wish to simply have a single floating point number, the *multiplicity* associated with a pquery. This multiplicity may represent a transformed measure of the quantity of sequences associated with that pquery, analogous to the transforms that are commonly applied to ecological count data. For that reason, we also allow the specification of a named multiplicity associated with a pquery in place of a list of names.\n\n# Design\n\nOne possible representation of a collection of placements would be a single tree with each placement inserted as a pendant branch. 
That design is problematic for representations of uncertainty; if each possible location for every query sequence were represented as a pendant branch, then it would be difficult to distinguish the pendant edges that resulted from uncertainty with those resulting from multiple query sequences. Subsetting collections of placements would require tree \"surgery\". Furthermore, packing everything into a tree would make placement-specific metadata such as multiple confidence measures difficult to keep track of. Also, visualizing a reference tree with 1,000 taxa and 10,000 queries *and* with several placements per query may become computationally and visually cumbersome.\n\nThese considerations led us to develop a format where the placements are represented as a list, and their branch assignments are indexed by numbered edges of the reference tree. Each placement is associated with entries for a collection of *fields*, which can contain arbitrary data about the placement. With such a list-based format, subsetting pqueries becomes trivial.\n\nWith the separation of reference set and placements in mind, our goals in designing the format were: to adopt a popular extensible open standard human-readable file format, to ease parsing between languages and tools, and to deploy a light-weight format that can handle large collections of placements on large reference trees without requiring too much space. We chose JSON, since it satisfies all of the above criteria.\n\nUsing the JSON syntax, one option would be to individually associate each placement with an arbitrary collection of information using key-value pairs for each placement. However, doing so would have created a substantial file size overhead, as the total number of characters used to represent the keys would be about the same as the total number of characters used to represent the data. Because of this, the field titles are written out only once, and every placement just supplies the data as an array with entries in the correct corresponding order, as described below.\n\n# Specification\n\nFiles using the format described in this paper will use the `.jplace`\u00a0suffix, which is short for JSON placement.\n\nThe basic types in a JSON file are `Array`, `Boolean`, `Number`, `Object`, `String`, and `null`. These are familiar terms except `Object`, which is a list of colon separated key-value pairs, where the keys are strings and the values are arbitrary types. A JSON file contains a single JSON object.\n\nIn `.jplace`\u00a0files, the fundamental object contains a list of four keys: \"tree\", \"fields\", \"placements\", \"metadata\", and \"version\". We will describe each of these in succession, but this need not correspond to their order in the JSON object. Indeed, the order of key-value pairs in a JSON object is unspecified.\n\n## tree\n\nTo represent the tree, we extend the well-known Newick file format. In that format, commas and parentheses are used to display the structure of the tree. The taxon names (leaf labels) are inserted as plain text. It is also common to label internal nodes with strings appearing after the closing of a parenthesis. It is also possible to label edges of the tree with strings enclosed in square brackets. For example, the tree\n\n ((A:.01[e], B:.01)D:.01[g], C:.01[h]);\n\nis a tree with some edge labels and some node labels.\n\nWe extend this format with edge numberings in curly braces:\n\n ((A:.01[e]{0}, B:.01{1})D:.01{3}[g], C:.01{4}[h]){5};\n\nThese edge numberings index the edges for the placements. 
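As an informal illustration (not part of the specification), the edge numbers can be recovered from such a tree string in a few lines of Python; the function name below is ours, and the snippet simply reuses the example tree shown above.

```python
import re

# Sketch only (not part of the .jplace specification): pull the {n} edge
# numbers out of a jplace-style Newick string. The function name is ours.
def edge_numbers(tree: str) -> list[int]:
    """Return the integer edge numbers appearing in curly braces, in order."""
    return [int(n) for n in re.findall(r"\{(\d+)\}", tree)]

print(edge_numbers("((A:.01[e]{0}, B:.01{1})D:.01{3}[g], C:.01{4}[h]){5};"))
# -> [0, 1, 3, 4, 5]
```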
We use curly braces to distinguish between our edge numberings and other edge label values such as posterior probability or bootstrap branch (bipartition) support.\n\nAlthough not required for parsing, we use a convention that placement algorithms should use a pre-defined edge numbering. Specifically, we enforce that branches are labeled by a depth-first traversal (descending left subtree first and starting at the root\/top-level node in the reference input tree) and we assign branch numbers by a post-order traversal. This strict definition is convenient to ensure one-to-one comparability of results obtained from various placement algorithms.\n\nWe also require the output tree to be identical as a planar tree to the input reference tree, that is, the subtree ordering and top-level trifurcation must remain unchanged. In the case of parsimony-based placements, the reference tree may optionally be represented without branch lengths.\n\n## fields\n\nThe value associated with \"fields\" is an array of strings specifying the headers in the same order as the arrays of placement data. For example, the default fields for a maximum likelihood EPA or pplacer run are edge_num, likelihood, like_weight_ratio, distal_length, and pendant_length.\n\nThe edge_num\u00a0specifies the placement edge, and is necessary for all placements. The pendant_length\u00a0is the branch length for the placement edge, and distal_length\u00a0is the length from the distal (away from the root) side of the reference tree edge to the placement attachment location. The likelihood\u00a0is the likelihood of the tree with the placement attached, which may be calculated from an alignment with columns masked out that do not appear in the read. For that reason, the log likelihood of the placement may be better (closer to zero) than the log likelihood of the reference tree on the full-length alignment. The like_weight_ratio\u00a0is the ratio of that placement's likelihood to that of the other alternate placements for that read. For a pplacer posterior probability run, the marginal likelihood marginal_prob\u00a0and the posterior probability post_prob\u00a0are also included.\n\nIn contrast to pplacer, EPA optimizes three branch lengths associated with a placement: the pendant branch length, the distal branch length, and the proximal branch length. Thus, the EPA output could be extended to comprise the full information generated by the EPA algorithm by adding a proximal_length field. Because the currently available downstream placement analysis tools (e.g., guppy) do not use this additional information, it is not included in the EPA `.jplace`\u00a0output file at present.\n\nThe corresponding fields for parsimony-based placements (currently only available in EPA) are edge_num\u00a0and parsimony. The parsimony field just contains the parsimony score of the placement as an integer.\n\n## placements\n\nThe value associated with the \"placements\" key is the list of placements grouped into pqueries. The representation of each pquery is a JSON object of its own, with two keys: \"p\", for placements, and either \"n\" for names or \"nm\" for names with multiplicity. The value associated with \"p\" is the list of placements for that pquery with entries corresponding to the fields in the order set up by the \"fields\" described above. The list of placements shows possible placement locations along with their confidence scores and other information. The value associated with \"n\" is a list of names associated with that pquery. 
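To make the relationship between "fields" and the per-pquery "p" arrays concrete, the following Python sketch (all names, including the file name, are ours and merely illustrative) expands each placement row into a dictionary keyed by field name, falling back to "nm" when "n" is absent. The fields printed at the end assume a likelihood-based run.

```python
import json

# Sketch only: pair each placement array with the field names declared once
# at the top level of a .jplace file. Function and variable names are ours.
def iter_placements(jplace: dict):
    fields = jplace["fields"]
    for pquery in jplace["placements"]:
        # "n" holds plain names; "nm" holds [name, multiplicity] pairs.
        names = pquery.get("n") or [name for name, _ in pquery.get("nm", [])]
        for row in pquery["p"]:
            yield dict(zip(fields, row), names=names)

with open("example.jplace") as fh:              # hypothetical file name
    for rec in iter_placements(json.load(fh)):
        # edge_num and like_weight_ratio are among the default ML fields.
        print(rec["edge_num"], rec["like_weight_ratio"], rec["names"])
```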
Although an arbitrary list of names can be associated with a pquery, the typical use will be to collect placement information for identical or closely related sequences. The value associated with \"nm\" is a list of *named multiplicities*, which as simply ordered pairs of a name and then a positive floating point value. As described above, multiplicity can be used to keep track of the number of sequences associated with that name or a transform thereof.\n\nFor parsimony-based placements we require all equally parsimonious placements of a query to be included in the output file. This is to enable easy comparison between parsimony-based placement methods; if only one of the best-scoring placements is arbitrarily selected in one way or another, comparing programs based on our standard will become error-prone and biased.\n\n## Other keys\n\nThere are also two other keys in the fundamental JSON object. The first, \"version\", is mandatory, and indicates an integer version number of the format. The version described in this paper is 3. The second, \"metadata\", is optional and keys an arbitrary object for metadata. It can describe how the placement file was generated, which phylogenetic model was used, and so on. In EPA and pplacer we include the full command line string of the placement program invocation to allow for easy reproducability of results.\n\n## A small example\n\n {\n \"tree\": \"((A:0.2{0},B:0.09{1}):0.7{2},C:0.5{3}){4};\",\n \"placements\":\n [\n {\"p\":\n [[1, -2578.16, 0.777385, 0.004132, 0.0006],\n [0, -2580.15, 0.107065, 0.000009, 0.0153]\n ],\n \"n\": [\"fragment1\", \"fragment2\"]\n },\n {\"p\": [[2, -2576.46, 1.0, 0.003555, 0.000006]],\n \"nm\": [[\"fragment3\", 1.5], [\"fragment4\", 2]]}\n ],\n \"metadata\":\n {\"invocation\":\n \"pplacer -c tiny.refpkg frags.fasta\"\n },\n \"version\": 3,\n \"fields\":\n [\"edge_num\", \"likelihood\", \"like_weight_ratio\",\n \"distal_length\", \"pendant_length\"]\n }\n\n## Tabular representation\n\nThe JSON object can be readily transformed into a tabular format to more easily summarize or explore the data using statistical tools or a relational database. With the addition of an index (\"placement_id\") to form a relation between placements and sequence names, two tables are sufficient: one with columns \"placement_id\" followed by each of the fields contained by each pquery array, and another providing a mapping of every \"placement_id\" to each of the corresponding sequence names or named multiplicities. This transformation can be performed efficiently using any modern high level language with a JSON parsing library. Such a representation of the data is useful for supporting analyses that involve grouping and partitioning placements and sequences.\n\n# Tools\n\nThe latest versions of EPA () and pplacer () both produce these files. The guppy program in the pplacer suite has a number of subcommands that allow transformations and filterings of these files (manuscript in preparation). MePal, an implementation of placement using an alignment-free generalization to indels of Felsenstein's phylogenetic pruning algorithm , now imports and writes out this format as well. The TopiaryExplorer tree visualization package is now in the process of being extended to read this format for visualization.\n\n# Conclusion\n\nWe have designed a unified format for phylogenetic placements. The format is lightweight, flexible, and is based on JSON, a well-established data interchange standard. 
The format handles placement uncertainty and allows for multiple sequence names to be associated with the placement of a single sequence. Current versions of two placement software packages have already adopted the format, and others are in the process of doing so.","meta":{"dup_signals":{"dup_doc_count":12,"dup_dump_count":2,"dup_details":{"curated_sources":3,"unknown":9}},"filename":"out\/1201.3397_extract_placefmt.tex.md"},"subset":"arxiv"} +{"text":"author: Serge Tabachnikov[^1]\ntitle: Skewers\n\n# Introduction\n\nTwo lines in 3-dimensional space are skew if they are not coplanar. Two skew lines share a common perpendicular line that we call their *skewer*. We denote the skewer of lines $a$ and $b$ by $S(a,b)$.[^2]\n\nConsider your favorite configuration theorem of plane projective geometry that involves points and lines. For example, it may be the Pappus theorem, see Figure : if $A_1,A_2,A_3$ and $B_1,B_2,B_3$ are two triples of collinear points, then the three intersection points $A_1B_2\\cap A_2B_1$, $A_1B_3\\cap A_3B_1$, and $A_2B_3\\cap A_3B_2$ are also collinear (we refer to for a modern viewpoint on projective geometry).\n\nThe Pappus theorem has a skewer analog in which both points and lines are replaced by lines in 3-space and the incidence between a line and a point translates as the intersection of the two respective lines at right angle. The basic 2-dimensional operations of connecting two points by a line or by intersecting two lines at a point translate as taking the skewer of two lines.\n\n**Theorem 1** (Skewer Pappus theorem I). * Let $a_1,a_2,a_3$ be a triple of lines with a common skewer, and let $b_1,b_2,b_3$ be another triple of lines with a common skewer. Then the lines $$S(S(a_1,b_2),S(a_2,b_1)),\\ S(S(a_1,b_3),S(a_3,b_1)),\\ {\\rm and}\\ \\ S(S(a_2,b_3),S(a_3,b_2))$$ share a skewer.*\n\nIn this theorem, we assume that the lines involved are in general position in the following sense: each time one needs to draw a skewer of two lines, this operation is well defined and unique. This assumption holds in a Zariski open subset of the set of the initial lines (in this case, two triples of lines with common skewers, $a_1,a_2,a_3$ and $b_1,b_2,b_3$). A similar general position assumption applies to other theorems in this paper.[^3]\n\nAnother skewer analog of the Pappus theorem was discovered by R. Schwartz.\n\n**Theorem 2** (Skewer Pappus theorem II). * Let $L$ and $M$ be a pair of skew lines. Choose a triple of points $A_1,A_2,A_3$ on $L$ and a triple of points $B_1,B_2,B_3$ on $M$. Then the lines $$S((A_1 B_2), (A_2 B_1)), \\ S((A_2 B_3), (A_3 B_2)),\\ {\\rm and}\\ \\ S((A_3 B_1), (A_1 B_3))$$ share a skewer.*\n\nAlthough the formulation of Theorem is similar to that of Theorem , we failed to prove it along the lines of the proofs of other results in this paper, and the 'brute force' proof of Theorem is postponed until Section .\n\nAnother classical example is the Desargues theorem, see Figure : if the three lines $A_1 B_1$, $A_2B_2$ and $A_3B_3$ are concurrent, then the three intersection points $A_1A_2\\cap B_1B_2$, $A_1A_3\\cap B_1B_3$, and $A_2A_3\\cap B_2B_3$ are collinear.\n\nAnd one has a skewer version:\n\n**Theorem 3** (Skewer Desargues theorem). * Let $a_1,a_2,a_3$ and $b_1,b_2,b_3$ be two triples of lines such that the lines $S(a_1,b_1), S(a_2,b_2)$ and $S(a_3,b_3)$ share a skewer. 
Then the lines $$S(S(a_1,a_2),S(b_1,b_2)),\\ S(S(a_1,a_3),S(b_1,b_3)),\\ {\\rm and}\\ \\ S(S(a_2,a_3),S(b_2,b_3))$$ also share a skewer.*\n\nThe projective plane ${\\mathbf {RP}}^2$ is the projectivization of 3-dimensional vector space $V$. Assume that the projective plane is equipped with a polarity, a projective isomorphism $\\varphi: {\\mathbf {RP}}^2 \\to ({\\mathbf {RP}}^2)^*$ induced by a self-adjoint linear isomorphism $V \\to V^*$.\n\nIn particular, in 2-dimensional spherical geometry, polarity is the correspondence between great circles and their poles.[^4] In terms of 2-dimensional hyperbolic geometry, polarity is depicted in Figure : in the projective model, $H^2$ is represented by the interior of a disc in ${\\mathbf {RP}}^2$, and the polar points of lines lie outside of $H^2$, in the de Sitter world.\n\nAs a fourth example, consider a theorem that involves polarity, namely, the statement that the altitudes of a (generic) spherical or a hyperbolic triangle are concurrent (in the hyperbolic case, the intersection point may also lie in the de Sitter world).\n\nThe altitude of a spherical triangle $ABC$ dropped from vertex $C$ is the great circle through $C$ and the pole $P$ of the line $AB$, see Figure . Likewise, the line $PQ$ in Figure is orthogonal in $H^2$ to the line $AB$.\n\nIn the skewer translation, we do not distinguish between polar dual objects, such as the line $AB$ and its pole $P$ in Figure . This yields the following theorem.\n\n**Theorem 4** (Petersen-Morley ). * Given three lines $a,b,c$, the lines $$S(S(a,b),c),\\ S(S(b,c),a),\\ \\ {\\rm and}\\ \\ S(S(c,a),b)$$ share a skewer.[^5]*\n\nIn words, *the common normals of the opposite sides of a rectangular hexagon have a common normal*; see Figure , borrowed from .\n\nThese 'skewer' theorems hold not only in the Euclidean, but also in the elliptic and hyperbolic geometries. In $H^3$, two non-coplanar lines have a unique skewer. In elliptic space ${\\mathbf {RP}}^3$, a pair of generic lines has two skewers; we shall address this subtlety in Section .\n\nIn the next section we shall formulate a general correspondence principle, Theorem , establishing skewer versions of plane configuration theorems. This correspondence principle will imply the above formulated theorems, except for Theorem , whose proof will be given in Section .\n\nThe correspondence principle concerns line geometry of 3-dimensional projective space, a subject that was thoroughly studied in the 19th century by many an eminent mathematician (Cayley, Chasles, Klein, Kummer, Lie, Pl\u00fccker, Study, ...) See for a classical and for a modern account.\n\nAlthough we did not see the formulation of our Theorem in the literature, we believe that classical geometers would not be surprised by it. Similar ideas were expressed earlier. In the last section of , H. S. M. Coxeter writes:\n\n> ... every projective statement in which one conic plays a special role can be translated into a statement about hyperbolic space.\n\nCoxeter illustrated this by the hyperbolic version of the Petersen-Morley theorem.\n\nEarlier F. Morley also discussed the hyperbolic Petersen-Morley theorem, along with a version of Pascal's theorem for lines in $H^3$ (the \"celestial sphere\" in the title of this paper is the sphere at infinity of hyperbolic space).\n\nWe are witnessing a revival of projective geometry , not least because of the advent of computer-based methods of study, including interactive geometry software (such as Cinderella[^6] and GeoGebra). 
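Such statements are also easy to probe numerically. The following Python sketch (all function names are ours, and the random lines serve only as an illustration) computes the skewer of two lines in Euclidean 3-space as their common perpendicular and then checks the Petersen-Morley configuration for a random triple of lines.

```python
import numpy as np

# Numerical sanity check of the Petersen-Morley theorem in Euclidean 3-space.
# A sketch with names of our own; a line is stored as (point, direction).

def skewer(L, M):
    """Common perpendicular of two skew lines, as (foot point on L, unit direction)."""
    p, d = L
    q, e = M
    n = np.cross(d, e)                                   # direction of the skewer
    t = np.dot(np.cross(q - p, e), n) / np.dot(n, n)     # parameter of the foot point on L
    return p + t * d, n / np.linalg.norm(n)

def meets_at_right_angle(L, M, tol=1e-6):
    """True if the two lines intersect and their (unit) directions are orthogonal."""
    p, d = L
    q, e = M
    n = np.cross(d, e)
    dist = abs(np.dot(q - p, n)) / np.linalg.norm(n)     # distance between the lines
    return dist < tol and abs(np.dot(d, e)) < tol

rng = np.random.default_rng(0)
a, b, c = [(rng.standard_normal(3), rng.standard_normal(3)) for _ in range(3)]

l1 = skewer(skewer(a, b), c)
l2 = skewer(skewer(b, c), a)
l3 = skewer(skewer(c, a), b)

m = skewer(l1, l2)                                       # candidate common skewer
print(meets_at_right_angle(m, l1), meets_at_right_angle(m, l2),
      meets_at_right_angle(m, l3))                       # expected: True True True
```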
Elementary projective geometry has served as a source of interesting dynamical systems , and it continues to yield surprises . We hope that this paper will contribute to the renewal of interest in this classical area.\n\n**Acknowledgments**. The 'godfather' of this paper is Richard Schwartz whose question was the motivation for this project, and who discovered Theorem and helped with its proof. I am grateful to Rich for numerous stimulating discussions on this and other topics.\n\nI am also grateful to I. Dolgachev, M. Skopenkov, V. Timorin, and O. Viro for their insights and contributions. Many thanks to A. Barvinok who introduced me to the chains of circles theorems.\n\nI was supported by NSF grants DMS-1105442 and DMS-1510055. Part of this work was done during my stay at ICERM; it is a pleasure to thank the Institute for the inspiring, creative, and friendly atmosphere.\n\n# Correspondence principle\n\n## What is a configuration theorem?\n\nWe adopt the following 'dynamic' view of configuration theorems.\n\nOne starts with an initial data, a collection of labelled points $a_i$ and lines $b_j$ in ${\\mathbf {RP}}^2$, such that, for some pairs of indices $(i,j)$, the point $a_i$ lies on the line $b_j$. One also has an ordered list of instructions consisting of two operations: draw a line through a certain pair of points, or intersect a certain pair of lines at a point. These new lines and points also receive labels.\n\nThe statement of a configuration theorem is that, among so constructed points and lines, certain incidence relations hold, that is, certain points lie on certain lines.\n\nAssume, in addition, that a polarity $\\varphi: {\\mathbf {RP}}^2 \\to ({\\mathbf {RP}}^2)^*$ is given. We may think of lines in ${\\mathbf {RP}}^2$ as points in $({\\mathbf {RP}}^2)^*$. The polarity takes one back to ${\\mathbf {RP}}^2$, assigning the polar point to each line and vice versa.\n\nGiven a polarity, one adds to the initial data that, for some pairs of indices $(k,l)$, the point $a_k$ is polar dual to the line $b_l$. One also adds to a list of instructions the operation of taking the polar dual object (point $\\leftrightarrow$ line). Accordingly, one adds to the statement of a configuration theorem that certain points are polar dual to certain lines.\n\nWe assume that the conclusion of a configuration theorem holds for almost every initial configuration of points and lines satisfying the initial conditions, that is, holds for a Zariski open set of such initial configurations (this formulation agrees well with interactive geometry software that makes it possible to perturb the initial data without changing its combinatorics).\n\nIn this sense, a configuration theorem is not the same as a configuration of points and lines as described in Chapter 3 of or in : there, the focus is on whether a combinatorial incidence is realizable by points and lines in the projective plane.\n\nFor example, the configuration theorem in Figure has three points $A,B$ and $C$ as an initial data. One draws the lines $AB, BC$ and $CA$, and constructs their polar dual points $c, a$ and $b$, respectfully. Then one connects points $a$ and $A$, $b$ and $B$, and $c$ and $C$. 
The claim is that these three lines are concurrent (that is, the intersection point of the lines $aA$ and $bB$ lies on the line $cC$).\n\nA configuration theorem for lines in space is understood similarly: one has an initial collection of labelled lines $\\ell_i$ such that, for some pairs of indices $(i,j)$, the lines $\\ell_i$ and $\\ell_j$ intersect at right angle. There is only one operation, taking the skewer of two lines. The statement of a configuration theorem is that certain pairs of thus constructed lines again intersect at right angle. This conclusion holds for almost all initial configurations of lines (i.e., a Zariski open set) satisfying the initial conditions.\n\n## Correspondence principle\n\nThe correspondence principle provides a dictionary that translates a plane configuration theorem, involving points and lines, to a configuration theorem in space involving lines.\n\n**Theorem 5** (Correspondence principle). * To a plane configuration theorem with the initial data consisting of points $a_i$, lines $b_j$, and incidences between them, there corresponds a configuration theorem for lines in space (elliptic, Euclidean, or hyperbolic), so that:*\n\n- *to each point $a_i$ and line $b_j$ of the initial data there corresponds a line in space;*\n\n- *whenever a point $a_i$ and a line $b_j$ are incident, the respective lines in space intersect at right angle;*\n\n- *the operations of connecting two points by a line and of intersecting two lines at a point are replaced by the operation of taking the skewer of two lines.*\n\n*If, in addition, a plane configuration theorem involves a polarity, then each pair of polar dual points and lines involved corresponds to the same line in space, and the operation of taking the polar dual object in the plane (point $\\leftrightarrow$ line) corresponds to the trivial operation of leaving a line in space intact.*\n\nThe reader might enjoy formulating the skewer version of the whole *hexagrammum mysticum*, the collection of results, ramifying the Pappus theorem, due to Steiner, Pl\u00fccker, Kirkman, Cayley and Salmon; see for a modern treatment.\n\nWe shall present two proofs of the Correspondence principle, one concerning the elliptic, and another the hyperbolic geometry. Either proof implies the Correspondence principle for the other two classical geometries: if a configuration theorem holds in the elliptic geometry, then it also holds in the hyperbolic geometry, and vice versa, by 'analytic continuation'. And either non-zero curvature version implies the Euclidean one as a limiting case.\n\nThis analytic continuation principle is well known in geometry; we refer to where it is discussed in detail.\n\n## Elliptic proof\n\nA line in elliptic space ${\\mathbf {RP}}^3$ is the projectivization of a 2-dimensional subspace of ${\\mathbb R}^4$, and the geometry of lines in ${\\mathbf {RP}}^3$ is the Euclidean geometry of 2-planes in ${\\mathbb R}^4$. The space of oriented lines is the Grassmannian $G(2,4)$ of oriented 2-dimensional subspaces in ${\\mathbb R}^4$.\n\nTo every oriented line $\\ell$ in ${\\mathbf {RP}}^3$ there corresponds its dual oriented line $\\ell^*$: the respective oriented planes in ${\\mathbb R}^4$ are the orthogonal complements of each other (the orientation of the orthogonal complement is induced by the orientation of the plane and the ambient space). The dual lines are equidistant and they have infinitely many skewers. 
The preimage of a pair of dual lines in $S^3$ is a Hopf link.\n\nThe next lemma collects the properties of the Grassmannian $G(2,4)$ that we shall use. These properties are well known, see for a detailed discussion.\n\n**Lemma 1**. * 1) The Grassmannian is a product of two spheres: $G(2,4)=S^2_-\\times S^2_+$. This provides an identification of an oriented line in ${\\mathbf {RP}}^3$ with a pair of points of the unit sphere $S^2$: $\\ell\\leftrightarrow (\\ell_-,\\ell_+)$. \n2) The antipodal involutions of the spheres $S^2_-$ and $S^2_+$ generate the action of the Klein group ${\\mathbb Z}_2\\times{\\mathbb Z}_2$ on the space of oriented lines. The action is generated by reversing the orientation of a line and by taking the dual line. \n3) Two lines $\\ell$ and $m$ intersect at right angle if and only if $d(\\ell_-,m_-)=d(\\ell_+,m_+)=\\pi\/2$, where $d$ denotes the spherical distance in $S^2$. \n4) The set of lines that intersect $\\ell$ at right angle coincides with the set of lines that intersect $\\ell$ and $\\ell^*$. \n5) A line $n$ is a skewer of lines $\\ell$ and $m$ if and only if $n_-$ is a pole of the great circle $\\ell_- m_-$, and $n_+$ is a pole of the great circle $\\ell_+ m_+$. \n6) A pair of generic lines has exactly two skewers (four, if orientation is taken into account), and they are dual to each other.*\n\n#### Proof.\n\nGiven two planes in ${\\mathbb R}^4$, there are two angles, say $0\\leq\\alpha\\leq\\beta\\le\\pi\/2$, between them: $\\alpha$ is the smallest angle made by a line in the first plane with the second plane, and $\\beta$ is the largest such angle.\n\nRecall the classical construction of Klein quadric (see, e.g., ). Given an oriented plane $P$ in ${\\mathbb R}^4$, choose a positive basis $u,v$ in $P$, and let $\\omega_P$ be the bivector $u\\wedge v$, normalized to be unit. In this way we assign to every oriented plane a unit decomposable element in $\\Lambda^2 {\\mathbb R}^4$. The decomposability condition $\\omega\\wedge\\omega=0$ defines a quadratic cone in $\\Lambda^2 {\\mathbb R}^4$, and the image of the Grassmannian is the spherization of this cone (the Klein quadric is its projectivization).\n\nConsider the star operator in $\\Lambda^2 {\\mathbb R}^4$, and let $E_-$ and $E_+$ be its eigenspaces with eigenvalues $\\pm1$. These spaces are 3-dimensional, and $\\Lambda^2 {\\mathbb R}^4=E_-\\oplus E_+$. Let $S^2_{\\pm}$ be the spheres of radii $1\/\\sqrt{2}$ in $E_{\\pm}$. Then the bivector $\\omega_P$ has the components in $E_{\\pm}$ of lengths $1\/\\sqrt{2}$, and hence $G(2,4)=S^2_-\\times S^2_+$. We rescale the radii of the spheres to unit. Thus an oriented plane $P$ becomes a pair of points $P_\\pm$ of a unit sphere.\n\nLet us prove claim 2). Orientation reversing of a plane $P$ changes the sign of the bivector $\\omega_P$ corresponding to the antipodal involutions of both spheres. Let $e_1,\\ldots,e_4$ be an orthonormal basis in ${\\mathbb R}^4$. Then the following vectors form bases of the spaces $E_{\\pm}$: $$u_{\\pm}=\\frac{e_1\\wedge e_2 \\pm e_3\\wedge e_4}{2},\\ v_{\\pm}=\\frac{e_1\\wedge e_3 \\mp e_2\\wedge e_4}{2},\\ w_{\\pm}=\\frac{e_1\\wedge e_4 \\pm e_2\\wedge e_3}{2}.$$ Without loss of generality, assume that a plane $P$ is spanned by $e_1$ and $e_2$. Then $P^{\\perp}$ is spanned by $e_3$ and $e_4$. Since $e_1\\wedge e_2=u_+ + u_-, e_3\\wedge e_4=u_+ - u_-$, the antipodal involution of $S^2_-$ sends $P$ to $P^{\\perp}$.\n\nGiven two planes $P$ and $Q$, one has two pairs of points on $S^2$: $(P_-,Q_-)$ and $(P_+,Q_+)$. 
Let $\\alpha$ and $\\beta$ be the two angles between $P$ and $Q$. Then $$d(P_-,Q_-)=\\alpha+\\beta,\\quad d(P_+,Q_+)=\\beta-\\alpha,$$ see .\n\nIn particular, $P$ and $Q$ have a nonzero intersection when $\\alpha=0$, that is, when $d(P_-,Q_-)=d(P_+,Q_+)$. Likewise, $P$ and $Q$ are orthogonal when $\\beta=\\pi\/2$. It follows that the respective lines intersect at right angle when $d(P_-,Q_-)=d(P_+,Q_+)=\\pi\/2$. This proves 3) and implies 5).\n\nIn terms of bivectors, two lines intersect if and only if $\\omega_P \\cdot * \\omega_Q =0,$ and they intersect at right angle if, in addition, $\\omega_P \\cdot \\omega_Q =0$. Here dot means the dot product in $\\Lambda^2 {\\mathbb R}^4$ induced by the Euclidean metric. The duality $\\ell \\leftrightarrow \\ell^*$ corresponds to the star operator on bivectors. This implies 4).\n\nFinally, given two lines, $\\ell$ and $m$, consider the distance between a point of $\\ell$ and a point of $m$. This distance attains a minimum, and the respective line is a skewer of $\\ell$ and $m$. By the above discussion, the skewers of lines $\\ell$ and $m$ are the lines that intersect the four lines $\\ell, \\ell^*, m$ and $m^*$. This set is invariant under duality and, by an elementary application of Schubert calculus (see, e.g., ), generically consists of two lines. This proves 6). $\\Box$``{=html}\n\nThus taking the skewer of a generic pair of lines is a 2-valued operation. However, by the above lemma, the choice of the skewer does not affect the statement of the respective configuration theorem.\n\nOne can also avoid this indeterminacy by factorizing the Grassmannnian $G(2,4)$ by the Klein group, replacing it by the product of two elliptic planes ${\\mathbf {RP}}^2_-\\times{\\mathbf {RP}}^2_+$. In this way, we ignore orientation of the lines and identify dual lines with each other. As a result, a generic pair of lines has a unique skewer.\n\nNow to the Correspondence principle.\n\nGiven a plane configuration theorem, we realize it in the elliptic geometry: the initial data consists of points $a_i$ and lines $b_j$ in ${\\mathbf {RP}}^2$ with some incidences between them, and the polarity in ${\\mathbf {RP}}^2$ is induced by the spherical duality (pole $\\leftrightarrow$ equator).\n\nLet us replace the lines by their polar points. Thus the initial data is a collection of points $\\{a_i, b_j^*\\}$ in the projective plane such $d(a_i, b_j^*)=\\pi\/2$ when the point $a_i$ is incident with the line $b_j$.\n\nLikewise, instead of connecting two points, say $p$ and $q$, by a line, we take the polar dual point to this line, that is, the cross-product $p\\times q$ of vectors in ${\\mathbb R}^3$, considered up to a factor. In this way, our configuration theorem will involve only points, and its statement is that certain pairs of points are at distance $\\pi\/2$.\n\nTake another the initial collection, $\\{\\bar a_i, \\bar b_j^*\\}$, and consider the collection of pairs $\\{(a_i,\\bar a_i), (b_j^*,\\bar b_j^*)\\}$ in ${\\mathbf {RP}}^2_-\\times {\\mathbf {RP}}^2_+$. According to Lemma , one obtains a configuration of lines $\\{\\ell_i, \\ell_j\\}$ in elliptic space such that if a point $a_i$ is incident with a line $b_j$ then the corresponding lines $\\ell_i$ and $\\ell_j$ intersect at right angle. This is the initial data for the skewer configuration theorem. 
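For readers who prefer a computational check of the splitting used here, the following Python sketch (the coordinate conventions and all names are ours) sends a plane in 4-space, given by two spanning vectors, to its pair of points on the two unit spheres and verifies part 3) of the Lemma on a simple pair of planes meeting along a line at a right angle.

```python
import numpy as np

# Sketch (names and conventions ours): a plane span(u, v) in R^4 goes to the
# self-dual and anti-self-dual parts of the unit bivector u ^ v, i.e. to a
# pair of points on S^2_- x S^2_+.

def bivector(u, v):
    """Pluecker coordinates of u ^ v in the basis e12, e13, e14, e23, e24, e34."""
    p = np.array([u[0]*v[1] - u[1]*v[0], u[0]*v[2] - u[2]*v[0], u[0]*v[3] - u[3]*v[0],
                  u[1]*v[2] - u[2]*v[1], u[1]*v[3] - u[3]*v[1], u[2]*v[3] - u[3]*v[2]])
    return p / np.linalg.norm(p)

def sphere_points(u, v):
    """Unit representatives of the E_- and E_+ components of the bivector."""
    p12, p13, p14, p23, p24, p34 = bivector(u, v)
    minus = np.array([p12 - p34, p13 + p24, p14 - p23])
    plus  = np.array([p12 + p34, p13 - p24, p14 + p23])
    return minus / np.linalg.norm(minus), plus / np.linalg.norm(plus)

# Two planes sharing the direction e1 and orthogonal to each other,
# so the corresponding lines in projective 3-space intersect at a right angle.
P = (np.array([1., 0, 0, 0]), np.array([0., 1, 0, 0]))   # span(e1, e2)
Q = (np.array([1., 0, 0, 0]), np.array([0., 0, 1, 0]))   # span(e1, e3)
Pm, Pp = sphere_points(*P)
Qm, Qp = sphere_points(*Q)
print(np.arccos(Pm @ Qm), np.arccos(Pp @ Qp))   # both equal pi/2, as in part 3)
```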
By varying the generic choices of $\\{a_i, b_j^*\\}$ and $\\{\\bar a_i, \\bar b_j^*\\}$ satisfying the initial incidences, we obtain a dense open set of initial configurations of lines $\\{\\ell_i, \\ell_j\\}$.\n\nLikewise, the operations that comprise the configuration theorem (connecting pairs of points by lines and intersecting pairs of lines) become the operation of taking the skewer of a pair of lines, and the conclusion of the theorem is that the respective pairs of lines intersect at right angle.\n\n## Hyperbolic proof\n\nIn a nutshell, a skewer configuration theorem in 3-dimensional hyperbolic space is a complexification of a configuration theorem in the hyperbolic plane. We use ideas of F. Morley and V. Arnold .\n\nConsider the 3-dimensional space of real binary quadratic forms $ax^2+2bxy+cy^2$ in variables $x,y$, equipped with the discriminant quadratic form $\\Delta=ac-b^2$ and the respective bilinear form. We view the Cayley-Klein model of the hyperbolic plane as the projectivization of the set $\\Delta >0$, the circle at infinity being given by $\\Delta=0$. The projectivization of the set $\\Delta <0$ is the 2-dimensional de Sitter world.\n\nThus points of $H^2$ are elliptic (sign-definite) binary quadratic forms, considered up to a factor. To a line in $H^2$ there corresponds its polar point that lies in the de Sitter world, see Figure . Hence lines in $H^2$ are hyperbolic (sign-indefinite) binary quadratic forms, also considered up to a factor.\n\nConsider the standard area form $dx\\wedge dy$ in the $x,y$-plane. The space of smooth functions is a Lie algebra with respect to the Poisson bracket (the Jacobian), and the space of quadratic forms is its 3-dimensional subalgebra $sl(2,{\\mathbb R})$. The following observations are made in .\n\n**Lemma 2**. * A point is incident to a line in $H^2$ if and only if the corresponding quadratic forms are orthogonal with respect to the bilinear form $\\Delta$. Given two points of $H^2$, the Poisson bracket of the respective elliptic quadratic forms is a hyperbolic one, corresponding to the line through these points. Likewise, for two lines in $H^2$, the Poisson bracket of the respective hyperbolic quadratic forms is an elliptic one, corresponding to the intersection point of these lines.*\n\nA complexification of this lemma also holds: one replaces ${\\mathbf {RP}}^2$ by ${\\mathbf {CP}}^2$, viewed as the projectivization of the space of quadratic binary forms (and losing the distinction between sign-definite and sign-indefinite forms). The conic $\\Delta =0$ defines a polarity in ${\\mathbf {CP}}^2$.\n\nLemma makes it possible to reformulate a configuration theorem involving points and lines in $H^2$ as a statement about the Poisson algebra of quadratic forms. 
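As a small symbolic illustration of this reformulation (a sketch only; the helper names are ours and sympy is used for the algebra), one can check that the bracket of two binary quadratic forms is orthogonal, with respect to the discriminant form, to each of its arguments, which is the algebraic counterpart of the incidence statements of the Lemma.

```python
import sympy as sp

# Sketch (names ours): the Poisson bracket (Jacobian) of two binary quadratic
# forms is Delta-orthogonal to both arguments, mirroring the incidences above.
x, y = sp.symbols('x y')
a1, b1, c1, a2, b2, c2 = sp.symbols('a1 b1 c1 a2 b2 c2')

f1 = a1*x**2 + 2*b1*x*y + c1*y**2
f2 = a2*x**2 + 2*b2*x*y + c2*y**2

def coeffs(f):
    """Coefficients (a, b, c) of a form a x^2 + 2 b x y + c y^2."""
    p = sp.Poly(f, x, y)
    return p.coeff_monomial(x**2), p.coeff_monomial(x*y) / 2, p.coeff_monomial(y**2)

def bracket(f, g):
    """Poisson bracket of f and g with respect to dx ^ dy (the Jacobian)."""
    return sp.expand(sp.diff(f, x)*sp.diff(g, y) - sp.diff(f, y)*sp.diff(g, x))

def delta_pairing(f, g):
    """The pairing a_f c_g - 2 b_f b_g + a_g c_f; it vanishes on incident pairs."""
    (af, bf, cf), (ag, bg, cg) = coeffs(f), coeffs(g)
    return sp.expand(af*cg - 2*bf*bg + ag*cf)

h = bracket(f1, f2)
print(delta_pairing(h, f1), delta_pairing(h, f2))   # both expand to 0
```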
For example, the statement that the three altitudes of a hyperbolic triangle are concurrent, see Figure right, becomes the statement that the commutators $$\\{\\{f,g\\},h\\},\\ \\ \\{\\{g,h\\},f\\},\\ \\ {\\rm and}\\ \\ \\{\\{h,f\\},g\\}$$ are linearly dependent, which is an immediate consequence of the Jacobi identity $$\\{\\{f,g\\},h\\}+ \\{\\{g,h\\},f\\},+ \\{\\{h,f\\},g\\} =0$$ in the Poisson Lie algebra.\n\nLikewise, the Pappus theorem follows from the Tomihisa's identity $$\\{f_1, \\{\\{f_2, f_3\\}, \\{f_4, f_5\\}\\}\\} + \\{f_3, \\{\\{f_2, f_5\\}, \\{f_4, f_1\\}\\}\\} + \\{f_5, \\{\\{f_2, f_1\\}, \\{f_4, f_3\\}\\}\\} = 0$$ that holds in $sl(2,{\\mathbb R})$, see , and also for this approach to configuration theorems.\n\nNow consider 3-dimensional hyperbolic space $H^3$ in the upper halfspace model. The isometry group is $SL(2,{\\mathbb C})$, and the sphere at infinity is the Riemann sphere ${\\mathbf {CP}}^1$.\n\nA line in $H^3$ intersects the sphere at infinity at two points, hence the space of (non-oriented) lines is the configuration space of unordered pairs of points, that is, the symmetric square of ${\\mathbf {CP}}^1$ with the deleted diagonal. Note that $S^2({\\mathbf {CP}}^1)={\\mathbf {CP}}^2$ (this is a particular case of the Fundamental Theorem of Algebra, one of whose formulations is that $n$th symmetric power of ${\\mathbf {CP}}^1$ is ${\\mathbf {CP}}^n$). Namely, to two points of the projective line one assigns the binary quadratic form having zeros at these points: $$(a_1:b_1,a_2:b_2) \\longmapsto (a_1y-b_1x)(a_2y-b_2x).$$ Thus a line in $H^3$ can be though of as a complex binary quadratic form up to a factor.\n\nThe next result is contained in \u00a752 of .\n\n**Lemma 3**. * Two lines in $H^3$ intersect at right angle if and only if the respective binary quadratic forms $f_i=a_i x^2 + 2 b_i xy + c_i y^2,\\ i=1,2$, are orthogonal with respect to $\\Delta$: $$\\label{ort}\na_1c_2-2b_1b_2+a_2c_1=0.$$ If two lines correspond to binary quadratic forms $f_i=a_i x^2 + 2 b_i xy + c_i y^2,\\ i=1,2$, then their skewer corresponds to the Poisson bracket (the Jacobian) $$\\{f_1,f_2\\} = (a_1b_2-a_2b_1)x^2 + (a_1c_2-a_2c_1) xy + (b_1c_2-b_2c_1) y^2.$$*\n\nIf $(a_1:b_1:c_1)$ and $(a_2:b_2:c_2)$ are homogeneous coordinates in the projective plane and the dual projective plane, then () describes the incidence relation between points and lines. In particular, the set of lines in $H^3$ that meet a fixed line at right angle corresponds to a line in ${\\mathbf {CP}}^2$.\n\nSuppose a configuration theorem involving polarity is given in ${\\mathbf {RP}}^2$. The projective plane with a conic provide the projective model of the hyperbolic plane, see Figures and , so the configuration in realized in $H^2$. Consider the complexification, the respective configuration theorem in ${\\mathbf {CP}}^2$ with the polarity induced by $\\Delta$. According to Lemma , this yields a configuration of lines in $H^3$ such that the pairs of incident points and lines correspond to pairs of lines intersecting at right angle.\n\nAnother way of saying this is by way of comparing Lemmas and : the relations in the Lie algebras $sl(2,{\\mathbb R})$ and $sl(2,{\\mathbb C})$ are the same, hence to a configuration theorem in $H^2$ there corresponds a skewer configuration theorem in $H^3$.\n\n## Euclidean picture\n\nThe following description of the Euclidean case is due to I. Dolgachev (private communication).\n\nAdd the plane at infinity to ${\\mathbb R}^3$; call this plane $H$. 
A point of $H$ represents a family of parallel lines in ${\\mathbb R}^3$. For a line $L$ in ${\\mathbb R}^3$, let $q(L)=L\\cap H$ be its direction, that is, the respective point at infinity.\n\nOne has a polarity in $H$ defined as follows. Let $A$ be a point in $H$. This point corresponds to a direction in ${\\mathbb R}^3$. The set of orthogonal directions constitutes a line $A^*$ in $H$; this is the line polar to $A$.\n\n**Lemma 4**. * Let $L$ and $M$ be skew lines in ${\\mathbb R}^3$. Then $$q(S(L,M))= q(L)^* \\cap q(M)^*.$$*\n\n#### Proof.\n\nThe direction $q(L)^* \\cap q(M)^*$ is orthogonal to $L$ and to $M$, and so is the skewer $S(L,M)$. This implies the result. $\\Box$``{=html}\n\nThus the skewer $S(L,M)$ is constructed as follows: find points $q(L)$ and $q(M)$ of the plane at infinity $H$, intersect their polar lines, and construct the line through point $q(L)^* \\cap q(M)^*$ that intersect $L$ and $M$. This line exists and is, generically, unique: it is the intersection of the planes through point $q(L)^* \\cap q(M)^*$ and line $L$, and through point $q(L)^* \\cap q(M)^*$ and line $M$.\n\nTo summarize, a skewer configuration in ${\\mathbb R}^3$ has a 'shadow' in the plane $H$: to a line $L$ there corresponds the point $q(L)$ that is also identified with its polar line $q(L)^*$. In this way, the shadow of a skewer configuration is the respective projective configuration in the plane $H$. For example, both Theorems and become the usual Pappus theorem in $H$.\n\n## Odds and ends\n\n1). *Legendrian lift*. One can associate a skewer configuration in ${\\mathbf {RP}}^3$ to a configuration in $S^2$ using contact geometry.\n\nA cooriented contact element in $S^2$ is a pair consisting of a point and a cooriented line through this point. The space of cooriented contact elements is $SO(3)={\\mathbf {RP}}^3$. We consider ${\\mathbf {RP}}^3$ with its metric of constant positive curvature (elliptic space). The projection ${\\mathbf {RP}}^3\\to S^2$ that sends a contact element to its foot point is a Hopf fibration.\n\nThe space of contact elements carries a contact structure generated by two tangent vector fields: $u$ is the rotation of a contact element about its foot point, and $v$ is the motion of the foot point along the respective geodesic. The fields $u$ and $v$ are orthogonal to each other.\n\nA curve tangent to the contact structure is called Legendrian. A smooth cooriented curve in $S^2$ has a unique Legendrian lift: one assigns to a point of the curve the tangent line at this point.\n\nConsider a configuration of points and (oriented) lines in $S^2$. One can lift each point as a Legendrian line in ${\\mathbf {RP}}^3$, consisting of the contact elements with this foot point. Likewise, one can lift each line as a Legendrian line, consisting of the contact elements whose foot point lies on this line. As a result, a configuration of lines and points in $S^2$ lifts to a configuration of lines in ${\\mathbf {RP}}^3$ intersecting at right angle, as described in Theorem .\n\nThe family of (oriented) Legendrian lines in ${\\mathbf {RP}}^3$ is 3-dimensional; it forms the Lagrangian Grassmannian $\\Lambda(2) \\subset G(2,4)$. In the classical terminology, the 3-parameter family of Legendrian lines in projective space is the null-system, .\n\n2). *Comparing the elliptic and hyperbolic approaches*. The approaches of Sections and are parallel. The sphere $S^2$ in Section is the spherization of ${\\mathbb R}^3=so(3)$, the Lie bracket being the cross-product of vectors. 
The pole of a line $uv$ in $S^2$ corresponds to the vector $u\\times v$ in ${\\mathbb R}^3$. Thus the operations of connecting two points by a line and of intersecting two lines are encoded by the Lie bracket of $so(3)$.\n\nLikewise, the Poisson bracket of two quadratic forms in Section can be identified with the Minkowski cross-product that encodes the operations of connecting two points by a line and of intersecting two lines.\n\nNote that $so(3)$ is the Lie algebra of motions of $S^2$, whereas $sl(2,{\\mathbb R})$ is the Lie algebra of motions of $H^2$, and the complex forms of these Lie algebras coincide. Interestingly, this Lie algebraic approach to configuration theorems fails in the Euclidean plane, see for a discussion; however, Euclidean skewer configurations, such as the Petersen-Morley theorem, can be described in terms of the Lie algebra of motions of ${\\mathbb R}^3$, see .\n\nIn both proofs, one goes from the Lie algebra of motions in dimension 2 to that in dimension 3. In the elliptic situation, we have $so(4)=so(3)\\oplus so(3)$, and in the hyperbolic situation, the Lie algebra of motions of $H^3$ is $sl(2,{\\mathbb C})$. Accordingly, an elliptic skewer configuration splits into the product of two configurations in $S^2$, and a hyperbolic skewer configuration is obtained from a configuration in $H^2$ by complexification.\n\n3). *Skewers in ${\\mathbb R}^3$ via dual numbers*. One can approach skewer configurations in ${\\mathbb R}^3$ using Study's dual numbers ; see for a modern account.\n\nDual numbers are defined similarly to complex numbers: $$a+\\varepsilon b,\\ {\\rm where}\\ a,b\\in{\\mathbb R},\\ {\\rm and}\\ \\varepsilon^2=0.$$ Dual vectors are defined analogously.\n\nTo an oriented line $\\ell$ in ${\\mathbb R}^3$ one assigns the dual vector $\\xi_{\\ell}=u+\\varepsilon v$, where $u\\in S^2$ is the unit directing vector of $\\ell$, and $v$ is the moment vector: $v=P\\times u$ where $P$ is any point of $\\ell$. The vectors $\\xi_{\\ell}$ form the Study sphere: $\\xi_{\\ell}\\cdot \\xi_{\\ell}=1$.\n\nThis construction provides an isomorphism between the isometry group of ${\\mathbb R}^3$ and the group of dual spherical motions. Two lines $\\ell$ and $m$ intersect at right angle if and only if $\\xi_{\\ell}\\cdot \\xi_{m}=0$. Thus skewer configurations in ${\\mathbb R}^3$ correspond to configurations of lines and points in the Study sphere whose real part are the respective configurations in $S^2$.\n\n# Circles\n\nDenote the set of lines in 3-space that share a skewer $\\ell$ by ${\\cal N}_\\ell$. We saw in Section that ${\\cal N}_\\ell$ is an analog of a line in the plane. Two-parameter families of lines in 3-space are called congruences. ${\\cal N}_\\ell$ is a linear congruence: it is the intersection of the Klein quadric with a 3-dimensional subspace ${\\mathbf {RP}}^3 \\subset {\\mathbf {RP}}^5$, that is, it is defined by two linear equations in Pl\u00fccker coordinates.\n\nNow we describe line analogs of circles.\n\nLet $\\ell$ be an oriented line in 3-space (elliptic, Euclidean, or hyperbolic). Let $G_\\ell$ be the subgroup of the group of orientation preserving isometries that preserve $\\ell$. This group is 2-dimensional. 
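For instance, in the Euclidean case, if $\\ell$ is the $z$-axis, then $G_\\ell$ consists of the screw motions $$(x,y,z)\\longmapsto (x\\cos\\theta-y\\sin\\theta,\\ x\\sin\\theta+y\\cos\\theta,\\ z+t),\\qquad \\theta\\in[0,2\\pi),\\ t\\in{\\mathbb R},$$ parametrized by the rotation angle $\\theta$ about the axis and the translation $t$ along it. 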
Following , we call the orbit $G_\\ell(m)$ of an oriented line $m$ an *axial congruence* with $\\ell$ as axis.\n\nIn particular, ${\\cal N}_\\ell$ is an axial congruence.\n\nIn ${\\mathbb R}^3$ (the case considered in ), the lines of an axial congruence with axis $\\ell$ are at equal distances $d$ from $\\ell$ and make equal angles $\\varphi$ with it. One defines the dual angle between two oriented lines as $\\varphi + \\varepsilon d$, see . The dual angle between the lines of an axial congruence and its axis is constant.\n\nThus, in ${\\mathbb R}^3$, an axial congruence consists of a regulus (one family of rulings of a hyperboloid of one sheet) and its parallel translations along its axis.\n\nLikewise, one defines a complex distance between oriented lines $\\ell$ and $m$ in $H^3$. Let $d$ be the distance from $\\ell$ to $m$ along their skewer $S(\\ell,m)$, and let $\\varphi$ be the angle between $m$ and the line $\\ell'$, orthogonal to $S(\\ell,m)$ in the plane spanned by $\\ell$ and $S(\\ell,m)$, and intersecting $m$. (Both $d$ and $\\varphi$ have signs determined by a choice of orientation of the skewer). Then the complex distance is given by the formula $\\chi(\\ell,m)=d+i\\varphi$, see . Again, the complex distance between the lines of an axial congruence and its axis is constant.\n\nIf $\\ell_{1,2}$ and $m_{1,2}$ are the respective points on the sphere at infinity ${\\mathbf {CP}}^1$ then $$\\cosh^2\\left(\\frac{\\chi(\\ell,m)}{2}\\right)=[\\ell_1,m_1,m_2,\\ell_2],$$ where the cross-ratio is given by the formula $$[a,b,c,d]=\\frac{(a-c)(b-d)}{(a-d)(b-c)},$$ see .\n\nIn the next lemma, ${\\mathbf {CP}}^1$ is the 'celestial sphere', that is, the sphere at infinity of $H^3$.\n\n**Lemma 5**. * Let $\\psi:{\\mathbf {CP}}^1\\to{\\mathbf {CP}}^1$ be a M\u00f6bius (projective) transformation having two distinct fixed points. The family of lines connecting point $z\\in {\\mathbf {CP}}^1$ with the point $\\psi(z)$ is an axial congruence, and all axial congruences are obtained in this way.*\n\n#### Proof.\n\nWithout loss of generality, assume that the fixed points of $\\psi$ are $0$ and $\\infty$, and let $\\ell$ be the line through these points. Then $\\psi(z)=cz$ for some constant $c\\in{\\mathbb C}$. One has $[0,z,cz,\\infty]=[0,1,c,\\infty]=c\/(c-1).$ Hence, for the lines $m$ connecting $z$ and $\\psi(z)$, the complex distance $\\chi(\\ell,m)$ is the same.\n\nConversely, given an axial congruence, we may assume, without loss of generality, that its axis $\\ell$ connects $0$ and $\\infty$. Then $G_{\\ell}$ consists of the transformations $z\\mapsto kz,\\ k\\in{\\mathbb C}$. Let $m$ be the line connecting points $w_1$ and $w_2$. Then the axial congruence $G_{\\ell}(m)$ consists of the lines connecting points $k w_1$ and $kw_2=\\psi(kw_1)$, with $\\psi: z\\mapsto (w_2\/w_1) z$. $\\Box$\n\nIn $S^3$, an axial congruence is characterized by the condition that the angles $\\alpha$ and $\\beta$ (see the proof of Lemma ) between the axis and the lines of the congruence are constant. It follows from the proof of Lemma that an axial congruence is a torus, a product of circles, one in $S^2_-$ and another in $S^2_+$.\n\nThus an axial congruence of lines is an analog of a circle in 2-dimensional geometry. The arguments from Section imply analogs of the basic properties of circles:\n\n1. If two generic axial congruences share a line then they share a unique other line.\n\n2. 
Three generic oriented lines belong to a unique axial congruence.\n\n(A direct proof of the first property: if the axes of the congruences are $\\ell_1$ and $\\ell_2$, and the shared line is $m$, then the second shared line is obtained from $m$ by reflecting in $S(\\ell_1,\\ell_2)$ and reverting the orientation).\n\nUsing the approach of Section , one extends the Correspondence principle to theorems involving circles. For example, one has\n\n**Theorem 6** (Skewer Pascal theorem). * Let $A_1,\\ldots,A_6$ be lines from an axial congruence. Then $$S(S(A_1,A_2),S(A_4,A_5)),\\ S(S(A_2,A_3),S(A_5,A_6)), \\ {\\rm and}\\ S(S(A_3,A_4),S(A_6,A_1))$$ share a skewer, see Figure .*\n\nAs another example, consider Clifford's Chain of Circles. This chain of theorems starts with a number of concurrent circles labelled $1,2,3,\\ldots, n$. In Figure , $n=5$, and the initial circles are represented by straight lines (so that their common point is at infinity).[^7] The intersection point of circles $i$ and $j$ is labelled $ij$. The circle through points $ij, jk$ and $ki$ is labelled $ijk$.\n\nThe first statement of the theorem is that the circles $ijk, jkl, kli$ and $lij$ share a point; this point is labelled $ijkl$. The next statement is that the points $ijkl, jklm, klmi, lmij$ and $mijk$ are cocyclic; this circle is labelled $ijklm$. And so on, with the claims of being concurrent and cocyclic alternating; see , and for a relation with completely integrable systems.\n\nA version of this theorem for lines in ${\\mathbb R}^3$ is due to Richmond . The approach of Section provides an extension to the elliptic and hyperbolic geometries.\n\n**Theorem 7** (Clifford's Chain of Lines). * 1) Consider axial congruences ${\\cal C}_i,\\ i=1,2,3,4$, sharing a line. For each pair of indices $i,j \\in \\{1,2,3,4\\}$, denote by $\\ell_{ij}$ the line shared by ${\\cal C}_i$ and ${\\cal C}_j$, as described in statement 1 above. For each triple of indices $i,j,k \\in \\{1,2,3,4\\}$, denote by ${\\cal C}_{ijk}$ the axial congruence containing the lines $\\ell_{ij},\\ell_{jk},\\ell_{ki}$, as described in statement 2. Then the congruences ${\\cal C}_{123}, {\\cal C}_{234}, {\\cal C}_{341}$ and ${\\cal C}_{412}$ share a line. \n2) Consider axial congruences ${\\cal C}_i,\\ i=1,2,3,4,5$, sharing a line. Each four of the indices determine a line, as described in the previous statement of the theorem. One obtains five lines, and they all belong to an axial congruence. \n3) Consider axial congruences ${\\cal C}_i,\\ i=1,2,3,4,5,6$, sharing a line. Each five of them determine an axial congruence, as described in the previous statement of the theorem. One obtains six axial congruences, and they all share a line. And so on...*\n\nNext, we present an analog of the Poncelet Porism, see, e.g., . This theorem states that if there exists an $n$-gon inscribed into a conic and circumscribed about a nested conic then every point of the outer conic is a vertex of such an $n$-gon, see Figure .\n\nConsider a particular case when both conics are circles (a pair of nested conics can be sent to a pair of circles by a projective transformation). The translation to the language of lines in space is as follows.\n\nConsider two generic axial congruences ${\\cal C}_1$ and ${\\cal C}_2$, and assume that there exists a pair of lines $\\ell_1\\in {\\cal C}_1$ and $\\ell_2\\in {\\cal C}_2$ that intersect at right angle. That is, ${\\cal C}_1$ and ${\\cal N}_{\\ell_2}$ share the line $\\ell_1$. 
By property 1) above, there exists a unique other line $\\ell_1'\\in {\\cal C}_1$, shared with ${\\cal N}_{\\ell_2}$, that is, $\\ell_1'$ intersects $\\ell_2$ at right angle. Then there exists a unique other line $\\ell_2'\\in {\\cal C}_2$ that intersects $\\ell_1'$ at right angle, etc. We obtain a chain of intersecting orthogonal lines, alternating between the two axial congruences.\n\nThe following theorem holds in the three classical geometries.\n\n**Theorem 8** (Skewer Poncelet theorem). * If this chain of lines closes up after $n$ steps, then the same holds for any starting pair of lines from ${\\cal C}_1$ and ${\\cal C}_2$ that intersect at right angle.*\n\n#### Proof.\n\nArguing as in Section , we interpret one axial congruence as the set of points of a spherical circle, and another one as the set of geodesic circles tangent to a spherical circle. The incidence between a geodesic and a point corresponds to two lines in space intersecting at right angle. Thus the claim reduces to a version of the Poncelet theorem in $S^2$ where a spherical polygon is inscribed in a spherical circle and circumscribed about a spherical circle.\n\nThis spherical version of the Poncelet theorem is well known, see, e.g., . For a proof, the central projection sends a pair of disjoint circles to a pair of nested conics in the plane, and the geodesic circles to straight lines, and the result follows from the plane Poncelet theorem. $\\Box$\n\nA pair of nested circles in the Euclidean plane is characterized by three numbers: their radii, $r<R$, and the distance between their centers.\n\n# Pappus revisited\n\nIn this section we prove Theorem . This computational proof is joint with R. Schwartz.\n\nAs before, it suffices to establish the hyperbolic version of Theorem . We use the approach to 3-dimensional hyperbolic geometry, in the upper half-space model, developed by Fenchel ; see also . The relevant features of this theory are as follows.\n\nTo a line $\\ell$ in $H^3$, one assigns the reflection in this line, an orientation preserving isometry of the hyperbolic space, an element of the group $PGL(2,{\\mathbb C})$. One can lift it to a matrix ${M_{\\ell}} \\in GL(2,{\\mathbb C})$, defined up to a complex scalar. Since reflection is an involution, one has ${\\rm Tr} (M_{\\ell})=0$. More generally, a traceless matrix $M \\in GL(2,{\\mathbb C})$ is called a *line matrix*; it satisfies $M^2 = -\\det(M) E$ where $E$ is the identity matrix.\n\nThe skewer relations translate to the language of matrices as follows:\n\n- two lines $\\ell$ and $n$ intersect at right angle if and only if ${\\rm Tr} (M_{\\ell} M_n)=0$;\n\n- the skewer of two lines $\\ell$ and $n$ corresponds to the commutator $[M_{\\ell},M_n]$;\n\n- three lines $\\ell,m,n$ share a skewer if and only if the matrices $M_{\\ell}, M_m$, and $M_n$ are linearly dependent.\n\nLikewise, one assigns matrices to points. The reflection in a point $P$ is an orientation-reversing isometry of $H^3$; one assigns to it a matrix $N_P$ in $GL(2,{\\mathbb C})$, defined up to a real scalar, with $\\det N_P >0$ and satisfying $N_P {\\overline N_P} = -\\det(N_P) E$, where bar means the entry-wise complex conjugation of a matrix. Such matrices are called *point matrices*.\n\nEquivalently, point matrices $N$ satisfy $n_{22}=-{\\bar n_{11}}, n_{12}\\in{\\mathbb R}, n_{21}\\in{\\mathbb R}$, that is, the real part of $N$ is a traceless matrix, and the imaginary part is a scalar matrix. 
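In other words, a point matrix has the form $$N=\\begin{pmatrix} a & p\\\\ q & -a\\end{pmatrix}+i\\beta E,\\qquad a,p,q,\\beta\\in{\\mathbb R}.$$ 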
It is convenient to normalize so that the imaginary part is $E$, and then $N$ can be thought of as a real 3-vector consisting of three entries of the real part of $N$.\n\nIncidence properties translate as follows:\n\n- a point $P$ lies on a line $\\ell$ if and only if $M_{\\ell} N_P = N_P {\\overline M_{\\ell}}$;\n\n- three points are collinear if and only if the respective point matrices are linearly dependent (equivalently, over ${\\mathbb R}$ or ${\\mathbb C}$).\n\nWe need a formula for a line matrix corresponding to the line through two given points. Let $N_1$ and $N_2$ be point matrices corresponding to the given points. Then the desired line matrix $M\\in GL(2,{\\mathbb C})$ satisfies the system of linear equations $$\\label{linepoint}\nM N_1 = N_1 \\overline {M},\\ M N_2 = N_2 {\\overline M},\\ {\\rm Tr} (M) =0.$$ This system is easily solved and it defines $M$ up to a factor (we do not reproduce the explicit formulas here).\n\nWith these preliminaries, the proof proceeds in the following steps.\n\n1. Start with two triples of linearly dependent point matrices, corresponding to the triples of points $A_1,A_2,A_3$ and $B_1,B_2,B_3$.\n\n2. Compute the line matrices corresponding to the lines $(A_1 B_2)$ and $(A_2 B_1)$, $(A_2 B_3)$ and $(A_3 B_2)$, and $(A_3 B_1)$ and $(A_1 B_3)$ by solving the respective systems ().\n\n3. Compute the commutators of these three pairs of line matrices.\n\n4. Check that the obtained three matrices are linearly dependent.\n\nWe did these computations in Mathematica. Since a line matrix is traceless, it can be viewed as a complex 3-vector, and the last step consists in computing the determinant formed by the three 3-vectors. The result of this last computation was zero (for arbitrary initial point matrices), which proves the theorem.\n\n**Remark 6**. *Theorem can be restated somewhat similarly to Theorem . Given two skew lines $L$ and $M$, consider the 1-parameter family of lines ${\\cal F}(L,M)$ consisting of the lines that pass through a point $A\\in L$ and are orthogonal to the plane spanned by point $A$ and line $M$. Likewise, one has the 1-parameter family of lines ${\\cal F}(M,L)$. These families, ${\\cal F}(L,M)$ and ${\\cal F}(M,L)$, replace the 2-parameter families of lines ${\\cal N}_L$ and ${\\cal N}_M$ in the formulation of Theorem , and yield Theorem . *\n\n**Remark 7**. *F. Bachmann developed an approach to 2-dimensional geometry (elliptic, Euclidean, and hyperbolic) based on the notion of reflection and somewhat similar to Fenchel's approach to 3-dimensional hyperbolic geometry . Namely, to a point $P$ there corresponds the reflection $\\sigma_P$ in this point, and to a line $\\ell$ \u2013 the reflection $\\sigma_{\\ell}$ in this line. The incidence relation $P \\in \\ell$ is expressed as $\\sigma_P \\sigma_{\\ell} = \\sigma_{\\ell} \\sigma_P$. Two lines, $\\ell$ and $m$, are orthogonal if and only if $\\sigma_{\\ell} \\sigma_m = \\sigma_m \\sigma_{\\ell}$. More generally, one has a system of axioms of plane geometry in terms of involutions in the group of motions. At the present writing, it is not clear how to deduce the Correspondence principle using this approach. 
*\n\n[^1]: Department of Mathematics, Penn State University, University Park, PA 16802; firstname.lastname@example.com\n\n[^2]: One can also define the skewer of two intersecting lines: it's the line through the intersection point, perpendicular to both lines.\n\n[^3]: The configuration theorems of plane geometry also rely on similar general position assumptions.\n\n[^4]: On $S^2$, this is a 1-1 correspondence between oriented great circles and points; in its quotient ${\\mathbf {RP}}^2$, the elliptic plane, the orientation of lines becomes irrelevant.\n\n[^5]: *This result is also known as Hjelmslev-Morley theorem, see .*\n\n[^6]: Which was used to create illustrations in this paper.\n\n[^7]: As usual, lines are considered as circles of infinite radius.","meta":{"dup_signals":{"dup_doc_count":17,"dup_dump_count":14,"dup_details":{"curated_sources":2,"2022-49":1,"2022-21":1,"2021-10":1,"2020-50":1,"2020-10":1,"2019-35":1,"2019-18":1,"2018-47":1,"2018-13":1,"2017-39":3,"2017-26":1,"2023-50":1,"2017-13":1}},"filename":"out\/1509.05903_extract_Skewers5.tex.md"},"subset":"arxiv"} +{"text":"author: Daniel Graziotin [^1]; Xiaofeng Wang; Pekka Abrahamsson\nbibliography: references.bib\ntitle: How do you feel, developer? An explanatory theory of the impact of affects on programming performance[^2]\n\n# Introduction\n\nIt has been established that software development is intellectual, and it is carried out through cognitive processes . Software development happens in our minds first, then on artifacts . We are human beings, and, as such, we behave based on affects as we encounter the world through them . Affects\u2014which for us are emotions and moods[^3]\u2014are the medium within which acting towards the world takes place .\n\nThe affects pervade organizations because they influence worker's thoughts and actions . Affects have a role in the relationships between workers, deadlines, work motivation, sense-making, and human-resource processes . Although affects have been historically neglected in studies of industrial and organizational psychology , an interest in the role of affects on job outcomes has accelerated over the past fifteen years in psychology research . In particular, the link between affects and work-related achievements, including performance and problem-solving processes, such as creativity , has been of interest for recent research. While research is still needed on the impact of affects to cognitive activities and work-related achievements in general, this link undeniably exists according to psychology research. We believe that it is important to understand the role of affects in software development processes and their impact on the performance[^4] of developers.\n\nIt has been argued that software engineering has to produce knowledge that matters to practitioners . Indeed, we have shown elsewhere that practitioners are deeply interested in their affects while developing software, which causes them to engage in long and interesting discussions when reading related articles.\n\nWe share view that software engineering should also be studied from a behavioral perspective. We have embraced this view in previous studies\u2014e.g., and have employed theories and measurement instruments from psychology to understand how affect impact o\u00a0software developers' performance under a quantitative strategy using experiments. 
However, in order to understand the human behavior behind affects and software development, there is a need to observe software developers in-action and perform interviews. So far, research has not produced qualitative insights on the mechanism behind the impact of affects on the performance of developers. We have called for such studies in the past . Moreover, a lack of theory in software engineering has been recently highlighted .\n\nThus, we conducted a study laying down the theoretical answers to the research question *how are developers' experienced affects related to performance while programming?*. In this paper, we report an interpretive study of the impact of affects of developers on the software development performance. By deeply observing and open interviewing two developers during a development cycle, we constructed an explanatory theory, called *Type II* theory by , for explaining the impact of affects on development performance.\n\nThe remainder of this paper is structured as follows. In the *Background* section, we first briefly introduce what we mean with *affects*. We then review the related studies of affects and the performance of developers. Then, we provide the theoretical framing of this study and the theory representation. The following section summarizes the methodology of this study by explicating our worldview and how we chose among the various options, the research design, the data analysis method, and the reliability procedures. We then report the results of our work, i.e., an explanatory theory of the impact of affects on programming performance, as well as a discussion and comparison with related work. The last section concludes the paper by providing the contribution and implications of our study, the limitations, and the suggested future work.\n\n# Background\n\nIn this section, we first briefly introduce what we mean with *affects*, and we review the papers in the software engineering field, where the affects of software developers have been taken into consideration with respect to performance.\n\n## Affect, emotion, and mood\n\nThe fields of psychology have yet to agree on the definitions of affects and the related terms such as emotions, moods, and feelings . Several definitions for affects, emotions, and moods exist\u2014to the point that defined the study of affects as a \"very confused and confusing field of study\" (p. 2). We are aware that some proposals have been established more than others. For example, have defined *emotions* as the states of mind that are raised by external stimuli and are directed toward the stimulus in the environment by which they are raised. have defined *moods* as emotional states in which the individual feels good or bad, and either likes or dislikes what is happening around him\/her. In other words, mood has been defined as a suffused emotion, where no originating stimulus or a target object can be distinguished .\n\nThe issue with the proposed definitions, including those reported above, is that hundreds of competing definitions have been produced in just a few years and a consensus has yet to be reached. There are also cultural issues to be considered. For example, emotion as a term is not universally employed, as it does not exist in all languages and cultures . 
Distinctions between emotions and moods are clouded, because both may feel very much the same from the perspective of an individual experiencing either .\n\nAs emotions and moods may feel the same from the perspective of an individual, we have adopted the stance of several researchers in the various fields and employed the noun *affects* (and affective states) as an underlying term for emotions and moods. We do not neglect moods and emotions *per se*[^5]. We opted to understand the states of minds of software developers at the *affective* level only, that is \"one level below\" moods and emotions. Our choice was not unthoughtful. We have adhered to the *core affect* theory , which employs affect as the atomic unit upon which moods and emotional experiences can be constructed. That is, in this article we do not distinguish between emotions and moods. We are interested in understanding how developers feel.\n\n## Related work\n\nstudied 56 software engineers in a field study with removed treatment design.[^6] The aim of the study was to understand the impact of music listening on software design performance. The study was conducted over a five-week period. The design performance and the affects of the developers were self-assessed twice per day. For the first week of the study (the baseline), the participants were observed in natural settings\u2014that is, they worked as usual, doing what they do usually. During the second and third week, the participants were allowed to listen to their favorite music while working. However, during the fourth week, listening to music was not allowed. During the fifth week, the participants were allowed again to listen to the music. The results indicated a positive correlation of positive affects and listening to favorite music. Positive affects of the participants and self-assessed performance were lowest with no music, but not statistically significant. On the other hand, narrative responses revealed the value of music listening for positive mood change and enhanced perception on software design performance.\n\nAlong a similar line, theoretically constructed links from psychology and cognitive science studies to software development studies. In this construction, programming tasks were linked to cognitive tasks, and cognitive tasks were linked to affects. For example, the process of constructing a program\u2014e.g. modeling and implementation\u2014was mapped to the cognitive tasks of memory, reasoning, and induction. conducted two studies to understand the impact of affects on the debugging performance of developers. In the first study, positive affects were induced to the software developers. Subsequently, the developers completed a quiz about software debugging. In the second study, the participants wrote traces of the execution of algorithms on paper. During the task, the affect arousal was induced to the participants. Overall, the results of the two studies provided empirical evidence for a positive correlation between the affects of software developers and their debugging performance.\n\nWe also conducted two studies to understand the connection between affects and the performance of software developers. In the first study , we recruited 42 computer science students to investigate the relationship between the affects of software developers and their performance in terms of creativity and analytic problem-solving. 
In a natural experiment, the participants performed two tasks chosen from psychology research that could be transposed to development activities. The participants' pre-existing affects were measured before each task. Overall, the results showed that the happiest developers are better problem solvers in terms of their analytic abilities.\n\nThe second study was a correlation study of real-time affects and the self-assessed productivity of eight software developers while they were performing a 90-minute programming task on a real-world project. The developers' affects and their productivity were measured in intervals of 10 minutes. Through the fit of a linear mixed effects model, we found evidence for a positive correlation between the affects of developers associated with a programming task and their self-assessed productivity. In this study, we called for process-based studies on software teams, which \"are required in order to understand the dynamics of affects and the creative performance of software teams and organizations\" (p. 17).\n\nperformed a study with 17 participants, 6 of whom were professional software developers and 11 were PhD students in computer science. The participants were asked to perform two change tasks, one to retrieve StackOverflow scores and the other to let users undo more than one command in the JHotDraw program. During the development, the participants were observed using three biometric sensors, namely an eye tracker, an electroencephalogram, and a wearable wireless multi-sensor for physiological signals (e.g., heart rate, temperature, skin conductance). After watching a relaxing video, the participants worked on both tasks in a randomly assigned order. They were then interrupted after 5 minutes of working or when they showed strong signs of emotions. During each interruption, the participants rated their affects using a psychology measurement instrument. After another 30 minutes of work, the participants repeated the experiment design using the second task. Finally, the participants were interviewed. Overall, the study found that (1) developers feel a broad range of affects, expressed using the two-dimensional measures of valence and arousal instead of labeling the affects, (2) the affects expressed as valence and arousal dimensions are correlated with the perceived progress in the task (evaluated using a 1-5 Likert scale), and (3) the most important aspects that affect positive emotions and progress are the ability to locate and understand relevant code parts, and the mere act of writing code instead of doing nothing. On the other hand, most negative affects and stuck situations arose from not having clear goals and from being distracted.\n\nSo far, the literature review has shown that the number of studies regarding the affects and the performance of developers is limited. Furthermore, the studies are all quantitative and oriented toward variance theory.\n\nVariance theories, as opposed to process theories, provide explanations for phenomena in terms of relationships among dependent and independent variables . In variance theory, the precursor is both a necessary and sufficient condition to explain an outcome, and the time ordering among the independent variables is immaterial . Strictly speaking, variance theory studies are hypothesis-driven studies, which aim to quantify the relationship between two variables in their base case.\n\nProcess research is concerned with understanding *how* things evolve over time and *why* they evolve in the way we observe . 
According to , process data consist mainly of \"stories\"\u2014which are implemented using several different strategies\u2014about what happened during observation of events, activities, choices, and people performing them, over time. has contrasted process theory with variance theory by stating that the basis of explanation of things is a probabilistic rearrangement instead of clear causality, and the precursor in process theory is only a necessary condition for the outcome.\n\nIn the literature review, a lack of theoretical and process-based studies was identified. For this reason, we aimed at developing a process-based theory.\n\n## Theoretical framework\n\nOur theoretical framework was primarily based upon the Affective Events Theory (AET) by and the episodic process model of performance episodes by . AET has been developed as a high-level structure to guide research on how affects influence job satisfaction and job-related performance.\n\nIn AET, the work environment settings (e.g., the workplace, the salary, promotion opportunities, etc.) mediate work events that cause affective reactions, which are interpreted according to the individuals' disposition. Affective reactions then influence work-related behaviors. Work-related behaviors are divided into affect-driven behaviors and judgment-driven behaviors. Affect-driven behaviors are behaviors, decisions, and judgments that are immediate consequences of being in particular emotions and moods. One example could be overreacting to a criticism. Judgment-driven behaviors are driven by the more enduring work attitudes about the job and the organization . Examples are absenteeism and leaving.\n\nAs noted ten years after publishing AET, AET has often been erroneously employed as a theoretical model to explain affective experiences at work. However, AET is a *macrostructure* for understanding affects and job satisfaction in the workplace, and for guiding future research on their causes, consequences, and explanations. More specifically, AET is not a framework to explain the performance on the job, nor is it a model to explain the impact of all affects on job-related behaviors.\n\nIn their conceptual paper, provided a model that links the experiencing of affects to individual performance. The model is centered around the conceptualization of performance episodes, which relies on self-regulation of attention regarding the on-task focus and the off-task focus. The cognitive resources available for switching focus are limited. Affects, according to , hinder the on-task performance regardless of whether they are positive or negative. The reason is that affective experiences create cognitive demand. Therefore, affective experiences, according to this model, influence the resource allocation towards off-task demand.\n\n## Theory construction and representation\n\nInterpretive research is often conducted when producing theories for explaining phenomena . examined the structural nature of theories in information systems research. Gregor proposed a taxonomy to classify theories with respect to how they address the four central goals of analysis and description, explanation, prediction, and prescription. We employed the widely established work as a framework for classifying and expressing our proposed theory.\n\nA *type II*\u2014or explanation\u2014theory provides explanations but does not aim to predict with any precision. 
The structural components of a Type II theory are (1) the means of representation\u2014e.g., words, diagrams, graphics, (2) the constructs\u2014i.e., the phenomena of interest, (3) the statements of relationships\u2014i.e., showing the relationships between the constructs, (4) the scope\u2014the degree of generality of the statements of relationships (e.g., some, many, all, never) and statements of boundaries, and (5) the causal explanations, which are usually included in the statements of relationships. While conducting this study, we ensured that the constructed theory was composed of these elements.\n\nOur study attempts to broaden our understanding of topics that are novel and unexplored in our field. warned us that \"novelty, however, comes at a cost: novel things are harder to understand and, especially, to appreciate\" (p. 300). Therefore, we have to proceed carefully in the theory building process. The risk is to get lost in complex interrelated constructs in a confused and confusing field of study brought into the complicated, creative domain that is software engineering. Furthermore, advised researchers that, when understanding emotion dynamics, the bigger the team under observation, the more complex and complicated the team dynamics. Bigger teams have complicated, and even historical, reasons that are harder to grasp\u2014triggering a complex, powerful network of affects . Therefore, there is the need to keep the phenomenon under study as simple as possible. For novel theory development, philosophers and economists often\u2014but not always\u2014draw from their own personal observation and reasoning, while still being able to offer a sound empirical basis . Theorizing from the ivory tower can complement the scientific method by offering insights and discovering necessary truths , to be further expanded by empirical research. Our empirical stance makes us eager to jump to data and start theorizing; yet, we need to take some precautionary measures before doing this.\n\nWhen novel theories are to be developed in new domains, such as software engineering, a small sample should be considered . A small sample enables the development of an in-depth understanding of the new phenomena under study and helps avoid isolation in the ivory tower. Our research carefully follows these recommendations, which is reflected in our study design. classic article takes the same stance, reporting that organizational study theories are approximations of complex interrelated constructs of human nature that often have small samples. Those works are often seen as substitutes for theory studies, but they often represent \"struggles in which people intentionally inch toward stronger theories\" (ibid, p. 1). Such struggles are needed when a phenomenon is too complex to be captured in detail . These issues were taken into account when we designed our study, which is demonstrated in the following section.\n\n# Methodology\n\nWe describe our research as a qualitative interpretive study, which was based on face-to-face open-ended interviews, in-field observations, and e-mail exchanges. Given the aim of the study, there was the need to make sense of the developers' perceptions, experiences, interpretations, and feelings. 
We wanted to conduct open-ended interviews where the realities constructed by the participants are analyzed and reconstructed by the researcher.\n\nOur epistemological stance for understanding these social constructs and interactions has been interpretivism, which we make coincide with social constructivism in line with other authors . Interpretive data analysis has been defined succinctly by as \"really our own constructions of other people's constructions of what they and their compatriots are up to\" (p. 9). Interpretivism is now established in information systems research , but we see it still emerging in software engineering research.\n\n## Design\n\nAs per our chosen design, the participants were free to carry out the development of the system in any way and with any method, practice, and process they wished to employ. Our study comprised regularly scheduled face-to-face meetings with recorded interviews, impromptu meetings which could be called for by the participants themselves, e-mail exchanges, in-field observations, and a very short questionnaire right after each commit in the git system (explained in section *Reliability*). Therefore, the participants had to be aware of the study design itself, although they were not informed about the aims of the study.\n\nThe participants' native language is Italian, but they have been certified as proficient English speakers. The first author of the present article also employs Italian as his first language, and he was the reference person for the participants for the duration of the entire study. The other two authors of the present article have been certified as proficient and upper intermediate in Italian. The choice for the design of the study was therefore to conduct the interviews in Italian, as the native language let the participants express their opinions and feelings in the richest, unfiltered way . The interviews were subsequently transcribed in English as suggested by common research practice , but the present case had the added value that the authors could validate the transcripts with the participants over the course of the study, given their advanced proficiency in English.\n\nThe in-field observations were performed by two of the present authors, and the personal communications such as e-mails or some impromptu meetings were exchanged between the first author of the study and the participants. The coding activities have been a collaborative effort among all the authors of this study.\n\nIn order to keep the study design and results as simple as possible and to provide precise answers to the research question, in line with what we stated in the section *Theory Construction and Representation*, we observed activities that produced code. Other artifacts such as requirements and design were not taken into consideration. Furthermore, our strategy to limit the complex network of triggered affects was to group and study them along the two well-known dimensions of positive and negative affects , which place the affects\u2014including those perceived as neutral\u2014on a continuum within the two dimensions.\n\nOur design took into account ethical issues, starting with written consent obtained before starting any research activity. The consent form informed the participants of our study in terms of our presence, activities, data recordings, anonymity and data protection, and that their voluntary participation could be interrupted at any time without consequences. 
They were also informed that any report of the study had to be approved by them in terms of their privacy, dignity protection, and data reliability before it was disclosed to any third party. Furthermore, as an extra measure, any additional, personal data coming from e-mail exchanges and some impromptu meetings with a single author was approved by the participants before inclusion in the study data.\n\n## Data analysis\n\nGrounded theory has been indicated for studying human behavior , and it is suitable when the research has an explanatory and process-oriented focus . Qualitative data analysis techniques from grounded theory responded to our needs . We are aware that there has been some heated debate regarding which, between or , is *the* grounded theory qualitative strategy or whether it can be employed merely as a tool to analyze qualitative data . comparison study concludes that researchers should stop debating about grounded theory, select the method that best suits their cognitive style, and start doing research. We agree with them and adopted the social constructivist grounded theory approach as a tool to analyze qualitative data coming from face-to-face open-ended interviews, in-field observations, and e-mail exchanges.\n\nThe adaptation of grounded theory by has merged and unified the major coding techniques into four major phases of coding, which are initial coding, focused coding, axial coding, and theoretical coding. The four coding phases have been adopted in the data analysis process of this study. has often reminded her readers that no author on grounded theory methodology has ever really offered criteria for establishing what we should accept as a coding family, and that the coding phases are often overlapping, iterative, and not strictly sequential within each iteration. This is also true for this study. An exemplar case of our coding activities is shown in Figure . The figure is divided into four columns. The first column provides an interview excerpt. The remaining columns show the intermediate results of the coding activities.\n\nThe *initial coding* phase should stick closely to the data instead of interpreting the data. The researchers should try to see the actions in each segment of data, and to avoid applying pre-existing categories to it. Therefore, has suggested coding the data using a line-by-line approach so that the context is isolated as much as possible, and coding the data as actions. To help focus on the data as actions, it has been suggested to use gerunds. For example, in Figure the second column shows the initial codes assigned to an interview snippet.\n\nThe second coding phase is the *focused coding*. Focused coding means that the most significant or frequent (or both) codes which appeared in the initial coding are employed to sift through larger amounts of data, like paragraphs, speeches, and incidents. This phase is about deciding which initial codes make the most analytic sense for categorizing the data. However, it is also possible to create umbrella codes as substitutes for other codes. During focused coding, the codes become more directed, selective, and conceptual. For example, as shown in Figure , the initial code \"Improving productivity through the use of ST\" was further abstracted as \"Improving productivity through a tool\".\n\nThe third coding phase is the *axial coding*. The axial coding phase has been proposed by . 
As synthesized by , the axial coding process follows the development of major categories, relates categories to subcategories, and relates them to each other. Whereas initial and focused coding fracture the data into pieces, the axial coding phase brings the data back together again. In this phase, the properties and the dimensions of a category are specified. The fourth column of Figure shows an iteration of axial coding.\n\nThe fourth coding phase is the *theoretical coding*. Theoretical coding was introduced by . As synthesized by , the theoretical coding phase specifies how the codes from the previous phases relate to each other as hypotheses to be integrated into a theory.\n\nIt would be impractical to show the steps and complete examples of axial and theoretical coding as they would need several interview excerpts and resulting codes . What we could demonstrate in Figure was that the interview excerpt was further coded in the later coding phases and became part of the evidence to support the key concepts, such as affect, and their components as shown in the fourth column. The overlapping of different categories over the same snippets indicated the potential linkage among them, which became the basis for developing the model proposed in this study.\n\n## Reliability\n\nHere, we describe our procedures for enhancing the reliability of the gathered data and the results. The data was gathered using multiple sources. Each interview was accompanied by handwritten notes, recordings, and related subsequent transcriptions. All in-field observations were accompanied by audio recordings after obtaining the participants' permission. We wrote memos during the study. The transcriptions and the coding phases were conducted using *Atlas.ti 7.5*, which is a recognized instrument for such tasks.\n\nIn order to make the participants focus on their affects and recall how they felt during performance episodes, we asked them to fill out a very short questionnaire at each git commit. The questionnaire was the Self-Assessment Manikin , which is a validated pictorial questionnaire to assess affects. We employed the questionnaire in a previous study as it proved to be quick (three mouse clicks for completing one) and not invasive. We employed the gathered data to triangulate the observational data and the interview data during each interview. If there was disagreement between the qualitative and the quantitative data (e.g., several positive affective episodes but negative quantitative results), we asked for further clarification from the participants to resolve the discrepancies.\n\nAs a further action to enhance the reliability, but also the ethicality, of the study, we asked the participants to individually review the present paper at three different times. The first review session happened on the initial drafts of the paper, when we solely laid down the results of the study. The second review session happened right before submitting the article. The third review session happened before submitting a revised version of the present article. For the reviews, we asked the participants to evaluate the results in terms of their own understanding of the phenomena under study and the protection of their identity and dignity. Because of their valuable help, the proposed theory is shared with them and further validated by them.\n\n# Results and discussion\n\nThe study was set in the context of a Web- and mobile-based health-care information systems development between July and September 2014. 
Two software developers, who were conducting a semester-long real-world project as a requirement for their BSc theses in Computer Science, were put in a company-like environment. Both developers, who we shall call *P1* and *P2* for anonymity reasons, were male. P1 was 22 years old and P2 was 26 years old. They both had about five years of experience developing Web and mobile systems. P1 and P2 had their own spacious office serving as an open space, their own desks and monitors, a fast Internet connection, flip-charts, a fridge, vending machines, and 24\/7 access to the building. The developers accepted to work full time on the project as their sole activity. They were instructed to act as if they were in their own software company. Indeed, the developers were exposed to real-world customers and settings. The customers were the head of a hospital department, a nurse responsible for the project, and the entire nursing department. The development cycle began with a first meeting with the customer, and it ended with the delivery of a featureful first version of the working software.\n\nIt is beneficial to the reader to provide a brief summary of the main events, which have been extracted from our in-field memos. During the first week, P1 had to work on the project without P2. P2 failed to show up at work. During the first days, P2 gave brief explanations about the absence, e.g., housework or sickness. However, the explanations stopped quickly, and P2 stopped answering to text messages and phone calls. At the beginning of the second week, P2 showed up at work. P2 had some private issues, which brought some existential crisis. P1 was initially reluctant to welcome P2 in the development, as all the code so far was P1's creation. The first two days of collaboration brought some tension between the team members, crippled experimentation with the code, and a shared loss of project vision. On the third day of the second week, the team tensions exploded in a verbal fight regarding the data structures to be adopted. At that point, one of the present authors was involved in the discussion. The researcher invited the participants to express their opinion and acted as mediator. A decision was eventually made. The initial tensions between the developers began to vanish, and the work resumed at a fair pace. At the end of the second week, P1 and P2 had a further requirements elicitation session with the customer represented by the head nurse. The development appeared to be back at full speed, and a full reconciliation could be observed between the participants. The progresses succeeded one day after another, and the fully working prototype was demoed and tested during the sixth week.\n\nFace-to-face open-ended interviews happened at the beginning of the project during 11 scheduled meetings and 5 impromptu shorter meetings called by the researchers or by the participants. The impromptu meetings were held mostly because of trivial issues, like casual chatting which turned into a proper interview. Only in one case an impromptu meeting was called by P2 when he finally came back to work. We also did not distinguish between the data coming from the scheduled meetings and the impromptu meetings. The interviews were open-ended and unstructured, but they all began with the question *How do you feel?*. In-field observations happened on an almost daily basis. The participants were informed if they were recorded. We recorded a total of 657 minutes of interviews. 
Finally, data was gathered via the exchange of thirteen emails.\n\nThe transcripts of the interviews were completed immediately after the interviews were concluded. The initial coding phase produced 917 unique codes. The focused coding phase was focused on the individual's experiences of the development process, and it produced 308 codes. Figure provides an example of our coding activities. The axial coding and theoretical coding produced six themes, which are explained in this section. Inconsistencies between the qualitative data and the data from the Self-Assessment Manikin questionnaire happened three times during the entire study. All three discrepancies were minor, and they were immediately solved upon clarification from the participants. For example, in one case the participant P1 reported low values of valence and arousal, and a neutral value for dominance. During the interview, P1 often stated that he had a frustrating day, but there were no mentions of low-arousal negative affects. When asked to explain how the Self-Assessment Manikin values were representative of the work day, the participant added that he felt low esteem, which was caused by episodes of frustration. Overall, P1 was unexcited and lost over the day; thus the reported low value for arousal.\n\nThis section provides the proposed theory. The theory is represented in Figure . We describe the discovered themes and categories (boxes) and their relationships (arrows). While Type II theories are not expected to discuss causal explanations in terms of direction and magnitude , we offer them as they were interpreted from the data. Each relationship is accompanied by a verb, which describes the nature of the relationship. Where possible, we precede the verb with some plus ($+$) or minus ($-$) signs. A plus (minus) sign indicates that we theorize a positive (negative) effect of one construct to another. A double plus (double minus) sign indicates that we theorize a strong positive (strong negative) effect of one construct to another with respect to a proposed weaker alternative. The reader should bear in mind that our theorized effects are not to be strongly interpreted quantitatively. That is, a double plus sign is not the double of a single plus sign or an order more of magnitude of a single plus sign. Every entity and relationship is supplied with interview quotes, codes, and related work.\n\n## Events\n\nThe $events$ are perceived from the developer's point of view as something happening. Events resemble *psychological Objects*, which were defined by as \"the person, condition, thing, or event at which a mental state is directed\" (p. 3) but also at which a mental state is attributed or misattributed.\n\nEvents may be *non work-related*\u2014e.g., family, friends, house, hobbies\u2014or they may be *work-related*\u2014e.g., the working environment, the tools, and the team members. The interview quotes 1 and 2, and in-field memo 3 are examples of work-related events, while interview quote 4 is not related to work.\n\n1. \"*Suddenly, I discovered Google Plus Bootstrap, which is a Bootstrap theme resembling Google+. \\[I implemented it and\\] it was easy and looking good*.\"\u2014P1\n\n2. \"*I found a typo in the name of the key which keeps track of the nurse ID. The bug was preventing a correct visualization of patient-related measurements. Fixing the bug is very satisfying, because I can now see more results on the screen*.\"\u2014P2\n\n3. P1, talking to P2 and visibly irritated \"Again this? 
You still have not understood the concept! It is \\ that is static, while the measurement changes!\"\n\n4. \"*This morning I received a message with some bad news related to my mother. I immediately desired to abandon development in order to solve the possible issue. The focus was more on that issue than on any other issue at work*.\"\u2014P1\n\nWe further distinguish public events from private events. *Public events* are those that could be observed by a third person. The in-field memo 3 is an exemplar public event. *Private events* are known to oneself only, even if they are coming from the real world. For example, the event described in interview quote 4 was real and coming from the real world. However, it was not observable by a third person. Events have often an episodic nature, as P1 and P2 noted on several occasions. However, private events can also be reflections, realizations, memories, and situations as with psychological Objects.\n\n1. Interviewer: \"*Have you focused better on your programming task today?*.\" P2: \"*Yes, today went better \\[than usual\\]. It's probably..when you do that \\[programming\\] alone that I am more.. it is more difficult, to write code. When I am working with somebody it goes better, you can work better*.\"\n\nIn the interview quote 5, P2 described the general situation, or a summary of the work day events with respect to usual situations. Situations can be causation chains or aggregation of previous events. The participants do not need to be aware of events as merely events or as situations as it does not make any difference to them. We are not representing situations in Figure because we still consider them as events. The rest of the paper provides numerous other examples of events.\n\n## Affects\n\nDuring the development process, several $affects$ have been triggered by events and felt by the developers. 
We coded only the affects that had been directly mentioned by P1 and P2.\n\nThe following are the detected positive and negative affects (respectively) felt during the development cycle.\n\n*accompanied, accomplished, attracted, contented, dominating, enjoyed, excited, fun, good, gratitude, happy, illuminated, motivated[^7], optimistic, positive, satisfied, serene, stimulated, supported, teased, welcomed*.\n\n*angry, anxious, bored, demoralized, demotivated, depressed, devastated, disinterested, dominated, frustrated, guilty, loneliness, lost, negative, pissed off, sad, stagnated, unexcited, unhappy, unsatisfied, unstimulated, unsupported, worried*.\n\nOur qualitative results on the perceived affects agree with the quantitative results of , which indicated that developers do feel a very broad range of affects in the software development process.\n\nExamples of events that caused positive and negative affects (respectively), coded using the gerund principle of the method for analyzing qualitative data, are the following.\n\n*'Feeling contented because a very low number of code changes caused a big achievement in terms of quality \\[or functionality\\]', 'Feeling gratitude towards a tool', 'Feeling attracted by a chunk of code because of anticipating its value for the end user', 'Feeling motivated because personal issues are now out clear', 'Feeling supported because of the automation brought by a framework', 'Feeling serene because of a low workload right after a high workload', 'Feeling happy because of sensing the presence of a team member after reconciliation'*.\n\n*'Feeling alone \\[or unsupported\\] while working \\[or by a team member\\]', 'Feeling anxious because of a sudden, not localizable bug that ruined the day', 'Feeling anxious by not understanding the code behavior', 'Feeling bored by implementing necessary but too static details \\[e.g., aesthetic changes instead of functionalities\\]', 'Feeling frustrated by the different coding style of a team member', 'Feeling angry by failing to integrate \\[or extend\\] an external component', 'Feeling stagnated in life \\[or job, or studies\\]', 'Feeling unstimulated because of a too analytic task'*.\n\nAccording to previous research, psychological Objects\u2014sometimes in the form of events, sometimes as stimuli\u2014trigger affects all the time, and an individual is under a particular affect or a blend of affects all the time . Sometimes, these affects will be perceived strongly. Sometimes, they will not be perceived at all despite their presence. A failure to attribute an affect to an event does not dispel the affect itself. This affect misattribution coincides with some theories of moods , which consider affect as non-attributed emotions or simply as free-floating, unattributed affect .\n\n## Attractors\n\nWe observed that some events had a particular affective meaning to the participants. These affective experiences assumed high importance for the participants with respect to other affective experiences; thus, we called them *attractors*.\n\nAttractors are affects that earn importance and priority in a developer's cognitive system. In the most basic instance, they gain the highest possible priority and emphasis in a developer's consciousness, to the point that behaviors associated with the attractor can be observed as it is experienced. An example can be offered by quote 6 below.\n\n1. 
P2: \"*I did a really good job and fixed things also due to Sublime Text (ST)*.\" Interviewer: \"*What has ST done for you?*.\" P2: \"*When you copy\/paste code around and refactor, ST offers you at least three different ways for doing search and replace. It is really advanced*.\" Interviewer: \"*Would another tool make a difference to your work instead?*.\" P2: \"*With another editor or an IDE it would be another story, especially if an editor tries to do too much, like Eclipse. I think that the compromise between functionality and usability of ST is way better*.\" Interviewer: \"*Do you think that ST is enhancing your productivity then?*.\" P2: \"*Absolutely. I was extremely excited by these features and they pushed me to do more and more*.\" Interviewer: \"*Were you actually thinking about this while you were working?*.\" P2: \"*Definitely. First, I turned the monitor towards P1 and showed him the magic. But I felt good for the rest of the day, and I accomplished more than what I hoped I could do*.\"\n\nIn interview quote 6, P2 offered an insight regarding the affects triggered by a software development tool. The excitement toward the tool features was an attractor to P2. The attractor became central to the developer subjective conscious experience, not just an underlying affect. Moreover, the behavior caused by the experience of the attractor was directly observable. Interview quote 6 emphasizes that attractors are not necessarily concerns or negative in nature.\n\nInterview quote 4 provides instead an example of a negative attractor. P1 realized that a non work-related event was not desirable, thus generating negative affects. What happened to his mother was important and demanded his attention. P1 was consciously experiencing the negative attractor, and the appraisal of such attractor had consequences to his way of working.\n\nAttractors are not necessarily stronger than general affects for gaining a developer's subjective *conscious* experience. They might just *be there* and still have an impact. We can access them retrospectively. Interview quote 7 is an example of such occurrence.\n\n1. \"*I am not progressing.. in the working environment.. with my university career. With life. I feel behind everybody else and I do not progress. And I am not even sure about what I want to do with my life. I got no visual of this*.\"\u2014P2\n\nMoreover, interview quote 7 shows that attractors are not always caused by single events. Attractors can become reflections on a series of events as a consequence of them and as a summation of them.\n\nAnother example of reflections of a series of events that have however an impact on a developer's subjective consciousness is shown in interview quote 8. P2 was having a life crisis which resulted in a loss of the vision of his own life.\n\n1. \"*When I was alone at home, I could not focus on my programming task. The thought of me not progressing with life did often come to my mind. There I realized that I was feeling depressed*.\"\u2014P2\n\nIn interview quote 8, the participant had a negative *depressed* attractor with the attached meaning *I am not progressing with life*. The rumination associated with this attractor was strong and pervaded P2 personal experience and his everyday life of that period.\n\nAttractors are part of the personal sphere as much as affects are\u2014indeed, they are special affects for us. In the software process improvement literature, the term *concern* has been used as commitment enabler . 
The commitments are formed in order to satisfy such concerns, i.e., needs . Attractors are not concerns as employed by . An important difference is that concerns are linked to actions, i.e., actions are driven by concerns. On the other hand, attractors are affects, and affects are not necessarily concerns, nor do they necessarily cause immediate actions.\n\nUnder our current theoretical framework, a blend of affects constitutes an individual's happiness, at least under the hedonistic view of happiness . According to this view, being happy coincides with the frequent experience of pleasure; that is, happiness is reduced to a sequence of experiential episodes . Frequent positive episodes lead to feeling frequent positive affects, and frequent positive affects lead to a positive *affect balance* . consider a person *happy* if the person's affect balance is mainly positive. However, we have just stated in this section that some developers' affects are more important than other affects. Let us now be more specific.\n\nAs argued by the philosopher , a quantitative view of happiness based solely on frequency of affects is psychologically superficial because some affects do not have distinct episodes or attributions (as in moods). Even more, has seen happiness as a matter of a person's affective condition where only *central affects* are concerned. We see a similarity between attractors and central affects. As attractors are important affects, we agree that they are a strong constituent of the happiness of the individuals. However, non attractors could be central affects, as well. In our observations, we saw that attractors are also affects that are easily externalized by the participants, and we will show that their originating events are more visible to them. Furthermore, we will show that attractors are more linked to the focus and the developers' performance. Thus, we differentiate them from central affects.\n\nThe participants could sometimes realize the affective meaning of attractors by themselves, as in quote 8. There is often the need to externalize them in order for an observer to feel them. We found that sometimes, externalizing affects is alone beneficial, as seen in the next section.\n\n## Interventions\n\nWhile the presence of researchers has always an influence on the participant's behaviors , it happened twice that our interaction with the participants had a clear effect on their feelings and behaviors. We call such events *interventions*. Interventions are events\u2014as shown in Figure by the UML-like grey arrow with a white arrowhead\u2014that mediate the intensity of already existing negative attractors, thus reducing them as much as possible to normal affects. After externalizing his depressed state in interview quote 8, P2 continued as follows:\n\n1. \"*What we were doing was not 'in focus'. The result really didn't matter to me. To my eyes, we were losing time. However, once I've told you what I told you \\[the personal issues\\] you know that as well. It is not that I am hiding or that I am inventing things out..I now have no more the possibility to wriggle anymore. I told you why I was not there and I am feeling better already. I am now here for two days, and I feel way better than before*.\"\u2014P2.\n\nThe field memos provided more evidence on the effectiveness of interventions. For example, during the reconciliation, which happened at the beginning of week 2, the developers had frequent soft fights.\n\n> P2 battles fiercely for his opinions and design strategies. 
However, he is listening to P1 opinions. On the other hand, P1 seems more interested to get stuff done, and he seems less prone to listen to P2. P2 is probably realizing this and responds using passive-aggressive modes. Some not-so-very nice words fly.\n\n> P1 and P2 are less aggressive with each other. My proposal to let them express their opinions and to invite them to listen to each other seems to have a positive effect. A solution, albeit influenced by me, seems to have been reached.\n\nA field memo six days after the reconciliation was much more positive.\n\n> P1 and P2 have been working with an almost stable pace. There does not seem to be an elephant in the room anymore. Both of them smile often and joke with each other. You can feel them happier than before. I often see P1 and P2 showing their results to each other. The work seems way more productive than last week.\n\nEven personal issues were having less impact on P2 as he revealed in a interview nine days after the reconciliation.\n\n1. \"*My personal issues are having a minor impact on my productivity, despite the fact that my mind wonders in different places. It is because we are now working well together and share a vision*.\"\u2014P2\n\nInterventions in Figure are reached by dashed arrows, which start from affects and attractors, and have a dashed arrow pointing to focus. The dashed arrows, together with the labels *mediated by* and *amplify (or reduce) drive on*, indicate alternative paths in the process. That is, affects and attractors are mediated by interventions, which amplify or reduce their drive on the focus.\n\nThese interventions suggest that a mediator is a useful figure in a software development team. The mediator should be able to gently push the team member to let out their opinions, views, and affects. A more concrete example could be an agile coach or a team leader according to the team settings.\n\n## Focus\u2014progressing and goal setting\n\nIn this section, we explain the construct of focus, which is related to progressing toward goals and the setting of such goals. The $focus$ often referred to a general mental focus, e.g., \"*I was in focus after I could refactor all that code using Sublime Text search-and-replace capacity*.\"\u2014P2, which usually matched a focus on the current chunk of code. However, the focus on the current chunk of code was with respect to a goal. P2 mentioned focus in interview quote 8, where he told the interviewer that he could not focus on the programming task while at home, because of the realization of being depressed. A more tangible focus on the code at hand was portrayed by P1 in the following interview quote.\n\n1. \"*After our \\[between P1 and P2\\] reconciliation and after the meeting with \\[the head nurse\\], I often developed in full immersion. When I am in full immersion mode, nothing exists except what I am doing. I have a goal in mind and I work toward it. I don't think about anything else but my goal and my progress towards it*.\"\u2014P1\n\nDuring the last interview, P1 was directly asked about the way he focuses while developing software and what he thinks about. Besides the full immersion mode that P1 described in quote 11, he described a \"*lighter mode of immersion. 
I enter this mode when I am tired, when I write less functional aspects of the code*.\" but also \"*when I am interrupted by negative news or when I focus my attention more on some problems*.\".\n\nIn quote 12, P2 shared his view on negative affects and how they hinder performance by changing the way he perceived events as attractors.\n\n1. \"*My negative thoughts have been the same lately\u2014more or less\u2013but I sometimes change the way I look at them. It is often positive, but it is often negative, too. Maybe I realize this more when I have a negative attitude towards them. It influences my work in a particular way: my concerns become quicksand*.\"\u2014P2\n\nOur $focus$ appears to be similar to the flow as depicted by , and found in the related work by , which was described as an attention state of progressing and concentration.\n\nAdditionally, the participants often mentioned the term 'vision,' which was meant as the \"*ability to conceive what might be attempted or achieved*.\" . For this reason, we preferred using the term *goal setting*. The participants linked the focus and the capacity of setting goals. Goal settings has an established line of research in organizational behavior and psychology\u2014one of the seminal works is by \u2014that would deserve its own space in a separate article. It involves the development of a plan, which in our case is internalized, designed to guide an individual toward a goal . Those goals found in our study were related to future achievements in the short and long run, i.e., the task and the project. One example of task goals lies in the interview quotes 13. Whenever the focus of attention was on the current code melted with the goal setting of task and project, the performance was reported and observed as positive. However, if something was preventing the focus on the current code\u2014*now*\u2014and the focus on the goal or the goal setting of the task or project\u2014*then*\u2014the performance was reported and observed as negative. P2 summarized these reflections concisely in quote 13.\n\n1. \"*It does not matter how much it is actually going well with the code, or how I actually start being focused. Then it \\[my thoughts about my personal issues\\] comes back into mind. It is like a mood. I cannot define it in any way. But it is this getting rid of a thought, focusing back to work and the task goal. Here \\[shows commit message\\] I wanted to add the deletion of messages in the nurses' log. But when it happens, I lose the task vision. What was I trying to accomplish? WHY was I trying to do this? It happens with the project vision, too. I don't know what I am doing anymore*.\"\u2014P2\n\nThe project goal setting is similar to the task goal setting. The difference is that project goal setting is the capacity of perceiving the completion of a project in the future and visualizing the final product before its existence as P1 outlined in interview quote 14.\n\n1. \"*After we talked to \\[the head nurse\\], we gathered so much information that we overlooked or just did not think about. \\[...\\] between that and the time you \\[the researcher\\] invited us to speak about our issues and mediated among our opinions, we had a new way to see how the project looked like. The product was not there still, but we could see it. It was how the final goal looked like*.\"\u2014P1\n\nThere is a link between focusing on the code and focusing on the task goal. Staying focused on the code meant staying focused on the *now* (and here). 
It is the awareness of the meaning of each written line of code towards the completion of a task. Focusing on the task and project goals meant staying focused on the *then* (and there). It was meant as the capacity of envisioning the goal at the shorter term (the task) and the overall goal of the project. At the same time, focusing on the task and the project meant the possibility to define a task completion criteria, the awareness of the distance towards the completion of such task, and to re-define the goal during the work day.\n\nOur findings are in line with those of , where the participants in a survey perceived a productive day as a day where \"they complete their tasks, achieve a planned goals or make progress on their goals\" (p. 21). The number of closed work items, e.g. tasks and bugs, was the most valued productivity measurement among developers. The *full immersion mode* mentioned by P1 in interview quote 11 resembles the flow depicted by and mentioned in the related work by .\n\n## Performance\n\nThe performance was generally understood by the participants as their perceived effectiveness in reaching a previously set expectation or goal. Or, whenever *then* became *now*.\n\n1. \"*Last week has been chaotic. We worked very little on the code. P2 played around with the programming framework. P2 tried to adapt an example program to fit our needs. So, P2 studied the chosen framework. I can say that P2 was productive. I spent my time doing refactoring and little enhancements of what was already there. Little functionality was developed so far. In a sense, we still performed well. We did what we were expecting to do. Even if I did so little. I still laid down the basis for working on future aspects. So yeah, I am satisfied*.\"\u2014P1\n\n2. Interviewer: \"*What happened during this week?*.\" P2: '*'Well, it happened that..I did not behave correctly in this week. I could not do a single commit*.''\n\nWe observed that the affects have an impact on the programming performance of the developers. This is achieved by driving the focus that developers have on the currently focused code, the ongoing task, or the project itself[^8]. P2 suggested already, in interview quote 6, that the excitement caused by the discovery of the useful search-and-replace functionalities in his editor had pervaded his work day. This positive attractor caused him to be productive also when not using such functionalities. P2 could also offer cases of the opposite side, like the one in quote 17.\n\n1. \"*I was lost in my own issues. My desire to do stuff was vanishing because I felt very depressed. There was not point in what I was currently doing, to the point that I could not realize what I had to do*.\"\u2014P2\n\nMore precisely, positive affects have a positive impact on the programming performance\u2014as they drive the focus positively\u2014while negative affects have a negative impact on the programming performance\u2014as they drive the focus negatively. While most of the previous quotes are examples on the negative side, quote 6 and the following quote are instances of the positive case.\n\n1. P1: \"*I now feel supported and accompanied by P2. We are a proper team*.\". Interviewer: \"*What has changed?*.\" P1: \"*It's that now P2 is active in the project. Before \\[the reconciliation\\] P2 was not here at all. \\[...\\] If he joined after our meeting with \\[the head nurse\\], there was the risk to see him as an impediment instead of a valid resource and team member. Now, I feel happier and more satisfied. 
We are working very well together and I am actually more focused and productive*.\"\n\nA positive focus has a positive effect on programming performance. But, a focus on the code toward a task or project goals (or a combination of them) have an even stronger positive impact on the programming performance.\n\nWe provide some codes related to the consequences of positive and negative affects (respectively) while programming.\n\n*'Limiting the switch to personal issues because of feeling accompanied by a team member', 'Switching focus between the task and the positive feelings caused by a tool makes productive', 'Focusing better on code because of the positive feelings brought by reconciliation', 'Focusing less on personal issues \\[more on the code\\] because of a sense of being wanted at work', 'Focusing more on code because of feeling supported and in company', 'Committing code frequently if feeling in company of people'.*\n\n*'Abandoning work because of negative feelings fostered by negative events', 'Avoiding coming to work because of lost vision \\[and depression\\]', 'Avoiding committing working code during day because of loneliness', 'Choosing an own path because of the loneliness', 'Switching focus between personal issues and work-related task prevents solving programming tasks', 'Losing focus often when feeling alone', 'Losing the project vision because of quicksanding in negative affects', 'Not reacting to team member input because of bad mood', 'Realizing the impediments brought by personal issues when they are the focus of attention', 'Trying to self-regulate affects related to negative events and thoughts lowers performance', 'Underestimating an achievement because of loneliness', 'Worrying continuously about life achievements and avoiding work'.*\n\n### Comparison of the theory with related work\n\nThe proposed theory can be seen as a specialized version of Affective Events Theory (AET, ). It provides an affect-driven theory explaining how events, both work-related and not, impact the performance of developers through their focus and goal setting while programming. Therefore, our study produces evidence that AET is an effective macrostructure to guide research of affects on the job in the context of software development. At the same time, our proposed theory is reinforced by the existence of AET itself.\n\nWe also note that our theory is partially supported in independent study\u2014built upon one of our previous studies \u2014which was conducted at about the same time of the present study[^9]. Among their findings, the self-assessed progressing with the task is correlated with the affects of developers; the most negative affects were correlated with less focus on clear goal settings and positive affects were linked with focusing and progressing toward the set goals.\n\nFinally, our findings are in line with the general findings of goal settings research. That is, the task performance is positively influenced by shared, non conflicting goals, provided that there are fair individuals' skills .\n\n### Happy, therefore productive or Productive, therefore happy?\n\nLet us now reason a little on the causality aspects between affects and performance. We note that the participants have always explicitly stated or suggested that the influence of affects on performance is of a causality type. 
Some researchers have warned us that there might instead be a correlation between the constructs, as well as a double causality (*I am more productive because I am more happy, and I am more happy because I am more productive*). Indeed, so far in our previous studies we have argued for correlation, not causation.\n\nIn the present study, we could not find support in the data for a double causation, but for a causality chain *Happy, therefore productive*, in line also with related research . However, it seems reasonable that we are happier if we realize our positive performance.\n\nWe speculate here that a third, mediating option might exist. In the proposed theory, and in several other theories in psychology, being happy is reduced to frequent feelings of positive affects . As argued by , the centrality of affects might be relevant, as well. stated, as an example, that the pleasure of eating a cracker is not enduring and probably not affecting happiness; therefore, it is considered a peripheral affect. Peripheral affects arguably have smaller\u2014if not unnoticeable\u2014effects on cognitive activities. It might be the case that the positive (negative) affects triggered by being productive (unproductive) do exist but have a small to unnoticeable effect on future productivity. However, this is outside the scope of this study. We report this backed-up speculation about causation as a topic for future work.\n\n# Conclusion\n\nIn this qualitative, interpretive study, we constructed a theory of the impact of affects on software developers with respect to their programming performance. As far as we know, this is the first study to observe and theorize a development process from the point of view of the affects of software developers. By echoing a call for theory-building studies in software engineering, we offer the first building blocks on the affects of software developers. For this reason, we designed our theory development study using a small sample adhering to guidelines for generating novel theories, thus enabling the development of an in-depth understanding of an otherwise overly complex set of constructs.\n\nThe theory conceptualization portrays how the entities of events, attractors, affects, focus, goal settings, and performance interact with each other. In particular, we theorized a causal chain between the events and the programming performance, through affects or attractors.\n\nPositive affects (negative affects) have a positive (negative) impact on the programming task performance by acting on the focus on code, and task and project goals. The theory introduces the concept of attractors, which are affects that gain importance and priority in a developer's cognitive system and, often, in their conscious experience. Attractors have an even higher impact on programming performance than ordinary affects.\n\nFinally, we also provided evidence that fostering positive affects among developers boosts their performance and that the role of a mediator bringing reconciliation among the team members might be necessary for successful projects.\n\n## Contributions and implications\n\nOur study offers multiple contributions and implications. The theoretical contributions lie in the theory itself. The theory incorporates the impact of affects on performance through an influence on the focus of the developer's consciousness on coding and on several aspects of goal settings (task, project). 
In addition, we introduced the concept of attractors for developers, which is a novel construct based on affects and events at different spheres (work-related and not, private or public). The theory is proposed as part of basic science of software engineering, and it is open to falsification and extension.\n\nAs stated by Lewin, \"there is nothing quite so practical as a good theory\" . The practical implication of our study is that, despite the idea among managers that pressure and some negative feelings help in getting the best results out, there is growing evidence that fostering (hindering) positive (negative) affects of software developers has a positive effect on the focus on code, and task and project goal settings, and, consequently, on their performance. Additionally, we found evidence that a mediator role to reconcile the developers' issues and conflicts is a way to foster positive affects and mediate negative attractors among them.\n\nThe proposed theory can be employed as a guideline to understand the affective dynamics in a software development process. The theory can be used to foster a better environment in a software development team and to guide managers and team leaders to enrich their performance by making the developers feel better. On the other hand, our conceptualized theory can guide the team leaders to understand the dynamics of negative performance when it is linked to negative affects.\n\n## Limitations\n\nThe most significant limitation of this research to be mentioned lies in its sample. Although it is very common for software engineering studies to recruit computer science students as participants to studies , for some readers this might still be considered a limitation. First, it is true that our participants were enrolled to a BSc study in computer science, but they both had a working history as freelancers in companies developing websites and Web applications. While our developers did not have to be concerned about assets and salaries, they were paid in credit points and a final award in terms of a BSc thesis project. argued that students are the next generation of software professionals as they are close to the interested population of workers, if not even more updated on new technologies. Indeed, the empirical studies comparing students in working settings with professionals did not find evidence for a difference between the groups . The conclusions from the previous studies are that students are indeed representatives of professionals in software engineering studies.\n\nThe non-inclusion of female participants might be considered a further limitation of this study. There is a widespread popular conception that there are gender differences in emotionality . Evidence has been found for gender differences at the neural level associated to reappraisal, emotional responding and reward processing , and for a female having greater reactivity to negative stimuli and adoption of different emotion regulation strategies . While more studies on gender differences are needed as the produced evidence is not enough yet , it might be the case that the inclusion of a female developer would have made the dataset richer, and perhaps would have led to a more gender-balanced theory.\n\nWhile we argued extensively about the choice of the sample size in section *Theory Construction and Representation*, we repeat here that there was the need to keep the phenomenon under study as simple as possible given its complex nature . 
Furthermore, when novel theories are to be developed in new domains, such as software engineering, a small sample should be considered . This strategy, while sometimes seen as limiting, pays off especially for setting out basic building blocks . As argued by , even one observation could be sufficient for theorizing as so far as \"phenomena should be directly explained by theory, and only indirectly supported by the data\" (quoted from Section 6.2). Our choice of the small sample size was seen as a benefit for the purposes of this explanatory investigation. The reason is that in a real company, the source of events is vast and complex. There are team dynamics with complicated, and even historical, reasons that are harder to grasp\u2014triggering a complex, powerful network of affects \u2014thus lifting the study's focus out from the programming itself.\n\n## Future work\n\nWe have three directions of research to suggest to the readers. The first one is an immediate continuation of our study. As our study was explanatory, we suggest future research to test the proposed theory and to quantify the relationships in quantitative studies, in software engineering field but also in other domains to understand if and how the specifics particular to the software engineering context affect the applicability of our theory. Although quantifying the impact of attractors was beyond the scope of this study, we feel that negative attractors triggered by non work-related events and positive attractors triggered by work-related events have the strongest impact on the performance of software developers. Furthermore, this study focused on the dimensions of positive and negative affects. It is expected that different types of affects and attractors matter more than other, and have different impact on the focus and performance. We leave future studies the option to study discrete affects, e.g., joy, anger, fear, frustration, or different affect dimensions, e.g., valence, arousal, and dominance.\n\nOur second suggestion for future studies is to focus on dynamic, episodic process models of affects and performance where time is taken into consideration. The underlying affects of developers change rapidly during a workday. The constituents and the effects of such changes should be explored. Additionally, exploring the dynamics of affects turning into attractors (and possibly vice-versa) and what causes such changes will provide a further understanding of the effectiveness of interventions and making developers feeling happier, thus more productive.\n\nFinally, our third direction for future research is to broaden the focus on (1) artifacts different than code, such as requirements and design artifacts, and (2) understanding the complex relationship of affects and software developers' motivation, commitment, job satisfaction, and well-being.\n\n# Acknowledgments\n\nWe thank our two participants, who openly, actively, and unhesitatingly collaborated during the research activities. We are grateful for the feedback provided by two anonymous reviewers, which let us improve the manuscript in terms of several aspects including clarity.\n\n[^1]: Corresponding author. E-mail: firstname.lastname@example.com.\n\n[^2]: accepted for publication at PeerJ Computer Science journal.\n\n[^3]: For the purposes of this study, we consider affect as an underlying term for emotions and moods, in line with several other authors, e.g., . 
See \"Affect, emotion, and mood\" for more information.\n\n[^4]: The stance that performance and productivity are two interchangeable terms is assumed in this study, in line with\n\n[^5]: The issues of defining the concepts under study is not trivial and it deserves separate discussions. We point the reader to two of our recent articles , in which we have discussed the theoretical foundations, the various theories, and the classification frameworks for affects, emotions, and moods, and the common misconceptions that occur when studying these constructs.\n\n[^6]: Removed treatment designs are part of single-group quasi-experiment designs. A removed treatment design allows one to test hypotheses about an outcome in the presence of the intervention and in the absence of the intervention . A pre-treatment measurement is taken on a desired outcome; a treatment is provided; a post-treatment measurement is conducted; a second post-treatment measurement is conducted; the treatment is removed; a final measurement is performed .\n\n[^7]: The careful readers might turn up their nose here. As we wrote in , affects are not motivation, as they are not job satisfaction, etc. Yet, affects are important components of these psychological constructs, and studying complex multifaceted constructs like motivation would require different approaches and different measurement instruments. For this reason, if the participants only stated that they felt motivated or satisfied, we considered them as affects, as it might well be the case that they were expressing emotional judgments about such constructs. In any case, the inclusion or exclusion of such terms as affects would not change the results of this study.\n\n[^8]: The aim of this study is to offer a theory of the impact of affects on performance while programming rather than proposing a performance or productivity theory. A plethora of factors influence the performance of developers\u2014see for a comprehensive review of the factors\u2014and affects are one of them, although they are not yet part of any review paper. At the same time, software development performance is composed of several complex interrelated constructs\u2014see for a review of productivity measurements\u2014to which we add those driven by cognitive processes and *also* influenced by affects, e.g., creativity and analytic problem solving capability\n\n[^9]: Furthermore, at our submission time the work by had just been publicly accepted for inclusion in ICSE 2015 proceedings, but it is currently still not published formally. 
We obtained their work through an institutional repository of preprints.","meta":{"dup_signals":{"dup_doc_count":25,"dup_dump_count":24,"dup_details":{"curated_sources":2,"2022-33":1,"2021-31":1,"2021-25":1,"2020-29":1,"2019-04":1,"2018-51":1,"2018-47":1,"2018-34":1,"2018-22":1,"2018-09":1,"2017-51":1,"2017-43":1,"2017-34":1,"2017-26":1,"2017-17":1,"2017-09":1,"2017-04":1,"2016-50":1,"2016-44":1,"2022-49":1,"2024-18":1,"2017-13":1,"2024-26":1}},"filename":"out\/1505.07240_extract_how_do_you_feel_dev.tex.md"},"subset":"arxiv"} +{"text":"author: Fang\u00a0Wu; Tao\u00a0An; Willem\u00a0A.\u00a0Baan ; Xiao-Yu\u00a0Hong; Carlo\u00a0Stanghellini ; Sandor\u00a0Frey ; Hai-Guang Xu ; Xiang Liu ; Jingying Wang\ndate: Received ..., 2012; accepted ..., 2012\ntitle: Kinematics of the compact symmetric object OQ\u00a0208 revisited\n\nA long-timeline kinematic study of the archetypal compact symmetric object (CSO) OQ\u00a0208 sheds light on the physical properties of the most compact radio sources. Archival data from the Very Long Baseline Array (VLBA) at 15 GHz over a time span of 13.6 yr were used to investigate the kinematics of the radio source. The flux density monitoring data obtained at the Michigan 26-meter radio telescope were also used as supplementary information for analyzing the geometry of the radio structure. At 2.3-GHz, the radio emission is dominated by two mini-lobes separated by $\\sim$``{=html}10 pc in a northeast-southwest (NE\u2013SW) direction. At 8.4 and 15 GHz, each lobe is further resolved into two subcomponents, which are identified as hotspots. A knotty jet is linked with the NE hotspot and traces back toward the geometric center. The core is too weak to be detected. Significant flux density variation is found in the primary hotspots with a maximum level of 62% (NE1) and 19% (SW1). The flare epoch of NE1 is earlier than that of SW1 by approximately 5.00 yr, suggesting that the northeast lobe is advancing and the southwest lobe is receding. This light travel difference indicates a radial distance difference between the two hotspots of 1.53 pc, which indicates an inclination angle of about 80.8 degrees between the radio jet and the line of sight. The angular separation rate between NE1 and SW1 is 0.027 mas yr$^{-1}$, corresponding to a projected speed of 0.133 c. The inner jet knot (J1) moves at 0.047 mas yr$^{-1}$ (or 0.230 c), about 3.5 times the hotspot advancing speed. The large viewing angle and the modest jet speed suggest a mildly relativistic jet. The jet axis is close to the plane of the sky. The separation rate and the distance between the two primary hotspots result in a kinematic age of $255 \\pm 17$ yr, confirming that OQ\u00a0208 is indeed a young radio source. In addition to the hotspot advancing motions, sideways motions provide evidence that the lobes are obstructed by the external interstellar medium.\n\n# Introduction\n\n**Compact symmetric objects** (CSOs) are a **subclass** of extragalactic radio sources that are characterized by a compact double or triple radio structure **with an** overall size less than 1 kpc. The physical nature of the compactness of CSOs is still a question under debate: the *youth* model (e.g. ) proposes that CSOs are small because they are in the infant stage of the extragalactic radio source evolution; the *frustration* model (e.g. ) attributes the small size of CSOs to extremely strong confinement by the dense external medium. The two models define **distinctly** different evolutionary **fates** of CSOs. 
In the *youth* model, all young CSOs eventually evolve into large-scale double sources, *i.e.*, Fanaroff-Riley (FR, ) type-II sources, over a few million years; whereas according to the *frustration* model, CSOs are confined within the host galaxy and experience stagnated growth. The CSO ages provide a critical **distinction** between the two models. Previous kinematics studies of individual CSOs and subsamples of **these** sources (e.g. ) show that the measured hotspot advancing speed values have a large scatter from $\\sim0.04\\,c$ to $\\sim0.5\\,c$, resulting in young kinematic ages in the range of only 100\u20132000 yr. However, additional sideways motions of hotspots and disturbed lobes indicate strong interactions between the jet heads and the surrounding interstellar medium (). Sideways motion is usually very slow compared to the dominant advancing motion, **which requires** highly accurate measurements. A detailed investigation of well-selected CSO samples is essential for exploring the complex kinematics of CSOs, and for understanding the physical environment of the host galaxies on 1\u2013500 pc scales.\n\nThe kinematic age ($\\tau_{k}$) of CSOs is traditionally determined by dividing the separation ($R$) between two terminal hotspots by the separation rate ($\\mu$). A **high-accuracy** measurement of $\\tau_{k}$ requires high-precision position determination for individual observations, and a sufficiently long timespan of the data. For more than two epochs, $\\mu$ is often determined from a linear regression fit to the changing hotspot separation with time. The statistical uncertainty of the fit is sensitive to the number of available data points and the uniformity of the time sampling. In addition to **this**, different resolutions of the interferometric images and **the** intrinsic opacity effect in different levels may introduce systematic errors in the proper motion measurements. An accurate measurement for the separation rate $\\mu$ requires CSOs with multiple-epoch Very Long Baseline Interferometry (VLBI) imaging data at the same observing frequency over a sufficiently long time span.\n\nIn this paper, we carry out a kinematic study of an archetypal CSO, OQ\u00a0208, on the basis of archival VLBI data over a time baseline of 13.6 yr. (also known as , ) is one of the closest CSOs ($z = 0.0766$: ) and provides a template for the dynamic properties of the most compact (and possibly the youngest) radio sources. The host galaxy of OQ\u00a0208 shows typical Seyfert-1 spectra with strong broad Balmer lines (FWHM$_{H\\alpha}=6000$ km s$^{-1}$) and also forbidden lines of \\[Ne III\\], \\[O III\\] and \\[S II\\] (). In the radio band, a broad 21-cm HI absorption line was detected in OQ\u00a0208, indicating a fast and massive outflow of neutral gas (). The radio emission is concentrated in a compact central region within $\\sim$``{=html}10 mas (). Additional evidence for the compactness comes from the convex radio spectrum with a turnover at about 4.9 GHz (), resulting from synchrotron self-absorption or free-free absorption (). The VLBI images of OQ\u00a0208 at 2.3 and 5 GHz reveal a typical CSO-type morphology with two compact (mini-)lobes along the NE\u2013SW direction (). At higher frequencies of 8 and 15 GHz, a more detailed structure is revealed: the mini-lobes are resolved into subcomponents, and internal jet knots are detected between the two lobes () Previous proper motion measurements of OQ\u00a0208 (e.g. 
) were mostly based on 5- and 8-GHz data with lower angular resolutions and **are** affected by the opacity effect. Moreover, these measurements only cover a time span of a few years, insufficient for tracing the long-term changes of the hotspots. In the present study, we only **made** use of 15-GHz VLBI data to determine the separation velocities of hotspots and the proper motions of the internal jet knots.\n\nThe structure of the paper is as follows. Section describes the VLBI data used for the kinematics study. Section presents the results, including radio images, light curves, and proper motion measurements. A conclusion and a summary are given in Section . Throughout this paper, we assume a flat cosmological model with $H_{0}$ = 73 km s$^{-1}$ Mpc$^{-1}$, $\\Omega_{M} = 0.27$, $\\Omega_{\\Lambda} = 0.73$. At the redshift of OQ\u00a0208, 1 mas angular size corresponds to 1.4 pc projected linear size, and a proper motion of 1 mas yr$^{-1}$ corresponds to an apparent speed of 4.9 $c$.\n\n# VLBI data\n\n**Owing** to its compactness and high brightness, OQ\u00a0208 is often used as a calibrator in VLBI experiments. Consequently, there are abundant archival VLBI data sets from the past two decades. The 15-GHz VLBA data from the Monitoring of Jets in Active Galaxies with VLBA Experiments (MOJAVE) program[^1] have the highest resolution and sensitivity and are most appropriate for kinematic studies. The 15-GHz data are spread over 28 epochs in the time range from early 1995 to the middle of 2009. We also made use of the VLBI data from the Radio Reference Frame Image Database (RRFID)[^2], which were obtained with the VLBA together with several geodetic radio telescopes simultaneously at 2.3 and 8.4 GHz. The multiple-frequency data were used for the spectral index analysis of compact components. The RRFID data cover the time range from 1994 to 2008.\n\nThe calibration of the archival VLBI data had already been performed in the Astronomical Image Processing System (AIPS) following the standard procedure. We **additionally performed several** iterations of self-calibration in DIFMAP () to eliminate residual antenna-based phase errors. The 15-GHz visibilities were fitted with five Gaussian model components (two in the northeast lobe, another two in the southwest lobe, and one jet knot, see Figure ) using the DIFMAP task MODELFIT. Table 1 lists the fitted parameters, including the integrated flux density S$_{i}$, the relative separation $R$ from the primary hotspot used as the reference, the position angle of the VLBI component, and the deconvolved size $\\theta_{maj}\\times\\theta_{min}$. The statistical uncertainties of the observed parameters were estimated according to the formulae given by , except that we also included an additional 5% as the amplitude calibration error. The details of the error analysis method are given by .\n\nIn addition to the VLBI imaging data, the single-dish flux density monitoring data observed with the 26-meter paraboloid telescope of the University of Michigan Radio Astronomical Observatory[^3] () were included for a supplementary analysis of the variability and the geometry of the radio structure. The total flux density measurements were made at 4.8, 8, and 14.5\u00a0GHz from July 1974 to August 2009.\n\n# Results\n\n## Radio morphology\n\nAt 2.3 GHz, OQ\u00a0208 exhibits the double-component morphology typical of CSOs (Figure\u00a0-a). The total extent of the radio source is about 7 mas ($\\sim$10 pc). 
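As a quick consistency check of the adopted conversions (the 7 mas extent quoted above corresponds to $\sim$10 pc with the 1.4 pc mas$^{-1}$ scale), the short sketch below recomputes the angular-to-linear scale and the apparent-speed factor for the cosmology adopted above. It assumes the `astropy` package is available; the variable names are ours and are not taken from the authors' analysis.

```python
# Minimal sketch: reproduce the angular-to-linear scale and the apparent-speed
# conversion quoted in the text, assuming astropy's FlatLambdaCDM cosmology.
import astropy.units as u
from astropy.constants import c
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=73 * u.km / u.s / u.Mpc, Om0=0.27)  # flat, so OmegaLambda = 0.73
z = 0.0766

# Projected linear size subtended by 1 mas at the angular diameter distance.
d_A = cosmo.angular_diameter_distance(z)
pc_per_mas = (d_A * (1 * u.mas).to(u.rad).value).to(u.pc)
print(pc_per_mas)          # ~1.4 pc per mas

# Apparent transverse speed for a proper motion of 1 mas/yr;
# the (1 + z) factor converts observed time to source-frame time.
beta_app = (pc_per_mas / u.yr * (1 + z) / c).decompose()
print(beta_app)            # ~4.9 (in units of c)
```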
The northeast (NE) component is much brighter than the southwest (SW) one with an intensity ratio of 18.0:1 at 2.3 GHz. Both components show a steep spectral index at high frequencies (defined as $S\\propto \\nu^{-\\alpha}$) with $\\alpha_{8.4GHz}^{15GHz}= 1.38 \\pm 0.03$ (SW) and $\\alpha_{8.4GHz}^{15GHz}= 1.12 \\pm 0.02$ (NE), identifying them as two mini-lobes of the CSO. The asymmetric brightness of two-sided jets\/lobes in radio sources is commonly attributed to the Doppler boosting effect that enhances the apparent flux density of the advancing jet\/lobe by a factor of $\\delta^{3+\\alpha}$, where $\\delta$ is the Doppler boosting factor and $\\alpha$ is the spectral index. However, the kinematic analysis below suggests that the jet in OQ\u00a0208 is mildly relativistic and the jets nearly align with the plane of the sky, therefore Doppler boosting cannot account for the brightness difference. Another explanation of the **strong** asymmetry in the brightness of the two lobes is that the receding SW lobe suffers from more free-free absorption than the advancing NE lobe. Indeed, the flux density ratio is even **higher** at a lower frequency of 1.66 GHz, $S_{NE}\/S_{SW}=60:1$ (), supporting this interpretation. A third possibility is that the inhomogeneous distribution of the external medium surrounding the lobes results in a larger conversion efficiency from jet kinetic energy to radiative energy in the northeast jet ().\n\nA 2.3-GHz image with a Gaussian taper with a half-value at 20\u00a0M$\\lambda$ wavelength (Figure\u00a0-b) reveals an extended feature about 30 mas ($\\sim$``{=html}40 pc) to the west, which was previously reported by . Extended emission features on kpc to Mpc scales has also been detected in OQ\u00a0208 () and in other CSO galaxies (e.g., 0108+388: ; 0941-080, 1345+125: ), which were interpreted as relics remaining from past ($>10^8$ yr ago) nuclear activity. Extended components on scales of $<$``{=html}100 pc are rarely seen in CSOs (J1511+0518 is **another** example; ) and this puzzling one-sided extended feature at 40 pc distance requires an interval between two intermittent activities shorter than $2\\times10^3$ yr (). The non-detection of **the** northeast fading lobe likely indicates asymmetric properties of the ambient **interstellar medium** (ISM) on pc scales, leading to more rapid radiative or adiabatic losses and a shorter life of the NE lobe. This, again, is consistent with the fact that the NE advancing lobe is much brighter.\n\nThe NE lobe is resolved into two subcomponents (NE1 and NE2) at 8 and 15 GHz. NE1 dominates the flux density of the whole source. The spectral index of NE1, determined from 8 and 15 GHz data at close epochs, is $\\alpha^{15GHz}_{8GHz}=0.99 \\pm 0.18$. The brightness temperature is calculated using the equation $$\\label{eq:Tb}\n T_{b} = 1.22 \\times 10^{12}\\frac{S_{ob}}{\\nu_{ob}^{2}\\theta_{maj}\\theta_{min}}(1+z),$$ where $S_{ob}$ is the observed flux density in Jy, $\\nu_{ob}$ is the observing frequency in GHz, $\\theta_{maj}$ and $\\theta_{min}$ are the major and minor axis of the Gaussian model component in units of mas, and $T_b$ is the derived brightness temperature in source rest frame in units of Kelvin. The average brightness temperature of NE1 is $4.4 \\times 10^{10}$ K. The secondary component NE2 is weaker than NE1 in the range 4.3\u201316.3. It has a lower brightness temperature of $1.4 \\times 10^{9}$ K and a much steeper spectral index of $\\alpha^{15GHz}_{8GHz}=2.09 \\pm 0.13$. 
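The component brightness temperatures and two-point spectral indices quoted here and in the following subsections follow directly from the brightness-temperature formula above and the $S\propto \nu^{-\alpha}$ convention; a minimal sketch is given below. The numerical inputs in the example are hypothetical placeholders for illustration only, since the actual fitted flux densities and sizes are those listed in Table 1.

```python
import numpy as np

def brightness_temperature(s_jy, nu_ghz, theta_maj_mas, theta_min_mas, z=0.0766):
    """Rest-frame brightness temperature (K) of a Gaussian component."""
    return 1.22e12 * s_jy * (1 + z) / (nu_ghz**2 * theta_maj_mas * theta_min_mas)

def spectral_index(s1_jy, nu1_ghz, s2_jy, nu2_ghz):
    """Two-point spectral index alpha, using the S ~ nu^(-alpha) convention."""
    return -np.log(s2_jy / s1_jy) / np.log(nu2_ghz / nu1_ghz)

# Illustrative (hypothetical) numbers for a compact hotspot-like component;
# the real values come from the MODELFIT results in Table 1.
print(brightness_temperature(s_jy=0.8, nu_ghz=15.0,
                             theta_maj_mas=0.4, theta_min_mas=0.2))
print(spectral_index(s1_jy=1.5, nu1_ghz=8.4, s2_jy=0.8, nu2_ghz=15.0))
```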
The high brightness temperature and the relatively flatter spectral index of NE1 identify it as the primary hotspot formed by the reverse shock when the jet head impacts on the wall of the external medium. The continuous emission structure between NE1 and NE2 would suggest a physical connection.\n\nThe SW lobe is resolved into two components (SW1 and SW2) at 8 and 15 GHz. The two components have comparable sizes. The integrated flux density of SW1 is slightly higher than SW2 with a flux density ratio $R(SW1\/SW2)$ in the range of 1.3\u20132.4. Their spectral indices are $\\alpha^{15GH}_{8GHz}=1.11\\pm0.07$ (SW1) and $1.82\\pm0.26$ (SW2).\n\nThe presence of double hotspots in the 10-pc lobe structure of OQ\u00a0208 remains a puzzle. Similar morphology is found in **the** multiple-hotspot appearance of large-scale double sources such as 3C\u00a020 (), where the hotspot usually identified as the primary hotspot is extremely compact and bright, and the other is more diffuse and shows various structures. Three models may account for the double hotspots inside the 10-pc nuclear region of OQ208:\n\n\\(1\\) *Blocking of the jet head by the external medium*. In this scenario, the jet impacts on the wall of the surrounding medium and generates the hotspot there. As the jet head may slide along the wall, the hotspot position also changes accordingly (dentist's drill model: ). The primary hotspot is the **currently** active jet-ISM interaction, while the secondary hotspot represents the past primary hotspot, which is now fading away.\n\n\\(2\\) *Jet precession*. The present images do not allow us to distinguish whether a single jet beam is precessing () or **if** there are intrinsically two beams () due to the similar compactness and shape of two hotspots and the absence of emission between the hotspots and the nucleus. One possible driving mechanism of jet precession is associated with a binary black hole system where the black hole launching the jet has a relative motion with respect to the surrounding ISM and the multiple hotspots reflect the different impact locations. Taking the brighter SW1 as the primary (current) and SW2 as the secondary (older) impact of the jet beam(s) on the external medium, the older hotspot has stopped receiving direct energy supply from the nucleus, and synchrotron aging will result in a steeper radio spectrum at high frequencies. Indeed, SW2 has a steeper spectral index than SW1. If SW2 is the old hotspot, it should lie behind the primary hotspot SW1, implying **that** there is a jet bending between SW2 and SW1.\n\n\\(3\\) *Redirected flows*. Alternatively, the secondary hotspot represents either a re-directed outflow originating from the primary hotspot () or a deflected jet flow () escaping from a weak point in the cocoon\/lobe structure. Different from the model (1), the secondary hotspot in model (3) continues to receive energy supply from the primary hotspot. Therefore, it would last longer, would not show spectral steepening, and be much fainter and less compact than the primary hotspot. In the eastern lobe, NE2 has no direct connection with the nucleus, but shows a collimated bridge **that links it** with NE1. NE2 is also larger than NE1 and has an amorphous shape. 
These morphological characteristics make NE2 more likely **to be** the result of a re-directed outflow from the primary hotspot NE1 than a directed flow from the nucleus.\n\nThe continuous emission between the two hotspots NE1 and SW1 at 2.3 GHz (Figure\u00a0-a) is resolved into a knotty jet at 8.4 and 15 GHz (Figure\u00a0-c and -d). From east to west, these knots are labeled J1, J2, J3, and J4. J1 is the brightest jet knot, located at the inlet of the northeast lobe, and is detected at 15 and 8.4 GHz at all epochs. It has a 15 GHz to 8.4 GHz spectral index between 0.81 and 1.45, indicative of a typical optically thin jet knot, and a brightness temperature of $3.4 \\times 10^9$\u00a0K. J2 appears in the 8.4-GHz images at epochs from 2002.044 to 2002.945, where the VLBI images have the highest sensitivity and resolution. J2 only appears in the most sensitive 15-GHz images, at epochs 1995.958, 1996.378, 2003.318, 2008.137, and 2009.558. It has a steep spectrum with $\\alpha^{15GHz}_{8GHz}=1.43$ and a brightness temperature $T_{b} \\leq 1.9 \\times 10^7$\u00a0K. J3 is closest to the geometric center and only appears in the highest-sensitivity 15-GHz images. The brightness temperature of J3 is $\\leq 3.0\\times 10^{7}$\u00a0K, derived from the 15-GHz data. Lacking spectral index information, it remains uncertain whether J3 is the active nucleus or the innermost jet knot. Regardless of the identification of J3, the non-detection at 8.4 GHz would imply an extremely high absorption opacity toward the nucleus. J4 is only marginally detected at 15 GHz at epoch 2009.558. More sensitive observations are necessary to reveal the physical nature of these components.\n\n## Variability of the hotspot NE1\n\nThe steep-spectrum lobes dominate the total flux density of most CSOs. monitored a sample of seven CSOs using the VLA at 8.5 GHz and found extremely stable flux densities (rms variability **less** than 1%) over a period of eight months. Some exceptional CSOs show moderate-level variability on time scales of a few years, for example, OQ\u00a0208 (the present paper) and J1324+4048 (). Unlike in blazars, where the jet luminosity is amplified by Doppler boosting, the low- and moderate-level variability observed in lobe-dominated CSOs is probably related to variations of the energy supply by the relativistic jet flow and to changes in the energy dissipation process, including adiabatic expansion losses and radiative losses.\n\nThe light curves observed with the Michigan 26-meter radio telescope (UMRAO) at three frequencies (4.8, 8, and 14.5 GHz) show two distinct flares in the past 25 years, around 1983 and 2000 (Figure\u00a0-a). The exact peak epoch of the flare around 1983 cannot be clearly determined because of the broad gap in the sampling, while the second peak epoch is not well constrained because the light curves only cover the declining part of the flare. The 5-GHz flux density variation of OQ\u00a0208 measured with the VLA shows a continuous decrease from 1980 to 1995. In addition, Waltman et al. (1991) observed a decrease of the 8.1-GHz flux density from about 2.7 Jy in mid-1983 to about 2.0 Jy in mid-1989 using the Green Bank Interferometer. Starting from 1989, the flux density has been constant around 1.9 Jy. 
This evidence consistently suggests that OQ\u00a0208 has intrinsic variability on time scales of a few years.\n\nFor completeness, the 15-GHz summed flux densities of the VLBI components **were** added to diagrams covering the time range between July 1995 and August 2009, which confirms the second flare peak between 1998 and 2000. The declining part of the VLBI flux density data is fully consistent with the total flux density data. The comparison of the VLBI and single-dish data also suggests that a significant **percentage** ($>85\\%$) of the total flux density originates from the compact radio structure.\n\n**To** quantitatively evaluate the variability, we introduced a variability index $V$ to illustrate the relative variability scale (): $$\\label{eq:var}\n V = \\frac{S_{max} - S_{min}}{S_{max} + S_{min}},$$ where $S_{max}$, $S_{min}$ are the maximum and minimum flux density of an individual flare. A value of $V \\approx 0$ represents indistinguishable or undetectable variability, and $V \\approx 1$ indicates an extreme variability The 15-GHz light curves of individual components (Figure\u00a0-b) suggest a high-variability index $V = 0.62\\pm0.12$ for NE1 during the 1998 flare and a mild $V=0.19\\pm0.04$ for SW1 during the 2003 flare. The standard deviation of the measured flux densities with respect to the mean value is adopted as the statistical error. The variability index of NE1 derived from VLBI data is consistent with that measured from the single-dish monitoring data (Figure\u00a0-a), confirming that the total flux density variation is dominated by the brightest hotspot NE1. The other VLBI components do not show significant variability at 15 GHz.\n\nThe variability at 8 and 14.5 GHz using the single-dish data (Figure\u00a0-a) indicates a **maximum** level of $(22\\pm4)\\%$ and $(40\\pm8)\\%$, respectively. The 5-GHz variability scale is lower than at 8 and 14.5 GHz, probably **because of** the increasing opacity at lower frequencies. The mini-lobes at 2.3 GHz based on the RRFID data did not show **any** prominent variability. At 1.4 GHz, the total flux density of OQ\u00a0208 increased from $\\sim$``{=html}755 mJy in 1979 to $\\sim$``{=html}854 mJy in 2001 (de Bruyn, private communication), which is closely connected with the growth of the overall source size.\n\nThe complex variability behavior of OQ\u00a0208 can be understood separately in the low- and high-frequency regimes:\n\n\\(1\\) Up to 1 GHz, the emission mostly arises from the extended structure surrounding the compact jets and lobes, which are strongly absorbed. The flux density of an opaque source can be simply expressed as $S_\\nu \\propto T_{b} \\times A$, where $S_\\nu$ is the flux density at an optically thick frequency $\\nu$, $T_{b}$ is the brightness temperature at frequency $\\nu$, and $A$ is the surface area of the extended structure. The continuous increase of the flux density from 1.4 GHz to 335 MHz results from the (slow) growth of the overall source size as a result of the expansion of the radio jets and lobes.\n\n\\(2\\) Above 1.4 GHz, the synchrotron radiation from the extended structure drops abruptly and the flux density is dominated by the compact hotspot, the mini-lobe, and the jet components. Arguments can be made that radiative losses play a dominant role in the energy balance of young and compact radio sources in the presence of strong (milligauss; mG) magnetic fields (). The adiabatic expansion would then only lead to a decrease in the optically thin section of the spectrum. 
\n\nThe variability at 8 and 14.5 GHz in the single-dish data (Figure\u00a0-a) reaches maximum levels of $(22\\pm4)\\%$ and $(40\\pm8)\\%$, respectively. The 5-GHz variability scale is lower than at 8 and 14.5 GHz, probably because of the increasing opacity at lower frequencies. The mini-lobes at 2.3 GHz, based on the RRFID data, did not show prominent variability. At 1.4 GHz, the total flux density of OQ\u00a0208 increased from $\\sim$755 mJy in 1979 to $\\sim$854 mJy in 2001 (de Bruyn, private communication), which is closely connected with the growth of the overall source size.\n\nThe complex variability behavior of OQ\u00a0208 can be understood separately in the low- and high-frequency regimes:\n\n(1) Up to 1 GHz, the emission mostly arises from the extended structure surrounding the compact jets and lobes, which are strongly absorbed. The flux density of an opaque source can be simply expressed as $S_\\nu \\propto T_{b} \\times A$, where $S_\\nu$ is the flux density at an optically thick frequency $\\nu$, $T_{b}$ is the brightness temperature at frequency $\\nu$, and $A$ is the surface area of the extended structure. The continuous increase of the flux density from 1.4 GHz down to 335 MHz results from the slow growth of the overall source size as the radio jets and lobes expand.\n\n(2) Above 1.4 GHz, the synchrotron radiation from the extended structure drops abruptly and the flux density is dominated by the compact hotspots, mini-lobes, and jet components. Arguments can be made that radiative losses play a dominant role in the energy balance of young and compact radio sources in the presence of strong (milligauss; mG) magnetic fields (). Adiabatic expansion would then only lead to a decrease in the optically thin section of the spectrum. However, the source does not show prominent variability at 2.3 GHz, which could mean that the source is still opaque at this frequency and that any low-amplitude flux density variation is smeared out by the opacity effect.\n\n(3) At frequencies above 5 GHz, the spectrum becomes optically thin and the observed variability is governed by the hotspots, i.e., the flux density from the most active jet-ISM interaction interface varies with the balance between the feeding and acceleration of fresh relativistic electrons and the radiative losses. The variation of the 15-GHz flux density with the component size (Figure\u00a0-c) shows that the flux density increases with increasing size before the flare peak at epoch 1998.313, indicating the injection of particles, energy, and magnetic fields into the lobes. During this stage, the hotspot is still optically thick. After 1998, the intermittent feeding through the jet decreases and the hotspot continues to expand adiabatically and to become more optically thin. The strong synchrotron radiation losses result in a sharp decrease of the flux density from $\\sim$900 mJy at 1998.313 to $\\sim$400 mJy at 2005.473. Figure\u00a0-d shows the variation of the spectral index $\\alpha^{15GHz}_{8GHz}$ with time, using the 15-GHz MOJAVE data and 8.4-GHz RRFID data at close epochs. After a clear transition around epoch 2006, the spectrum steepens significantly. This supports the idea described above that the flux density variation of hotspot NE1 is regulated by intermittent nuclear feeding. When no or fewer fresh relativistic electrons are injected into the hotspots, the old electrons suffer synchrotron aging and the spectral index of the optically thin part steepens.\n\nStanghellini et al. (1997) estimated the lifetime of the electrons radiating at 5 GHz, assuming equipartition magnetic fields and a homogeneous spherical geometry for the hotspot. The resulting magnetic field in hotspot NE1 (component A in Stanghellini et al. 1997) is 90 mG, corresponding to an electron lifetime at 5 GHz of 25 yr. The synchrotron aging time scales with the observing frequency and the brightness temperature $T_b$ as $t_{syn} \\sim \\nu^{-2} T_b^{3}$. Using the same arguments, the lifetime of electrons radiating at 15 GHz is about 4 yr, in agreement with the observed variability time scale of $\\sim$15 yr (from 1983 to 1998).\n\n## Geometry of the radio structure\n\nAccording to Figure\u00a0-b, the light curve of the northeast hotspot NE1 peaks around 1998.313. The peak of the light curve of SW1 is not very prominent and falls in a broad time range between 2002 and 2005. An intermediate epoch of $2003.31^{+2.16}_{-1.53}$ is chosen as an estimate of the peak epoch, where the uncertainty represents the scatter of the possible peak epochs. If the radio flux density enhancement in both hotspots is associated with the same episodic nuclear activity, the observation that the flare of NE1 leads that of SW1 by $5.00^{+2.16}_{-1.53}$ yr suggests that NE1 is the advancing lobe and SW1 is the receding one. The light travel time along the line of sight between the two hotspots corresponds to a radial difference of $5.00^{+2.16}_{-1.53}$ light years, or $1.53^{+0.67}_{-0.47}$ pc. Assuming that the two lobes are symmetric relative to the core (separation $\\sim$9.5 pc) and that both jets are ejected at the same velocity, an inclination angle between the jet and the line of sight of $80.8\\degr^{+2.8\\degr}_{-3.8\\degr}$ may be derived. This calculation suggests that the jet axis of OQ\u00a0208 lies very close to the plane of the sky. This is supported by the observation that the jet emission at the nucleus is not Doppler boosted (Sections 3.1 and 3.4).
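\n\nThe geometry behind this estimate can be made explicit; as a sketch (the $\\arccos$ form below is our reading of the argument, not an equation given in the text), the flare delay fixes the line-of-sight path difference $c\\,\\Delta t$ between the hotspots, so that $$\\cos\\theta \\simeq \\frac{c\\,\\Delta t}{D} = \\frac{1.53~\\mathrm{pc}}{9.5~\\mathrm{pc}} \\approx 0.161, \\qquad \\theta \\approx 80.7\\degr,$$ where $D$ is the hotspot separation; the small difference from the quoted $80.8\\degr$ reflects rounding of the input values.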
\n\nHowever, this implication appears to be in conflict with the optical spectroscopic identification of OQ\u00a0208 as a Seyfert 1 galaxy with broad Balmer emission lines (). In the standard unification model for AGNs (), Type-I AGNs should have an inclination angle of less than 70\\degr (). Such an angle of 70\\degr would predict a time delay between NE1 and SW1 of about 10.97 yr, for which there is no evidence in the light curves. The inconsistency between these inclination angles may suggest that the radio jet is not aligned with the optical symmetry axis.\n\nIf the brightness asymmetry of the two lobes is caused by differential free-free absorption (Section 3.1), the physical properties of the ISM can be constrained. The spectral fit of free-free absorption gives a differential opacity $\\Delta \\tau_{ff}=5.3$ (). The electron density is about 2700 cm$^{-3}$ along a 1.53 pc path, assuming that the ISM surrounding the northeast and southwest lobes is homogeneous and that the electron temperature is $T_{e} = 10^4$ K. This electron density is typical of narrow-line regions (NLR). An inhomogeneous, clumpy ISM would require a higher density to account for the free-free absorption discrepancy.\n\n## Kinematics and jet properties \u2013 a young radio source\n\nExisting kinematic studies of CSOs show that the hotspot advance speed is typically around 0.2$c$ and that CSOs are young, with kinematic ages of only 100 to 2000 yr (). In the present work, the slow proper motions of the OQ\u00a0208 components are determined by making use of the long time-baseline 15-GHz VLBA data, which eliminates systematic errors induced by different resolutions, different *uv* coverage, and different frequencies. Since the identification of the central core of OQ\u00a0208 is not yet certain, the brightest hotspot NE1 is used as the reference, and the relative proper motions of the other VLBI components are measured.\n\nFigure\u00a0 shows the variation of the distances of the VLBI components from NE1, using 28 individual measurements over a time span of 13.6 yr. Linear regression was used to determine the separation rates. The positional uncertainty ($\\sigma \\approx 0.02$ mas) of each component at each epoch was used for weighting the individual data points ($1\/\\sigma^2$). The fits give separation rates of $0.027 \\pm 0.002$ mas\u00a0yr$^{-1}$ ($0.134 \\pm 0.009\\,c$, SW1\u2013NE1), $0.022 \\pm 0.002$ mas\u00a0yr$^{-1}$ ($0.108 \\pm 0.012\\,c$, SW2\u2013NE1), and $-0.034 \\pm 0.001$ mas\u00a0yr$^{-1}$ ($-0.168 \\pm 0.005\\,c$, J1\u2013NE1). The positive velocities of SW1 and SW2 signify an advancing motion of both hotspots. The negative velocity of J1 indicates that the internal jet moves toward the terminal hotspot NE1. The separation of J1 at the first three epochs, 1995.266, 1995.395, and 1996.375, shows a strong deviation from the general trend of the data points beyond 1997. During the fitting, these data points were assigned doubled uncertainties in order to reduce their weight and thus avoid a bias induced by these large deviations. The separation of NE2\u2013NE1 is about 1.2 mas, and the linear regression fit did not give a significant proper motion, because NE2 shows a significant transverse motion rather than a radial motion (Fig. ).
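\n\nA minimal numpy sketch of such a $1\/\\sigma^2$-weighted linear fit is given below; the epoch, separation, and uncertainty arrays are placeholders for illustration only, not the measured values.\n\n```python\nimport numpy as np\n\n# Placeholder epochs (yr), angular separations (mas), and positional\n# uncertainties (mas); illustrative values only, not the measured data.\nt = np.array([1995.3, 1998.6, 2001.2, 2004.8, 2008.9])\nr = np.array([6.50, 6.59, 6.66, 6.76, 6.87])\nsigma = np.full_like(t, 0.02)\n\nw = 1.0 / sigma**2                    # weights 1/sigma^2, as in the text\nt_bar = np.sum(w * t) / np.sum(w)     # weighted means\nr_bar = np.sum(w * r) / np.sum(w)\nmu = np.sum(w * (t - t_bar) * (r - r_bar)) / np.sum(w * (t - t_bar)**2)\nmu_err = np.sqrt(1.0 / np.sum(w * (t - t_bar)**2))\nprint(f'separation rate = {mu:.3f} +/- {mu_err:.3f} mas/yr')\n```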
\n\nLiu et al. (2000) reported a first determination of the NE1\u2013SW1 angular separation rate of $0.058\\pm0.038$ mas\u00a0yr$^{-1}$, derived from 8.4-GHz data with only six epochs between 1994 and 1997. Subsequently, Luo et al. (2007) improved this value to $0.031\\pm0.006$ mas\u00a0yr$^{-1}$ (a 5$\\sigma$ detection), on the basis of a longer time span (11 yr) and nine epochs. Compared to the previous 8.4-GHz work, the current proper motion measurements greatly improve the accuracy in the following ways: (1) The 15-GHz data provide a higher resolution (typically $0.8\\times0.6$ mas$^{2}$), while the previous measurements at 8.4 GHz had a resolution ($1.3\\times1.0$ mas$^{2}$) that is comparable with the separation between SW1 and SW2. At 8.4 GHz, the positions of SW1 and SW2 are affected by the flux density variation of the two components (mainly SW1), which is also true for the northeast lobe, where the positions of NE2 and J1 are affected by the flux density and structure variations of NE1. At 15 GHz, SW1 is clearly separated from SW2. (2) The present work covers a time span of 13.6 yr, four times longer than in the earliest work by Liu et al. (2000), and $\\sim$2.3 yr longer than that in Luo et al. (2007). (3) The data from 28 individual epochs give a better time sampling compared to the sparse sampling in previous works. (4) The 15-GHz VLBA data provide better and more uniform *u,v* coverage than the 8.4-GHz data.\n\nAccording to the new measurement of $\\mu=0.027\\pm0.002$ mas\u00a0yr$^{-1}$ for SW1\u2013NE1 (a 14$\\sigma$ detection), the kinematic age of OQ\u00a0208 is calculated as $255\\pm17$ yr in the source rest frame. This age classifies OQ\u00a0208 as one of the youngest CSOs ().\n\nAssuming equal advancing velocities for NE1 and SW1, the internal jet knot J1 is found to move 3.5 times faster ($v_{J1} = 0.23\\,c$) than the terminal hotspots. A similar phenomenon has also been observed in other CSOs (e.g., B0710+439 and B2352+495: ; J0132+5620: ). While the radio lobes sweep up the ambient ISM, a jet knot moves in an excavated channel with a relatively lower ISM density.\n\nThe kinematic parameters of the OQ\u00a0208 jet derived above can be used to calculate the jet flow properties using the following equations: $$\\beta_{app} = \\frac{\\beta \\sin\\theta}{1 - \\beta\\cos\\theta}$$ $$\\Gamma = \\frac{1}{\\sqrt{1 - \\beta^{2}} }$$ $$\\delta = \\frac{1}{\\Gamma (1 - \\beta\\cos\\theta)},$$ where $\\beta_{app}$ is the apparent speed of the jet component, $\\beta$ is the intrinsic jet speed (both in units of $c$), $\\theta$ is the inclination angle between the jet axis and the line of sight, $\\Gamma$ is the bulk Lorentz factor, and $\\delta$ is the Doppler boosting factor. Taking the apparent speed of the jet knot J1 ($\\beta_{app}=0.168\\,c$) and the inclination angle ($\\theta=80.8\\degr$: Section 3.3) into account, we obtain $\\beta=0.166\\,c$, $\\Gamma=1.014$, and $\\delta=1.013$. These calculations indicate a mildly relativistic flow in OQ\u00a0208. Relativistic beaming does not play a major role in determining the observed radiative and kinematic properties.
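\n\nAs a quick numerical check of these values, the short Python sketch below inverts the $\\beta_{app}$ relation for $\\beta$ (the closed-form inversion is our own algebraic step, not written out in the text) and then evaluates $\\Gamma$ and $\\delta$.\n\n```python\nimport numpy as np\n\nbeta_app = 0.168               # apparent speed of J1, in units of c\ntheta = np.deg2rad(80.8)       # inclination angle from Section 3.3\n\n# Inverting beta_app = beta*sin(theta) / (1 - beta*cos(theta)) gives:\nbeta = beta_app / (np.sin(theta) + beta_app * np.cos(theta))\ngamma = 1.0 / np.sqrt(1.0 - beta**2)                   # bulk Lorentz factor\ndelta = 1.0 / (gamma * (1.0 - beta * np.cos(theta)))   # Doppler factor\nprint(f'beta = {beta:.3f}, Gamma = {gamma:.3f}, delta = {delta:.3f}')\n# -> beta ~ 0.166, Gamma ~ 1.014, delta ~ 1.013, matching the quoted values\n```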
\n\n## Sideways motion and frustrated jets\n\nIn addition to the advancing motion of the hotspots, OQ\u00a0208 shows evidence of sideways motion. Figure\u00a0 displays two-dimensional plots of the relative locations of the VLBI components. The primary hotspot NE1 is used as the reference at the (0,0) position. The southwest hotspot SW1 shows an overall advancing motion along the NE1\u2013SW1 line, although during the first half of the period 1996\u20132003 it follows a curved trajectory. Similar to SW1, the internal jet knot J1 shows a general motion toward NE1, but it traces a loop-like path during this same initial period. This common feature of SW1 and J1 suggests that the reference component NE1 itself may have followed a curved path between 1995 and 2003. If SW1 is used as the reference (plot not shown here), NE1 indeed follows a bending trajectory, first to the north and then to the east, in a mirror-symmetric pattern with SW1\u2013NE1, while J1 follows a straight trajectory along the connecting line J1\u2013SW1.\n\nSW2 shows a complex and disordered motion pattern (Fig. ), but over the past fourteen years it has generally moved to the southwest along the connecting line NE1\u2013SW2. A proper motion of SW2 (relative to NE1) of $0.032\\pm0.020$ mas\u00a0yr$^{-1}$ was measured from five-epoch 5-GHz data (). That measurement agrees with ours ($\\mu=0.022\\pm0.002$ mas\u00a0yr$^{-1}$), which has a much higher accuracy.\n\nThe hotspot NE2 shows an apparent motion to the southwest, in the direction opposite to the jet advance. As discussed in Section 3.1, NE2 is likely a hotspot generated by the deflected jet hitting the wall of the surrounding ISM. While the hotspot NE1 is advancing faster than NE2, NE2 itself might be a stationary component.\n\nThe wandering of the jet heads NE1 and SW1, as well as the disturbed lobe structure with a deflected jet or double hotspots, provides the signature of an obstructed jet flow. According to the *frustration* model, the advancing motion of the radio lobes is confined by the surrounding dense ISM because of an intrinsically low jet power or the reduction or cessation of the jet power. A critical requirement for a CSO to evolve into a medium-sized symmetric object (MSO) is that the jet remains supersonic at the interface between the ISM and the intergalactic medium (IGM). Analytic modeling of the expansion of the hotspot and cocoon () shows that the initial hotspot advance velocity should be at least 0.3$c$ for the CSO to evolve beyond the ISM-IGM boundary of the host galaxy. Jet flows with velocities below this threshold become subsonic before they reach the ISM-IGM boundary and develop a distorted morphology.\n\nThe present data show that the radio source of OQ\u00a0208 is still growing, although the advance velocity is low. The radio power of OQ\u00a0208 is $P_{1.4GHz} = 10^{25.0}$ W\u00a0Hz$^{-1}$ without correction for the low-frequency absorption, placing it in the low-jet-power regime (). A low-power CSO like OQ\u00a0208 can relatively easily develop hydrodynamic surface instabilities in the jet, which make the jet more likely to lose momentum. If a significant fraction of the jet momentum flux is dissipated, the jet cannot sustain a supersonic laminar flow; it then forms a standing shock (*i.e.*, the location of the hotspot) and becomes flaring and diffuse beyond that point. Such a flared jet is found in the CSS quasar 3C\u00a048, which has a compact and bright hotspot about 300 pc from the central AGN (). The decrease of jet power can happen at any stage of radio source evolution, when the nuclear activity is reduced or terminated or when the jet experiences a significant loss of kinetic energy because of jet-ISM interactions (). Frustrated sources may continue to grow, but they do not have compact symmetric lobes. Eventually, frustrated CSOs and MSOs become radio relics at frequencies below a few hundred MHz.
\n\n# Conclusions and summary\n\nMulti-frequency, multi-epoch radio images of OQ\u00a0208 with the highest angular resolution of $\\sim 0.5$ mas present a typical CSO morphology, with two compact mini-lobes along a position angle of about $-126\\degr$ and separated by $\\sim$10 pc. The flux density ratio of the structural components varies with the observing frequency. In the 8.4- and 15-GHz images, each lobe is resolved into two subcomponents, among which the brightest components are identified as the primary hotspots. In the highest-resolution 15-GHz images, a knotty jet is detected between the NE and SW lobes. The core of OQ\u00a0208 cannot be securely identified from the present data.\n\nThe two primary hotspots NE1 and SW1 show significant flux density variations at 15 GHz, of 62% and 19%, respectively. The 15-GHz variability of the hotspots in OQ\u00a0208 may result from the balance between feeding from the central engine and adiabatic expansion losses.\n\nThe peak epoch of the NE1 flare, at 1998.3, is earlier than that of the SW1 flare by an estimated 5.00 yr, suggesting that NE1 is moving toward the observer and SW1 is receding. This light travel time difference between the two hotspots corresponds to a radial distance difference of about 1.53 pc. Combining the projected and radial separations between NE1 and SW1, we estimate the inclination angle between the radio jet and the line of sight to be $\\sim80.8\\degr$. This value is higher than the 70\\degr opening angle generally assumed for the nuclear NLR, which suggests that the jet axis and the galaxy axis are not aligned.\n\nUsing the brightest hotspot NE1 as the reference, the relative proper motions of SW1, SW2, and J1 were estimated to be 0.027 mas\u00a0yr$^{-1}$, 0.022 mas\u00a0yr$^{-1}$, and $-0.034$ mas\u00a0yr$^{-1}$, respectively. The separation speed between SW1 and NE1 corresponds to a hotspot advancing speed of $0.065\\,c$, assuming symmetric advancing and receding hotspot motions. The proper motion of the jet component J1 relative to the systemic center is about $-0.047$ mas\u00a0yr$^{-1}$ and corresponds to a velocity of $0.230\\,c$, making the jet mildly relativistic. The internal jet knot moves significantly faster than the terminal hotspots because it encounters a lower density in the excavated jet channel than the hotspots do within the lobes. The angular separation rate leads to an estimate of the kinematic age of about 255 yr, suggesting that OQ\u00a0208 is one of the youngest CSOs known.
\n\nThe observed sideways motion of the hotspots and the disturbed lobe morphology are signatures of obstruction of the jet head by the surrounding ISM, although the overall radio source is still growing. During the early CSO stage of radio source evolution, such disturbed lobe structures seem common (). A young radio source may experience intermittent jet power or several failed starts before entering a continuous and steady growth phase. During each intermittent activity, the jets may impact different regions of the ISM, failing to make a breakthrough. The notion of intermittent energy feeding is supported by the detection of multiple hotspots and of the fading lobe $\\sim$30 mas ($\\sim$40 pc) to the southwest, at a different position angle from the main jet body. According to the evolution modeling of extragalactic radio sources (), a critical requirement for a CSO to evolve beyond the ISM\u2013IGM boundary (typically 1\u20133 kpc) of the host galaxy is that the jet remains supersonic and maintains a laminar flow. Compared to high-power, high-velocity sources, low-power, low-velocity CSOs such as OQ\u00a0208 can more easily develop a turbulent jet flow after losing a significant amount of their momentum flux and kinetic energy during their interactions with the ambient ISM. Frustrated CSO radio sources with turbulent jet flow would not evolve into large-scale symmetric FR\u00a0II-type radio galaxies.\n\n# Acknowledgments\n\nThe authors thank the anonymous referee for helpful comments. This work is supported in part by the National Basic Research Program of China (973 Program) under grant Nos. 2009CB24900 and 2013CB837901, the Science & Technology Commission of Shanghai Municipality (06DZ22101), the China-Hungary Collaboration and Exchange Program funded by the International Cooperation Bureau of the Chinese Academy of Sciences (CAS), and the Strategic Priority Research Program (XDA04060700) of the CAS. F.W. thanks the JIVE\/ASTRON Summer Student Program and JIVE for its hospitality. S.F. was supported by the Hungarian Scientific Research Fund (OTKA K104539). The authors thank Ger de Bruyn for discussions about the low-frequency variability of OQ\u00a0208. This research has made use of the NASA\/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. The MOJAVE project is supported under National Science Foundation grant 0807860-AST and NASA-Fermi grant NNX08AV67G. This research has made use of the United States Naval Observatory (USNO) Radio Reference Frame Image Database (RRFID). The University of Michigan Radio Astronomy Observatory is supported by funds from the NSF, NASA, and the University of Michigan.\n\n[^1]: http:\/\/www.physics.purdue.edu\/MOJAVE\/\n\n[^2]: http:\/\/rorf.usno.navy.mil\/rrfid.shtml\n\n[^3]: https:\/\/dept.astro.lsa.umich.edu\/datasets\/umrao.php","meta":{"dup_signals":{"dup_doc_count":11},"filename":"out\/1211.4287_extract_ms.tex.md"},"subset":"arxiv"}