latest/contributing.html

Total pages: 16

File type: PDF, size: 1020 KB

Handbook (v0.14+332.gd98c6648.dirty)
Introduction • Advanced topics • Use cases

Adina Wagner & Michael Hanke
with Laura Waite, Kyle Meyer, Marisa Heckner, Benjamin Poldrack, Yaroslav Halchenko, Chris Markiewicz, Pattarawat Chormai, Lisa N. Mochalski, Lisa Wiersch, Jean-Baptiste Poline, Nevena Kraljevic, Alex Waite, Lya K. Paas, Niels Reuter, Peter Vavra, Tobias Kadelka, Peer Herholz, Alexandre Hutton, Sarah Oliveira, Dorian Pustina, Hamzah Hamid Baagil, Tristan Glatard, Giulia Ippoliti, Christian Mönch, Togaru Surya Teja, Dorien Huijser, Ariel Rokem, Remi Gau, Judith Bomba, Konrad Hinsen, Wu Jianxiao, Małgorzata Wierzba, Stefan Appelhoff, Michael Joseph, Tamara Cook, Stephan Heunis, Joerg Stadler, Sin Kim, Oscar Esteban

Contents

Part I: Introduction
1 A brief overview of DataLad
  1.1 On Data
  1.2 The DataLad Philosophy
2 How to use the handbook
  2.1 For whom this book is written
  2.2 How to read this book
  2.3 Let's get going!
3 Installation and configuration
  3.1 Install DataLad
  3.2 Initial configuration
4 General prerequisites
  4.1 The Command Line
  4.2 Command Syntax
  4.3 Basic Commands
  4.4 The Prompt
  4.5 Paths
  4.6 Text Editors
  4.7 Shells
  4.8 Tab Completion
5 What you really need to know
  5.1 DataLad datasets
  5.2 Simplified local version control workflows
  5.3 Consumption and collaboration
  5.4 Dataset linkage
  5.5 Full provenance capture and reproducibility
  5.6 Third party service integration
  5.7 Metadata handling
  5.8 All in all...

Part II: Basics
6 DataLad datasets
  6.1 Create a dataset
  6.2 Populate a dataset
  6.3 Modify content
  6.4 Install datasets
  6.5 Dataset nesting
  6.6 Summary
7 DataLad, Run!
  7.1 Keeping track
  7.2 DataLad, Re-Run!
  7.3 Input and output
  7.4 Clean desk
  7.5 Summary
8 Under the hood: git-annex
  8.1 Data safety
  8.2 Data integrity
9 Collaboration
  9.1 Looking without touching
  9.2 Where's Waldo?
  9.3 Retrace and reenact
  9.4 Stay up to date
  9.5 Networking
  9.6 Summary
10 Tuning datasets to your needs
  10.1 DIY configurations
  10.2 More on DIY configurations
  10.3 Configurations to go
  10.4 Summary
11 Make the most out of datasets
  11.1 A Data Analysis Project with DataLad
  11.2 YODA: Best practices for data analyses in a dataset
  11.3 YODA-compliant data analysis projects
  11.4 Summary
12 One step further
  12.1 More on Dataset nesting
  12.2 Computational reproducibility with software containers
  12.3 Summary
13 Third party infrastructure
  13.1 Beyond shared infrastructure
  13.2 Dataset hosting on GIN
  13.3 Amazon S3 as a special remote
  13.4 Overview: Publishing datasets
  13.5 Summary
14 Help yourself
  14.1 What to do if things go wrong
  14.2 Miscellaneous file system operations
  14.3 Back and forth in time
  14.4 How to get help
  14.5 Gists

Part III: Advanced
15 Advanced options
  15.1 How to hide content from DataLad
  15.2 DataLad extensions
  15.3 DataLad's result hooks
  15.4 Configure custom data access
  15.5 Remote Indexed Archives for dataset storage and backup
  15.6 Prioritizing subdataset clone locations
  15.7 Subsample datasets using datalad copy-file
16 Go big or go home
  16.1 Going big with DataLad
  16.2 Calculate in greater numbers
  16.3 Fixing up too-large datasets
  16.4 Summary
17 Computing on clusters
  17.1 DataLad on High Throughput or High Performance Compute Clusters
  17.2 DataLad-centric analysis with job scheduling and parallel computing
  17.3 Walkthrough: Parallel ENKI preprocessing with fMRIprep
18 Better late than never
  18.1 Transitioning existing projects into DataLad
19 Special purpose showrooms
  19.1 Reproducible machine learning analyses: DataLad as DVC

Part IV: Use cases
20 A typical collaborative data management workflow
  20.1 The Challenge
  20.2 The DataLad Approach
  20.3 Step-by-Step
21 Basic provenance tracking
  21.1 The Challenge
  21.2 The DataLad Approach
  21.3 Step-by-Step
22 Writing a reproducible paper
  22.1 The Challenge
  22.2 The DataLad Approach
  22.3 Step-by-Step
  22.4 Automation with existing tools
23 Student supervision in a research project
  23.1 The Challenge
  23.2 The DataLad Approach
  23.3 Step-by-Step
24 A basic automatically and computationally reproducible neuroimaging analysis
  24.1 The Challenge
  24.2 The DataLad Approach
  24.3 Step-by-Step
25 An automatically and computationally reproducible neuroimaging analysis from scratch
  25.1 The Challenge
  25.2 The DataLad Approach
  25.3 Step-by-Step
26 Scaling up: Managing 80TB and 15 million files from the HCP release
  26.1 The Challenge
  26.2 The DataLad Approach
  26.3 Step-by-Step
27 Building a scalable data storage for scientific computing
  27.1 The Challenge
  27.2 The DataLad approach
  27.3 Step-by-step
28 Using Globus as a data store for the Canadian Open Neuroscience Portal
  28.1 The Challenge
  28.2 The DataLad Approach
  28.3 Step-by-Step
  28.4 Resources
29 DataLad for reproducible machine-learning analyses
  29.1 The Challenge
  29.2 The DataLad Approach
  29.3 Step-by-Step
  29.4 References
30 Contributing

Part V: Appendix
A Glossary
B Frequently Asked Questions
  B.1 What is Git?
  B.2 Where is Git's "staging area" in DataLad datasets?
  B.3 What is git-annex?
  B.4 What does DataLad add to Git and git-annex?
  B.5 Does DataLad host my data?
  B.6 How does GitHub relate to DataLad?
  B.7 Does DataLad scale to large dataset
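The Basics chapters (6 and 7) revolve around a handful of core operations: create a dataset, save changes, and run commands with provenance capture. Purely as a rough sketch of that workflow using DataLad's Python API (not code from the handbook itself; the dataset path, file name, and wrapped shell command below are made up for illustration):

    # Minimal sketch of the "Basics" workflow; requires DataLad to be installed.
    # All names below (dataset path, file, command) are illustrative only.
    import datalad.api as dl

    dl.create(path="my-dataset")                        # chapter 6: create a dataset
    with open("my-dataset/notes.txt", "w") as f:        # put a file under version control
        f.write("a first file tracked by DataLad\n")
    dl.save(dataset="my-dataset", message="Add notes")  # record the change in history

    # chapter 7: re-executable provenance capture for an arbitrary shell command
    dl.run(cmd="wc -l notes.txt > line-count.txt",
           dataset="my-dataset", message="Count lines")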
Recommended publications
  • Solaris 10 End of Life
    Oracle Solaris 10 was an amazing OS release, introducing ground-breaking features such as zones (Solaris containers), FSS, services, dynamic tracing with DTrace (against live production operating systems without impact), and logical domains. These features have been imitated in the market (imitation is the best form of flattery!), but like all good things they have to come to an end. Sun Microsystems was acquired by Oracle and, eventually, one of the best-known operating systems in the industry needed to be retired. Oracle set a retirement date of January 2021 and indicated that Solaris 10 systems would face raised support costs. Oracle never provided migration tools to facilitate moving from Solaris 10 to Solaris 11, so migration to Solaris 11 has been slow. In September 2019, Oracle decided that extended support for Solaris 10 without an additional financial penalty would run until 2024. So, as of March 1, this is just a reminder that Oracle Solaris 10 is reaching end of life with regard to support unless you take extended support from Oracle. Combined with the fact that the GDPR took effect on May 25, 2018, you want to make sure that you have either upgraded to Solaris 11.3 or taken extended support in order to obtain patches for security issues. For more information on Solaris releases and the support dates of old and new versions, follow this link. Solaris is a proprietary Unix operating system originally developed by Sun Microsystems (acquired by Oracle Corporation in 2009): written in C and C++; initial release June 1992; latest release 11.4 (August 28, 2018); marketed for servers; current platforms SPARC and x86-64, formerly IA-32 and PowerPC; monolithic kernel with dynamically loadable modules; default user interface GNOME 2; various licenses; official website www.oracle.com/solaris.
  • Adventures with Illumos
    > Adventures with illumos
    Peter Tribble. Theoretical Astrophysicist, Sysadmin (DBA), Technology Tinkerer.

    > Introduction
    ● Long-time systems administrator
    ● Many years pointing out bugs in Solaris
    ● Invited onto beta programs
    ● Then the OpenSolaris project
    ● Voted onto OpenSolaris Governing Board
    ● Along came Oracle...
    ● illumos emerged from the ashes

    > Key strengths
    ● ZFS – reliable and easy to manage
    ● DTrace – extreme observability
    ● Zones – lightweight virtualization
    ● Standards – pretty strict
    ● Compatibility – decades of heritage
    ● "Solarishness"

    > Distributions
    ● Solaris 11 (OpenSolaris based)
    ● OpenIndiana – OpenSolaris
    ● OmniOS – server focus
    ● SmartOS – Joyent's cloud
    ● Delphix/Nexenta/+ – storage focus
    ● Tribblix – one of the small fry
    ● Quite a few others

    > Solaris 11
    ● IPS packaging
    ● SPARC and x86
      – No 32-bit x86
      – No older SPARC (e.g. Vxxx or SunBlades)
    ● Unique/key features
      – Kernel Zones
      – Encrypted ZFS
      – VM2

    > OpenIndiana
    ● Direct continuation of OpenSolaris – warts and all
    ● IPS packaging
    ● x86 only (32- and 64-bit)
    ● General purpose
    ● JDS desktop
    ● Generally rather stale

    > OmniOS
    ● x86 only
    ● IPS packaging
    ● Server focus
    ● Supported commercial offering
    ● Stable components can be out of date

    > XStreamOS
    ● Modern variant of OpenIndiana
    ● x86 only
    ● IPS packaging
    ● Modern lightweight desktop options
    ● Extra applications – LibreOffice

    > SmartOS
    ● Hypervisor, not general purpose
    ● 64-bit x86 only
    ● Basis of Joyent cloud
    ● No inbuilt packaging, pkgsrc for applications
    ● Added extra features
      – KVM guests
      – Lots of zone features
  • Jupyter Tutorial Release 0.8.0
    Jupyter Tutorial, Release 0.8.0. Veit Schiele, Oct 01, 2021.

    Contents:
    1 Introduction
      1.1 Status
      1.2 Target group
      1.3 Structure of the Jupyter tutorial
      1.4 Why Jupyter?
      1.5 Jupyter infrastructure
    2 First steps
      2.1 Install Jupyter Notebook
      2.2 Create notebook
      2.3 Example
      2.4 Installation
      2.5 Follow us
      2.6 Pull-Requests
    3 Workspace
      3.1 IPython
      3.2 Jupyter
    4 Read, persist and provide data
      4.1 Open data
      4.2 Serialisation formats
      4.3 Requests
      4.4 BeautifulSoup
      4.5 Intake
      4.6 PostgreSQL
      4.7 NoSQL databases
      4.8 Application Programming Interface (API)
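    The "Create notebook" step above is normally done in the web interface; purely as an illustrative sketch (not taken from the tutorial), a notebook file can also be generated programmatically with the nbformat package:

        # Sketch: build a one-cell notebook file; the filename is arbitrary.
        import nbformat
        from nbformat.v4 import new_code_cell, new_notebook

        nb = new_notebook()
        nb.cells.append(new_code_cell("print('hello from a generated notebook')"))

        with open("demo.ipynb", "w", encoding="utf-8") as f:
            nbformat.write(nb, f)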
  • BSCW Administrator Documentation Release 7.4.1
    BSCW Administrator Documentation, Release 7.4.1. OrbiTeam Software, Mar 11, 2021.

    Contents:
    1 How to read this Manual
    2 Installation of the BSCW server
      2.1 General Requirements
      2.2 Security considerations
      2.3 EU - General Data Protection Regulation
      2.4 Upgrading to BSCW 7.4.1
        2.4.1 Upgrading on Unix
        2.4.2 Upgrading on Windows
    3 Installation procedure for Unix
      3.1 System requirements
      3.2 Installation
      3.3 Software for BSCW Preview
      3.4 Configuration
        3.4.1 Apache HTTP Server Configuration
        3.4.2 BSCW instance configuration
        3.4.3 Administrator account
        3.4.4 De-Installation
      3.5 Database Server Startup, Garbage Collection and Backup
        3.5.1 BSCW Startup
        3.5.2 Garbage Collection
        3.5.3 Backup
      3.6 Folder Mail Delivery
        3.6.1 BSCW mail delivery agent (MDA)
        3.6.2 Local Mail Transfer Agent
  • Datasette Documentation
    Datasette Documentation. Simon Willison, Sep 22, 2021.

    Contents:
    1 Contents
      1.1 Getting started
        1.1.1 Play with a live demo
        1.1.2 Try Datasette without installing anything using Glitch
        1.1.3 Using Datasette on your own computer
        1.1.4 datasette --get
        1.1.5 datasette serve --help
      1.2 Installation
        1.2.1 Basic installation
        1.2.2 Advanced installation options
      1.3 The Datasette Ecosystem
        1.3.1 sqlite-utils
        1.3.2 Dogsheep
      1.4 Pages and API endpoints
        1.4.1 Top-level index
        1.4.2 Database
        1.4.3 Table
        1.4.4 Row
      1.5 Publishing data
        1.5.1 datasette publish
        1.5.2 datasette package
      1.6 Deploying Datasette
        1.6.1 Deployment fundamentals
        1.6.2
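    Section 1.4 lists the pages and JSON API endpoints (index, database, table, row). As a small sketch, assuming a Datasette instance is already running locally on the default port and serving a hypothetical database "mydb" with a table "mytable", the table endpoint can be queried with requests:

        # Sketch: read rows from a running Datasette's JSON API.
        # Host, database name, and table name are assumptions for the example.
        import requests

        resp = requests.get("http://127.0.0.1:8001/mydb/mytable.json")
        resp.raise_for_status()
        payload = resp.json()
        print(payload["columns"])     # column names of the table
        print(payload["rows"][:5])    # first few rows as lists of values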
  • Json Schema Python Package
    The jsonschema package must be installed separately in order to use this decorator (Combining schemas, Understanding JSON Schema, Apr 12, 2020). It validates JSON instances against a schema and reports any errors. Learn about JSON Schemas and how you can use them to build your own JSON Validator Server using Python and Django, or build your own schema registry server using Python. Related topics: jsonschema custom types, formats and validators in Python; Debian details of the package python3-jsonschema in stretch; pyarrow datatypes; validating an XML or JSON file against a LIXI2 Schema, or a LIXI package (XML or JSON) against a Schematron file that contains business rules. (A minimal validation sketch follows below.)
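    A minimal sketch of that validation workflow with the jsonschema package's validate() function; the schema and document are invented for illustration:

        # Sketch: validate a JSON document against a JSON Schema.
        from jsonschema import ValidationError, validate

        schema = {
            "type": "object",
            "properties": {"name": {"type": "string"}, "age": {"type": "integer"}},
            "required": ["name"],
        }

        document = {"name": "Alice", "age": 30}

        try:
            validate(instance=document, schema=schema)  # raises ValidationError on failure
            print("document is valid")
        except ValidationError as err:
            print("invalid document:", err.message)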
  • Web Scraping with Python
    Web Scraping with Python: Collecting More Data from the Modern Web, Second Edition. Ryan Mitchell. Published by O'Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472. Copyright © 2018 Ryan Mitchell. All rights reserved. Printed in the United States of America. O'Reilly books may be purchased for educational, business, or sales promotional use. Online editions are also available for most titles (http://oreilly.com/safari). For more information, contact our corporate/institutional sales department: 800-998-9938 or [email protected]. Editor: Allyson MacDonald. Indexer: Judith McConville. Production Editor: Justin Billing. Interior Designer: David Futato. Copyeditor: Sharon Wilkey. Cover Designer: Karen Montgomery. Proofreader: Christina Edwards. Illustrator: Rebecca Demarest. April 2018: Second Edition. Revision history for the second edition: 2018-03-20, first release. See http://oreilly.com/catalog/errata.csp?isbn=9781491985571 for release details. The O'Reilly logo is a registered trademark of O'Reilly Media, Inc. Web Scraping with Python, the cover image, and related trade dress are trademarks of O'Reilly Media, Inc. While the publisher and the author have used good faith efforts to ensure that the information and instructions contained in this work are accurate, the publisher and the author disclaim all responsibility for errors or omissions, including without limitation responsibility for damages resulting from the use of or reliance on this work. Use of the information and instructions contained in this work is at your own risk.
  • Ansible 2.2 Documentation Release 2.4
    Ansible 2.2 Documentation, Release 2.4. Ansible, Inc. October 06, 2017.

    Chapter 1: About Ansible. Welcome to the Ansible documentation! Ansible is an IT automation tool. It can configure systems, deploy software, and orchestrate more advanced IT tasks such as continuous deployments or zero downtime rolling updates. Ansible's main goals are simplicity and ease-of-use. It also has a strong focus on security and reliability, featuring a minimum of moving parts, usage of OpenSSH for transport (with other transports and pull modes as alternatives), and a language that is designed around auditability by humans, even those not familiar with the program. We believe simplicity is relevant to all sizes of environments, so we design for busy users of all types: developers, sysadmins, release engineers, IT managers, and everyone in between. Ansible is appropriate for managing all environments, from small setups with a handful of instances to enterprise environments with many thousands of instances. Ansible manages machines in an agent-less manner. There is never a question of how to upgrade remote daemons or the problem of not being able to manage systems because daemons are uninstalled. Because OpenSSH is one of the most peer-reviewed open source components, security exposure is greatly reduced. Ansible is decentralized: it relies on your existing OS credentials to control access to remote machines. If needed, Ansible can easily connect with Kerberos, LDAP, and other centralized authentication management systems. This documentation covers the current released version of Ansible (2.3) and also some development version features (2.4).
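    To make the agent-less model above concrete, here is a hedged sketch (not taken from the Ansible documentation) that writes a minimal playbook and runs it against localhost with the ansible-playbook CLI; the task itself is arbitrary:

        # Sketch: run a one-task playbook locally; assumes ansible-playbook is on PATH.
        import subprocess
        import textwrap

        playbook = textwrap.dedent("""\
            - hosts: localhost
              connection: local
              tasks:
                - name: Ensure a demo directory exists
                  file:
                    path: /tmp/ansible-demo
                    state: directory
            """)

        with open("site.yml", "w") as f:
            f.write(playbook)

        # "-i localhost," supplies an inline one-host inventory
        subprocess.run(["ansible-playbook", "-i", "localhost,", "site.yml"], check=True)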
  • PyScaffold Documentation Release 4.1.post1.dev13+g5e6c226
    PyScaffold Documentation, Release 4.1.post1.dev13+g5e6c226. Blue Yonder, Oct 01, 2021.

    Contents:
    1 Installation
      1.1 Requirements
      1.2 Installation
      1.3 Alternative Methods
      1.4 Additional Requirements
    2 Usage & Examples
      2.1 Quickstart
      2.2 Examples
      2.3 Package Configuration
      2.4 PyScaffold's Own Configuration
    3 Advanced Usage & Features
      3.1 Dependency Management
      3.2 Migration to PyScaffold
      3.3 Updating from Previous Versions
      3.4 Extending PyScaffold
    4 Why PyScaffold?
    5 Features
      5.1 Configuration, Packaging & Distribution
      5.2 Versioning and Git Integration
      5.3 Sphinx Documentation
      5.4 Dependency Management in a Breeze
      5.5 Automation, Tests & Coverage
      5.6 Management of Requirements & Licenses
      5.7 Extensions
      5.8 Easy Updating
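    The Quickstart in section 2.1 is built around PyScaffold's putup command. As a brief sketch (the project name is made up), a new package skeleton can be generated and inspected like this:

        # Sketch: scaffold a project with PyScaffold and list what was generated.
        # Assumes PyScaffold is installed; "my_project" is an arbitrary name.
        import pathlib
        import subprocess

        subprocess.run(["putup", "my_project"], check=True)

        for path in sorted(pathlib.Path("my_project").iterdir()):
            print(path.name)   # e.g. src/, docs/, setup.cfg, pyproject.toml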
  • Serdar Temiz
    OPEN DATA AND INNOVATION ADOPTION: Lessons From Sweden. Serdar Temiz. Doctoral thesis submitted to the School of Industrial Engineering and Management for the fulfillment of the requirements for the degree of Ph.D. in Industrial Engineering and Management. KTH Royal Institute of Technology, School of Industrial Engineering and Management, Department of Industrial Economics and Management, SE-10044 Stockholm, Sweden. ISBN 978-91-7873-083-4. © Temiz, Serdar, December 2018. [email protected]. Academic thesis, with permission and approval of the KTH Royal Institute of Technology, submitted for public review and doctoral examination in fulfillment of the requirements for the degree of Doctor of Philosophy in Industrial Engineering and Management on Monday, February 22, 2019, at 09:00 in Hall F3, Lindstedtsvägen 26, KTH, Stockholm, Sweden. Printed in Sweden, Universitetsservice US-AB.

    Abstract: The Internet has significantly reduced the cost of producing, accessing, and using data, with governments, companies, open data advocates, and researchers observing open data's potential for promoting democratic and innovative solutions and with open data's global market size estimated at billions of dollars in the European Union alone (Carrara, Chan, Fischer, & Steenbergen, 2015). This thesis explores the concept of open data, describing and analyzing how open data adoption occurs to better identify and understand key challenges in this process and thus contribute to better use of the available data resources
  • VIP Documentation Release 0.8.1
    VIP Documentation, Release 0.8.1. Carlos Gomez Gonzalez, Olivier Wertz & VORTEX team, Nov 17, 2017.

    Contents:
    1 Attribution
    2 Introduction
    3 Jupyter notebook tutorial
    4 Mailing list
    5 Quick Setup Guide
    6 Installation and dependencies
      6.1 Installation and dependencies
      6.2 Loading VIP
    7 Frequently asked questions
      7.1 FAQ
    8 Package structure
      8.1 vip package
    9 API

    Attribution: Please cite Gomez Gonzalez et al. 2016 (submitted) whenever you publish data reduced with VIP. Astrophysics Source Code Library reference [ascl:1603.003].

    Introduction: VIP is an open source package/pipeline for angular, reference star and spectral differential imaging for exoplanet/disk detection through high-contrast imaging. VIP is being developed by the VORTEX team and collaborators. You can check on GitHub the direct contributions to the code (contributors tab). Most of VIP's functionalities are mature, but that doesn't mean the code is free from bugs. It will continue evolving, and therefore all feedback and contributions will be greatly appreciated. If you want to report a bug, or suggest or add a functionality, please create an issue or send a pull request on the GitHub repository. The goal of VIP is to incorporate robust, efficient, easy-to-use, well-documented and open source implementations of high-contrast image processing algorithms for the interested scientific community.
  • Ubuntu:Precise Ubuntu 12.04 LTS (Precise Pangolin)
    Ubuntu:Precise — see http://ubuntuguide.org/index.php?title=Ubuntu:Precise&prin...

    Ubuntu 12.04 LTS (Precise Pangolin)

    Introduction: On April 26, 2012, Ubuntu (http://www.ubuntu.com/) 12.04 LTS was released. It is codenamed Precise Pangolin and is the successor to Oneiric Ocelot 11.10 (http://ubuntuguide.org/wiki/Ubuntu_Oneiric) (Oneiric+1). Precise Pangolin is an LTS (Long Term Support) release. It will be supported with security updates for both the desktop and server versions until April 2017.

    Contents:
    1 Ubuntu 12.04 LTS (Precise Pangolin)
      1.1 Introduction
      1.2 General Notes
        1.2.1 General Notes
      1.3 Other versions
        1.3.1 How to find out which version of Ubuntu you're using
        1.3.2 How to find out which kernel you are using
        1.3.3 Newer Versions of Ubuntu
        1.3.4 Older Versions of Ubuntu
      1.4 Other Resources
        1.4.1 Ubuntu Resources
          1.4.1.1 Unity Desktop
          1.4.1.2 Gnome Project
          1.4.1.3 Ubuntu Screenshots and Screencasts
          1.4.1.4 New Applications Resources
        1.4.2 Other *buntu guides and help manuals
    2 Installing Ubuntu
      2.1 Hardware requirements
      2.2 Fresh Installation
      2.3 Install a classic Gnome-appearing User Interface
      2.4 Dual-Booting Windows and Ubuntu
      2.5 Installing multiple OS on a single computer
      2.6 Use Startup Manager to change Grub settings
      2.7 Dual-Booting Mac OS X and Ubuntu
        2.7.1 Installing Mac OS X after Ubuntu
        2.7.2 Installing Ubuntu after Mac OS X
        2.7.3 Upgrading from older versions
        2.7.4 Reinstalling applications after
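    Sections 1.3.1 and 1.3.2 cover finding out which Ubuntu version and kernel you are running. As a small sketch, the usual terminal checks (lsb_release -a and uname -r) can be wrapped in Python:

        # Sketch: print the distribution description and the running kernel.
        # Assumes an Ubuntu/Debian-style system with lsb_release available.
        import subprocess

        release = subprocess.run(["lsb_release", "-a"], capture_output=True, text=True)
        kernel = subprocess.run(["uname", "-r"], capture_output=True, text=True)

        print(release.stdout)                   # distributor, description, release, codename
        print("Kernel:", kernel.stdout.strip())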