Welcome to the Jupyter Project documentation. This website acts as “meta” documentation for the Jupyter ecosystem. It has a collection of resources to navigate the tools and communities in this ecosystem, and to help you get started.
Get started with Jupyter Notebook
Try the notebook
Community
Sustainability and growth
Architecture
What is Jupyter?
Contributor Guides
How to contribute to the projects
Narratives and Use Cases
Narratives of common deployment scenarios
Release Notes
New features, upgrades, deprecation notes, and bug fixes
IPython
An interactive Python kernel and REPL
Reference
APIs
Installation, Configuration, and Usage
Documentation for users
Advanced
Documentation for advanced use-cases
Jupyter User Interfaces
Jupyter Notebook
Jupyter console
Qt console
JupyterHub
configurable HTTP proxy
dockerspawner
ldapauthenticator
oauthenticator
sudospawner
Education
nbgrader
Conversion and Formatting
nbconvert
nbformat
Kernels
IRkernel
IJulia
Community maintained kernels
ipykernel
ipyparallel
jupyter_client
jupyter_core
Deployment
docker-stacks
jupyter-sphinx-theme
kernel_gateway
nbviewer
tmpnb
traitlets
JupyterLab
ipywidgets
Contents
Try in Your Browser. No Installation Needed.
Are You Ready to Install Jupyter?
Try Jupyter (https://try.jupyter.org) is a site for trying out the Jupyter Notebook, equipped with kernels for several different languages (Julia, R, C++, Scheme, Ruby), without installing anything. You will also find an example there for trying out the new JupyterLab interface.
When running the examples on the Try Jupyter site, you will get a temporary Jupyter server running on mybinder.org which you can use to play around until you close your browser session.
The Classic Notebook example should look like this:
Notebook Dashboard
Notebook Editor
If you have tried Jupyter and like it, please use our detailed Installation Guide to install Jupyter on your computer.
This page contains information and links about installing and using tools across the Jupyter ecosystem. Generally speaking, the documentation of each tool is the place to learn about the best-practices for how to install and use the tool.
The Jupyter Notebook interface is a web-based application for authoring documents that combine live code with narrative text, equations, and visualizations.
GitHub Repo
Docs
This section includes instructions on how to get started with Jupyter Notebook. There are multiple Jupyter user interfaces you can use, depending on your needs. Please check out the list and links below for additional information and instructions about how to get started with each of them.
This information explains how to install the Jupyter Notebook and the IPython kernel.
While Jupyter runs code in many programming languages, Python is a requirement (Python 3.3 or greater, or Python 2.7) for installing the Jupyter Notebook.
We recommend using the Anaconda distribution to install Python and Jupyter. We’ll go through its installation in the next section.
For new users, we highly recommend installing Anaconda. Anaconda conveniently installs Python, the Jupyter Notebook, and other commonly used packages for scientific computing and data science.
Use the following installation steps:
Download Anaconda. We recommend downloading Anaconda’s latest Python 3 version (currently Python 3.7).
Install the version of Anaconda which you downloaded, following the instructions on the download page.
Congratulations, you have installed Jupyter Notebook. To run the notebook:
jupyter notebook
See Running the Notebook for more details.
Important
Jupyter installation requires Python 3.3 or greater, or Python 2.7. IPython 1.x, which included the parts that later became Jupyter, was the last version to support Python 3.2 and 2.6.
As an existing Python user, you may wish to install Jupyter using Python’s package manager, pip, instead of Anaconda.
First, ensure that you have the latest pip; older versions may have trouble with some dependencies:
pip3 install --upgrade pip
Then install the Jupyter Notebook using:
pip3 install jupyter
(Use pip if using legacy Python 2.)
Congratulations. You have installed Jupyter Notebook. See Running the Notebook for more details.
Upgrading Jupyter Notebook using Anaconda
Upgrading IPython Notebook to Jupyter Notebook
If using Anaconda, update Jupyter using conda:
conda update jupyter
See Running the Notebook for how to run the Jupyter Notebook.
The Jupyter Notebook used to be called the IPython Notebook. If you are running an older version of the IPython Notebook (version 3 or earlier) you can use the following to upgrade to the latest version of the Jupyter Notebook.
Or, if using pip:
pip install -U jupyter
See also
The migrating document has additional information about migrating from IPython 3 to Jupyter.
JupyterLab is a next-generation web-based user interface for Project Jupyter.
Install instructions
JupyterHub is a multi-user hub for interactive computing sessions, made for teams and organizations, and with pluggable authentication and scalability.
The Jupyter Console is a terminal-based console for interactive computing.
Docs and Install instructions
The Jupyter QtConsole is a Qt application for interactive computing with rich output.
You can install Jupyter Kernels to add support for new languages and code behavior.
This information gives a high-level view of using Jupyter Notebook with different programming languages (kernels).
Yes, installing the Jupyter Notebook will also install the IPython kernel. This allows working on notebooks using the Python programming language.
To install an additional version of Python, i.e. to have both Python 2 and 3 available, see the IPython docs on installing kernels.
To run notebooks in languages other than Python, such as R or Julia, you will need to install additional kernels. For more information, see the full list of available kernels.
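After installing a new kernel, you can confirm that Jupyter can find it by listing the installed kernelspecs from the command line:

jupyter kernelspec list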
Detailed installation instructions for individual Jupyter or IPython projects.
Information about additional programming language kernels.
Technical information about kernels.
Basic Steps
Starting the Notebook Server
Introducing the Notebook Server’s Command Line Options
How do I open a specific Notebook?
How do I start the Notebook using a custom IP or port?
How do I start the Notebook server without opening a browser?
How do I get help about Notebook server options?
Start the notebook server from the command line:

jupyter notebook

You should see the notebook open in your browser.
After you have installed the Jupyter Notebook on your computer, you are ready to run the notebook server. You can start the notebook server from the command line (using Terminal on Mac/Linux, Command Prompt on Windows) by running:
This will print some information about the notebook server in your terminal, including the URL of the web application (by default, http://localhost:8888):
$ jupyter notebook
[I 08:58:24.417 NotebookApp] Serving notebooks from local directory: /Users/catherine
[I 08:58:24.417 NotebookApp] 0 active kernels
[I 08:58:24.417 NotebookApp] The Jupyter Notebook is running at: http://localhost:8888/
[I 08:58:24.417 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
It will then open your default web browser to this URL.
When the notebook opens in your browser, you will see the Notebook Dashboard, which will show a list of the notebooks, files, and subdirectories in the directory where the notebook server was started. Most of the time, you will wish to start a notebook server in the highest level directory containing notebooks. Often this will be your home directory.
The following command opens the given notebook in the currently running notebook server, starting one if necessary:
jupyter notebook notebook.ipynb
By default, the notebook server starts on port 8888. If port 8888 is unavailable or in use, the notebook server searches for the next available port. You may also specify a port manually. In this example, we set the server's port to 9999:
jupyter notebook --port 9999
Start notebook server without opening a web browser:
jupyter notebook --no-browser
The notebook server provides help messages for other command line arguments using the --help flag:
jupyter notebook --help
Detailed information about command line arguments, configuration, and usage.
Information relevant to using the various tools in the Jupyter ecosystem.
Description
Narrative examples
The Notebook Narratives explore uses of the Jupyter Notebook in a variety of applications.
Using the Notebook for data exploration
Using extensions and widgets
Using nbconvert for code execution and workflow simplification
Using nbconvert for publishing
Using multiple language kernels
Note
We’re actively working on this section of the documentation to improve it for you. Thanks for your patience.
JupyterHub Narratives explore deployment and scaling of the Jupyter Notebook for a group of users. JupyterHub allows flexibility in configuration and deployment which makes JupyterHub valuable to education, industry research teams, and service providers. In these Narratives, we will look at differences in deployment, deployment advantages, and best practices.
A basic JupyterHub deployment
A reference deployment of JupyterHub using Docker
Teaching a Course with JupyterHub and nbgrader using a reference deployment on a single server and Ansible scripts to automate set up
Teaching a Course with JupyterHub, nbgrader, and containers
JupyterHub deployments using Containers including Docker
This section presents some examples of integrating different projects together. These projects form the foundation of innovative services and provide building blocks for future applications.
A Narrative about Creating Dashboards
A Narrative about Thebe
A Narrative about Hydrogen
The purpose of this page is to highlight kernels and other projects that are central to the usage of Jupyter in data science. This page is not meant to be comprehensive or unbiased, but rather to provide an opinionated view of the usage of Jupyter in data science based on our interactions with users.
The following Jupyter kernels are widely used in data science:
IPython (GitHub Repo)
IRkernel (Documentation, GitHub Repo)
IRdisplay (GitHub Repo)
repr (GitHub Repo)
IJulia Kernel (GitHub Repo)
Interactive Widgets (GitHub Repo)
Bash (GitHub Repo)
Example Use-Cases
Businesses, especially those that have started their ‘digital transformation’ journey, are producing ever-increasing volumes of data. Enterprise data science aims to unearth the hidden value of those digital assets, which are typically siloed, uncategorized, and inaccessible to humans.
Jupyter and JupyterHub can play a major role in related initiatives, especially in companies with an established open-source culture. The intent of this page is to provide you with ideas about how Jupyter technology can fit into your organization’s processes and system landscapes, by providing real-world examples and showcases.
Beyond Interactive: Notebook Innovation at Netflix
Part 2: Scheduling Notebooks at Netflix
PayPal Notebooks: Data science and machine learning at scale, powered by Jupyter (JupyterCon 2018 · video)
Bloomberg BQuant platform
Jupyter & Python in the corporate LAN
DevOps Intelligence with JupyterHub
We’re actively working on this section of the documentation to improve it for you. If you’ve got a suggestion for a resource that would be helpful, please create an issue or a pull request!
Narratives are collaborative, shareable, publishable, and reproducible. We believe that Narratives help both yourself and other researchers by sharing your use of Jupyter projects, technical specifics of your deployment, and installation and configuration tips so that others can learn from your experiences.
jupyter <subcommand> [options]
Commands like jupyter notebook start Jupyter applications. The jupyter command is primarily a namespace for subcommands. A command like jupyter-foo found on your PATH will be available as a subcommand jupyter foo.
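As a minimal sketch of that mechanism, a small executable script placed on your PATH under the hypothetical name jupyter-hello becomes callable as jupyter hello:

#!/usr/bin/env python
# Hypothetical subcommand: save this file as "jupyter-hello" somewhere on
# your PATH and mark it executable; "jupyter hello" will then run it.
print("Hello from a Jupyter subcommand!")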
The jupyter command can also be used to do actions other than starting a Jupyter application.
-h, --help
Show help information, including available subcommands.
--config-dir
Show the location of the config directory.
--data-dir
Show the location of the data directory.
--runtime-dir
Show the location of the runtime directory.
--paths
Show all Jupyter directories and search paths.
--json
Print directories and search paths in machine-readable JSON format.
Configuration files
Data files
Runtime files
Summary
Jupyter stores different files (i.e. configuration, data, runtime) in a number of different locations. Environment variables may be set to customize the location of each file type.
Jupyter separates data files (nbextensions, kernelspecs) from runtime files (logs, pid files, connection files) from configuration (config files, custom.js).
Config files are stored by default in the ~/.jupyter directory.
JUPYTER_CONFIG_DIR
Set this environment variable to use a particular directory, other than the default, for Jupyter config files.
Besides the JUPYTER_CONFIG_DIR, additional directories to search can be specified through JUPYTER_CONFIG_PATH.
JUPYTER_CONFIG_PATH
Set this environment variable to provide extra directories for the config search path.
JUPYTER_CONFIG_PATH should contain a series of directories, separated by os.pathsep (; on Windows, : on Unix).
An example of where the JUPYTER_CONFIG_PATH can be set is if notebook or server extensions are installed in a custom prefix. Since notebook and server extensions are automatically enabled through configuration files, automatic enabling will only work if the custom prefix’s etc/jupyter directory is added to the Jupyter config search path.
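For instance, a minimal Unix shell sketch, assuming a hypothetical custom prefix /opt/myenv, that prepends that prefix's etc/jupyter directory to the config search path:

export JUPYTER_CONFIG_PATH="/opt/myenv/etc/jupyter:$JUPYTER_CONFIG_PATH"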
Besides the user config directory mentioned above, Jupyter has a search path of additional locations from which a config file will be loaded. Here’s a table of the locations to be searched, in order of preference:

| Unix | Windows |
| {sys.prefix}/etc/jupyter/ | {sys.prefix}/etc/jupyter/ |
| /usr/local/etc/jupyter/ and /etc/jupyter/ | %PROGRAMDATA%\jupyter\ |
To list the config directories currently being used, run the following command from the command line:
jupyter --paths
The following command shows the config directory specifically:
jupyter --config-dir
Jupyter uses a search path to find installable data files, such as kernelspecs and notebook extensions. When searching for a resource, Jupyter checks each directory on the search path, in order, until it finds the resource.
Each category of file is in a subdirectory of each directory of the search path. For example, kernel specs are in kernels subdirectories.
JUPYTER_PATH
Set this environment variable to provide extra directories for the data search path. JUPYTER_PATH should contain a series of directories, separated by os.pathsep (; on Windows, : on Unix). Directories given in JUPYTER_PATH are searched before other locations. This is used in addition to other entries, rather than replacing any.
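As a sketch, on Unix you might prepend a shared directory (the path /opt/shared/jupyter here is hypothetical) so that its kernels and nbextensions subdirectories are searched first:

export JUPYTER_PATH="/opt/shared/jupyter:$JUPYTER_PATH"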
The user data directory varies by platform:

| Platform | User data directory |
| Linux (& other free desktops) | JUPYTER_DATA_DIR or (if not set) ~/.local/share/jupyter/ (respects $XDG_DATA_HOME) |
| Mac | JUPYTER_DATA_DIR or (if not set) ~/Library/Jupyter |
| Windows | JUPYTER_DATA_DIR or (if not set) %APPDATA%\jupyter |

The search path also includes:

| Unix | Windows |
| {sys.prefix}/share/jupyter/ | {sys.prefix}/share/jupyter/ |
| /usr/local/share/jupyter and /usr/share/jupyter | %PROGRAMDATA%\jupyter |
JUPYTER_DATA_DIR
The directory for Jupyter data files, which contain non-transient, non-configuration files. Examples include kernelspecs, nbextensions, or voila templates. Set this environment variable to use a particular directory, other than the default, as the user data directory.
As mentioned above, to list the config directories currently being used, run the following command from the command line:

jupyter --paths

The following command shows the data directory specifically:
jupyter --data-dir
Things like connection files, which are only useful for the lifetime of a particular process, have a runtime directory.
On Linux and other free desktop platforms, these runtime files are stored in $XDG_RUNTIME_DIR/jupyter by default. On other platforms, it’s a runtime/ subdirectory of the user’s data directory (see the user data directory table above).
An environment variable may also be used to set the runtime directory.
JUPYTER_RUNTIME_DIR
Set this to override where Jupyter stores runtime files.
The following command shows the runtime directory specifically:
jupyter --runtime-dir
JUPYTER_CONFIG_DIR for config file location
JUPYTER_CONFIG_PATH for config file locations
JUPYTER_PATH for data file directory locations
JUPYTER_DATA_DIR for data file location
JUPYTER_RUNTIME_DIR for runtime file location
jupyter_core.paths
The Python API to locate these directories.
Locate these directories from the command line.
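A minimal Python sketch of that API, using functions provided by jupyter_core.paths:

# Inspect Jupyter's directories from Python.
from jupyter_core.paths import (
    jupyter_config_dir,   # user config directory (default: ~/.jupyter)
    jupyter_data_dir,     # user data directory
    jupyter_runtime_dir,  # runtime files (e.g. kernel connection files)
    jupyter_path,         # full data-file search path, highest priority first
)

print(jupyter_config_dir())
print(jupyter_data_dir())
print(jupyter_runtime_dir())
print(jupyter_path())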
The Python config file
Command line options for configuration
Common Jupyter configuration system
The Jupyter applications have a common config system, and a common config directory. By default, this directory is ~/.jupyter.

Kernel configuration directories
If kernels use config files, these will normally be organised in separate directories for each kernel. For instance, the IPython kernel looks for files in the IPython directory (~/.ipython) instead of the default Jupyter directory ~/.jupyter.
To create a default config file, run:
jupyter {application} --generate-config
The generated file will be named jupyter_application_config.py.
By editing the jupyter_application_config.py file, you can configure class attributes like this:
c.NotebookApp.port = 8754
Be careful with spelling. Incorrect names will simply be ignored, with no error message.
To add to a collection which may have already been defined elsewhere, you can use methods like those found on lists, dicts, and sets: append, extend, prepend (like extend, but at the front), add, and update (which works both for dicts and sets):
c.TemplateExporter.template_path.append('./templates')
Every configurable value can also be set from the command line and passed as an argument, using this syntax:
jupyter notebook --NotebookApp.port=8754
Frequently used options will also have short aliases and flags, such as --port 8754 or --no-browser.
To see the abbreviated options, pass --help or --help-all as follows:
jupyter {application} --help      # Just the short options
jupyter {application} --help-all  # Includes options without short names
Command line options will override options set within a configuration file.
traitlets.config
The low-level architecture of this config system.
Abstract
Understanding the Migration Process
Automatic migration of files
Where have my configuration files gone?
Finding the Location of Important Files
Data files: kernelspecs and notebook extensions
Since Jupyter does not have profiles, how do I customize it?
Changing the Jupyter notebook configuration directory
Changing the Jupyter notebook configuration file
Changing IPython’s profile using custom kernelspecs
Understanding Installation Changes
Notebook extensions
Understanding Changes in imports
The Big Split moved IPython’s various language-agnostic components under the Jupyter umbrella. Going forward, Jupyter will contain the language-agnostic projects that serve many languages. IPython will continue to focus on Python and its use with Jupyter.
This document describes what has changed, and how you may need to modify your code or configuration when migrating from IPython version 3 to Jupyter.
The first time you run any jupyter command, it will perform an automatic migration of files. The automatic migration process copies files, instead of moving files, leaving the originals in place and the copies in the Jupyter file locations. You can re-run the migration, if needed, by calling jupyter migrate. Your custom configuration will be migrated automatically and should work with Jupyter without further editing. When you update or modify your configuration in the future, please keep in mind that the file locations may have changed.
Also known as: “Why isn’t my configuration having any effect anymore?”
Jupyter splitting out from IPython means that the locations of some files have moved, and Jupyter projects have not inherited everything from how IPython did it.
When you start your first Jupyter application, the relevant configuration files are automatically copied to their new Jupyter locations. The original configuration files in the IPython locations have no effect on Jupyter’s execution. If you accidentally edit your original IPython config file, you may not see the desired effect with Jupyter now. You should check that you are editing Jupyter’s configuration file, and you should see the expected effect after restarting the Jupyter server.
This section provides quick reference for common locations of IPython 3 files and the migrated Jupyter files.
Configuration files customize Jupyter to the user’s preferences. The migrated files should all be automatically copied to their new Jupyter locations. Here are the location changes:
| IPython location | Jupyter location |
| ~/.ipython/profile_default/static/custom | ~/.jupyter/custom |
| ~/.ipython/profile_default/ipython_notebook_config.py | ~/.jupyter/jupyter_notebook_config.py |
| ~/.ipython/profile_default/ipython_nbconvert_config.py | ~/.jupyter/jupyter_nbconvert_config.py |
| ~/.ipython/profile_default/ipython_qtconsole_config.py | ~/.jupyter/jupyter_qtconsole_config.py |
| ~/.ipython/profile_default/ipython_console_config.py | ~/.jupyter/jupyter_console_config.py |
To choose a directory location other than the default ~/.jupyter, set the JUPYTER_CONFIG_DIR environment variable. You may need to run jupyter migrate after setting the environment variable for files to be copied to the desired directory.
Data files include files, other than configuration files, which are user installed. Examples include kernelspecs and notebook extensions. Like the configuration files, data files are also automatically migrated to their new Jupyter locations.
In IPython 3, data files lived in ~/.ipython.
In Jupyter, data files use platform-appropriate locations:
OS X: ~/Library/Jupyter
Windows: the location specified in the %APPDATA% environment variable
Elsewhere, $XDG_DATA_HOME is respected, with a default of ~/.local/share/jupyter
In all cases, the JUPYTER_DATA_DIR environment variable can be used to set a location explicitly.
Data files installed system-wide (e.g. in /usr/local/share/jupyter) have not changed. Per-user installation of data files has changed location from .ipython to the platform-appropriate Jupyter location.
While IPython has the concept of profiles, Jupyter does not have profiles.
In IPython, profiles are collections of configuration and runtime files. Inside the IPython directory (~/.ipython), there are directories with names like profile_default or profile_demo. In each of these are configuration files (ipython_config.py, ipython_notebook_config.py) and runtime files (history.sqlite, security/kernel-*.json). Profiles could be used to switch between configurations of IPython.
Previously, people could use commands like ipython notebook --profile demo to set the profile for both the notebook server and the IPython kernel. This is no longer possible in one go with Jupyter, just like it wasn’t possible in IPython 3 for any other kernels.
If you want to change the notebook configuration, you can set the JUPYTER_CONFIG_DIR:
JUPYTER_CONFIG_DIR=./jupyter_config jupyter notebook
If you just want to change the config file, you can do:
jupyter notebook --config=/path/to/myconfig.py
If you do want to change the IPython kernel’s profile, you can’t do this at the server command line anymore. Kernel arguments must be changed by modifying the kernelspec. You can do this without relaunching the server; kernelspec changes take effect every time you start a new kernel. However, there isn’t a great way to modify kernelspecs. One approach uses jupyter kernelspec list to find the kernel.json file (e.g. kernels/python3/kernel.json) and then modifies it by hand. Alternatively, a2km is an experimental project that tries to make these things easier.
For example, add the --profile option to a custom kernelspec under kernels/mycustom/kernel.json (see the Jupyter kernelspec directions here):
{ "argv": ["python", "-m", "ipykernel", "--profile=my-ipython-profile", "-f", "{connection_file}"], "display_name": "Custom Profile Python", "language": "python" }
You can then run Jupyter with the --kernel=mycustom command-line option and IPython will find the appropriate profile.
See the Install and Use page for more information about installing Jupyter. Jupyter automatically migrates some things, like Notebook extensions and kernels.
Any IPython notebook extensions should be automatically migrated as part of the data files migration.
Notebook extensions were installed with:
ipython install-nbextension [--user] EXTENSION
Now, extensions are installed with:
jupyter nbextension install [--user] EXTENSION
The notebook extensions will be installed in a system-wide location (e.g. /usr/local/share/jupyter/nbextensions). If doing a --user install, the notebook extensions will go in the JUPYTER_DATA_DIR location. Installation SHOULD NOT be done manually by guessing where the files should go.
Kernels are installed in much the same way as notebook extensions. They will also be automatically migrated.
Kernel specs used to be installed with:
ipython kernelspec install [--user] KERNEL
They are now installed with:
jupyter kernelspec install [--user] KERNEL
By default, kernel specs will go in a system-wide location (e.g. /usr/local/share/jupyter/kernels). If doing a --user install, the kernel specs will go in the JUPYTER_DATA_DIR location. Installation SHOULD NOT be done manually by guessing where the files should go.
IPython 4.0 includes shims to manage dependencies, so all imports that work on IPython 3 should continue to work on IPython 4. If you find any differences, please let us know.
Some changes include:
| IPython 3 | Jupyter and IPython 4.0 |
| IPython.html | notebook |
| IPython.html.widgets | ipywidgets |
| IPython.kernel | jupyter_client, ipykernel |
| IPython.parallel | ipyparallel |
| IPython.qt.console | qtconsole |
| IPython.utils.traitlets | traitlets |
| IPython.config | traitlets.config |
The IPython.kernel Split
IPython.kernel became two packages:
jupyter_client for the Jupyter client-side APIs.
ipykernel for Jupyter’s IPython kernel.
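A rough sketch of the split in practice: client-side code starts and talks to a kernel through jupyter_client, while ipykernel provides the kernel being launched (the kernel name "python3" assumes a standard IPython kernel installation):

# Launch an IPython kernel and send it code to execute.
from jupyter_client import KernelManager

km = KernelManager(kernel_name="python3")
km.start_kernel()
kc = km.client()
kc.start_channels()
kc.execute("x = 1 + 1")  # returns a message id; replies arrive over ZeroMQ channels
kc.stop_channels()
km.shutdown_kernel()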
Jupyter can be used for many different use-cases. Below are a few user stories for when and how you might use tools in the Jupyter ecosystem, as well as a diagram that helps lay out some of your options.
[Decision diagram: “What are you trying to do?” Green-ringed shapes link within this site; blue shapes open external sites in a new tab/window. The paths are: Try the notebook → no installation needed → Try Online (https://try.jupyter.org/) → “I like it, and I wish to install it!” → Install Jupyter; Install the notebook → on my system → Install Jupyter; Install notebooks for my team, class, or group → JupyterHub; Convert my notebook file to another format → nbconvert; Use another programming language such as R or Java → install a language kernel; Customize the notebook for my needs → Widgets, Extensions, Dashboards, or Help.]
The Jupyter community is composed of several sub-communities and projects. These are organized around particular use-cases, users, or other aspects of the Jupyter community. This section contains information to help navigate these projects both from the perspective of a user and a community member.
The Jupyter user interfaces offer a foundation of interactive computing environments where scientific computing, data science, and analytics can be performed using a wide range of programming languages.
Web-based application for authoring documents that combine live-code with narrative text, equations and visualizations. Documentation | Repo
Terminal based console for interactive computing. Documentation | Repo
Qt application for interactive computing with rich output. Documentation | Repo
The Jupyter team maintains the IPython kernel since the Jupyter notebook server depends on the IPython kernel functionality. Many other languages, in addition to Python, may be used in the notebook.
The community maintains many other language kernels, and new kernels become available often. Please see the list of available kernels for additional languages and kernel installation instructions to begin using these language kernels.
Kernels are programming language specific processes that run independently and interact with the Jupyter Applications and their user interfaces. IPython is the reference Jupyter kernel, providing a powerful environment for interactive computing in Python.
interactive computing in Python. Documentation | Repo
interactive widgets for Python in the Jupyter Notebook. Documentation | Repo
lightweight parallel computing in Python offering seamless notebook integration. Documentation | Repo
Jupyter kernels
A full list of kernels available for other languages. Many of these kernels are developed by third parties and may or may not be stable.
Jupyter Notebooks offer exciting and creative possibilities in education. The following subprojects are focused on supporting the use of Jupyter Notebook in a variety of educational settings.
tools for managing, grading, and reporting of notebook based assignments. Documentation | Repo
To serve a variety of users and use cases, these subprojects are being developed to support notebook deployment in various contexts, including multiuser capabilities and secure, scalable cloud deployments.
Multi-user notebook for organizations with pluggable authentication and scalability. Documentation | Repo
Store notebooks on Google Drive. Documentation | Repo
Share notebooks as static HTML on the web. Documentation | Repo
Create temporary, transient notebooks in the cloud. Documentation | Repo
Deployment tools for tmpnb. Documentation | Repo
Deploy notebooks for JupyterHub inside Docker containers. Documentation | Repo
Stacks of Jupyter applications and kernels as Docker containers. Documentation | Repo
Notebooks are rich interactive documents that combine live code, narrative text (using markdown), visualizations, and other rich media. The following utility subprojects allow programmatic format conversion and manipulation of notebook documents.
Convert dynamic notebooks to static formats such as HTML, Markdown, LaTeX/PDF, and reStructuredText. Documentation | Repo
Work with notebook documents programmatically. Documentation | Repo
Background
Resources
IPython provides a rich architecture for interactive computing with:
A powerful interactive shell.
A kernel for Jupyter.
Support for interactive data visualization and use of GUI toolkits.
Flexible, embeddable interpreters to load into your own projects.
Easy to use, high performance tools for parallel computing.
IPython is a growing project, with increasingly language-agnostic components. IPython 3.x was the last monolithic release of IPython, containing the notebook server, qtconsole, etc. As of IPython 4.0, the language-agnostic parts of the project: the notebook format, message protocol, qtconsole, notebook web application, etc. have moved to new projects under the name Jupyter. IPython itself is focused on interactive Python, part of which is providing a Python kernel for Jupyter.
The projects with repos in the IPython organization on GitHub include:
IPython and ipykernel: interactive computing in Python.
ipyparallel: lightweight parallel computing in Python offering seamless notebook integration.
ipywidgets: interactive widgets for Python in the Jupyter Notebook.
The Jupyter architecture relies on these projects’ specifications and implementation.
The specification of the Jupyter message protocol and a client library in Python. Documentation | Repo
Core functionality and miscellaneous utilities. Documentation | Repo
Descriptions
Try the Incubator Projects
The Jupyter incubator gives emerging projects a place to evolve.
Interesting projects include:
content management extensions - Jupyter Content Management Extensions
dashboards - Jupyter Dynamic Dashboards from Notebooks
declarative widgets - Jupyter Declarative Widgets Extension
kernel gateway bundlers - Converts a notebook to a kernel gateway microservice bundle for download
showcase - A spot to try demos of one or more incubating Jupyter projects in Binder
sparkmagic - Jupyter magics and kernels for working with remote Spark clusters
traittypes - Traitlets types for NumPy, SciPy and friends
There’s also a repository with proposals of projects wishing to enter the incubator.
The showcase application allows you to try demos of one or more incubating Jupyter projects in Binder. Just head over to the showcase repo and press the Launch Binder button badge to launch the online trial. You should see an interactive notebook similar to this one:
This page has information about the different architectural designs of core pieces in the Jupyter ecosystem. Some of these are individual projects, and others show the relationships between projects.
This section focuses on IPython and kernels. When we discuss IPython, we talk about two fundamental roles:
Terminal IPython as the familiar REPL
The IPython kernel that provides computation and communication with the frontend interfaces, like the notebook
When you type ipython, you get the original IPython interface, running in the terminal. It does something like this:
while True:
    code = input(">>> ")
    exec(code)
Of course, it’s much more complex, because it has to deal with multi-line code, tab completion using readline, magic commands, and so on. But the model is like the code above: prompt the user for some code, and when they’ve entered it, execute it in the same process. This model is often called a REPL, or Read-Eval-Print Loop.
All the other interfaces (the Notebook, the Qt console, ipython console in the terminal, and third-party interfaces) use the IPython Kernel. The IPython Kernel is a separate process which is responsible for running user code, and things like computing possible completions. Frontends, like the notebook or the Qt console, communicate with the IPython Kernel using JSON messages sent over ZeroMQ sockets; the protocol used between the frontends and the IPython Kernel is described in Messaging in Jupyter.
The core execution machinery for the kernel is shared with terminal IPython:
A kernel process can be connected to more than one frontend simultaneously. In this case, the different frontends will have access to the same variables.
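For example, you can attach a terminal frontend to an already-running kernel; with no argument, --existing connects to the most recently started kernel:

jupyter console --existing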
This design was intended to allow easy development of different frontends based on the same kernel, but it also made it possible to support new languages in the same frontends, by developing kernels in those languages, and we are refining IPython to make that more practical.
Today, there are two ways to develop a kernel for another language. Wrapper kernels reuse the communications machinery from IPython, and implement only the core execution part. Native kernels implement execution and communications in the target language:
Wrapper kernels are easier to write quickly for languages that have good Python wrappers, like octave_kernel, or languages where it’s impractical to implement the communications machinery, like bash_kernel. Native kernels are likely to be better maintained by the community using them, like IJulia or IHaskell.
Making kernels for Jupyter
Jupyter Notebooks are structured data that represent your code, metadata, content, and outputs. When saved to disk, the notebook uses the extension .ipynb, and uses a JSON structure. For more information about the notebook format structure and specification, see the nbformat documentation.
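As a short sketch of working with this structure programmatically, the nbformat package can build and save a minimal notebook (the file name example.ipynb is arbitrary):

# Build a minimal nbformat v4 notebook and write it to disk as JSON.
import nbformat

nb = nbformat.v4.new_notebook()
nb.cells.append(nbformat.v4.new_code_cell("print('hello')"))
nbformat.write(nb, "example.ipynb")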
The Jupyter Notebook and its flexible interface extend the notebook beyond code to visualization, multimedia, collaboration, and more. In addition to running your code, it stores code and output, together with markdown notes, in an editable document called a notebook. When you save it, this is sent from your browser to the notebook server, which saves it on disk as a JSON file with a .ipynb extension.
The notebook server, not the kernel, is responsible for saving and loading notebooks, so you can edit notebooks even if you don’t have the kernel for that language—you just won’t be able to run code. The kernel doesn’t know anything about the notebook document: it just gets sent cells of code to execute when the user runs them.
The Nbconvert tool in Jupyter converts notebook files to other formats, such as HTML, LaTeX, or reStructuredText. This conversion goes through a series of steps:
Preprocessors modify the notebook in memory. E.g. ExecutePreprocessor runs the code in the notebook and updates the output.
An exporter converts the notebook to another file format. Most of the exporters use templates for this.
Postprocessors work on the file produced by exporting.
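A minimal Python sketch of these steps using nbconvert's API (the file names and timeout are illustrative):

# Execute a notebook in memory, then export the result to HTML.
import nbformat
from nbconvert import HTMLExporter
from nbconvert.preprocessors import ExecutePreprocessor

nb = nbformat.read("example.ipynb", as_version=4)

# Preprocessor step: run the code and update outputs in memory.
ExecutePreprocessor(timeout=60).preprocess(nb, {"metadata": {"path": "."}})

# Exporter step: render the notebook to another format via templates.
body, resources = HTMLExporter().from_notebook_node(nb)

with open("example.html", "w") as f:
    f.write(body)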
The nbviewer website uses nbconvert with the HTML exporter. When you give it a URL, it fetches the notebook from that URL, converts it to HTML, and serves that HTML to you.
IPython also includes a parallel computing framework, IPython.parallel. This allows you to control many individual engines, which are an extended version of the IPython kernel described above.
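A minimal ipyparallel sketch, assuming a cluster has already been started (e.g. with ipcluster start):

# Connect to the running engines and execute a function on each of them.
import ipyparallel as ipp

rc = ipp.Client()                      # connect to the cluster
view = rc[:]                           # a view on all engines
print(view.apply_sync(lambda: 2 + 2))  # one result per engine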
JupyterHub is a multi-user hub that spawns, manages, and proxies multiple instances of the single-user Jupyter notebook server. It can be used to serve a variety of interfaces and environments, and can be run on many kinds of infrastructure. JupyterHub on Kubernetes is a Helm chart for running JupyterHub on Kubernetes infrastructure, and BinderHub is a customized JupyterHub deployment for sharable, reproducible interactive computing environments.
The links below describe the architecture of JupyterHub and several distributions of JupyterHub.
JupyterHub core architecture
JupyterHub for Kubernetes architecture
BinderHub architecture
JupyterLab is a flexible, extensible interface for interactive computing. Below are a few links that are useful for understanding the JupyterLab architecture.
JupyterLab document model
JupyterLab notebook model
Design patterns in JupyterLab
Below is a high level visual overview of project relationships. It is current as of 2017.
Notebook Conversion and Formatting
Links to information on usage, configuration and development hosted on Read The Docs or in the GitHub project repo.
jupyter_console
configurable-http-proxy
List of community maintained language kernels
jupyter-drive
jupyter-js-notebook
jupyter-js-phosphide
jupyter-js-plugins
jupyter-js-services
jupyter-js-ui
jupyter-js-utils
When projects release a new version, they often include “release notes” with information about improvements, bug-fixes, and other changes. These projects are the best source of information about what has recently changed.
Below you’ll find links to the release notes for several major projects in the Jupyter ecosystem.
You can find a changelog at the Jupyter Notebook documentation.
You can find a changelog at the JupyterLab documentation.
You can find a list of changelogs in the JupyterHub community within the JupyterHub team compass documentation.
Information relevant to understanding the many projects in the Jupyter ecosystem, including their technical components and how they work and relate to one another.
Welcome to the Community Guides for Jupyter. These guides are intended to provide information about the Jupyter community such as background, events, and communication channels. As our community is highly dynamic, information may change, and we will do our best to keep it up to date.
The Jupyter Community Call is an open video call. Think of this as a “monthly, virtual JupyterCon”. It’s a place for anyone to announce and share fun things happening in the Jupyter community. Everyone is welcome (even if you’re not presenting). We record these videos and post them on YouTube. For more information, read this discourse thread.
Date: November 17th, 2020, at 9am Pacific (your timezone)
Discourse
Youtube Link
Please Note:
Community calls are recorded and posted to this playlist
These notes will be recorded and posted here
Everyone present is held to the Jupyter Code of Conduct
Think of it as a monthly, virtual JupyterCon. It’s a place to announce and share fun things happening in the Jupyter community.
For more discussion on the format of these calls, see the thread here.
This is a place to make short announcements (without a need for discussion).
Zach: Shout-out to Isabela for reviving the Jupyter Community Calls :tada:
Add agenda items here before the meeting. We will reorganize the agenda so that it fits in the 60m meeting slot.
🦌 IPyElk 0.2.0
Jupyter Widgets for interactive graphs at scale powered by the Eclipse Layout Kernel (ELK), sprotty, networkx.
dgaf (deathbeds generalized automation framework)
Eric: What is a Jovyan?
Name
Institution
GitHub Handle
Nick Bollweg
Jupyter, Deathbeds, GTRI
@bollwyvl @nrbgt
tony fast
@deathbeds
Eric Charles
Datalayer, Quansight
@echarles
Isabela Presedo-Floyd
Quansight
@isabela-pf
Zach Sailer
Apple
@Zsailer
A. T. Darian
Two Sigma
@afshin
Date: September 24, 2019, at 9am Pacific (your timezone)
Link: Youtube Video
Co-locating a Jupyter Server and Dask Scheduler, by Matthew Rocklin [name=tonyfast]
Language server protocol
current work on jupyterlab-lsp [name=krassowski]
wip binder [name=bollwyvl]
importing notebooks with importnb [name=tonyfast]
importnb provides easy, flexible reuse of IPython notebooks from other notebooks, .py scripts and libraries, and even the command line.
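A quick sketch of that reuse, following importnb's documented context-manager pattern (the module name my_notebook is hypothetical):

from importnb import Notebook

with Notebook():
    import my_notebook  # loads my_notebook.ipynb as a regular module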
🤖 RobotLab: a one-click Jupyter and Robot Framework environment [name=bollwyvl]
RobotLab was built to support a workshop introducing acceptance-test-driven development and robotic process automation with robotkernel, a Robot Framework kernel, in JupyterLab. It is built with conda, constructor, and Azure Pipelines. You can use the installers to learn how you can use Robot Framework and Jupyter, or adapt its pipeline to produce your own cross-platform installers.
RobotLab
Project Drawdown [name=dgentry]
Project Drawdown is a global research organization that identifies, reviews, and analyzes the most viable solutions to climate change, and shares these findings with the world. We’ll show current state of climate solution models using Voila and JupyterHub, and ask for suggestions about git operation driven from the Notebook.
Zach | Jupyter Cal Poly | @Zsailer |
Denton | Project Drawdown | @DentonGentry |
Nick | GTRI | @nrbgt |
Chico Venancio | BMC Group K. K. | @chicocvenancio |
Tony Fast | Quansight | @tonyfast |
Wayne Decatur | Upstate Medical | @fomightez |
Kevin Bates | IBM | @kevin-bates |
Carol | Jupyter | @willingc |
Date: August 27, 2019, at 9am Pacific (your timezone)
Welcome!
[x] Tracking Jupyter Newsletter, 22nd edition. Thank you, Tony Hirst!
[x] JupyterLab 1.1.0rc0 is up on PyPI and conda-forge. Please test! JupyterLab 1.1.0 will be released tomorrow.
pip install --pre jupyterlab==1.1.0rc0, or
conda install -c conda-forge/label/prerelease-jupyterlab jupyterlab=1.1.0rc0
[ ] [name=yournamehere]
[x] [name=Jupyter Cal Poly Intern Team] Rich Text JupyterLab extension
[x] [name=Jupyter Cal Poly Intern Team] Python Package Installer JupyterLab extension
[x] [name=Tony Fast] Gist to Binder
https://gist.github.com/Zsailer/6da0dc3c97ec873685b7fe58e52d36d7
People say reproducibility is hard.
Use mybinder.org to generate a live notebook from a Gist.
Dependencies in requirements.txt lead to a faster build.
[x] [name=Saul Shanabrook] JupyterLab Rich Context Extensions
[ ] Data registry with HDF5 Viewer
[ ] It’s meant to make it easier to create new extensions in JupyterLab that deal with different data formats. If you have a data viewer extension, you shouldn’t care whether the data came from a file, a notebook output, or a database; you just want to be able to render a certain format.
[ ] Able to visualize HDF5 datasets using this extension too!
[ ] Metadata/linked data service
If you are joining the Jupyter Community Call, sign in below so we know who was here.
Derek | Jupyter Cal Poly (Intern) | @dLamSlo8 |
Isabela | Jupyter Cal Poly (Intern) | @isabela-pf |
Marisa | Jupyter Cal Poly (Intern) | @marisaaquilina |
Markelle | Jupyter Cal Poly (Intern) | @markellekelly |
Javier | Jupyter Cal Poly (Intern) | @javag97 |
Denton | Project Drawdown | @dgentry |
Tony | Quansight | @tonyfast |
Chico Venancio| BMC Group K. K.| @chicocvenancio|
Elizabeth DuPre | McGill University | @emdupre|
Wayne Decatur | Upstate Medical University | @fomightez|
Saul Shanabrook | Quansight | @saulshanabrook |
Joe | Mavenomics | @quigleyj-mavenomics|
A. T. Darian | Two Sigma | @afshin |
Erik Sundell | Sandvik | @consideRatio |
Date: June 25th, 2019, at 9am PST (your timezone)
Link Youtube Video
[ ] [name=willingc] Congrats again to our newest BinderHub/JupyterHub team member, Sarah :tada:
[ ] [name=Zsailer] Welcome the new Jupyter Cal Poly Intern team.
[ ] [name=tgeorgeux] Introduction to Rich Context Work: Metadata, Data Registry, Commenting and Annotation, Real-Time Collaboration, Telemetry.
Data Registry/Explorer
Commenting
Metadata
Telemetry
[ ] [name=Erik Sundell] repo2docker / MyBinder.org support for git repositories with Pipfile / Pipfile.lock coming up soon I hope ;D repo2docker#649
[ ] [name=kbates] Brief overview of Jupyter Enterprise Gateway
Questions:
Where can we find information to share with others?
See this post
Comments:
Add talks to EG docs
Check if Docker Stacks has a link to the EG docker stacks
Add link to this demo to the EG docs
[ ] [name=dgentry] existing system for git commit+push without requiring a forked repo? Similar to submitting homework for grading?
nbgrader and hubshare mentioned by Zach
nbstripout / fastai-nbstripout can play a part of making git merges more reasonable
Ask the Jupyter education community
Jupyter Education Mailing List
Check out solutions from Gigantum, a tech company started out of Johns Hopkins
[ ] [name=YourNameHere]
Erik | Sandvik | @consideRatio |
Denton | individual | @DentonGentry |
Tim | Jupyter Cal Poly | @tgeorgeux |
Kevin | IBM | @kevin-bates |
Wayne | Upstate Medical University| @Fomightez |
Dav | Gigantum | @davclark |
Mike Trizna | Smithsonian | @MikeTrizna |
Anton Akhmerov | TU Delft | @akhmerov |
Markelle | Jupyter Cal Poly | |
Saul | Quansight | @saulshanabrook |
Yair | Mavenomics | @YairMarcowMavenomics|
Ruixin | Microsoft | @ruixinxu |
Date: 28 May 2019 at 9am PST (your timezone)
The purpose of these monthly video conference calls is to share and demonstrate the awesome things happening in the Jupyter community. We invite anyone to present their work, engage in discussion, or just sit in and listen. Whether you have a new lab extension you’ve created, a new JupyterHub deployment you’re excited about, or an nteract papermill pipeline powering your business, we’d love to hear about it! And we’ll record and publish these calls on YouTube for anyone unable to attend.
This is a place to make short announcements (without a need for discussion). This is also a great place to give shout-outs to contributors! We’ll read through these at the beginning of the meeting.
Hello from The Turing Way Book Dash :books::dash::dash: in London. This call is right at the end of a day of 18 people working together to improve the openly developed Jupyter book. Our goal is to make “Reproducibility too easy not to do” (:laughing::grimacing::scream_cat:), build a strong and supportive community, and give people the knowledge and confidence they need to contribute to open source projects [name=Kirstie & Turing Way team]
GitHub repo: https://github.com/alan-turing-institute/the-turing-way
The book: https://the-turing-way.netlify.com
Chat room: https://gitter.im/alan-turing-institute/the-turing-way
Add your potential agenda item here 24 hours before the meeting at the latest. We will reorganize the agenda so that it fits in the 60m meeting slot.
Feedback and report from Research Software Reactor Sprint [name=Sarah]
Event website: https://www.microsoftevents.com/profile/form/index.cfm?PKformID=0x6927743abcd
Project list: https://github.com/research-software-reactor
New binderhub in the world :baby: :confetti_ball: https://twitter.com/gerardjgorman/status/1131097133292183553
Feedback and report from Jupyter Server Design and Roadmap workshop [name=Luciano]
sponsored by Bloomberg
thomas, kevin, etc.
functionality directions, high-level roadmap
decouple frontend from backend: notebook, lab, nteract, voila
using the backend functionality, providing different UIs
how do we accomplish that? jupyter_server extension points
ui
backend
handlers
roadmap
phase 1: integrating with existing UIs (notebook classic, lab)
phase 2: expand functionality
kernel provider pr (e.g. different kernel managers, like Enterprise Kernel Gateway)
next steps:
pr from zak
blog post (illustration)
biweekly calls
timeline:
6.0 release of notebook?
post jupyterlab 1.0?
Qt(5.6)-based PDF output from JupyterLab (0.35): quick demo of using QtWebEngine to load a notebook in lab and make a PDF. (n+1)th time’s the charm! | issue | repo | [name=Nick]
If you are joining the All-Jupyter Community video meeting, sign in below so we know who was here. Roll call:
| Zach | Jupyter Cal Poly | @Zsailer |
| Amit | ReviewNB | @amit1rrr |
| Kirstie | Alan Turing Institute | @KirstieJane |
| Pete | Thorn | @parente |
| Tony | Quansight | @tonyfast |
| Nick | GTRI | @bollwyvl |
| Luciano | IBM | @lresende |
| Sarah | The Alan Turing Institute | @sgibson91 |
| Saul | Quansight | @saulshanabrook |
| James | MUN | @jmunroe |
| Darian | Two Sigma | @afshin |
| Matthias| UC Merced | @carreau |
| Tania | Microsoft | @trallard |
| Carol | Project Jupyter | @willingc |
| Chris | UNC-Chapel Hill | @cbcunc |
Date: 30 April 2019 at 9am PST (your timezone)
Video-conference link: https://calpoly.zoom.us/my/jupyter
Hello!
[x] Shout-out to @GrahamDumpleton for continued well-researched improvements to security (e.g., https://github.com/jupyter/docker-stacks/pull/845) [name=Peter Parente]
[x] Some initial work on enabling Jupyter documentation translation is happening in https://github.com/jupyter/docker-stacks/issues/827. If you would like to contribute a translation, review one in progress, or help document the process itself, please reach out in the issue. Thanks to @michiboo (Micky), @Nico769 (Nicola), and @Allanfs (Allan) for starting translations! [name=Peter Parente]
[x] Work is being done to use BinderHub fully in AWS (hosted docker registry, ECR, and hosted git repository, CodeCommit are the current areas). [name=Chico Venancio]
[x] JupyterLab 1.0.0a3 has been released since our last call. There are a ton of updates. One thing I specifically want to give a shout out to is the document search feature that Andrew Schlaepfer built which allows building custom search functionality for different document types. Install the new version to take it for a spin: pip install --pre jupyterlab [name=Darian]
[x] RISE 5.5.0 was released, more info about the release in this blogpost: http://damianavila.github.io/blog/posts/rise-550-is-out.html [name=Damián]
[x] A big shout-out to Paul Ivanov for helping organize a “Jupyter Open Studio Day” at Bloomberg in SF. Lots of great people came, shared ideas, learned, and generally celebrated open source stuff! [name=Chris H (but I won’t be able to attend the meeting)]
[x] (a few quick shout-outs from Chris) Papermill 1.0 is released! Congrats to the nteract community for releasing papermill 1.0, a tool for parametrizing / pipelining notebooks. See the Discourse for some more links. [name=Chris H (but I won’t be able to attend the meeting)]
[x] Many thanks to @KirstieJane and @betatim for improving the contributing documentation in repo2docker (link to one PR on this) [name=Chris H (but I won’t be able to attend the meeting)]
[x] IPython 7.5 is out! (mostly bugfixes) [name=Matthias, at airport so will stay muted] (Shout out to Matthias for this too!)
[x] Matthias, Paul, Carol… are at PyCon for a tutorial (Thursday), and have Jupyter stickers! [name=Matthias, at airport so will stay muted]
[ ] add item here [name=add your name]
[x] Updates from the nteract project [name=Safia Abdalla]
New release of the cross-platform nteract desktop app is out.
papermill v1.0 is released.
[x] Jupyter at the University of Edinburgh, a walkthrough of the Noteable service [name=James Slack]
Overview of what’s happening at University of Edinburgh
Large support teams.
Very open mindset.
http://open.ed.ac.uk/
How Jupyter came to the University
Small scale adoption to start
Students were installing Jupyter on university computers.
Make Jupyter a shared central service.
JupyterHub service university wide for use in education.
Pilot phase of Noteable.
6 courses with 6 different schools (~600 students).
Political science, art, computer science…
Integrated with nbgrader.
Jupyter Community Hackathon on nbgrader in Edinburgh.
Questions and comments:
Connections or interactions with Canadian group? https://syzygy.ca [name=Lindsey Heagy]
Is this running in the cloud or university server? [name=Anton Akhmerov]
University server
Have you explored Binderhub? [name=Chico Vernancio]
BinderHub now can run with “auth” and persistent storage, see https://github.com/jupyterhub/binderhub/issues/794 for more details and a live demo (ask @betatim if you have more questions)
[ ] Introducing the rewrite of jupyter-sphinx (example here) and a discussion about markup format in the notebooks (https://github.com/jupyter/nbformat/issues/80) [name=Anton Akhmerov]
Publishing documents with complex content and computed output.
Questions/Comments: [name=matthias (can’t speak)]: I’d love to merge that with IPython’s built-in sphinx extension, and potentially deprecate that extension since it’s not well maintained (sphinx-ext docs).
[ ] Quantum Chemistry Database exploration with Jupyter through the QCArchive project, [name=Daniel Smith].
The Molecular Sciences Software Institute
NSF funded.
Designed to serve and enhance software development efforts of the broad field of computational molecular science.
8 universities represented on Board of Directors
Github org: https://github.com/MolSSI
https://molssi.org
QCFractal
Join our Slack to get involved.
Questions/comments:
Is this user-created content or hosted content?
Completely user created.
[ ] May 13 & 14 W4A Hackathon on Accessibility of JupyterLab
[ ] Add your item here [name=and your name]
Contact james.slack@ed.ac.uk
Noteable - https://noteable.edina.ac.uk/
Blog https://thinking.is.ed.ac.uk/noteable/
nbgrader Hackathon - http://edin.ac/2SGmzNu
Presentation - https://edin.ac/2PBJc1T
Name / Institution / Github Handle
Zach / Jupyter Cal Poly / @Zsailer
Pete / Thorn / @parente
James Slack / University of Edinburgh / @jamesaslack
Anton Akhmerov / TU Delft / @akhmerov
Peter Rose / UC San Diego / @pwrose
A. T. Darian / Two Sigma / @afshin
Ward Harold / Google / @wkharold
Damián Avila / / @damianavila
Chico Venancio / BMC Group K.K. / @chicocvenancio
Eric Lesch / BMC Group K.K. / @EricLesch
James Munroe / Memorial University of Newfoundland / @jmunroe
Lindsey Heagy / UC Berkeley / @lheagy
Daniel Smith / The Molecular Sciences Software Institute / @dgasmith
Michael Milligan / U. Minnesota/MSI / @mbmilligan
Kevin Bates / IBM / @kevin-bates
Cindy Wu / experiment.com / @cindywu
Joe Hamman / NCAR / @jhamman
Nick Bollweg / GTRI / @bollwyvl
Tom Baldwin / Cascade Data Labs / @baldwint
Pete Blois / Google Colab / @blois
Ana Ruvalcaba / Cal Poly, San Luis Obispo/Jupyter / @ruv7
Tim Head / Project Binder / Zurich / @betatim
Matthias Bussonnier / Jupyter Project / @carreau
Tim George / Cal Poly, San Luis Obispo/Jupyter / @tgeorgeux
Date: 26 March 2019 at 9am PST (your timezone)
Link to prior meeting’s virtual meeting report
[x] Shout out to Damian Avila for the second community docker stack: umsimads/education-notebook [Peter Parente, jupyter/docker-stacks]
[x] Thank you Tony Hirst for your weekly newsletter Tracking Jupyter [Carol Willing, JupyterHub]
[x] Thank you to Professor Lorena Barba at GWU for hosting the recent Jupyter team meeting [Carol Willing, JupyterHub]
[x] Kudos, Zach Sailer et al, for kicking off the revitalization of this meeting over on the Jupyter Discourse site [Peter Parente (and everyone in attendance I suspect)]
[x] Leave your feedback about our new all-Jupyter community call here
[x] Come join us on the Jupyter Discourse Forum! (in case you aren’t already tired of me telling you this :-) )
https://discourse.jupyter.org/t/introduce-yourself/17
Turn off your ad blocker if you want to log in with Google or GitHub.
Let’s collect all potential agenda items here before the start of the meeting (closing 24 hours before). We will then attempt to create a coherent agenda that fits in the 60-minute meeting slot. If there are similar items, please group them together.
[x] Suggested demo: docker-stacks project now posts image build manifests back to the GitHub project wiki for inspection [Peter Parente, https://github.com/jupyter/docker-stacks/wiki, 5 minutes]
[x] Introduction to mybinder.org, how to make a repository ready for running on mybinder.org [Tim Head, 10 minutes]
https://mybinder.org/
https://github.com/betatim/my-first-binder
Lots of example repositories: https://github.com/binder-examples/
Not just notebooks: https://github.com/betatim/vscode-binder
Documentation on how to specify dependencies and such https://repo2docker.readthedocs.io/en/latest/
Questions, comments and support: https://discourse.jupyter.org/c/questions
Write questions you have here:
…
Is it true that you personally are the only user of https://github.com/Carreau/open-with-binder ?
If running an event/tutorial type thing with Binder, how many users should we feel comfortable sending to MyBinder vs setting up our own Z2JH infrastructure?
By default it prevents more than 100 concurrent launches of the same repo. You can ask for more if you are nice.
fun fact: apparently the reason this launch was slow is because somebody is teaching a “learn java” course on Binder just now and a bunch of people connected all at once https://github.com/santanche/java2learn
Show the log dashboard :P
sure thing: https://grafana.mybinder.org/d/fZWsQmnmz/pod-activity?refresh=1m&panelId=1&fullscreen&orgId=1
We’d love to figure out a funding model for binder
[x] [10 minutes] Update from the nteract team [Safia].
Version 1.0 of papermill, the notebook execution and parameterization library, will be released in early April. (https://github.com/nteract/papermill)
Scrapbook, a tool for extracting and assembling generated outputs and data from notebooks (known as scraps), will allow you to host scraps remotely on various cloud providers. (https://github.com/nteract/scrapbook)
Plan laid out for version 1.0 of the kernel-relay, a GraphQL API for interfacing with Jupyter kernels. (https://github.com/nteract/kernel-relay)
Effort to modularize the Data Explorer, an automatic visualization library, is underway.
nteract is participating in Google Summer of Code (https://summerofcode.withgoogle.com/organizations/5447807656263680/)
How to learn more:
nteract Slack https://slack.nteract.io/
Weekly meeting on Monday afternoons
Where is the Data Explorer repo?
[x] deathbeds stuff [Nick, github.com/deathbeds, 5min].
lintotype: interactive linting & formatting
wxyz: some crazy widgets
The SVG messed with my eyes. <3
Can you expand upon the “cell-id” things?
cell id in lintotype
cell id in irobotframework
Are those supposed to be extensions? Built-in?
Ready to be packaged (not yet); some of this could go into ipywidgets.
[x] IPython releases [Matthias, IPython, https://github.com/ipython/ipython, 2min.]
Trying to do a monthly release close to the end of the month.
7.4 has been released! Thanks to everyone.
If you want to help with 7.5 subscribe to the following issue.
https://github.com/ipython/ipython/issues/11657
[x] Async kernel startup [Matthias, Jupyter Client, https://github.com/jupyter/jupyter_client/pull/428, 4 min.]
add item here [name, project name/url, estimated time for discussion].
Don’t feel limited to technical discussions!
Name | Institution | Github Handle
Zach | Jupyter Cal Poly | @Zsailer
Kyle | Netflix | @rgbkrk
Pete | Valassis Digital | @parente
Carol | Project Jupyter | @willingc (will miss due to OpenEdX conference)
Brian | AWS | @ellisonbg
Darian | Two Sigma | @afshin
Ward | Google | @wkharold
Tim | Project Jupyter | @betatim
Chris | Berkeley/Project Jupyter| @choldgraf
Pete | Google Colab | @blois
Matthias | UC Merced | @carreau
Saul | Quansight | @saulshanabrook
Vidar | Simula Research | @vidartf
Lindsey Heagy | UC Berkeley | @lheagy
Safia Abdalla | Microsoft/nteract | @captainsafia
Leticia Portella | - | @leportella
Damián Avila | Project Jupyter | @damianavila
Craig Citro | Google Colab | @craigcitro
Nick Bollweg | GTRI | @bollwyvl
Michael Milligan | U Minnesota / MSI | @mbmilligan
Liang Bin Hsueh | InfuseAI | @hlb
Thomas Vander Velde | - | @tomasdelcampo
Luciano Resende | IBM | @lresende
Tony Fast | Quansight | @tonyfast
James Slack | University of Edinburgh | @jamesaslack
Erik Sundell | Sandvik | @consideratio
The Jupyter community often meets (usually online) to discuss matters in the community, share new ideas and discussions, and connect with one another. This often happens within specific sub-projects (more information below), though there are also occasional community-wide meetings.
The following calendar shows the various meetings and events from Jupyter sub-projects:
All-Jupyter Community Calls generally happen on the last Tuesday of the month, and are focused around demonstrations and sharing information across all of the Jupyter projects.
Find information on this Discourse thread.
Watch previous calls on our YouTube channel.
Read the notes from previous calls.
In addition, you can find the notes from previous community meetings below.
The core developers of various Jupyter sub-projects have regular meetings to discuss and demo what they have been working on, discuss future plans, and bootstrap conversation. These meetings are public and you are welcome to join remotely.
Each team has their own processes around logistics and planning for the team meetings. The following pages should help you find the information for each.
JupyterHub meetings happen monthly. For a calendar of future team meetings, see the JupyterHub team compass repository.
JupyterLab meetings happen weekly. For more information about when these meetings happen, as well as notes from each meeting, see the JupyterLab README.
General meeting conversation and planning often happens in the dev-meeting-attendance Gitter channel. We recommend checking it periodically for new information about when meetings are happening.
As a general rule, most project-wide conversation happens in the Jupyter community forum. There are also many other kinds of communication that happen within the community. See below for links and other relevant information.
Community forum https://discourse.jupyter.org/
Blog https://blog.jupyter.org/
Newsletter https://newsletter.jupyter.org/
Website https://jupyter.org
Twitter https://twitter.com/ProjectJupyter
Gitter https://gitter.im/jupyter/jupyter
Mailing lists (Jupyter, Jupyter in Education) https://jupyter.org/community.html
Steering council: Information about the steering council and its members can be found on the Jupyter website.
Information about Jupyter’s governance process can be found on the Jupyter governance website.
Jupyter Enhancement Proposal (JEP) process: Details about the process can be found in the enhancement proposals website.
Information can be found in the Jupyter Code of Conduct page.
Members of the Jupyter community often get together to share what they’re working on, to work together, and to teach and learn from one another.
If you’re organizing an event with the Jupyter community (whether it’s as small as a JupyterDays meetup, or as large as JupyterCon) you should ensure that the event follows the values and goals of the Jupyter project - to be a place where everyone feels welcome and supported and that reflects the diversity of developers and users in the Jupyter community.
Shoot for having 25% of your participants come from under-represented groups. If you’re organizing a Jupyter event, here are some resources to help out.
Mozilla has a number of excellent resources on hosting open events, including the Mozilla Open Events guide, a resource for planning and running an open, inclusive event. In particular, its section on making events more inviting is a good way to make your event more welcoming, interesting, and accessible to the Jupyter community. Finally, the Diversity and Inclusion wiki includes conference calls related to this topic.
The NumFocus DISCOVER cookbook is another collection of resources for making your event more productive, diverse, and inclusive.
The National Center for Women & Information Technology has an excellent collection of resources for creating a diverse and inclusive environment. In particular, we recommend their Inclusive Environment Assessment Guide and 10 actionable ways to actually increase diversity.
The PyCascades community has several efforts in improving diversity and inclusion.
Write the Docs has a Welcome Wagon Guide to help first-time attendees feel welcome and included.
Ultimately, making events more inclusive is not rocket science and there is no magic bullet. It requires clear, focused dedication, planning ahead, and sustained resources and effort over time. However, we believe this effort is worth it!
You may see the word Jovyan used in Jupyter tools (such as the user ID in the Jupyter Docker stacks) or referenced in conversations. But what is a Jovyan?
In astronomical terms, the word “Jovian” means “like Jupiter”. It describes several planets that share Jupiter-like properties.
Much like the planet Jupiter and our solar system, the Jupyter community is large, distributed, and nebulous. We like to use the word Jovyan to describe members of this community. Jovyans are fellow open enthusiasts that use, develop, promote, teach, learn, and otherwise enjoy tools in Jupyter’s orbit. They make up the Jupyter community. If you’re not sure whether you’re a Jovyan, you probably are :-)
Major Repos and Issue Types
Python
Documentation
JavaScript/TypeScript
DevOps
Web Development
Welcome fellow contributor! We appreciate your help.
We typically label issues appropriate for new contributors as good first issue or help wanted. To start on an issue, you may comment to let everyone know that you’ll be working on it. Our repos typically provide development installation instructions in each repo’s CONTRIBUTING.md or README.md. Should you find a problem with our development installation instructions, please let us know in our issues; we want to ensure that our documentation for development installation is accurate. Additionally, other contributors are often available to answer questions about fixing the issue on the issue thread or in the repo’s Gitter channel.
We strive to have an inclusive, welcoming community. Please read our code of conduct to learn more.
IPython, nbgrader, JupyterHub, repo2docker, and Binder are major repos written primarily in Python.
Notable Issues
Development Installation
Gitter Channel
All our repos have documentation issues that are relevant for new contributors. For example, the source for the documentation you are reading right now can be found in the jupyter/jupyter repository on GitHub.
Jupyter Notebook, JupyterLab, and IPyWidgets use JavaScript and TypeScript.
Development Installation
JupyterHub, repo2docker, and Binder have many issues related to devops. See the links above.
We have issues related to the website.
Developer Installation
Submitting Pull Requests
Contribution Workflow
Core Developer Workflow
Submitting a Bug
Reporting a Vulnerability
Contributing to open source can be a nerve-wracking process, but don’t worry: everyone on the Jupyter team is dedicated to making sure that your open source experience is as fun as possible. At any time during the process described below, you can reach out to the Jupyter team on Gitter or the mailing list for assistance. If you are nervous about asking questions in public, you can also reach out to one of the Jupyter developers in private. You can use the public Gitter to find someone who has the best knowledge about the code you are working with and interact with them in a personal chat.
As you begin your open source journey, remember that it’s OK if you don’t understand something, it’s OK to make mistakes, and it’s OK to only contribute a small amount of the code necessary to fix the issue you are tackling. Any and all help is welcome and any and all people are encouraged to contribute.
Individuals are welcome, and encouraged, to submit pull requests and contribute to the Jupyter source. If you are a first-time contributor looking to get involved with Jupyter, you can use the following query in a GitHub search to find beginner-friendly issues to tackle across the Jupyter codebase. This query is particularly useful because the Jupyter codebase is scattered across several repositories within the jupyter organization, as opposed to a single repository. You can click the link below to find sprint-friendly issues.
is:issue is:open is:sprint-friendly user:jupyter
Once you’ve found an issue that you are eager to solve, you can use the guide below to get started. If you experience any problems while working on the issue, leave a comment on the issue page in GitHub and someone on the core team will be able to lend you assistance.
Please keep in mind that what follows are guidelines. If you work through the steps and have questions or run into time constraints, please submit what you already have worked on as a pull request and ask questions on it. Your effort, including partial or in-progress work, is appreciated.
Fork the repository associated with the issue you are addressing and clone it to a local directory on your machine.
cd into the directory and create a new branch using git checkout -b insert-branch-name-here. Pick a branch name that gives some insight into the issue you are fixing. For example, if you are updating the text that is logged by the program when a certain error happens, you might name your branch update-error-text.
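For example, the first two steps might look like the following on the command line (a sketch; the fork URL and branch name are placeholders):
# Clone your fork and create a descriptively named topic branch
git clone https://github.com/your-username/notebook.git
cd notebook
git checkout -b update-error-text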
Refer to the repository’s README and documentation for details on configuring your system for development.
Identify the module or class where the code change you will make will reside and leave a comment in the file describing what issue you are trying to address.
Open a pull request to the repository with [WIP] appended to the front so that the core team is aware that you are actively pursuing the issue. When creating a pull request, make sure that the title clearly and concisely describes what your code does. For example, we might use the title “Updated error message on ExampleException”. In the body of the pull request, make sure that you include the phrase “Closes #issue-number-here”, where the issue number is the issue number of the issue that you are addressing in this PR.
Feel free to open a PR as early as possible. Getting early feedback on your approach will save you time and prevent the need for an extensive refactor later.
Run the test suite locally in order to ensure that everything is properly configured on your system. Refer to the repository’s README for information on how to run the test suite. This will typically require that you run the nosetests command on the command line. Alternatively, you may submit a pull request. Our Continuous Integration system will test your code and report test results.
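For instance, a minimal local test run might look like this (a sketch; the exact runner varies by repository, so check its README):
# Install the package in development mode, then run the test suite
pip install -e .
nosetests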
Find the test file associated with the module that you will be changing. In the test file, add some tests that outline what you expect the behavior of the change should be. If we continue with our example of updating the text that is logged on error, we might write test cases that check to see if the exception raised when you induce the error contains the appropriate string. When writing test cases, make sure that you test for the following things.
What is the simplest test case I can write for this issue?
What will happen if your code is given messy inputs?
What will happen if your code is given no inputs?
What will happen if your code is given too few inputs?
What will happen if your code is given too many inputs?
If you need assistance writing test cases, you can place a comment on the pull request that was opened earlier and one of the core team members will be able to help you.
Go back to the file that you are updating and begin adding the code for your pull request.
Run the test suite again to see if your changes have caused any of the test cases to pass. If any of the test cases have failed, go back to your code and make the updates necessary to have them pass.
Once all of your test cases have passed, commit both the test cases and the updated module and push the updates to the branch on your forked repository.
Once you are ready for your pull request to be reviewed, remove the [WIP] tag from the front of the title, and a project reviewer will review your code for quality. You can expect the reviewer to check the documentation provided in the changes you made, how thorough your test cases are, and how efficient your code is. Your reviewer will provide feedback on your code and you will have the chance to edit your code and apply fixes.
Once your PR is ready to become a part of the code base, it will be merged by a member of the core team.
To help you understand our review process by core developers after you submit a pull request, here’s a guide that outlines the general process (specifics may vary a bit across our repositories). Here is an example for Jupyter notebook 4.x:
In general, Pull Requests are against master unless they only affect a backport branch. If a PR affects master and should be backported, the general flow is:
mark the PR with milestone for the next backport release (4.3)
merge into master
backport to 4.x
push updated 4.x branch
Backports can be done in a variety of ways, but we have a script for automating the common process to:
download the patch, e.g. https://patch-diff.githubusercontent.com/raw/jupyter/notebook/pull/1645.patch
checkout the 4.x branch
apply the patch
make a commit
which works for simple cases, at least.
In this case, it would be:
python /path/to/ipython-repo/tools/backport_pr.py jupyter/notebook 4.x 1645
While using the Notebook, you might experience a bug that manifests itself in unexpected behavior. If so, we encourage you to open issues on GitHub. To make navigating issues easier for both developers and users, we ask that you take the following steps before submitting an issue.
Search through StackOverflow and existing GitHub issues to ensure that the issue has not already been reported by another user. If so, provide your input on the existing issue if you think it would be valuable.
Prepare a small, self-contained snippet of code that will allow others to reproduce the issue that you are experiencing.
Prepare information about the environment that you are executing the code in, in order to aid in the debugging of the issue. You will need to provide information about the Python version, Jupyter version, operating system, and browser that you are using when submitting bugs. You can also use pip list or conda list and grep in order to identify the versions of the libraries that are relevant to the issue that you are submitting.
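For example, one way to gather this information from a shell (a sketch; the grep pattern is only illustrative):
# Print interpreter and Jupyter versions, then filter for relevant packages
python --version
jupyter --version
pip list | grep -i -E "jupyter|notebook|ipython|tornado"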
Prepare a simple test that outlines the expected behavior of the code, or a description of what the expected behavior should be.
Prepare an explanation of why the current behavior is not desired and what it should be.
If you believe you’ve found a security vulnerability in a Jupyter project, please report it to security@ipython.org. If you prefer to encrypt your security reports, you can use this PGP public key.
While using the Notebook, you might discover opportunities for growth and ideas for useful new features. If so, feel free to submit an enhancement proposal. The process for submitting enhancements is as follows:
Identify the scope of the enhancement. Is it a change that affects only one part of the codebase? Is the enhancement, to the best of your knowledge, fairly trivial to implement? If the scope of the enhancement is small, it should be submitted as an issue in the project’s repository. If the scope of your enhancement is large, it should be submitted to the official Jupyter Enhancement Proposals repository.
Prepare a brief write-up of the problem that your enhancement will address.
Prepare a brief write-up of the proposed enhancement itself.
If the scope of your enhancement (as defined in step 1) is large, then prepare a detailed write-up of how your enhancement can be potentially implemented.
Identify a brief list of the pros and cons associated with implementing the enhancement that you propose.
Identify the individuals who might be interested in implementing the enhancement.
Depending on the scope of your enhancement, submit it either as an issue to the appropriate repository or as a Jupyter Enhancement Proposal.
Jupyter consists of a bunch of small projects, and a few larger ones. This lays out the basic process of releasing a smaller project, which should also apply to larger projects, though they may have some added steps.
Most Jupyter projects use a GitHub milestone system for marking issues and pull requests in releases. Each release should have a milestone associated with it. The first step in preparing for a release is to make sure that every issue and pull request has the right milestone.
Go through any open Issues and Pull Requests marked with the current milestone. If there are any, they need to be resolved or bumped to the next milestone. It’s fine to bump issues - they are typically marked with the earliest feasible milestone, but many such optimistically marked tasks aren’t complete when it’s time to release. There’s always next time!
Check closed Issues and Pull Requests, using the milestone filter “Issues with no milestone”. There should never be any closed issues or pull requests without a milestone. If you find any, go through and mark them with the current milestone or “no action” as appropriate.
A release may be ready to go when it has zero open issues or pull requests.
Once all of the issues and pull requests are dealt with, it’s time to make release notes. The smaller projects generally have a changelog.rst in the docs directory, where you can add a section for the new release. Look through the pull requests merged for the current milestone (this is why we use milestones), and write a short summary of the highlights of the changes in this release. There should generally be a link to the milestone itself for more details.
Make a pull request with these notes. It’s a good idea to cc @willingc for review of this PR. Make sure to mark this PR with your release’s milestone!
Now that your changelog is merged, we can actually build and publish the release. We’ll assume that V has been declared as a shell variable containing the release version:
export V=5.1.2
Start by making sure you have a clean checkout of master, with no extra files:
git pull
git clean -xfd
First, update the version of the package, often in the file <pkg>/_version.py or similar.
Commit that change:
git commit -am "release $V"
At this point, I like to run the tests just to be sure that setting the version didn’t confuse anything.
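For example (a sketch; use whatever test runner the project’s README specifies):
# Reinstall with the new version number and run a quick sanity check
pip install -e .
pytest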
Build the distributions:
python setup.py sdist --formats=gztar
python setup.py bdist_wheel
Tag the commit:
git tag -am "release $V" $V
And finally, publish everything to GitHub and PyPI using twine:
twine upload dist/*
git push origin
git push origin --tags
We have a release! You can now bump the version to the next ‘.dev’ version, by editing <pkg>/_version.py (or similar) again, and commit:
git commit -am "back to dev" git push origin
The pushes assume that origin points to the main jupyter/ipython repo. Depending how you use git, this could be upstream or something else.
Whether you are a new contributor or a seasoned developer, we’re pleased that you are working on Jupyter. We hope you find the Developer Guide is useful. Please suggest changes or ask questions about the contents. Thanks!
If you are interested in installing a specific project from source, each project has documentation on ReadTheDocs. For example, IPython documentation can be found on ReadTheDocs. Most of our packages can be installed from the source directory like any other Python package, by running:
pip install .
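For development work, an editable install is often more convenient, since source edits take effect without reinstalling (a general Python convention; check each project’s contributing guide for its preferred setup):
# Install in editable/development mode
pip install -e .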
The Jupyter notebook needs some extra pieces to build JavaScript components; the information about that is in the notebook contributor documentation.
Preparing for your first contribution
Developing your contribution
Clone the repository
Edit the documentation source file
Testing changes
Creating a pull request
Asking questions
Our documentation uses reStructuredText, Markdown, and Jupyter notebooks.
We use Sphinx extensively to build documentation.
We use Transifex to help translate documentation to multiple languages.
We host our documentation on Read the Docs.
Jupyter’s documentation is split across several projects, listed on the Jupyter documentation home page. These instructions apply to all Jupyter projects, though some projects have further contribution guidelines.
Fork the appropriate project repository on GitHub, depending on which project’s documentation you want to contribute to.
Clone the repository to your system.
Source files for projects are typically found in the project’s docs/source directory. The reStructuredText filenames end with .rst, and Jupyter notebook files end with .ipynb.
In your favorite text editor, make the desired changes to the .rst file when working with a reStructuredText source file.
If a notebook file requires editing, you will need to install the Jupyter notebook according to the Installation document. Then, run the Jupyter notebook and edit the desired file. Before saving the .ipynb file, please clear the output cells; then save the file and test your change.
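If you prefer the command line, recent versions of nbconvert can clear outputs for you (a sketch; the filename is a placeholder):
# Strip all output cells in place before committing
jupyter nbconvert --clear-output --inplace docs/source/example.ipynb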
Sphinx should be installed to test your documentation changes. For best results, we recommend that you install the stable development version of Sphinx (pip install git+https://github.com/sphinx-doc/sphinx@stable) or the current released version (pip install sphinx).
In addition, you may need the following packages: sphinxcontrib-spelling, sphinx_rtd_theme, nbsphinx, pyenchant, recommonmark 0.4.0 and jupyter_sphinx_theme, which can be installed via pip install sphinxcontrib-spelling sphinx_rtd_theme nbsphinx pyenchant recommonmark==0.4.0 jupyter_sphinx_theme.
If you are on Linux, you may also need to install the Enchant C library by running sudo apt-get install enchant.
Once everything is installed, the following commands should be executed using the Terminal/command line from the docs directory:
make html builds a local html version of the documentation. The output message will either display errors or provide the location of the html documents. For example, the location provided may be build/html and to view these documents in your browser enter open build/html/index.html.
make linkcheck will check whether the external links in the documentation are valid or no longer current (e.g., cause a 404 Not Found error).
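Putting it together, a typical local documentation check looks like this (run from the docs directory; the open command is macOS-specific, so use your browser of choice elsewhere):
make html                    # build the HTML docs into build/html
open build/html/index.html   # preview the result locally
make linkcheck               # verify external links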
Note: We recommend using Python 3.4+ for building the documentation. If you are editing the documentation, you can use Python 2.7.9+ or the GitHub editor.
Once you are satisfied with your changes, submit a GitHub pull request, per the instructions above. If the documentation change is related to an open GitHub issue, please mention the issue number in the pull request message.
A project reviewer will look over your changes and provide feedback or merge your changes into the documentation.
Feel free to ask questions in the Google Group for Jupyter or on an open issue on GitHub.
High level documentation workflow
Identify a documentation change.
Typos: please go ahead and fix it (or report as a bug).
Open issues: leave a note in the issue comments that you are working on the issue.
New documentation: open an issue with your idea or suggestion. We’ll review the issue and work with you to identify next steps.
Update the source file.
Commit the change.
Test changes locally.
Open a pull request.
Check response of automated tests.
If tests pass: Nice job. Wait for reviewer feedback and/or your pull request to be merged.
If tests show an error: Revise and resubmit your pull request. You do not need to open a new pull request. If needed, please ask for assistance.
Celebrate your documentation contribution.
Repeat. If you would like suggestions for a new documentation issue to work on, please ask.
Thanks for contributing!
Packages
Source file formats
Sphinx themes
Git and Github Resources
For user documentation, contributor guides, and communications content, we use:
Sphinx
For developer API documentation (especially for JupyterLab js repos), we use:
swagger
We use the following input source file formats when developing Sphinx documentation:
reStructuredText (.rst)
Markdown (.md)
Notebook (.ipynb)
A modern code editor should be used. Many are available including Atom, SublimeText, gedit, vim, emacs. Atom is a good choice for new contributors.
Our projects use the following themes:
sphinx_rtd_theme (currently used by Jupyter projects)
jupyter_sphinx_theme (used by ipywidgets)
If this is your first time working with Github or git, you can leverage the following resources to learn about the tools.
Try Git
Github Guides
Git Real
Git Documentation
Git Rebase
Contents:
docs directory : All source files for documentation go here.
readthedocs.yml : configuration file for readthedocs to build using conda
source directory : contains all content source files in .rst, .md, or .ipynb
makefile : used by Sphinx to build the docs
environment.yml : conda build instructions
conf.py : Sphinx configuration file
index.rst or contents.rst : Sphinx master table of contents file
_static directory : contains images, drawings, icons
_templates directory: overrides theme templates and layouts
build directory : html files generated by Sphinx (do not check this directory into GitHub)
Providing users and developers consistency across repos is a valuable time saver and improves user productivity.
On a larger scope, having the Jupyter name appear prominently in a repo’s README.md file improves the project’s name awareness.
Please include a link to the documentation in the repo’s description.
One common way that individuals find documentation is to look for and click on the doc badge that commonly is found right after the title. Another benefit is an easy visual indication if the docs are not rendering properly.
A Resources section at the end of the README.md gives useful links and information to users about the individual project and the larger Project Jupyter organization. Make sure to include any links to the individual project’s demo notebooks, if available.
The Resources section includes:
[ ] Add a link to documentation in repo description (requires GitHub repo privileges)
[ ] Add badges to README (Edit README.md and submit pull request)
[ ] Add resources section to README (Edit README.md and submit pull request)
Dated: 1-4-2016 Revised: 1-7-2016
This explains how to automatically rebuild documentation on ReadtheDocs every time a pull request is merged into its corresponding GitHub repo.
Webhooks and services can be enabled in GitHub repo settings to allow third party services such as ReadTheDocs. The ReadTheDocs service rebuilds the project documentation whenever a pull request is merged into the GitHub repo.
Each GitHub repo has a Settings tab at the far right of the repo menubar. Navigate to Settings and then the Webhooks & services submenu tab.
Select Add service and enter ReadTheDocs in the Available Services input box.
The Services/Add ReadTheDocs window will open. Press the green Add service button to activate the ReadTheDocs service.
The ReadTheDocs service is added successfully. The service will take effect on the next merged pull request to the project repo.
Created: 01-07-2016
We support and encourage the translation of Jupyter documentation to other languages as one way of making our community more inclusive and diverse. We are working toward having a consistent model for translation of Sphinx documentation across Jupyter projects based on prior work in the Python and Django communities. This project (https://jupyter.readthedocs.io) and the Jupyter Docker Stacks project are early adopters, meant to prove out the workflows described on this page.
After initial project setup, changes to Sphinx documentation and its translations follow a continuous integration (CI) and continuous deployment (CD) workflow, much like project source code.
Anyone is welcome to participate in writing and translating Jupyter documentation by participating in the workflow described below. This workflow has a handful of actors and components:
A person who makes changes to the English project documentation
A person who translates snippets of text in the English documentation into another language (locale)
Portable object files (.po) for the source documentation language (e.g., U.S. English, en-US) and for other locales (e.g., Brazilian Portuguese, pt-BR; Moroccan Arabic, ar-MA)
A continuous integration system like TravisCI, CircleCI, or GitHub Actions, responsible for extracting translatable messages from the English source documents and committing the resulting .po files to GitHub
ReadTheDocs, our preferred service for building and hosting documentation
Transifex, a localization platform with free plans for open source projects, a friendly web interface, and support for .po files
A user creates or edits reStructuredText (.rst) or Markdown (.md) documents written in U.S. English.
The user submits a pull request on GitHub.
A project maintainer reviews and merges the pull request.
ReadTheDocs runs Sphinx to convert U.S. English source documents into HTML (e.g., https://jupyter.readthedocs.io/en/latest/architecture/how_jupyter_ipython_work.html)
Meanwhile, the CI service runs Sphinx commands to extract translatable messages from U.S. English documents into en-US portable object (.po) files. For example:
# 5164fcd91a8a4700ac734562245773ad
#: ../../source/architecture/how_jupyter_ipython_work.rst:13
#: f68a21b0bc884dad9021c276e6490e6d
msgid ""
"The IPython kernel that provides computation and communication with the "
"frontend interfaces, like the notebook"
msgstr ""
The CI service commits the English .po files to the project on GitHub. (e.g., https://github.com/jupyter/jupyter/commit/1330bc409842d8b8a7bbb3a1c63259c34a543be0)
Transifex makes the messages in the English .po files available for translation in all configured languages.
Over time, translation teams use the Transifex web application to create, review, and update translations for those languages (e.g., https://docs.transifex.com/translation/translating-with-the-web-editor)
Transifex submits a pull request to the GitHub project containing a localized .po file when all of the English messages have been translated, and optionally reviewed, for a given language (e.g., https://github.com/jupyter/jupyter/pull/485). For example:
# 5164fcd91a8a4700ac734562245773ad
#: ../../source/architecture/how_jupyter_ipython_work.rst:13
msgid ""
"The IPython kernel that provides computation and communication with the "
"frontend interfaces, like the notebook"
msgstr ""
"The IPython kernel que fornece o cálculo e a comunicação com as interfaces "
"de frontend como o notebook"
ReadTheDocs once again runs Sphinx to convert U.S. English source documents into HTML.
ReadTheDocs also runs Sphinx to load localized .po files, substitute translations into the original English text, and convert those translated documents into HTML (e.g., https://jupyter.readthedocs.io/pt_BR/latest/architecture/how_jupyter_ipython_work.html)
Note: We recognize this flow assumes documentation starts life written in U.S. English. We should look into removing this assumption in the future if it becomes a significant barrier to new contributions.
We are delighted when members of the Jupyter community want to help translate documentation. We use Transifex to on-board translators in a friendly web interface without requiring knowledge of git, GitHub, Sphinx, or other software developer tools.
Getting Started as a Translator is an excellent on-boarding guide for new Transifex users. Follow the instructions to create an account. When prompted to join a team, look for jupyter-meta-documentation to start contributing translations to this documentation site. Alternatively, visit https://www.transifex.com/project-jupyter/jupyter-meta-documentation/ after creating your account and request to join the project. A project maintainer or language team coordinator will review and approve your request.
Transifex supports Reviewing Translations, peer review by members of a language team, to ensure translation quality. Project maintainers can choose whether Transifex should immediately send a pull request when translations of all text in a document are available or delay submitting a pull request until after all of those translations are also reviewed (the current setting for this project).
Project maintainers can also grant Transifex team members the role of language coordinator. Language coordinators have permission to invite users to language teams, approve or deny join requests, assign language team roles, and perform other administrative actions for a particular project language. Empowering trusted members of the community as coordinators can help grow translation teams without software developer involvement.
The translation CI/CD workflow described above requires configuration in GitHub and in Transifex to function. Project maintainers can follow the instructions below to enable translations for their Sphinx documentation.
Transifex organizes translation projects under organizations that mirror organizations and repositories on GitHub. At present, only the https://github.com/jupyter organization has a corresponding org on Transifex (https://www.transifex.com/project-jupyter/public/) with the following organization administrators:
@choldgraf
@parente
@willingc
GitHub users with permissions to install applications in a GitHub org can follow these instructions to create a new Transifex-GitHub organization link (e.g., for https://github.com/jupyterhub, https://github.com/jupyterlab).
Create a new user account at https://transifex.com.
Complete the sign-up wizard.
Create and name a new organization.
Click the organization drop down in the top right of the Transifex dashboard page and select Organization Settings.
Click Details in the left sidebar.
Click inviting administrators in the Management section to add additional admins to the Transifex org.
Click Manage integrations in the left sidebar.
Click Install the Transifex app in the GitHub section.
Select the GitHub organization to associate with the new Transifex organization.
Select the repositories that Transifex will have permission to access.
Return to the tab where you clicked Install the Transifex app and click authorize Transifex in the GitHub section.
Choose the GitHub organization you just configured in the popup dialog.
Note that you can revise the GitHub-Transifex integration at any time by visiting https://github.com/settings/installations.
Transifex organization administrators can follow the instructions below to configure new translation projects for GitHub projects in the GitHub org corresponding to the one on Transifex.
Visit https://www.transifex.com.
Sign in with the appropriate admin user account for the organization.
Click Create new project in the lower left sidebar.
Name the translation project after the project on GitHub.
Select Public as the privacy type, indicate that the project is open source, and provide the GitHub URL for the repository.
Select a file-based project.
Create a new team for the project.
Select English (en) as the source language.
Select known target languages. (You can add these later as well.)
Click Create project.
Click Settings under the project name in the left sidebar.
Click the Maintainers tab.
Invite additional project maintainers, typically software developers who will be responsible for maintaining the continuous integration and bootstrapping language teams.
Transifex organization admins and project managers can add translation languages to a project.
Click Languages under the project name in the left sidebar.
Click Edit languages.
Add or remove target translation languages.
Click Apply.
Organization admins, project maintainers, and team managers can add users to translation teams with the roles of language coordinator, reviewer, or translator.
Click Teams in the top nav bar.
Click the Invite Collaborators button in the top right.
Enter the username, email address, or full name of a person to add to the project. Note that the autocomplete in this field does not always display a popup for the user you wish to invite. Confirm you’ve entered the correct value and move on.
Select the role to assign to the user.
If the role applies to a specific team, select the team.
If the role applies to a specific language, select the language.
Click Invite more to enter additional users or Send invitation.
After configuring organization and project resources on Transifex, project developers can:
configure Sphinx to produce .po files for the source language and read .po files containing translations
configure Transifex to watch for source language .po file changes
configure the project CI service to update source language .po files when contributors make changes to the source documentation
The instructions in this section assume a git repository already contains Sphinx documentation in the following directory structure:
my-project/
  docs/
    build/              # built sphinx artifacts go here
    source/             # documentation source is in here
      conf.py           # sphinx config file
      index.rst         # root of the documentation
    requirements.txt    # sphinx, sphinx-intl, etc.
Project developers can do the following to configure Sphinx to seed source .po files and recognize translation .po files (a consolidated command sketch follows these steps).
Add sphinx-intl to your Sphinx project requirements.txt or environment.yaml if it does not already exist.
Run sphinx-intl create-txconfig in the docs/ directory.
Add the following to the Sphinx source/conf.py file.
# -- Translation -------------------------------------------------------------
gettext_uuid = True
locale_dirs = ["locale/"]
Run make gettext to extract all strings from the English source documentation.
Run sphinx-intl update -l en to generate the English source .po files.
Submit, review, and merge a pull request with the changes and generated .po files.
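Taken together, the local setup amounts to a short shell session (a sketch, run from the docs/ directory after adding the conf.py settings above):
# One-time setup: install the tool and generate the Transifex config
pip install sphinx-intl
sphinx-intl create-txconfig
# Extract translatable strings and generate the English source .po files
make gettext
sphinx-intl update -l en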
After merging the pull request, link the Transifex project to the GitHub repository.
Click the Integrations tab.
Click Link Repository in the GitHub section.
Select the appropriate GitHub repository and integration branch. Then click Next.
Copy and paste the following configuration into the dialog, adjusting the commented values as appropriate, and then click Next.
filters:
  - filter_type: dir
    file_format: PO
    source_file_extension: po
    # Change this if you selected a different source language during project setup
    source_language: en
    # The path in the GitHub repository where the source .po files reside
    source_file_dir: "docs/source/locale/en/LC_MESSAGES"
    # The path in the GitHub repository where translation .po files reside
    translation_files_expression: "docs/source/locale/<lang>/LC_MESSAGES"
Select when Transifex will submit translations back to the repository. Then click Save & Sync.
Click Close.
Watch the sync status progress.
Click Resources in the left sidebar.
Click one of the .po files to see translation progress by language.
Click one of the languages to see details about translation progress, translate text, and review translations. See the Translator workflows section above for details.
After confirming the initial English .po files have reached Transifex, set up continuous integration to ensure source strings are kept up-to-date in Transifex whenever the English documentation changes. The steps vary depending on the CI provider. The following describes what to do when using GitHub Actions.
Create a new GitHub actions workflow file .github/workflows/gettext.yml in the project.
Add the following content to the file. Note that secrets.GITHUB_TOKEN is a built-in secret, not something you need to configure ahead of time.
name: Extract Translatable Strings

# Run on all branch pushes
on:
  push:
    paths:
      - "docs/source/**"
      - "!docs/source/locale/**"
      - ".github/workflows/gettext.yml"

jobs:
  gettext:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Code
        uses: actions/checkout@master
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: "3.x"
      - name: Install dependencies
        working-directory: docs
        run: pip install -r requirements.txt
      - name: Extract source strings
        working-directory: docs
        run: |
          make gettext
          sphinx-intl update -l en
      - name: Push to master
        # Only commit changes to master if master just changed
        if: github.ref == 'refs/heads/master'
        uses: mikeal/publish-to-github-action@master
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
Submit, review, and merge a pull request containing the workflow YAML.
Once you complete the steps in this section, any changes to the source English documentation on the master branch are pulled into Transifex for translation. Likewise, any translations completed on Transifex are submitted as pull requests back to the project on GitHub.
ReadTheDocs supports building HTML documentation sites from a single GitHub project and its translations. Administrators of the source language documentation project on ReadTheDocs can follow these instructions to enable builds for other languages.
Visit https://readthedocs.org/dashboard/
Note the name of the existing ReadTheDocs project containing your source language (e.g., jupyter).
Click Import a Project.
Click Import Manually.
Enter the project name you noted above suffixed with a target language locale (e.g., jupyter-es, jupyter-pt-br).
Enter the GitHub URL of the project.
Check Edit advanced project options.
Click Next.
Select the name of the target language from the Language drop down (e.g. es -> Spanish, es-mx -> Mexican Spanish, pt-br -> Brazilian Portuguese).
Click Finish.
Return to the list of projects at https://readthedocs.org/dashboard/
Click the project containing the source language.
Click Admin.
Click Translations.
Choose the name of the translation project created in step 5 from the Project drop down.
Click Add.
Repeat these steps for all other languages the project supports.
Now, any time you merge a pull request from Transifex containing .po translation file updates, ReadTheDocs will build both the source documentation site as well as sites for all supported languages. ReadTheDocs will associate the sites with one another and make them accessible via language links in a popup.
https://github.com/parente/helloworld-transifex-rtd is a mini-project configured to support the entire workflow described in this document.
This section helps a contributor set up the documentation infrastructure for a new project or an existing project without Sphinx documentation.
Documentation helps guide new users, fosters communication between developers, and shares tips and best practices with other community members. That’s why the Jupyter project is focused on documenting new features and on keeping the documentation up-to-date.
Blog
Technical overview
Basic workflow from blog idea to published post
Creating a draft
Title and metadata
Working with images
Links
Draft review
Ask for a review
Editorial acceptance
Publishing the post
Changing an existing post
Posts Updates
Newsletter
Website
We publish our blog at https://blog.jupyter.org. We welcome ideas for posts or guest posts to the Jupyter blog. If you have a suggestion for a future post, please feel free to share your idea with us. We would like to discuss the idea with you.
Do you enjoy writing? Please contact us about becoming a guest blogger. We can help guide you through the process of creating a post.
Jupyter’s blog uses the Ghost blog platform for its contributor flexibility and ease of use. Jupyter’s blog is deployed at https://blog.jupyter.org.
There are several major steps in the workflow from blog idea to a published post including:
Be inspired to write a post
Send us a message on the Jupyter mailing list and ask us for an author account on our blog
Draft Review
We’ll cover each of these as well as how to update a post once it has been published.
Always check in the metadata fields that a blog post has a title and a canonical URL. It is possible to put the date in the canonical URL, in particular for events like jupyter-day that can occur several times. The date of the event can differ from the date of the blog post.
Once a post is published, never change the post’s title or the URL. These changes will break links in tweets and RSS feeds that have already referenced the existing, published URL. Keep in mind that some platforms cache the URL immediately upon publishing; as a result, changing the title will direct people to a 404 page.
Title and metadata can always be refined after the actual content of the blog is written, but should not be changed after publication. As a guest you do not have to worry about metadata, the editor or admins will take care of that.
Try not to link to external images. If you want to put an image in the post, insert ![]() in the editor view and drag and drop an image from your desktop into the newly created field in the preview. External images can change, and can break the blog post if they are taken down; this cannot happen if you drag and drop images. Moreover, these images will be served from the same CDN (Content Delivery Network) as the blog, which will ensure the best overall experience for our readers.
The featured image you see at the top of a blog post is set from within the metadata field, not using ![](). The featured image is treated differently than inline images by many feed readers (especially on mobile), and allows a user on a slow connection to read the content of the blog earlier, which is a much better experience than waiting for the featured image to render.
Avoid minified (shortened) links when possible. The multiple redirects of minified links degrade the mobile browsing experience. If you need analytics on the number of page views, this information is tracked by Google Analytics.
Once you think you are done, ask someone else to reread your post and check the various parameters that you might have forgotten before publishing. You are not on your own; this is teamwork, and we are here to help you. If we do things in a hurry, we will probably spend more time fixing mistakes than doing things right in the first place.
Usually an editor or admin will take care of publishing the post. The task of the editor/admin is to check that all metadata are correctly set, that no external images are used, and that the post passes all the other quality checks described above.
It is then just a matter of making the post visible to everyone.
Blog subscribers may receive a notification at every update, so use updates and fixes parsimoniously. It is OK to wait a few hours to fix a typo.
If some substantial updates have to be made, like a change of location or time, please insert an [Update] section at the top (or bottom, depending on importance) of the blog post with the date/time of the update. If the information in the body of the post is wrong, try not to replace it; instead, use strike-through to mark it as obsolete. This helps readers determine which information is correct when multiple sources give different information.
Documentation in progress.
Attention
This is copied verbatim from the old IPython wiki and is currently under development. Much of the information in this part of the development guide is out of date.
IPython does all of its development using GitHub. All of our development information is hosted on this GitHub wiki. This page indexes all of that information. Developers interested in getting involved with IPython development should start here.
Notes on working with GitHub
100% of confirmed issues should have a milestone.
An issue should never be closed without a milestone.
All pull requests should have a milestone.
All issues closed should be marked with the next release milestone, next backport milestone, or no action.
Open issues should only lack a milestone if:
more clarification is required (label: needs-info)
In general, there will be four milestones with open issues:
next minor release. This milestone contains issues that should be backported for the next minor release. See below for information on backporting.
next major release. This should be the default milestone of all issues. As the release approaches, issues can be explicitly bumped to next release + 1.
next major release + 1. Only issues that we are confident will not be included in the next release go here. This milestone should be mostly empty until relatively close to release.
wishlist. This is the milestone for issues that we have no immediate plans to address.
The remaining milestone is no action for open or closed issues that require no action:
Not actually an issue (e.g. questions, discussion, etc.)
Duplicate of an existing Issue
Closed because we won’t fix it
When an issue is closed with no action, it means that the issue will not be fixed, or it was not an issue at all.
When closing an issue, it should always have one of these milestones:
next minor release because the issue has been addressed
next major release because the issue has been addressed
no action because the issue will not be addressed, or it is a duplicate/non-issue.
In general: When in doubt, mark with next release. We can always push back when we get closer to a given release.
Issues should always be labeled once they are confirmed (not necessary for issues that are still being clarified, or may be duplicates or not issues at all).
Some significant labels:
needs-info: issue needs more information from submitter before progress can be made
bug: errors are raised, or unintended behavior occurs
enhancement: improvements that are not bugs
backport-X.Y.Z: Any fix for this issue should be backported to the maintenance branch. backports are expressed with milestones, starting with 2.1.
prio-foo: a priority level for ranking issues - nonessential, but prio-high/low are useful for explicitly promoting/demoting issues, particularly when nearing release.
ClosedPR: This issue is a reminder for a PR that was closed for going stale.
sprint-friendly: Obvious or easy fixes that make a good entry point for new contributors.
All confirmed issues should at least have a bug or enhancement label, and be marked with any affected components (e.g. parallel, qtconsole, htmlnotebook) or particular sources of error (e.g. py3k or unicode) if we have appropriate labels.
The sprint-friendly label is probably the best place for new contributors to start.
All work is submitted via Pull Requests.
Pull Requests can be submitted as soon as there is code worth discussing. Pull Requests track the branch, so you can continue to work after the PR is submitted. Review and discussion can begin well before the work is complete, and the more discussion the better. The worst case is that the PR is closed.
Pull Requests that have stalled should be closed (see our policy on closing pull requests, described later in this guide).
Pull Requests should always be made against master (backporting PRs is described below).
Pull Requests should be tested, if feasible:
bugfixes should include regression tests
new behavior should at least get minimal exercise
Travis does a pretty good job testing IPython and Pull Requests, but it may make sense to manually perform tests (possibly with our test_pr script), particularly for PRs that affect IPython.parallel or Windows.
When opening a new issue, please take the following steps:
Search GitHub and/or Google for your issue to avoid duplicate reports. Keyword searches for your error messages are most helpful.
If possible, try updating to master and reproducing your issue, because we may have already fixed it.
Try to include a minimal reproducible test case
Include relevant system information. Start with the output of:
python -c "import IPython; print(IPython.sys_info())"
And include any relevant package versions, depending on the issue, such as matplotlib, numpy, Qt, Qt bindings (PyQt/PySide), tornado, web browser, etc.
We should keep an A.x maintenance branch for backporting fixes from master.
That branch shall be called A.x, e.g. 2.x, not 2.1. This way, there is only one maintenance branch per release series.
When an Issue is determined to be appropriate for backporting, it should be marked with the A.B milestone.
Any Pull Request which addresses a backport issue should also receive the same milestone.
Patches are backported to the maintenance branch by applying the pull request patch to the maintenance branch (currently with the backport_pr script).
A brief guide to making and reviewing pull requests.
The code does what it’s supposed to!
IPython has to work on:
Linux of various kinds, Windows & Mac
Python 2 & 3
Much of our code base deals with strings and unicode. This needs to be done in a manner that is unicode-aware and works on Python 2 and 3. This article (http://www.joelonsoftware.com/articles/Unicode.html) is a good intro to unicode.
Coding style refers to how source code is formatted and how variables, functions, methods and classes are named. Your code should follow our coding style, which is described in the coding style section later in this guide.
The code should be well organized, and have inline comments where appropriate. When we look at the code, it should be clear what it’s doing and why. It should not break abstractions that we have established in the project.
If it fixes a bug, the pull request should ideally add an automated test that fails without the fix, and passes with it. Normally it should be sufficient to copy an existing test and tweak it. New functionality should come with its own tests as well. Details about testing IPython can be found in the testing section later in this guide.
Don’t forget to update docstrings, and any relevant parts of the official documentation. New features or significant changes warrant an entry in the What’s New section too. Details about documenting IPython can be found in the documentation section later in this guide.
This document describes our coding style. Coding style refers to the following:
How source code is formatted (indentation, spacing, etc.)
How things are named (variables, functions, classes, modules, etc.)
In general, we follow the standard Python style conventions as described in Python’s PEP 8, the official Python Style Guide.
Other general comments:
In a large file, top level classes and functions should be separated by 2 lines to make it easier to separate them visually.
Use 4 spaces for indentation, never use hard tabs.
Keep the ordering of methods the same in classes that have the same methods. This is particularly true for classes that implement similar interfaces and for interfaces that are similar.
For naming conventions, we also follow the guidelines of PEP 8. Some of the existing code doesn’t honor this perfectly, but for all new and refactored IPython code, we’ll use:
All lowercase module names. Long module names can have words separated by underscores (really_long_module_name.py), but this is not required. Try to use the convention of nearby files.
CamelCase for class names.
lowercase_with_underscores for methods, functions, variables and attributes.
Implementation-specific private methods will use _single_underscore_prefix. Names with a leading double underscore will only be used in special cases, as they make subclassing difficult (such names are not easily seen by child classes).
Occasionally some run-in lowercase names are used, but mostly for very short names or where we are implementing methods very similar to existing ones in a base class (like runlines() where runsource() and runcode() had established precedent).
The old IPython codebase has a big mix of classes and modules prefixed with an explicit IP or ip. This is not necessary, and all new code should not use this prefix. The only case where this approach is justified is for classes or functions which are expected to be imported into external namespaces and have a very generic name (like Shell) that is likely to clash with something else. However, if a prefix seems absolutely necessary, the more specific IPY or ipy are preferred.
All JavaScript code should follow these naming conventions as well.
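To make these conventions concrete, here is a minimal sketch; the module and every name in it are invented for illustration and are not part of IPython:

# magic_manager.py -- an all-lowercase module name

class MagicManager(object):        # CamelCase class name
    # last_line holds the most recent line seen; default is None
    last_line = None

    def run_lines(self, lines):    # lowercase_with_underscores method
        """Feed each line to the private helper."""
        for line in lines:
            self._run_one(line)

    def _run_one(self, line):      # _single_underscore_prefix for private helpers
        self.last_line = line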
In general, objects should declare, in their class, all attributes the object is meant to hold throughout its life. While Python allows you to add an attribute to an instance at any point in time, this makes the code harder to read and requires methods to constantly use checks with hasattr() or try/except calls. By declaring all attributes of the object in the class header, there is a single place one can refer to for understanding the object’s data interface, where comments can explain the role of each variable and when possible, sensible defaults can be assigned.
If an attribute is meant to contain a mutable object, it should be set to None in the class and its mutable value should be set in the object’s constructor. Since class attributes are shared by all instances, failure to do this can lead to difficult to track bugs. But you should still set it in the class declaration so the interface specification is complete and documented in one place.
A simple example:
class Foo(object):
    # X does..., sensible default given:
    x = 1
    # y does..., default will be set by constructor
    y = None
    # z starts as an empty list, must be set in constructor
    z = None

    def __init__(self, y):
        self.y = y
        self.z = []
When starting a new Python file for IPython, you can use the following template as a starting point that has a few common things pre-written for you.
When contributing code to IPython, you should strive for clarity and consistency, without falling prey to a style straitjacket. Basically, ‘document everything, try to be consistent, do what makes sense.’
By and large we follow existing Python practices in major projects like Python itself or NumPy; this document provides some additional detail for IPython.
All standalone documentation should be written in plain text (.txt) files using reStructuredText [reStructuredText]_ for markup and formatting. All such documentation should be placed in the directory docs/source of the IPython source tree. Or, when appropriate, a suitably named subdirectory should be used. The documentation in this location will serve as the main source for IPython documentation.
The actual HTML and PDF docs are built using the Sphinx [Sphinx]_ documentation generation tool. Once you have Sphinx installed, you can build the html docs yourself by doing:
$ cd ipython-mybranch/docs
$ make html
Our usage of Sphinx follows that of matplotlib [Matplotlib]_ closely. We are using a number of Sphinx tools and extensions written by the matplotlib team and will mostly follow their conventions, which are nicely spelled out in their documentation guide [MatplotlibDocGuide]_. What follows is thus an abridged version of the matplotlib documentation guide, taken with permission from the matplotlib team.
If you are reading this in a web browser, you can click on the “Show Source” link to see the original reStructuredText for the following examples.
A bit of Python code:
for i in range(10):
    print i,

print "A big number:",2**34
An interactive Python session:
>>> from IPython.utils.path import get_ipython_dir
>>> get_ipython_dir()
'/home/fperez/.config/ipython'
An IPython session:
In [7]: import IPython

In [8]: print "This IPython is version:",IPython.__version__
This IPython is version: 0.9.1

In [9]: 2+4
Out[9]: 6
A bit of shell code:
cd /tmp
echo "My home directory is: $HOME"
ls
Good docstrings are very important. Unfortunately, Python itself only provides a rather loose standard for docstrings [PEP257]_, and there is no universally accepted convention for all the different parts of a complete docstring. However, the NumPy project has established a very reasonable standard, and has developed some tools to support the smooth inclusion of such docstrings in Sphinx-generated manuals. Rather than inventing yet another pseudo-standard, IPython will be henceforth documented using the NumPy conventions; we carry copies of some of the NumPy support tools to remain self-contained, but share back upstream with NumPy any improvements or fixes we may make to the tools.
The NumPy documentation guidelines [NumPyDocGuide]_ contain detailed information on this standard, and for a quick overview, the NumPy example docstring [NumPyExampleDocstring]_ is a useful read.
For user-facing APIs, we try to be fairly strict about following the above standards (even though they mean more verbose and detailed docstrings). Wherever you can reasonably expect people to do introspection with:
In [1]: some_function?
the docstring should follow the NumPy style and be fairly detailed.
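For illustration, here is a short, hypothetical function documented in the NumPy style (the function itself is invented for the example):

def multiply(a, b):
    """Multiply two numbers.

    Parameters
    ----------
    a : int or float
        The first factor.
    b : int or float
        The second factor.

    Returns
    -------
    int or float
        The product of `a` and `b`.
    """
    return a * b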
For purely internal methods that are only likely to be read by others extending IPython itself we are a bit more relaxed, especially for small/short methods and functions whose intent is reasonably obvious. We still expect docstrings to be written, but they can be simpler. For very short functions with a single-line docstring you can use something like:
def add(a, b):
    """The sum of two numbers.
    """
    code
and for longer multiline strings:
def add(a, b):
    """The sum of two numbers.

    Here is the rest of the docs.
    """
    code
Here are two additional PEPs of interest regarding documentation of code. While both of these were rejected, the ideas therein form much of the basis of docutils (the machinery to process reStructuredText):
Docstring Processing System Framework
Docutils Design Specification
Note: In the past IPython used epydoc, so many docstrings still use epydoc conventions. We will update them as we go, but all new code should be documented using the NumPy standard.
The built docs are stored in a separate repository. Through some github magic, they’re automatically exposed as a website. It works like this:
You will need to have sphinx and latex installed. In Ubuntu, install texlive-latex-recommended texlive-latex-extra texlive-fonts-recommended. Install the latest version of sphinx from PyPI (pip install sphinx).
Ensure that the development version of IPython is the first in your system path. You can either use a virtualenv, or modify your PYTHONPATH.
Switch into the docs directory, and run make gh-pages. This will build your updated docs as html and pdf, then automatically check out the latest version of the docs repository, copy the built docs into it, and commit your changes.
Open the built docs in a web browser, and check that they’re as expected.
(When building the docs for a new tagged release, you will have to add its link to index.rst, then run python build_index.py to update index.html. Commit the change.)
Upload the docs with git push. This only works if you have write access to the docs repository.
If you are building a version that is not the current dev branch, nor a tagged release, then you must run gh-pages.py directly with python gh-pages.py <version>, and not with make gh-pages.
Academic labs have long had the tradition of the weekly lab meeting, where all topics of interest to the lab are discussed. IPython has strong roots in academia, but it is also an open source project that needs to engage an active international community. So while our goal with IPython is not to publish the next paper, we’ve been thinking about the value these regular discussions bring to how teams work on sustained efforts involving difficult problems, and wanted to bring that bit of academic practice into the open source workflow. So we have decided to conduct weekly “lab meetings” for IPython, that will be held publicly using a Google Hangout on Air.
We are trying to keep things simple and with a minimum of new moving parts:
Meetings happen on Tuesdays at 10am California time (i.e. UTC-8 or -7 depending on the time of year).
We broadcast the meeting as it happens via G+ and leave the public YouTube link afterwards.
During the meeting, all chat that needs to happen by typing can be done on our Gitter chat room.
We keep a running document with minutes of the meeting on HackPad where we summarize main points. (2015 part 1)
We welcome and encourage help from others in updating these minutes during the meeting, but we’ll make no major effort in ensuring that they are a detailed and accurate record of the whole discussion. We simply don’t have the time for that.
You can find a list of the videos on the ipythondev YouTube user page.
IPython has the following policy on closing pull requests. The goal of this policy is to keep our pull request queue small and allow us to focus on code that is being actively developed and has a strong chance of being merged in master soon.
A pull request will be closed when:
It has been reviewed, but has sat for a month or more waiting for the submitter to commit more code to address the comments.
The review process has uncovered larger design or technical issues that extend beyond the details of the specific pull request.
In particular, we do not accept large wholesale “cleanup” changes which do not address any specific bug. This includes trailing whitespace, PEP8, etc. One of the reasons is that such massive cleanups provide plenty of opportunities to introduce new and subtle bugs.
In general we will not close pull requests because of a lack of review. If a pull request has sat for a month or more without review, we need to kick ourselves and get to reviewing it.
When a pull request is closed we will do the following:
Post a github message to the pull request to confirm that everyone is fine with closing the pull request. This message should cite this policy.
Open an issue to track the pull request. This issue should describe what would be needed in order to reopen the pull request.
Post a github message to the pull request encouraging the submitter to continue with the work and detail what issues need to be addressed in order for the pull request to be reopened.
This policy was discussed in the following thread:
https://mail.scipy.org/pipermail/ipython-dev/2012-August/010025.html
Hi, This PR has been inactive for 1 month now, so we are going to close it and open an issue to reference it. We try to keep our pull request queue small and focused on active work. We encourage you to reopen the pull request if and when you continue to work on this. Please contact us if you have any questions. Thanks for contributing. see https://github.com/ipython/ipython/wiki/Dev%3A-Closing-pull-requests/ for our policies on closing pull requests.
It is extremely important that all code contributed to IPython has tests. Tests should be written as unittests, doctests or other entities that the IPython test system can detect. See below for more details on this.
Each subpackage in IPython should have its own tests directory that contains all of the tests for that subpackage. All of the files in the tests directory should have the word “tests” in them to enable the testing framework to find them.
In docstrings, examples (either using IPython prompts like In [1]: or ‘classic’ python >>> ones) can and should be included. The testing system will detect them as doctests and will run them; it offers control to skip parts or all of a specific doctest if the example is meant to be informative but shows non-reproducible information (like filesystem data).
If a subpackage has any dependencies beyond the Python standard library, the tests for that subpackage should be skipped if the dependencies are not found. This is very important so users don’t get tests failing simply because they don’t have dependencies.
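As a minimal sketch of this pattern, an individual test can also guard itself with nose’s SkipTest (the test and the optional dependency are chosen only for illustration; IPython’s decorators module, described later, offers more convenient helpers):

from nose.plugins.skip import SkipTest

def test_zmq_version():
    """Skip, rather than fail, when an optional dependency is missing."""
    try:
        import zmq  # optional dependency, used by IPython.parallel
    except ImportError:
        raise SkipTest("pyzmq not available")
    # the real assertions only run when the dependency is present
    assert zmq.zmq_version() is not None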
The testing system we use is an extension of the nose test runner. In particular we’ve developed a nose plugin that allows us to paste verbatim IPython sessions and test them as doctests, which is extremely important for us.
You can run IPython from the source download directory without even installing it system-wide or having to configure anything, by typing at the terminal:
python2 -c "import IPython; IPython.start_ipython();"
To start the web-based notebook you can use:
python2 -c "import IPython; IPython.start_ipython(['notebook']);"
In order to run the test suite, you must at least be able to import IPython, even if you haven’t fully installed the user-facing scripts yet (common in a development environment). You can then run the tests with:
python -c "import IPython; IPython.test()"
Once you have installed IPython either via a full install or using:
python setup.py develop
you will have available a system-wide script called iptest that runs the full test suite. You can then run the suite with:
iptest [args]
By default, this excludes the relatively slow tests for IPython.parallel. To run these, use iptest --all.
Please note that the iptest tool will run tests against the code imported by the Python interpreter. If the command python setup.py symlink has been previously run then this will always be the test code in the local directory via a symlink. However, if this command has not been run for the version of Python being tested, there is the possibility that iptest will run the tests against an installed version of IPython.
Regardless of how you run things, you should eventually see something like:
**********************************************************************
Test suite completed for system with the following information:
{'commit_hash': '144fdae',
 'commit_source': 'repository',
 'ipython_path': '/home/fperez/usr/lib/python2.6/site-packages/IPython',
 'ipython_version': '0.11.dev',
 'os_name': 'posix',
 'platform': 'Linux-2.6.35-22-generic-i686-with-Ubuntu-10.10-maverick',
 'sys_executable': '/usr/bin/python',
 'sys_platform': 'linux2',
 'sys_version': '2.6.6 (r266:84292, Sep 15 2010, 15:52:39) \n[GCC 4.4.5]'}

Tools and libraries available at test time:
curses matplotlib pymongo qt sqlite3 tornado wx wx.aui zmq

Ran 9 test groups in 67.213s

Status: OK
If not, there will be a message indicating which test group failed and how to rerun that group individually. For example, this tests the IPython.utils subpackage; the -v option shows progress indicators:
$ iptest IPython.utils -- -v
..........................SS..SSS............................S.S...
.........................................................
----------------------------------------------------------------------
Ran 125 tests in 0.119s

OK (SKIP=7)
Because the IPython test machinery is based on nose, you can use all nose syntax. Options after -- are passed to nose. For example, this lets you run the specific test test_rehashx inside the test_magic module:
$ iptest IPython.core.tests.test_magic:test_rehashx -- -vv
IPython.core.tests.test_magic.test_rehashx(True,) ... ok
IPython.core.tests.test_magic.test_rehashx(True,) ... ok

----------------------------------------------------------------------
Ran 2 tests in 0.100s

OK
When developing, the --pdb and --pdb-failures options of nose are particularly useful; these drop you into an interactive pdb session at the point of the error or failure respectively: iptest mymodule -- --pdb.
The system information summary printed above is accessible from the top level package. If you encounter a problem with IPython, it’s useful to include this information when reporting on the mailing list; use:
from IPython import sys_info
print sys_info()
and include the resulting information in your query.
We have a script that fetches a pull request from Github, merges it with master, and runs the test suite on different versions of Python. This uses a separate copy of the repository, so you can keep working on the code while it runs. To run it:
python tools/test_pr.py -p 1234
The number is the pull request number from Github; the -p flag makes it post the results to a comment on the pull request. Any further arguments are passed to iptest.
This requires the requests and keyring packages.
By now IPython has a reasonable test suite, so the best way to see what’s available is to look at the tests directory in most subpackages. But here are a few pointers to make the process easier.
The IPython.testing package is where all of the machinery to test IPython (rather than the tests for its various parts) lives. In particular, the iptest module in there has all the smarts to control the test process. In there, the make_exclude function is used to build a blacklist of exclusions; these are modules that do not even get imported for tests. This is important so that things that would fail to even import because of missing dependencies don’t give errors to end users, as we stated above.
The decorators module contains a lot of useful decorators, especially useful to mark individual tests that should be skipped under certain conditions (rather than blacklisting the package altogether because of a missing major dependency).
The plugin subpackage in testing contains a nose plugin called ipdoctest that teaches nose about IPython syntax, so you can write doctests with IPython prompts. You can also mark doctest output with # random for the output corresponding to a single input to be ignored (stronger than using ellipsis and useful to keep it as an example). If you want the entire docstring to be executed but none of the output from any input to be checked, you can use the # all-random marker. The IPython.testing.plugin.dtexample module contains examples of how to use these; for reference here is how to use # random:
def ranfunc():
    """A function with some random output.

    Normal examples are verified as usual:
    >>> 1+3
    4

    But if you put '# random' in the output, it is ignored:
    >>> 1+3
    junk goes here...  # random

    >>> 1+2
    again, anything goes  #random
    if multiline, the random mark is only needed once.

    >>> 1+2
    You can also put the random marker at the end:
    # random

    >>> 1+2
    # random
    .. or at the beginning.

    More correct input is properly verified:
    >>> ranfunc()
    'ranfunc'
    """
    return 'ranfunc'
and an example of # all-random:
def random_all():
    """A function where we ignore the output of ALL examples.

    Examples:

    # all-random

    This mark tells the testing machinery that all subsequent examples
    should be treated as random (ignoring their output). They are still
    executed, so if they raise an error, it will be detected as such,
    but their output is completely ignored.

    >>> 1+3
    junk goes here...

    >>> 1+3
    klasdfj;

    In [8]: print 'hello'
    world  # random

    In [9]: iprand()
    Out[9]: 'iprand'
    """
    return 'iprand'
When writing docstrings, you can use the @skip_doctest decorator to indicate that a docstring should not be treated as a doctest at all. The difference between # all-random and @skip_doctest is that the former executes the example but ignores output, while the latter doesn’t execute any code. @skip_doctest should be used for docstrings whose examples are purely informational.
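A minimal sketch of @skip_doctest in use (the decorated function is hypothetical; the decorator itself lives in IPython.testing.decorators):

from IPython.testing.decorators import skip_doctest

@skip_doctest
def show_banner():
    """Print a banner.

    The session below is purely informational and is never executed:

    In [1]: show_banner()
    Welcome!
    """
    print 'Welcome!'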
If a given docstring fails under certain conditions but otherwise is a good doctest, you can use code like the following, that relies on the ‘null’ decorator to leave the docstring intact where it works as a test:
# The docstring for full_path doctests differently on win32 (different path
# separator) so just skip the doctest there, and use a null decorator
# elsewhere:

doctest_deco = dec.skip_doctest if sys.platform == 'win32' else dec.null_deco

@doctest_deco
def full_path(startPath,files):
    """Make full paths for all the listed files, based on startPath..."""
    # function body follows...
With our nose plugin that understands IPython syntax, an extremely effective way to write tests is to simply copy and paste an interactive session into a docstring. You can write this type of test, where your docstring is meant only as a test, by prefixing the function name with doctest_ and leaving its body absolutely empty other than the docstring. In IPython.core.tests.test_magic you can find several examples of this, but for completeness’ sake, your code should look like this (a simple case):
def doctest_time():
    """
    In [10]: %time None
    CPU times: user 0.00 s, sys: 0.00 s, total: 0.00 s
    Wall time: 0.00 s
    """
This function is only analyzed for its docstring but it is not considered a separate test, which is why its body should be empty.
We currently use casperjs for testing the notebook javascript user interface.
To run the JS test suite by itself, you can either use iptest js, which will start up a new notebook server and test against it, or you can open up a notebook server yourself, and then:
cd IPython/html/tests/casperjs
casperjs test --includes=util.js test_cases
If your testing notebook server uses something other than the default port (8888), you will have to pass that as a parameter to the test suite as well.
casperjs test --includes=util.js --port=8889 test_cases
To speed up development, you usually are working on getting one test passing at a time. To do this, just pass the filename directly to the casperjs test command like so:
casperjs test --includes=util.js test_cases/execute_code_cell.js
CasperJS is a browser that’s written in javascript, so we write javascript code to drive it. The Casper browser itself also has a javascript implementation (like the ones that come with Firefox and Chrome), and in the test suite we get access to those using this.evaluate and its cousins (this.thenEvaluate, etc.). Additionally, because of the asynchronous / callback nature of everything, there are plenty of this.then calls which define steps in the test suite. Part of the reason for this is that each step has a timeout (default of 5 or 10 seconds). Additionally, there are already convenience functions in util.js to help you wait for output in a given cell, etc. In our javascript tests, if you see functions which look_like_pep8_naming_convention, those are probably coming from util.js, whereas functions that come with casper haveCamelCaseNamingConvention.
Each file in test_cases looks something like this (this is test_cases/check_interrupt.js):
casper.notebook_test(function () {
    this.evaluate(function () {
        var cell = IPython.notebook.get_cell(0);
        cell.set_text('import time\nfor x in range(3):\n    time.sleep(1)');
        cell.execute();
    });

    // interrupt using menu item (Kernel -> Interrupt)
    this.thenClick('li#int_kernel');

    this.wait_for_output(0);

    this.then(function () {
        var result = this.get_output_cell(0);
        this.test.assertEquals(result.ename, 'KeyboardInterrupt', 'keyboard interrupt (mouseclick)');
    });

    // run cell 0 again, now interrupting using keyboard shortcut
    this.thenEvaluate(function () {
        cell.clear_output();
        cell.execute();
    });

    // interrupt using Ctrl-M I keyboard shortcut
    this.thenEvaluate(function () {
        IPython.utils.press_ghetto(IPython.utils.keycodes.I)
    });

    this.wait_for_output(0);

    this.then(function () {
        var result = this.get_output_cell(0);
        this.test.assertEquals(result.ename, 'KeyboardInterrupt', 'keyboard interrupt (shortcut)');
    });
});
For an example of how to pass parameters to the client-side javascript from casper test suite, see the casper.wait_for_output implementation in IPython/html/tests/casperjs/util.js
This section is a set of notes on the key points of the IPython testing needs, that were used when writing the system and should be kept for reference as it evolves.
Testing IPython in full requires modifications to the default behavior of nose and doctest, because the IPython prompt is not recognized to determine Python input, and because IPython admits user input that is not valid Python (things like %magics and !system commands).
We basically need to be able to test the following types of code:
Pure Python files containing normal tests. These are not a problem, since Nose will pick them up as long as they conform to the (flexible) conventions used by nose to recognize tests.
Python files containing doctests. Here, we have two possibilities:
The prompts are the usual >>> and the input is pure Python.
The prompts are of the form In [1]: and the input can contain extended IPython expressions.
In the first case, Nose will recognize the doctests as long as it is called with the --with-doctest flag. But the second case will likely require modifications or the writing of a new doctest plugin for Nose that is IPython-aware.
ReStructuredText files that contain code blocks. For this type of file, we have three distinct possibilities for the code blocks:
They use >>> prompts.
They use In [1]: prompts.
They are standalone blocks of pure Python code without any prompts.
The first two cases are similar to the situation #2 above, except that in this case the doctests must be extracted from input code blocks using docutils instead of from the Python docstrings.
In the third case, we must have a convention for distinguishing code blocks that are meant for execution from others that may be snippets of shell code or other examples not meant to be run. One possibility is to assume that all indented code blocks are meant for execution, but to have a special docutils directive for input that should not be executed.
For those code blocks that we will execute, the convention used will simply be that they get called and are considered successful if they run to completion without raising errors. This is similar to what Nose does for standalone test functions, and by putting asserts or other forms of exception-raising statements it becomes possible to have literate examples that double as lightweight tests (see the sketch after this list).
Extension modules with doctests in function and method docstrings. Currently Nose simply can’t find these docstrings correctly, because the underlying doctest DocTestFinder object fails there. Similarly to #2 above, the docstrings could have either pure python or IPython prompts.
Of these, only 3-c (reST with standalone code blocks) is not implemented at this point.
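For instance, under the convention for standalone executable blocks sketched in case 3 above, a code block like the following (a made-up example) would double as a lightweight test: it passes if it runs to completion, and the assert turns it into a real check:

data = [1, 2, 3]
total = sum(data)
assert total == 6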
For testing your development work in CSS, you’ll need to compile the .less files to CSS. Make sure you have the dependencies that LESS requires, including fabric, node, and lessc. Follow the steps below to compile the .less files:
python setup.py css
Alternatively, you can:
$ cd ipython/IPython/html
$ fab css
[localhost] local: components/less.js/bin/lessc -x style/style.less style/style.min.css
[localhost] local: components/less.js/bin/lessc -x style/ipython.less style/ipython.min.css

Done
This document contains notes about the process that is used to release IPython. Our release process is currently not very formal and could be improved.
Most of the release process is automated by the release script in the tools directory of our main repository. This document is just a handy reminder for the release manager.
You can set some env variables to note previous release tag and current release milestone, version, and git tag:
PREV_RELEASE=rel-1.0.0
MILESTONE=1.1
VERSION=1.1.0
TAG="rel-$VERSION"
BRANCH=master
These will be used later if you want to copy/paste, or you can just type the appropriate command when the time comes. These variables are not used by scripts (hence no export).
If a major release:
merge any pull request notes into what’s new:
python tools/update_whatsnew.py
update docs/source/whatsnew/development.rst, to ensure it covers the major points.
move the contents of development.rst to versionX.rst
generate summary of GitHub contributions, which can be done with:
python tools/github_stats.py --milestone $MILESTONE > stats.rst
which may need some manual cleanup. Add the cleaned-up result to docs/source/whatsnew/github-stats-X.rst (make a new file, or add it to the top, depending on whether it is a major release). You can use:
git log --format="%aN <%aE>" $PREV_RELEASE... | sort -u -f
to find duplicates and update .mailmap. Before generating the GitHub stats, verify that all closed issues and pull requests have appropriate milestones. This search should return no results.
tools/build_release
This does all the file checking and building that the real release script will do. This will let you do test installations, check that the build procedure runs OK, etc. You may want to also do a test build of the docs.
Edit IPython/core/release.py to have the current version.
Commit the changes to release.py and jsversion:
git commit -am "release $VERSION" git push origin $BRANCH
Create and push the tag:
git tag -am "release $VERSION" "$TAG"
git push origin --tags
Update release.py back to x.y-dev or x.y-maint, and push:
git commit -am "back to development" git push origin $BRANCH
cd /tmp
git clone --depth 1 https://github.com/ipython/ipython.git -b "$TAG"
cd tools && ./release
This makes the tarballs, zipfiles, and wheels. It posts them to archive.ipython.org and registers the release with PyPI.
This will require that you have current versions of wheel, Python 3.4 and Python 2.7.
release announcement (news, announcements)
update current version and download links
(If major release) update links on the documentation page
This should include i) highlights and ii) a link to the html version of the What’s new section of the documentation.
Post to mailing list, and link from Twitter.
close the milestone you just released
open new milestone for (x, y+1), if it doesn’t exist already
The ipython directive is a stateful ipython shell for embedding in sphinx documents. It knows about standard ipython prompts, and extracts the input and output lines. These prompts will be renumbered starting at 1. The inputs will be fed to an embedded ipython interpreter and the outputs from that interpreter will be inserted as well. For example, code blocks like the following:
.. code:: python3

   In [136]: x = 2

   In [137]: x**3
   Out[137]: 8
will be rendered as
In [136]: x = 2

In [137]: x**3
Out[137]: 8
This tutorial should be read side-by-side with the Sphinx source for this document because otherwise you will see only the rendered output and not the code that generated it. Excepting the example above, we will not in general be showing the literal ReST in this document that generates the rendered output.
The state from previous sessions is stored, and standard error is trapped. At doc build time, ipython’s output and std err will be inserted, and prompts will be renumbered. So the prompt below should be renumbered in the rendered docs, and pick up where the block above left off.
In [138]: z = x*3   # x is recalled from previous block

In [139]: z
Out[139]: 6

In [140]: print z
--------> print(z)
6

In [141]: q = z[)   # this is a syntax error -- we trap ipy exceptions
------------------------------------------------------------
   File "<ipython console>", line 1
     q = z[)   # this is a syntax error -- we trap ipy exceptions
           ^
SyntaxError: invalid syntax
The embedded interpreter supports some limited markup. For example, you can put comments in your ipython sessions, which are reported verbatim. There are some handy “pseudo-decorators” that let you doctest the output. The inputs are fed to an embedded ipython session and the outputs from the ipython session are inserted into your doc. If the output in your doc and in the ipython session don’t match on a doctest assertion, an error will be raised.
In [1]: x = 'hello world'

# this will raise an error if the ipython output is different
@doctest
In [2]: x.upper()
Out[2]: 'HELLO WORLD'

# some readline features cannot be supported, so we allow
# "verbatim" blocks, which are dumped in verbatim except prompts
# are continuously numbered
@verbatim
In [3]: x.st<TAB>
x.startswith  x.strip
Multi-line input is supported.
In [130]: url = 'http://ichart.finance.yahoo.com/table.csv?s=CROX\
   .....: &d=9&e=22&f=2009&g=d&a=1&br=8&c=2006&ignore=.csv'

In [131]: print url.split('&')
--------> print(url.split('&'))
['http://ichart.finance.yahoo.com/table.csv?s=CROX', 'd=9', 'e=22',
You can do doctesting on multi-line output as well. Just be careful when using non-deterministic inputs like random numbers in the ipython directive, because your inputs are run through a live interpreter, so if you are doctesting random output you will get an error. Here we “seed” the random number generator for deterministic output, and we suppress the seed line so it doesn’t show up in the rendered output.
In [133]: import numpy.random

@suppress
In [134]: numpy.random.seed(2358)

@doctest
In [135]: numpy.random.rand(10,2)
Out[135]:
array([[ 0.64524308,  0.59943846],
       [ 0.47102322,  0.8715456 ],
       [ 0.29370834,  0.74776844],
       [ 0.99539577,  0.1313423 ],
       [ 0.16250302,  0.21103583],
       [ 0.81626524,  0.1312433 ],
       [ 0.67338089,  0.72302393],
       [ 0.7566368 ,  0.07033696],
       [ 0.22591016,  0.77731835],
       [ 0.0072729 ,  0.34273127]])
Another demonstration of multi-line input and output
In [106]: print x
--------> print(x)
jdh

In [109]: for i in range(10):
   .....:     print i
   .....:
   .....:
0 1 2 3 4 5 6 7 8 9
Most of the “pseudo-decorators” can be used as options to the ipython directive. For example, to set up matplotlib pylab but suppress the output, you can do the following. When using the matplotlib use directive, it should occur before any import of pylab. This will not show up in the rendered docs, but the commands will be executed in the embedded interpreter and subsequent line numbers will be incremented to reflect the inputs:
.. code:: python3

   In [144]: from pylab import *

   In [145]: ion()
In [144]: from pylab import *

In [145]: ion()
Likewise, you can set :doctest: or :verbatim: to apply these settings to the entire block. For example,
In [9]: cd mpl/examples/
/home/jdhunter/mpl/examples

In [10]: pwd
Out[10]: '/home/jdhunter/mpl/examples'

In [14]: cd mpl/examples/<TAB>
mpl/examples/animation/        mpl/examples/misc/
mpl/examples/api/              mpl/examples/mplot3d/
mpl/examples/axes_grid/        mpl/examples/pylab_examples/
mpl/examples/event_handling/   mpl/examples/widgets

In [14]: cd mpl/examples/widgets/
/home/msierig/mpl/examples/widgets

In [15]: !wc *
    2    12    77 README.txt
   40    97   884 buttons.py
   26    90   712 check_buttons.py
   19    52   416 cursor.py
  180   404  4882 menu.py
   16    45   337 multicursor.py
   36   106   916 radio_buttons.py
   48   226  2082 rectangle_selector.py
   43   118  1063 slider_demo.py
   40   124  1088 span_selector.py
  450  1274 12457 total
You can create one or more pyplot plots and insert them with the @savefig decorator.
@savefig plot_simple.png width=4in
In [151]: plot([1,2,3]);  # use a semicolon to suppress the output

@savefig hist_simple.png width=4in
In [151]: hist(np.random.randn(10000), 100);
In a subsequent session, we can update the current figure with some text, and then resave:
In [151]: ylabel('number')

In [152]: title('normal distribution')

@savefig hist_with_text.png width=4in
In [153]: grid(True)
You can also have function definitions included in the source.
In [3]: def square(x):
   ...:     """
   ...:     An overcomplicated square function as an example.
   ...:     """
   ...:     if x < 0:
   ...:         x = abs(x)
   ...:     y = x * x
   ...:     return y
   ...:
Then call it from a subsequent section.
In [4]: square(3)
Out [4]: 9

In [5]: square(-2)
Out [5]: 4
Pure python code is supported by the optional argument python. In this pure python syntax you do not include the output from the python interpreter. The following markup:
.. code:: python

   foo = 'bar'
   print foo
   foo = 2
   foo**2
Renders as
foo = 'bar'
print foo
foo = 2
foo**2
We can even plot from python, using the savefig decorator, as well as suppress output with a semicolon:
@savefig plot_simple_python.png width=4in
plot([1,2,3]);
Similarly, std err is inserted
foo = 'bar'
foo[)
Comments are handled and state is preserved
# comments are handled
print foo
If you don’t see the next code block then the options work.
ioff()
ion()
Multi-line input is handled.
line = 'Multi\
line &\
support &\
works'
print line.split('&')
Function definitions are correctly parsed
def square(x):
    """
    An overcomplicated square function as an example.
    """
    if x < 0:
        x = abs(x)
    y = x * x
    return y
And persist across sessions
print square(3)
print square(-2)
Pretty much anything you can do with the ipython code, you can do with a simple python script. Obviously, though, it doesn’t make sense to use the doctest option.
Here are the supported decorators, and any optional arguments they take. Some of the decorators can be used as options to the entire block (e.g. verbatim and suppress), and some only apply to the line just below them (e.g. savefig).
@suppress
execute the ipython input block, but suppress the input and output block from the rendered output. Also, can be applied to the entire ..ipython block as a directive option with :suppress:.
@verbatim
insert the input and output block in verbatim, but auto-increment the line numbers. Internally, the interpreter will be fed an empty string, so it is a no-op that keeps line numbering consistent. Also, can be applied to the entire ..ipython block as a directive option with :verbatim:.
@savefig OUTFILE [IMAGE_OPTIONS]
save the figure to the static directory and insert it into the document, possibly binding it into a minipage and/or putting code/figure label/references to associate the code and the figure. Takes args to pass to the image directive (scale, width, etc can be kwargs); see image options for details.
@doctest
Compare the pasted in output in the ipython block with the output generated at doc build time, and raise errors if they don’t match. Also, can be applied to the entire ..ipython block as a directive option with :doctest:.
ipython_savefig_dir
The directory in which to save the figures. This is relative to the Sphinx source directory. The default is html_static_path.
ipython_rgxin
The compiled regular expression to denote the start of IPython input lines. The default is re.compile(r'In \[(\d+)\]:\s?(.*)\s*'). You shouldn’t need to change this.
ipython_rgxout
The compiled regular expression to denote the start of IPython output lines. The default is re.compile(r'Out\[(\d+)\]:\s?(.*)\s*'). You shouldn’t need to change this.
ipython_promptin
The string to represent the IPython input prompt in the generated ReST. The default is 'In [%d]:'. This expects that the line numbers are used in the prompt.
ipython_promptout
The string to represent the IPython output prompt in the generated ReST. The default is 'Out [%d]:'. This expects that the line numbers are used in the prompt.
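These options are ordinary Sphinx configuration values, so they can be set in a project’s conf.py. A minimal sketch, restating the defaults described above (the savefig directory value is just an example):

import re

# directory for @savefig output, relative to the Sphinx source dir
ipython_savefig_dir = 'savefig'

# regular expressions used to recognize IPython prompts
ipython_rgxin = re.compile(r'In \[(\d+)\]:\s?(.*)\s*')
ipython_rgxout = re.compile(r'Out\[(\d+)\]:\s?(.*)\s*')

# templates used when regenerating prompts
ipython_promptin = 'In [%d]:'
ipython_promptout = 'Out [%d]:'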
The IPython.utils.py3compat module provides a number of functions to make it easier to write code for Python 2 and 3. We also use 2to3 in the setup process to change syntax, and the io.open() function, which is essentially the built-in open function from Python 3.
The names provided are:
PY3: True in Python 3, False in Python 2.
decode, encode: Shortcuts to decode or encode strings, using sys.stdin.encoding by default, and using replacement characters on errors.
str_to_unicode, unicode_to_str, str_to_bytes, bytes_to_str: Convert to/from the platform’s standard str type (bytes in Python 2, unicode in Python 3). Each function is a no-op on one of the two platforms.
cast_unicode, cast_bytes: Accept unknown unicode or byte strings, and convert them accordingly.
cast_bytes_py2: Casts unicode to byte strings on Python 2, but doesn’t do anything on Python 3.
input: Refers to raw_input on Python 2, input on Python 3 (needed because 2to3 only converts calls to raw_input, not assignments to other names).
builtin_mod_name: The string name you import to get the builtins (__builtin__ –> builtins).
isidentifier: Checks if a string is a valid Python identifier.
open: Simple wrapper for Python 3 unicode-enabled open. Similar to codecs.open, but allows universal newlines. The current implementation only supports the very simplest use.
MethodType: types.MethodType from Python 3. Takes only two arguments: function, instance. The class argument for Python 2 is filled automatically.
doctest_refactor_print: Can be called on a string or a function (or used as a decorator). In Python 3, it converts print statements in doctests to print() calls. 2to3 does this for real doctests, but we need it in several other places. It simply uses a regex, which is good enough for the current cases.
u_format: Where tests use the repr() of a unicode string, it should be written '{u}"thestring"', and fed to this function, which will produce 'u"thestring"' for Python 2, and '"thestring"' for Python 3. Can also be used as a decorator, to work on a docstring.
'{u}"thestring"'
'u"thestring"'
'"thestring"'
execfile: Brought back for Python 3 (where it’s no longer a builtin), and upgraded to handle Unicode filenames on Python 2.
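A short sketch exercising a few of the names documented above (the strings are arbitrary examples):

from IPython.utils import py3compat

# bytes -> unicode and unicode -> bytes, regardless of interpreter version
text = py3compat.cast_unicode(b'hello')
data = py3compat.cast_bytes(u'hello')

if py3compat.PY3:
    assert isinstance(text, str)       # unicode is str on Python 3
else:
    assert isinstance(data, str)       # bytes is str on Python 2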
The tables below show the current RESTful web service architecture implemented in the IPython notebook. The listed URLs use the HTTP verbs to return representations of the desired resource.
We are in the process of creating a new dashboard architecture for the IPython notebook, which will allow the user to navigate through multiple directory files to find desired notebooks.
Miscellaneous
HTTP verb
URL
Action
GET
/.*/
Strips trailing slashes.
/api
Returns api version information.
*
/api/notebooks
Deprecated: redirect to /api/contents
/api/nbconvert
Notebook contents API.
/api/contents
Return a model for the base directory. See /api/contents/<path>/<file>.
/api/contents/<file>
Return a model for the given file in the base directory. See /api/contents/<path>/<file>.
/api/contents/<path>/<file>
Return a model for a file or directory. A directory model contains a list of models (without content) of the files and directories it contains.
PUT
Saves the file in the location specified by name and path. PUT is very similar to POST, but the requester specifies the name, whereas with POST the server picks the name. PUT /api/contents/path/Name.ipynb saves the notebook at path/Name.ipynb; the notebook structure is specified in the content key of the JSON request body, and if content is not specified, a new empty notebook is created. PUT /api/contents/path/Name.ipynb with JSON body {"copy_from": "[path/to/]OtherNotebook.ipynb"} copies OtherNotebook to Name.
path/Name.ipynb
content
PATCH
Renames a notebook without re-uploading content.
POST
Creates a new file or directory in the specified path. POST creates new files or directories; the server always decides on the name. POST /api/contents/path creates a new untitled notebook in path (if content is specified, upload a notebook, otherwise start empty). POST /api/contents/path with body {"copy_from": "OtherNotebook.ipynb"} creates a new copy of OtherNotebook in path.
DELETE
Delete a file in the given path.
/api/contents/<path>/<file>/checkpoints
GET lists the checkpoints for a file.
POST creates a new checkpoint.
/api/contents/<path>/<file>/checkpoints/<checkpoint_id>
POST restores a file from a checkpoint.
DELETE clears a checkpoint for a given file.
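As a hedged illustration of the contents API above, here is how the base directory might be listed with the third-party requests library; the server URL (and the absence of authentication) are assumptions, and the field names follow the directory model described above:

import requests

# assumes a notebook server running locally on the default port
resp = requests.get('http://localhost:8888/api/contents')
resp.raise_for_status()
model = resp.json()

# a directory model holds a list of content-free models
for entry in model['content']:
    print('%s (%s)' % (entry['name'], entry['type']))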
Kernel API
URI
/api/kernels
Return a model of all kernels.
/api/kernels/<kernel_id>
Return a model of kernel with given kernel id.
Start a new kernel with default or given name.
Shutdown the given kernel.
/api/kernels/<kernel_id>/<action>
Perform action on kernel with given kernel id. Actions can be “interrupt” or “restart”.
WS
/api/kernels/<kernel_id>/channels
Websocket stream
/api/kernelspecs
Return a spec model of all available kernels.
/api/kernelspecs/<kernel_name>
Return a spec model of all available kernels with a given kernel name.
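Similarly, a minimal sketch of driving the kernels API with requests (same assumptions as above; the interrupt call follows the /api/kernels/<kernel_id>/<action> pattern):

import requests

base = 'http://localhost:8888/api/kernels'

# start a new kernel with the default kernel name
kernel = requests.post(base).json()

# interrupt it via the action endpoint, then shut it down
requests.post('%s/%s/interrupt' % (base, kernel['id']))
requests.delete('%s/%s' % (base, kernel['id']))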
Sessions API
/api/sessions
Return a model of the active sessions.
/api/sessions
If the session does not already exist, create a new session with the given notebook name and path and the given kernel name. Return the active session.
/api/sessions/<session_id>
Return model of active session with given session id.
Return model of active session with notebook name or path of session with given session id.
Delete model of active session with given session id.
Clusters API
/api/clusters
Return model of clusters.
/api/clusters/<cluster_id>
Return model of given cluster.
/api/clusters/<cluster_id>/<action>
Perform the given action on the cluster. Valid actions are "start" and "stop".
This chart shows the web-services in the single directory IPython notebook.
/notebooks
return list of dicts with each notebook’s info
if sending a body, save that body as a new notebook; if no body, create a new notebook.
/notebooks/<notebook_id>
get JSON data for notebook
saves an existing notebook with body data
deletes the notebook with the given ID
This chart shows the architecture for the IPython notebook website.
/
navigates user to dashboard of notebooks and clusters.
/<notebook_id>
go to the webpage for that notebook
/new
creates a new notebook with profile (or default, if no profile exists) settings
/<notebook_id>/copy
opens a duplicate copy of the notebook with the given ID in a new tab
/<notebook_id>/print
prints the notebook with the given ID; if notebook doesn’t exist, displays error message
/login
navigates to login page; if no user profile is defined, it navigates user to dashboard
/logout
logs out of current profile, and navigates user to login page
This chart shows the Web services that act on the kernels and clusters.
/kernels
return the list of kernel IDs currently running
/kernels/<kernel_id>
—
/kernels/<kernel_id>/<action>
performs action (restart/kill) on the kernel with the given ID
/kernels/<kernel_id>/iopub
/kernels/<kernel_id>/shell
/rstservice/render
/files/(.*)
/clusters
returns a list of dicts with each cluster’s information
/clusters/<profile_id>/<cluster_action>
performs the action (start/stop) on the cluster with the given profile ID
/clusters/<profile_id>
returns the JSON data for cluster with given profile ID
(Note: this page is not currently consistent with IPython/Jupyter master)
Javascript events are used to notify unrelated parts of the notebook interface when something happens. For example, if the kernel is busy executing code, it may send an event announcing as much, which can then be picked up by other services, like the notification area. For details on how the events themselves work, see the jQuery documentation.
This page documents the core set of events, and explains when and why they are triggered.
command_mode.Cell
create.Cell
delete.Cell
edit_mode.Cell
select.Cell
output_appended.OutputArea
preset_activated.CellToolbar
preset_added.CellToolbar
app_initialized.DashboardApp
sessions_loaded.Dashboard
When the Jupyter Notebook browser window opens for the first time and initializes the Dashboard App. The Dashboard App lists the files and notebooks in the current directory. Additionally, it lets you create and open new Jupyter Notebooks.
execution_request.Kernel
input_reply.Kernel
kernel_autorestarting.Kernel
kernel_busy.Kernel
kernel_connected.Kernel
kernel_connection_failed.Kernel
kernel_created.Kernel
kernel_created.Session
kernel_dead.Kernel
kernel_dead.Session
kernel_disconnected.Kernel
kernel_idle.Kernel
kernel_interrupting.Kernel
kernel_killed.Kernel
kernel_killed.Session
kernel_ready.Kernel
kernel_reconnecting.Kernel
kernel_restarting.Kernel
kernel_starting.Kernel
send_input_reply.Kernel
shell_reply.Kernel
spec_changed.Kernel
The kernel has been successfully created or re-created through /api/kernels, but a connection to it has not necessarily been established yet.
The kernel has been successfully created or re-created through /api/sessions, but a connection to it has not necessarily been established yet.
An attempt is being made to reconnect (via websockets) to the kernel after having been disconnected.
A connection has been established to the kernel. This is triggered as soon as all websockets (e.g. to the shell, iopub, and stdin channels) have been opened. This does not necessarily mean that the kernel is ready to do anything yet, though.
The kernel is starting. This is triggered once when the kernel process is starting up, and can be sent as a message by the kernel, or may be triggered by the frontend if it knows the kernel is starting (e.g., it created the kernel and is connected to it, but hasn’t been able to communicate with it yet).
Like kernel_idle.Kernel, but triggered after the kernel has fully started up.
The kernel is restarting. This is triggered at the beginning of a restart call to /api/kernels.
The kernel is restarting on its own, which probably also means that something happened to cause the kernel to die. For example, running the following code in the notebook would cause the kernel to autorestart:
import os
os._exit(1)
The kernel is being interrupted. This is triggered at the beginning of an interrupt call to /api/kernels.
The connection to the kernel has been lost.
Not only was the connection lost, but it was lost due to an error (i.e., we did not tell the websockets to close).
The kernel’s execution state is ‘idle’.
The kernel’s execution state is ‘busy’.
The kernel has been manually killed through /api/kernels.
The kernel has been manually killed through /api/sessions.
This is triggered if the kernel dies, and the kernel manager attempts to restart it, but is unable to. For example, the following code run in the notebook will cause the kernel to die and for the kernel manager to be unable to restart it:
import os
from IPython.kernel.connect import get_connection_file
with open(get_connection_file(), 'w') as f:
    f.write("garbage")
os._exit(1)
The kernel could not be started through /api/sessions. This might be because the requested kernel type isn’t installed. Another reason for this message is that the kernel died or was killed, but the session wasn’t.
app_initialized.NotebookApp
autosave_disabled.Notebook
autosave_enabled.Notebook
checkpoint_created.Notebook
checkpoint_delete_failed.Notebook
checkpoint_deleted.Notebook
checkpoint_failed.Notebook
checkpoint_restore_failed.Notebook
checkpoint_restored.Notebook
checkpoints_listed.Notebook
command_mode.Notebook
edit_mode.Notebook
list_checkpoints_failed.Notebook
notebook_load_failed.Notebook
notebook_loaded.Notebook
notebook_loading.Notebook
notebook_rename_failed.Notebook
notebook_renamed.Notebook
notebook_restoring.Notebook
notebook_save_failed.Notebook
notebook_saved.Notebook
notebook_saving.Notebook
rename_notebook.Notebook
selected_cell_type_changed.Notebook
set_dirty.Notebook
set_next_input.Notebook
trust_changed.Notebook
open_with_text.Pager
rebuild.QuickHelp
The following are instructions on how to get an IPython development environment up and running without having to install anything on your host machine, other than boot2docker (https://github.com/boot2docker/boot2docker) and docker (https://www.docker.com/).
Install boot2docker. There are multiple ways to install, depending on your environment. See the boot2docker docs.
On a Mac OS X host with Homebrew installed:
$ brew install boot2docker docker
$ boot2docker init
$ boot2docker up
The boot2docker CLI communicates with the docker daemon on the boot2docker VM. To do this, we must set some environment variables, e.g. DOCKER_HOST:
$ $(boot2docker shellinit)
To view the IP address of the VM:
$ boot2docker ip
192.168.59.103
$ git clone --recursive https://github.com/ipython/ipython.git
Use the Dockerfile in the cloned ipython directory to build a Docker image.
$ cd ipython
$ docker build --rm -t ipython .
Run a container using the new image. We mount the entire ipython source tree on the host into the container at /srv/ipython, so that changes we make to the source on the host are immediately reflected in the container.
# change to the root of the git clone
$ cd ipython
$ docker run -it --rm -p 8888:8888 --workdir /srv/ipython --name ipython-dev -v `pwd`:/srv/ipython ipython /bin/bash
To list the running container from another shell on the host:
$ $(boot2docker shellinit)
$ docker ps
CONTAINER ID   IMAGE     COMMAND       CREATED         STATUS          PORTS                    NAMES
f6065f206519   ipython   "/bin/bash"   1 minutes ago   Up 1 minutes    0.0.0.0:8888->8888/tcp   ipython-dev
Once in the container, you’ll need to uninstall the ipython package and re-install in editable mode to enable your dev changes to be reflected in your environment.
container $ pip uninstall ipython

# pip install ipython in editable mode
container $ cd /srv
container $ ls
ipython
container $ pip install -e ipython
container $ ipython notebook --no-browser --ip=*
On your host, run the following command to get the IP of the boot2docker VM if you forgot:
# on host
$ boot2docker ip
192.168.59.103
Then visit it in your browser:
# browser
http://192.168.59.103:8888
As a shortcut on a Mac, you can run the following in a terminal window (or make it a bash alias):
$ open http://$(boot2docker ip 2>/dev/null):8888
IPython makes it very easy to create wrapper kernels using its kernel framework. It requires extending the Kernel class and implementing a set of methods for core functions like execute, history, etc. It's also possible to write a full-blown kernel in a language of your choice by implementing listeners for all the ZMQ ports.
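As an illustration of the wrapper-kernel pattern, here is a minimal sketch modeled on the familiar echo-kernel example. It assumes the ipykernel package (the current home of the Kernel base class); the EchoKernel name and the reply contents are illustrative.

from ipykernel.kernelbase import Kernel

class EchoKernel(Kernel):
    # metadata the frontend uses to describe this kernel
    implementation = 'Echo'
    implementation_version = '1.0'
    banner = 'Echo kernel - repeats whatever you type'
    language_info = {
        'name': 'echo',
        'mimetype': 'text/plain',
        'file_extension': '.txt',
    }

    def do_execute(self, code, silent, store_history=True,
                   user_expressions=None, allow_stdin=False):
        # send the input straight back on the iopub channel
        if not silent:
            stream_content = {'name': 'stdout', 'text': code}
            self.send_response(self.iopub_socket, 'stream', stream_content)
        # the reply must follow the execute_reply message spec
        return {'status': 'ok',
                'execution_count': self.execution_count,
                'payload': [],
                'user_expressions': {}}

if __name__ == '__main__':
    from ipykernel.kernelapp import IPKernelApp
    IPKernelApp.launch_instance(kernel_class=EchoKernel)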
The key problem for any kernel implemented by these methods is to ensure that it meets the message specification. The kerneltest command is a means to test the installed kernel against the message spec and validate the results.
The kerneltest tool is part of IPython.testing and is also included in the scripts, similar to iptest. It takes two parameters: the name of the kernel to test and the test script file. The test script file should be in JSON format, as described in the next section.
kerneltest python test_script.json
You can also pass an optional message spec version to the command. At the moment only version 5 is supported, but as newer versions are released this can be used to test the kernel against a specific version of the message spec.
kerneltest python test_script.json 5
The kernel to be tested needs to be installed, with its kernelspec available in the user's IPython directory. The tool will instantiate the kernel and send the commands over ZMQ. For each command executed on the kernel, the tool will validate the reply to ensure that it matches the message specification. In some cases the output is also checked, but the reply is always returned and printed to the console. This can be used to validate that, apart from meeting the message spec, the kernel also produces the correct output.
The test script file is a simple JSON file that specifies the command to execute and the test code to execute for the command.
{ "command":{ "test_code":<code> } }
For some commands in the message specification, like kernel_info, there is no need to specify the test_code parameter. The tool validates whether it has all the inputs needed to execute a command and will print an error to the console if it finds a missing parameter. Since the validation is built in, and only required parameters are passed, it is possible to add additional fields in the JSON file for test documentation.
{ "command":{ "test_name":"sample test", "test_description":"sample test to show how the test script file is created", "test_code":<code> } }
A sample test script for the redis kernel would look like this:
{ "execute":{ "test_code":"get a", "comments":"test basic code execution" }, "complete":{ "test_code":"get", "comments":"test getting command auto complete" }, "kernel_info":{ "comments":"simple kernel info check" }, "single_payload":{ "test_code":"get a", "comments":"test one payload" }, "history_tail":{ "test_code":"get a", "comments":"test tail history" }, "history_range":{ "test_code":"get a", "comments":"test range history" }, "history_search":{ "test_code":"get a", "comments":"test search history" } }
A template for new Python files in IPython: template.py
Whether you are a new, returning, or current contributor to Project Jupyter’s subprojects or IPython, we welcome you.
Project Jupyter has seen steady growth over the past several years, and it is wonderful to see the many ways people are using these projects. As a result of this rapid expansion, our project maintainers are balancing many requirements, needs, and resources. We ask contributors to take some time to become familiar with our contribution guides and spend some time learning about our project communication and workflow.
The Contributor Guides and individual project documentation offer guidance. If you have a question, please ask us. Community Resources provides information on our commonly used communication methods.
We are very pleased to have you as a contributor, and we hope you will find your impact on the projects valuable. Thank you for sharing your interests, ideas, and skills with us.
Absolutely ✅. There are always ways to contribute to this community! Whether it is contributing code, improving documentation and communications, teaching others, or participating in conversations in the community, we welcome and value your contribution!
The following sections offer inspiration for the different ways you can contribute to the Jupyter ecosystem. They are not exhaustive; if you can think of any other way to make an improvement, we appreciate it!
One of the most important parts of the Jupyter ecosystem is its documentation. Good documentation makes it easier for users to learn how to use the tools. It also makes it easier to teach others, and to maintain and improve the code itself. There are many ways to improve documentation, such as reading tutorials and reporting confusing parts, finding typos and minor errors in docs, writing your own guides and tutorials, improving docstrings within the code, and improving documentation style and design.
If you’d like to improve documentation in the Jupyter community, check out the Documentation Guide.
There are many different codebases that make up the tools in the Jupyter ecosystem. These are split across many repositories in several GitHub organizations. They cover many different parts of interactive computing, such as user interfaces, kernels, shared infrastructure, interactive widgets, or structured documents.
We recommend checking out the Developer Guide for more information about how you can find the right project to contribute to, and where to go next.
The most important part of Jupyter is its community - a large and diverse group of people spread across the globe. One of the best ways to contribute to Jupyter is simply to be a positive and helpful member of this community. Whether it is participating in online conversations, offering to help others, coming to community meetings, or teaching others about Jupyter, there are many ways to improve the Jupyter community. For more information, we recommend starting with the Community page.
When an internet request and response occurs, a Content-Type header is passed. A mimetype, also referred to as a MIME type, tells the application or browser how the returned content should be handled or used, based on its type. A MIME type is made up of a MIME group (e.g. application, image, audio) and a MIME subtype. For example, in the MIME type image/png, the group is image and the subtype is png.
As types may contain vendor-specific items, a custom vendor-specific MIME type, vnd, can be used. A vendor-specific MIME type contains vnd, such as application/vnd.jupyter.cells.
application/vnd.jupyter
application/vnd.jupyter.dragindex - used by nbdime
application/x-ipynb+json - for notebooks
text/html
    metadata:
        isolated: boolean - HTML should be rendered inside an <iframe>.
application/vnd.geo+json - GeoJSON spec, now deprecated and replaced by application/geo+json
application/geo+json - preferred GeoJSON spec
application/vnd.plotly.v1+json - Plotly JSON Schema
application/vdom.v1+json - Virtual DOM spec
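To make the mechanics concrete, here is a minimal sketch of publishing data under explicit MIME types from a Python kernel using IPython.display.display. The raw mimebundle form and the metadata argument are standard IPython features; the content itself is illustrative.

from IPython.display import display

# a raw mimebundle: the frontend picks the richest representation it can render
display({
    'text/plain': 'a plain-text fallback',
    'text/html': '<b>an HTML representation</b>',
}, raw=True)

# per-mimetype metadata travels alongside the data; this asks the frontend
# to render the HTML inside an <iframe> (the 'isolated' flag described above)
display({'text/html': '<h1>isolated output</h1>'},
        raw=True,
        metadata={'text/html': {'isolated': True}})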
Glossary
command line: The terminal or console window where you type commands.
Command Prompt: On Windows, this is the application where commands are typed into a window for execution.
conda: The package manager for Anaconda.
config: Refers to the configuration files and process.
kernel: A kernel provides programming language support in Jupyter. IPython is the default kernel. Additional kernels include R, Julia, and many more.
Notebook Dashboard: The notebook user interface which shows a list of the notebooks, files, and subdirectories in the directory where the notebook server is started.
pip: Python package manager.
profiles: Not available in Jupyter. In IPython 3, profiles are collections of configuration and runtime files.
REPL: read-eval-print-loop.
terminal: A window used to type in commands to be executed (Linux and OS X).
widget: A user interface component, similar to a plugin, that allows customized input, such as a slider.
Site
Jupyter website
Keep up to date on Jupyter
IPython website
Learn more about IPython
jupyter/help repo
Start here for help and support questions
Jupyter mailing list
General discussion of Jupyter’s use
Jupyter in Education group
Discussion of Jupyter’s use in education
NumFOCUS
Promotes world-class, innovative, open source scientific software
Donate to Project Jupyter
Please contribute to open science collaboration and sustainability