Coding Best Practices#

When first getting started with coding, developers often write code that is “good enough” and then stop making improvements. However, this code may be hard to understand, contain bugs, and be hard to reuse or extend. Professional software developers have developed best practices to help avoid these problems.

Use consistent code style#

Python is pretty flexible about how code can be formatted. But there is a standard code style that is easy to use and helps make your code easier to read.

Here is some code before formatting. With the random use of spaces and long lines of code, this is pretty hard to read.

import polars as pl
from datascipsych import datasets

def myfunction( x, y ):
    z  = x+y #add some numbers
    return z
l=[1,2,3,4]
d={'a':1,"b":2,"c":3}
df = pl.read_csv(datasets.get_dataset_file("Morton2013"), null_values="n/a").filter(pl.col("study")).group_by("subject", "list_type", "input").agg(pl.col("recall").mean())

Luckily, we can use Black, a tool for automatic reformatting of Python code, to reformat it.

Black can be run in different ways, including as a command-line tool and as a plugin for VSCode. To use the VSCode plugin, install the Black Formatter extension from Microsoft, then right-click in a code cell and select Format Cell. You can also reformat code modules by right-clicking in the file and selecting Format Document.
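Black can also be installed with pip and run from a terminal on a single file or a whole folder. Here is a minimal sketch of what that looks like (the file and folder names are just placeholders):

pip install black
black analysis.py            # reformat one file in place
black src/                   # reformat every .py file in a folder
black --check analysis.py    # report whether anything would change, without editing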

Try running the Black plugin on the code above. This will make the code much easier to read.

After formatting, the code will look like this.

import polars as pl
from datascipsych import datasets


def myfunction(x, y):
    z = x + y  # add some numbers
    return z


l = [1, 2, 3, 4]
d = {"a": 1, "b": 2, "c": 3}
df = (
    pl.read_csv(datasets.get_dataset_file("Morton2013"), null_values="n/a")
    .filter(pl.col("study"))
    .group_by("subject", "list_type", "input")
    .agg(pl.col("recall").mean())
)

Note that Black changes a lot of subtle things, like formatting how the list and dictionary are defined. It also splits up Polars commands in a helpful way, to make it easier to see what the various operations are.

Black automatically reformats to match Python formatting guidelines, plus some additional rules that Black uses to increase consistency. The name “Black” comes from a quote from Henry Ford about the Model T car: “Any customer can have a car painted any color that he wants so long as it is black”.

I used to have a lot of recommendations for how to format code. Now I tell people: “just use Black.”

Of course, Black won’t change anything about how the code runs, so there are some recommended guidelines that it won’t implement. For example, it’s recommended that module import statements be placed at the top of a module. This makes it easier to see what modules are being used in the file and how they are named.

def myfunction(x, y):
    """Add two numbers."""
    z = x + y
    return z


import numpy as np  # not recommended (comes after other code)
a = np.arange(6)

We should generally move the import statement to the top of the file before other code, unless there’s a good reason to import it somewhere else.

import numpy as np


def myfunction(x, y):
    """Add two numbers."""
    z = x + y
    return z


a = np.arange(6)

Note also that Python style guidelines recommend having two blank lines above and below each function definition, to make them easier to spot separately from other code. Black will add these lines automatically.

Exercise: code style#

Use Black to reformat the following code. Also, make it so the import statements are all at the top of the cell.

import numpy as np
b = np.zeros((1,2))
import polars as pl
data = pl.DataFrame({"trial":[1,2,3,4], "correct":[0,1,1,0], "response_time":[1.2,3.4,2.3,5.6]})

Write code that is easy to read#

Code is read more often than it is written, so it is important to make code as easy to read as possible. You can use variable names and comments to communicate intent and clarify how things work.

For example, say we have a DataFrame with response times for different conditions.

df_rt = pl.DataFrame(
    {
        "subject": [1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3],
        "condition": [1, 1, 1, 2, 2, 2, 1, 1, 1, 2, 2, 2, 1, 1, 1, 2, 2, 2],
        "response_time": [1.2, 1.3, 1.6, 1.1, 1.0, 0.9, 0.3, 1.7, 1.8, 2.2, 2.3, 1.9, 1.8, 4.2, 0.4, 1.0, 2.3, 1.4]
    }
)

Say we have the following code to calculate the mean response time for each condition.

df_new = (
    df_rt.group_by("subject", "condition")
    .agg(pl.col("response_time").mean())
    .sort("subject", "condition")
)
df_new.head(2)
shape: (2, 3)
┌─────────┬───────────┬───────────────┐
│ subject ┆ condition ┆ response_time │
│ ---     ┆ ---       ┆ ---           │
│ i64     ┆ i64       ┆ f64           │
╞═════════╪═══════════╪═══════════════╡
│ 1       ┆ 1         ┆ 1.366667      │
│ 1       ┆ 2         ┆ 1.0           │
└─────────┴───────────┴───────────────┘

It gets the job done, but it’s a little hard to follow. The output has a generic name of df_new, which doesn’t explain what it is. We have to read the Polars code to know what is happening.

We can use variable names and/or comments to make this more obvious. We can add a comment by placing # in our code and writing text after it. Comments will be ignored by Python; they are just there to make code easier to understand.

# Get the mean response time for each combination of subject and condition
mean_rt_condition = (
    df_rt.group_by("subject", "condition")
    .agg(pl.col("response_time").mean())
    .sort("subject", "condition")
)

In this version, we have a comment that summarizes what the block of code below it is doing. We have also renamed the output variable from the generic df_new to mean_rt_condition, to communicate that this variable holds the mean response time by subject and condition.

Choosing informative variable names#

We always have to make some choices when naming variables. Here are some other options for what we could have named the mean_rt_condition variable:

m_rt_condition: This abbreviates “mean” as “m”. If we use this convention consistently in our code, it might be a little easier to type and still be comprehensible. But if only one mean variable is used in a code project, you’re probably better off spelling out “mean” to make it easier to read.

mean_response_time_condition: This spells out “response time” instead of abbreviating it as “rt”, which makes the variable name a lot longer. Using “rt” for response time is common practice, so it is probably not worth writing it out.

df_mean_rt_condition: This adds “df” at the beginning to indicate that this variable is a DataFrame. That might be helpful to clarify in some contexts. In an analysis notebook, there will be a lot of DataFrame variables, so it’s usually not necessary in that context.

Variable name conventions#

In Python, there are different types of variable names you can use:

lower_case_with_underscores: This is how Python variables should usually be written. Most Python code follows this convention, so it will be easy for people to read. It’s also relatively easy to type.

UPPER_CASE_WITH_UNDERSCORES: Used for variables that should not be changed. For example, say you want to set the figure height to be the same for all plots you make in a notebook. You could set FIG_HEIGHT = 2 and refer to the FIG_HEIGHT variable when calling a Seaborn function, like sns.somefunction(..., height=FIG_HEIGHT). The use of uppercase signals that you don’t mean for that variable to be changed anywhere (see the sketch after this list).

CapitalizedWords: Used for class names in Python. For example, DataFrame is a class, so it is written using capitalized words.

Some_Mix_of_Capitalization_and_Underscores: Combining capitalization and underscores makes variables a lot harder to type, so avoid doing this.
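Here is a minimal sketch of the constant-style convention in practice, using Seaborn (the plotting calls reuse the df_rt DataFrame defined earlier and are just an illustration):

import seaborn as sns

# Uppercase signals that this value should not be reassigned anywhere in the notebook
FIG_HEIGHT = 2

sns.relplot(data=df_rt, x="condition", y="response_time", height=FIG_HEIGHT)
sns.relplot(data=df_rt, x="subject", y="response_time", height=FIG_HEIGHT)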

See the official Python Style Guide for more information about style and naming conventions.

Coding is a form of writing#

When developing code, keep in mind that coding is a form of writing. You aren’t just writing for the computer, but also for an audience of humans.

Try to think about who your audience might be in the future. Sometimes, that will just be the future version of you. You may need some help to remember what you wrote, and why you wrote it!

Other times, you will need to also think about whether your code is readable to others. It can help a lot to get feedback on your code, to see where people get confused.

Exercise: write code that is easy to read#

The code below creates a DataFrame with accuracy for trials in different conditions from a decision-making study, then calculates mean accuracy for each subject in each condition. Edit the code to use new variable names that communicate what they represent, instead of the unhelpful variable names d and x. Add a comment before each code block to explain what it is doing.

There isn’t a specific correct answer here. Writing code that is easy to read is more of an art than a science.

d = pl.DataFrame(
    {
        "subject": [1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3],
        "condition": ["A", "A", "B", "B", "A", "A", "B", "B", "A", "A", "B", "B"],
        "correct": [1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1],
    }
)

x = (
    d.group_by("subject", "condition")
    .agg(pl.col("correct").mean())
    .sort("subject", "condition")
)

DRY: Don’t repeat yourself#

The DRY principle says that we should avoid repeating ourselves when writing code. Programming languages are designed so that we should not have to write the same code over and over again. Repetitive code is harder to extend and debug.

For example, say we have data from 8 subjects, in separate files, that we want to read and analyze. One way to do this is by running 8 different calls to pl.read_csv, changing the filename each time and assigning each one to a variable. After reading in the files, we can combine them into one DataFrame using pl.concat.

df1 = pl.read_csv("data/sub-01_beh.csv")
df2 = pl.read_csv("data/sub-02_beh.csv")
df3 = pl.read_csv("data/sub-03_beh.csv")
df4 = pl.read_csv("data/sub-04_beh.csv")
df5 = pl.read_csv("data/sub-05_beh.csv")
df6 = pl.read_csv("data/sub-06_beh.csv")
df7 = pl.read_csv("data/sub-07_beh.csv")
df8 = pl.read_csv("data/sub-08_beh.csv")
df_all = pl.concat([df1, df2, df3, df4, df5, df6, df7, df8])

If we do lots of copying and pasting, this is relatively simple to write, but hard to work with in the future. What if the study is ongoing, and more subjects are being added? You would have to add and edit code each time to add those new subjects. What if the folder that the data are in changes, say to rawdata instead of data? You would have to edit the path of each file.

There is a better way: don’t repeat yourself.

In this example, we use a for loop instead. We have to do more thinking in advance, to figure out how to write the for loop, create each filename from the subject number, and add each DataFrame to our list of DataFrames. But the code doesn’t need to be edited much as more subjects are added to the dataset. You can just change the n_subj variable.

n_subj = 8
df_list = []
for i in range(1, n_subj + 1):
    filename = f"data/sub-{i:02}_beh.csv"
    df = pl.read_csv(filename)
    df_list.append(df)
df_all = pl.concat(df_list)

In f-strings, we can optionally use various format specifiers. Here, we use :02 to indicate that the number we are formatting should be padded with zeros to make a string with two digits.
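For example, here is a quick sketch of how the :02 specifier behaves on its own:

for i in [1, 5, 12]:
    print(f"data/sub-{i:02}_beh.csv")
# data/sub-01_beh.csv
# data/sub-05_beh.csv
# data/sub-12_beh.csv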

Writing functions can also help you avoid repeating yourself when coding. For example, say we want to exclude trials where the response time was an outlier, according to a standard criterion for detecting outliers (values more than 1.5 times the interquartile range below the first quartile or above the third quartile). Say that we have two DataFrames with data from different experiments, but we want to apply the same sort of filtering to both.

df_rt1 = pl.DataFrame(
    {
        "subject": ["01", "01", "01", "01", "02", "02", "02", "02"],
        "condition": ["A", "A", "B", "B", "A", "A", "B", "B"],
        "response_time": [0.3, 0.6, 1.2, 0.9, 0.8, 0.4, 0.5, 3.4],
    }
)
df_rt2 = pl.DataFrame(
    {
        "subject": [1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3],
        "condition": [1, 1, 1, 2, 2, 2, 1, 1, 1, 2, 2, 2, 1, 1, 1, 2, 2, 2],
        "response_time": [1.2, 1.3, 1.6, 1.1, 1.0, 0.9, 0.3, 1.7, 1.8, 2.2, 2.3, 1.9, 1.8, 4.2, 0.4, 1.0, 2.3, 1.4]
    }
)

We can run this filtering on each DataFrame individually, like below, repeating the same long expression for each DataFrame. If we want to run this calculation again on another dataset in another context, such as a different analysis notebook, we’ll have to remember how to define Q1, Q3, the IQR, etc. Each time we write the code again, there’s a risk that we could mis-remember something and introduce a bug.

rt = pl.col("response_time")
q1 = rt.quantile(0.25)
q3 = rt.quantile(0.75)
iqr = q3 - q1
df_rt1_filt = df_rt1.filter(~((rt < q1 - 1.5 * iqr) | (rt > q3 + 1.5 * iqr)))
df_rt2_filt = df_rt2.filter(~((rt < q1 - 1.5 * iqr) | (rt > q3 + 1.5 * iqr)))

Instead, we could write a function. For example, let’s make a function that takes in a DataFrame and filters out outliers.

def filter_rt_outliers(df):
    """Remove trials where the response time is an outlier."""
    rt = pl.col("response_time")
    q1 = rt.quantile(0.25)
    q3 = rt.quantile(0.75)
    iqr = q3 - q1
    return df.filter(~((rt < q1 - 1.5 * iqr) | (rt > q3 + 1.5 * iqr)))

Now we don’t need to remember the formula every time we remove outliers; we can just call the function.

df_rt1_filt = filter_rt_outliers(df_rt1)
df_rt2_filt = filter_rt_outliers(df_rt2)

If we need to remember how it works, that’s easy to look up, because the formula is only defined in one place.

Exercise: Don’t repeat yourself#

The code below calculates a mean for each subject and condition, then calculates the mean and SEM for response time across subjects. It runs exactly the same operations on df_rt1 and df_rt2.

Rewrite the code to use a function instead. Your function should take a DataFrame (df_rt1 or df_rt2, but also any DataFrame with subject, condition, and response_time columns). Your function should return a stats DataFrame with the mean and SEM for each condition. Use your function to calculate statistics for the two DataFrames.

Add a one-line docstring to explain what your function does.

Improve your function by editing it to sort the stats output by condition before returning it.

stats1 = (
    df_rt1.group_by("subject", "condition")
    .agg(pl.col("response_time").mean())
    .group_by("condition")
    .agg(
        mean=pl.col("response_time").mean(),
        sem=pl.col("response_time").std() / pl.col("response_time").len().sqrt(),
    )
)
stats2 = (
    df_rt2.group_by("subject", "condition")
    .agg(pl.col("response_time").mean())
    .group_by("condition")
    .agg(
        mean=pl.col("response_time").mean(),
        sem=pl.col("response_time").std() / pl.col("response_time").len().sqrt(),
    )
)
display(stats1)
display(stats2)
shape: (2, 3)
┌───────────┬──────────┬──────────┐
│ condition ┆ mean     ┆ sem      │
│ ---       ┆ ---      ┆ ---      │
│ str       ┆ f64      ┆ f64      │
╞═══════════╪══════════╪══════════╡
│ "A"       ┆ 0.525    ┆ 0.075    │
│ "B"       ┆ 1.5      ┆ 0.45     │
└───────────┴──────────┴──────────┘
shape: (2, 3)
┌───────────┬──────────┬──────────┐
│ condition ┆ mean     ┆ sem      │
│ ---       ┆ ---      ┆ ---      │
│ i64       ┆ f64      ┆ f64      │
╞═══════════╪══════════╪══════════╡
│ 2         ┆ 1.566667 ┆ 0.327165 │
│ 1         ┆ 1.588889 ┆ 0.273749 │
└───────────┴──────────┴──────────┘

Enhance flexibility using soft-coding#

It’s rare to write code that does everything you need on the first draft. Often, you will get code from someone else that does a lot of what you need to do, but will not work for your purposes without changes.

Don’t be afraid to make changes. Code is meant to be revised, especially if you’re using Git to track your changes.

Let’s go back to the outlier filtering example. What limitations does it have?

def filter_rt_outliers(df):
    """Remove trials where the response time is an outlier."""
    rt = pl.col("response_time")
    q1 = rt.quantile(0.25)
    q3 = rt.quantile(0.75)
    iqr = q3 - q1
    return df.filter(~((rt < q1 - 1.5 * iqr) | (rt > q3 + 1.5 * iqr)))

One problem is that it assumes that the column with response times is named "response_time". Another issue is that it assumes you only ever want to filter out trials whose response time is an outlier. But you could also have some other measure with outliers. For example, say you wanted to exclude participants whose performance is an outlier.

We can solve both of these problems by adding an input that determines the column to use. This is called shifting from hard-coding, where some value is written directly in the code, to soft-coding, where the value is taken in by the function, making the function more flexible.

def filter_outliers(df, column):
    """Remove trials where some measure is an outlier."""
    x = pl.col(column)
    q1 = x.quantile(0.25)
    q3 = x.quantile(0.75)
    iqr = q3 - q1
    return df.filter(~((x < q1 - 1.5 * iqr) | (x > q3 + 1.5 * iqr)))

Note that the column doesn’t necessarily represent response time anymore, so we have renamed the rt variable to x to reflect that.
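For example, we can now apply the same function to each dataset by passing the column name (the accuracy call is hypothetical, just to show that other measures would work too):

df_rt1_filt = filter_outliers(df_rt1, "response_time")
df_rt2_filt = filter_outliers(df_rt2, "response_time")
# df_clean = filter_outliers(df_accuracy, "accuracy")  # hypothetical other measure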

It turns out that we can make the outlier-detection code even more flexible. The is_outlier function below can be passed any column expression, such as pl.col("response_time"), and returns an expression that evaluates whether each value in that column is an outlier. Now, instead of hard-coding the filter operation, we can use the expression in any way we want.

def is_outlier(col):
    """Return an expression to evaluate whether elements of a column are outliers."""
    q1 = col.quantile(0.25)
    q3 = col.quantile(0.75)
    iqr = q3 - q1
    return (col < q1 - 1.5 * iqr) | (col > q3 + 1.5 * iqr)

With the is_outlier function, we can now filter a DataFrame using outlier detection and other types of filtering, like getting trials in a given condition, all together.

df_rt1.filter(~is_outlier(pl.col("response_time")) & (pl.col("condition") == "A"))
shape: (4, 3)
┌─────────┬───────────┬───────────────┐
│ subject ┆ condition ┆ response_time │
│ ---     ┆ ---       ┆ ---           │
│ str     ┆ str       ┆ f64           │
╞═════════╪═══════════╪═══════════════╡
│ "01"    ┆ "A"       ┆ 0.3           │
│ "01"    ┆ "A"       ┆ 0.6           │
│ "02"    ┆ "A"       ┆ 0.8           │
│ "02"    ┆ "A"       ┆ 0.4           │
└─────────┴───────────┴───────────────┘

This method also allows us to do more advanced things. For example, we can use the over method to calculate outliers relative to each subject’s data, rather than across the whole dataset.

df_rt1.filter(~is_outlier(pl.col("response_time").over("subject")))
shape: (7, 3)
┌─────────┬───────────┬───────────────┐
│ subject ┆ condition ┆ response_time │
│ ---     ┆ ---       ┆ ---           │
│ str     ┆ str       ┆ f64           │
╞═════════╪═══════════╪═══════════════╡
│ "01"    ┆ "A"       ┆ 0.3           │
│ "01"    ┆ "A"       ┆ 0.6           │
│ "01"    ┆ "B"       ┆ 1.2           │
│ "01"    ┆ "B"       ┆ 0.9           │
│ "02"    ┆ "A"       ┆ 0.8           │
│ "02"    ┆ "A"       ┆ 0.4           │
│ "02"    ┆ "B"       ┆ 0.5           │
└─────────┴───────────┴───────────────┘

Using the over method makes it so that we will not reject data from a subject whose responses are very fast or very slow on average, because outliers are assessed separately within each subject’s data.

We started with a useful function that allowed us to filter trials to remove response time outliers. By focusing only on the core functionality of writing an expression to detect outliers, we ended up with a much more flexible function that can be used in multiple ways.

Exercise: enhance flexibility using soft-coding#

The function below does just one thing: it reads a CSV file from the data directory. It doesn’t do it very well, though, because it is hard-coded to load one specific file.

Edit the function to make it so it can read in any one file, based on a new data_file input to the read_data function. Test it by passing "data/sub-01_beh.csv" as the data_file.

Advanced#

Change your function so it takes in just the subject string (for example, "01" or "02") and returns the corresponding DataFrame.

def read_data():
    """Read a CSV file from the data directory."""
    df = pl.read_csv("data/sub-01_beh.csv")
    return df

# your code here

Track changes to your code#

New coders may wonder whether version tracking is necessary. Why not just edit code as needed, without tracking changes explicitly?

Professional software developers, however, consistently use version tracking. They know from experience that it helps avoid problems and makes it easier to make improvements. There are a few main reasons that version tracking is useful.

Coders tend to create different versions of code anyway#

As you work on code, you will likely encounter situations where you feel like you need to create a new version. For example, maybe you are making a change that might introduce a bug or even break backwards compatibility, and you feel the need to add a suffix to the file name to indicate this. You might start with a notebook called project.ipynb, then make a new notebook with changes called project_v2.ipynb, then project_v3.ipynb, etc.

Instead, you can keep the same file name (project.ipynb) and just commit changes to GitHub. That way, the full history will be available, and you can browse it anytime, without having to keep renaming files and trying to remember what v1, v2, v3, etc. mean.
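If you prefer the command line to the VSCode interface, committing a change takes a couple of commands (the file name and message below are just examples):

git add project.ipynb
git commit -m "Add mean response time analysis"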

Tools that work with Git, such as Visual Studio Code, make it easy to see what changes were made: just open the Source Control tab, expand the Graph pane, and click on a commit to see all the changes associated with it. You can see all the changes associated with this project, datascipsych, in the Git history, along with commit messages explaining the changes and the identity of the person who made each commit.

Coders are often afraid to make changes that are not tracked#

People working with existing code are often very hesitant to make changes. Using version tracking can make editing code feel less risky, because the full history of that code is always available.

The log of changes makes it easy to see who made each change, and the commit messages make it easier to understand why the change was made. If it turns out that you made a bad change that introduced a bug, you can always look back at the old version, copy the old code, and make a commit that undoes your earlier change.
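As a sketch of what this looks like with Git itself, you can list the history, inspect a commit, and undo it (the commit hash below is a placeholder):

git log --oneline    # list commits, newest first
git show a1b2c3d     # view the changes made in one commit
git revert a1b2c3d   # create a new commit that undoes that commit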

Coders often need to work together#

Version tracking platforms like GitHub make it easier to share code and changes to code.

Git makes it possible to work on changes in an isolated branch of code, allowing you to take some time to develop a new feature or potential fix to a bug without risking breaking existing code. Using a branch for development takes a lot of pressure off any given change, because you can work with other developers to make sure your code is good before merging it into the main branch. The GitHub website has more information about working with branches.
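For example, a basic branch workflow from the command line might look like this (the branch name is just an example):

git switch -c fix-outlier-filter   # create and switch to a new branch
# ...edit code and commit as usual...
git switch main                    # go back to the main branch
git merge fix-outlier-filter       # merge the finished work into main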

Most code projects hosted on GitHub will accept submissions from the community. For example, you could fork Pingouin, make an improvement, and open a pull request to suggest your changes to the developers of Pingouin. Then there are tools for the developers to have a back-and-forth with you where they can discuss your proposed changes and suggest changes. When they are satisfied, they can accept the pull request and merge your changes into the official repository!

Code projects on GitHub often include instructions on how you can contribute code or documentation to a project. For example, my Python package, Psifr, includes instructions for contributing code.

Contributing to existing projects on GitHub can help you establish yourself as a software developer with real-world experience. The GitHub website has more information on using pull requests.

Exercise: track changes to your code#

Open your final project code. In Visual Studio Code, you can run File > New Window, then open your project in the new window. If you do not have a Git repository for your code project, create one in Visual Studio Code (you can do this in the Source Control tab). Make a change to your code, such as running Black to format your code, adding a new function, or adding documentation to an existing function. Go to the Source Control tab and add changes to the staging area. Write an informative commit message and click the Commit button to commit your changes to the project history.

Use modules for functions and notebooks for display#

We’ve seen how functions can improve the flexibility and reusability of our code. We can define functions anywhere we want, including in modules and in notebooks. But where should functions be placed?

Raw notebook code#

Notebooks have a lot of extra code that defines different types of cells and metadata for the notebook. Jupyter notebooks are actually stored in JSON format. You can use a text editor to see what the raw code of Jupyter notebooks looks like. In Visual Studio Code, right click on a notebook and select Open With..., then choose Text Editor. Instead of seeing code cells that you can run, like usual, now you will see the raw JSON code.

For example, the first code cell in this notebook has the following code:

import polars as pl
from datascipsych import datasets

def myfunction( x, y ):
    z  = x+y #add some numbers
    return z
l=[1,2,3,4]
d={'a':1,"b":2,"c":3}
df = pl.read_csv(datasets.get_dataset_file("Morton2013"), null_values="n/a").filter(pl.col("study")).group_by("subject", "list_type", "input").agg(pl.col("recall").mean())

The raw data in the JSON file looks like this:

{
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import polars as pl\n",
    "from datascipsych import datasets\n",
    "\n",
    "def myfunction( x, y ):\n",
    "    z  = x+y #add some numbers\n",
    "    return z\n",
    "l=[1,2,3,4]\n",
    "d={'a':1,\"b\":2,\"c\":3}\n",
    "df = pl.read_csv(datasets.get_dataset_file(\"Morton2013\"), null_values=\"n/a\").filter(pl.col(\"study\")).group_by(\"subject\", \"list_type\", \"input\").agg(pl.col(\"recall\").mean())"
   ]
  }

Hard to read, right? The first fields define the type of cell, the execution count, metadata attributes, and any outputs, and the last field contains the source code. The \n sequences are newline characters that mark the end of each line of text. When you commit changes to notebook code in a Git repository, you are committing changes to this raw source code.

The complexity of the .ipynb file format makes it harder to follow changes to code stored in notebooks. A code cell might show up as “modified” in Git just because you ran it and changed the execution count, or because the output changed slightly when you re-made a plot. In contrast, module code only changes when the code itself changes.

Using code modules#

Placing function definitions in modules helps you follow the DRY principle, making it easier to share code between notebooks or even between different projects.

Furthermore, in contrast to notebook code, module code, which is stored in a .py file (like the datascipsych.examples module), is just code with no metadata. This makes changes to module code much easier to track than changes to notebook code.

Because functions in a notebook cannot be used in other notebooks, and because changes to notebook code are hard to track, it’s a good idea to keep your function definitions in a code module instead of in a notebook. A good rule of thumb is to not use any def statements in a notebook. That makes it easier to make incremental improvements to functions (for example, to add documentation, optional inputs, or fix bugs) that are easy to keep track of in your Git history.

Instead, use notebooks for display. That is, use them to run high-level code that produces output you want to organize together. This lets you (and any reader) see how the code works at a high level, together with the results of that code. If they need to see the details, they can look at your module code to see the function definitions.
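As a sketch of this division of labor, the function definition lives in a module and the notebook only calls it. The module name and path here are hypothetical; the function repeats the mean response time calculation from earlier.

# In a module file, for example myproject/analysis.py
import polars as pl


def mean_rt_by_condition(df):
    """Calculate mean response time for each subject and condition."""
    return (
        df.group_by("subject", "condition")
        .agg(pl.col("response_time").mean())
        .sort("subject", "condition")
    )

A notebook cell then just imports the module and displays the result:

from myproject import analysis

analysis.mean_rt_by_condition(df_rt).head(2)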

Exercise: Use modules for functions#

Take the is_outlier function shown below and add it to the datascipsych.examples module. Then import the examples module and run filtering using your new module function (see code below).

def is_outlier(col):
    """Return an expression to evaluate whether elements of a column are outliers."""
    q1 = col.quantile(0.25)
    q3 = col.quantile(0.75)
    iqr = q3 - q1
    return (col < q1 - 1.5 * iqr) | (col > q3 + 1.5 * iqr)

If you have already imported the examples module, you will need to reload it after making your edits. You can do this by running:

import importlib
importlib.reload(examples)

If you have not installed the datascipsych package in editable mode, you can do that by opening a terminal and running pip install -e . so that changes to the source code become available when you import or reload the module.

# from datascipsych import examples  # uncomment and run after editing examples.py
# df_rt1.filter(~examples.is_outlier(pl.col("response_time").over("subject")))  # uncomment to try running your new function

Summary#

Use consistent code style#

Follow style guidelines and run Black on your code to reformat it.

Write code that is easy to read#

Add comments to your code to explain it, and choose variable names that clarify what you are trying to do.

Don’t repeat yourself#

Use for loops and functions to avoid copying and pasting code.

Enhance flexibility using soft-coding#

Look for places where you can use a variable instead of hard-coding.

Track changes to your code#

Use version tracking tools such as Git and GitHub to keep track of your changes. These tools also make it much easier to work with others on code, and make it possible for anyone to contribute to open-source packages.

Use modules for functions and notebooks for display#

Write function definitions in modules (.py files) instead of notebooks, to make changes easier to track.